# New perspectives on semi-primal varieties

Alexander Kurz, Wolfgang Poiger, Bruno Teheux

2023-01-31 · http://arxiv.org/abs/2301.13406v2
###### Abstract.
We study varieties generated by semi-primal lattice-expansions by means of category theory. We provide a new proof of the Keimel-Werner topological duality for such varieties and, using similar methods, establish its discrete version. We describe multiple adjunctions between the variety of Boolean algebras and the variety generated by a semi-primal lattice-expansion, both on the topological side and explicitly algebraic. In particular, we show that the Boolean skeleton functor has two adjoints, both defined by taking certain Boolean powers, and we identify properties of these adjunctions which fully characterize semi-primality of an algebra. Lastly, we give a new characterization of canonical extensions of algebras in semi-primal varieties in terms of their Boolean skeletons.
Key words and phrases: semi-primal algebras, primal algebras, ternary discriminator, Stone duality, Boolean skeleton, Boolean power, canonical extension, universal algebra, category theory
## 1. Introduction
In particular, we illustrate why the subalgebra adjunction \(\mathsf{Q}\dashv\mathsf{I}\) corresponding to the smallest subalgebra of \(\mathbf{L}\) is of special interest. Indeed, towards the end of Section 4 we also show that the existence of an adjoint situation resembling \(\mathsf{I}\dashv\mathfrak{S}\dashv\mathfrak{P}\) fully characterizes semi-primality of a lattice-based algebra (see Theorem 4.19). Building on the results of Section 4, in Section 5 we prove the above-mentioned discrete duality for \(\mathsf{Pro}(\mathcal{A}^{\omega})\). It is well-known that the algebras in this category correspond to the _canonical extensions_ [33, 21] of algebras in \(\mathcal{A}\). Notably, we show that these canonical extensions may be characterized almost purely in terms of their Boolean skeletons (see Theorem 5.10). Lastly, we connect Sections 4 and 5 by describing an analogue of the Stone-Čech compactification in our setting (see Proposition 5.11).
We summarize our results schematically in Section 6 (see Figure 6). In addition to the logical ramifications already mentioned, we believe there are several further ways to build on our results. In particular, we hope to inspire further research in universal algebra through the lens of category theory. Some open questions directly related to the content of this paper are also collected in Section 6.
## 2. Semi-primal algebras and the varieties they generate
In the 1950s, Foster introduced the concept of primality in [24, 25], generalizing functional completeness of the two-element Boolean algebra \(\mathbf{2}\). A finite algebra \(\mathbf{L}\) is called _primal_ if, for all \(n\geq 1\), every function \(f\colon L^{n}\to L\) is term-definable in \(\mathbf{L}\). Besides the two-element Boolean algebra \(\mathbf{2}\), the \((n+1)\)-element Post chain \(\mathbf{P}_{n}\) and the field of prime order \(\mathbb{Z}/p\mathbb{Z}\) with \(0\) and \(1\) as constants are some famous examples of primal algebras.
Using Stone duality, Hu [36, 37] showed that a variety \(\mathcal{A}\) is generated by a primal algebra (in other words, \(\mathcal{A}=\mathbb{HSP}(\mathbf{L})\) for some primal algebra \(\mathbf{L}\)) if and only if \(\mathcal{A}\) is categorically equivalent to the variety of Boolean algebras \(\mathsf{BA}\) (see also [51] for a treatment using Lawvere theories). Of course, in this case we do not expect any further meaningful category-theoretic results about the relationship between \(\mathcal{A}\) and \(\mathsf{BA}\). One purpose of this paper is to demonstrate that, in contrast, such results _do_ arise as soon as we assume that \(\mathbf{L}\) is semi-primal.
### 2.1. Characterizations of semi-primality
Since Foster's original work, many variations of primality have been introduced (for an overview see, e.g., [54]). Among them, intuitively speaking, semi-primality seems to still be rather close to primality (a central theme of this paper is to show why this intuition is justified). In a slogan: _semi-primal algebras are like primal algebras which allow subalgebras_.
Note that a primal algebra \(\mathbf{L}\) does not have any proper subalgebra \(\mathbf{S}\lneq\mathbf{L}\). Otherwise, picking any \(s\in S\) and \(\ell\in L\backslash S\), no function \(f:L\to L\) with \(f(s)=\ell\) can possibly be term-definable.
Semi-primality, introduced by Foster and Pixley in 1964 (see [29]), does not impose this restriction. Recall that a function \(f\colon L^{n}\to L\)_preserves subalgebras_ if \(f(a_{1},\ldots,a_{n})\) is in the subalgebra generated by \(\{a_{1},\ldots,a_{n}\}\) for any choice of \(a_{1},\ldots,a_{n}\in L\). Clearly, if a function is term-definable, then it preserves subalgebras. In semi-primal algebras, the converse also holds.
**Definition 2.1**.: A finite algebra \(\mathbf{L}\) is _semi-primal_ if for every \(n\geq 1\), every function \(f\colon L^{n}\to L\) which preserves subalgebras is term-definable in \(\mathbf{L}\).
For example, the field of prime order \(\mathbb{Z}/p\mathbb{Z}\) with only \(0\) as constant is semi-primal but no longer primal: it now has \(\{0\}\) as a proper subalgebra. More interesting examples are described in detail in Subsection 2.3. In the following we recall two well-known equivalent characterizations of semi-primality. The first one is based on the ternary discriminator term and the second one is based on the existence of a majority term.
First we recall the characterization of semi-primal algebras as special instances of _discriminator algebras_. These are the algebras in which the _ternary discriminator_
\[t(x,y,z)=\begin{cases}z&\text{ if }x=y\\ x&\text{ if }x\neq y\end{cases}\]
is term-definable. Finite discriminator algebras are also called _quasi-primal_.
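For instance, in the two-element Boolean algebra \(\mathbf{2}\) (which is even primal) the discriminator can be written down explicitly: setting \(x\leftrightarrow y:=(x\wedge y)\vee(\neg x\wedge\neg y)\), the term

\[t(x,y,z)=\big{(}x\wedge\neg(x\leftrightarrow y)\big{)}\vee\big{(}z\wedge(x \leftrightarrow y)\big{)}\]

returns \(z\) if \(x=y\) (since then \(x\leftrightarrow y=1\)) and \(x\) otherwise. The term \(t\) constructed in the proof of Proposition 2.8 below follows the same pattern.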
An _internal isomorphism_ of \(\mathbf{L}\) is an isomorphism \(\varphi\colon\mathbf{S}_{1}\to\mathbf{S}_{2}\) between any two (not necessarily distinct) subalgebras \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\) of \(\mathbf{L}\). For example, if \(\mathbf{S}\leq\mathbf{L}\) is a subalgebra, then the identity \(id_{S}\) is an internal isomorphism of \(\mathbf{L}\). In semi-primal algebras, there are no other internal isomorphisms.
**Proposition 2.2**.: _[_50_, Theorem 3.2.]_ _A finite algebra \(\mathbf{L}\) is semi-primal if and only if it is quasi-primal and the only internal isomorphisms of \(\mathbf{L}\) are the identities on subalgebras of \(\mathbf{L}\)._
Secondly, we recall the characterization of semi-primality based on a majority term, which can be useful to generate examples (see, for example, [22]). Recall that a _majority term_ is a ternary term \(m(x,y,z)\) satisfying
\[m(x,x,y)=m(x,y,x)=m(y,x,x)=x.\]
In particular, every lattice \(\mathbf{L}=(L,\wedge,\vee)\) has a majority term given by the _median_
\[m(x,y,z)=(x\wedge y)\vee(x\wedge z)\vee(y\wedge z).\]
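A quick check confirms that the median is indeed a majority term: by absorption,

\[m(x,x,y)=(x\wedge x)\vee(x\wedge y)\vee(x\wedge y)=x\vee(x\wedge y)=x,\]

and the identities \(m(x,y,x)=x\) and \(m(y,x,x)=x\) follow in the same way, since the median is invariant under permuting its arguments.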
**Proposition 2.3**.: _[_3_, Theorem 7.2.]_ _A finite algebra \(\mathbf{L}\) is semi-primal if and only if it has a majority term and every subalgebra of \(\mathbf{L}^{2}\) is either the product of two subalgebras or the diagonal of a subalgebra of \(\mathbf{L}\)._
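For a concrete finite algebra, the condition of Proposition 2.3 can be tested by exhaustive search. The following Python sketch (our illustration, not part of the original development) does this for the \(3\)-element Łukasiewicz chain of Example 2.12 below, encoded as \(\{0,1,2\}\); since the algebra is lattice-based, the median already provides the required majority term, so only the condition on subalgebras of \(L^{2}\) needs checking.

```python
from itertools import combinations, product

# The 3-element Lukasiewicz chain: 0, 1, 2 encode 0, 1/2, 1.
N = 3
OPS = [
    (2, lambda x, y: min(x, y)),                # meet
    (2, lambda x, y: max(x, y)),                # join
    (2, lambda x, y: min(x + y, N - 1)),        # oplus (truncated sum)
    (2, lambda x, y: max(x + y - (N - 1), 0)),  # odot (truncated product)
    (1, lambda x: (N - 1) - x),                 # negation
    (0, lambda: 0),                             # constant 0
    (0, lambda: N - 1),                         # constant 1
]

def is_subuniverse(subset, pairs=False):
    """Check closure under all operations (componentwise if `pairs`)."""
    for arity, op in OPS:
        for args in product(subset, repeat=arity):
            if pairs:
                value = tuple(op(*(a[c] for a in args)) for c in range(2))
            else:
                value = op(*args)
            if value not in subset:
                return False
    return True

def nonempty_subsets(elements):
    xs = list(elements)
    return (set(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r))

subs = [s for s in nonempty_subsets(range(N)) if is_subuniverse(s)]
pair_subs = [r for r in nonempty_subsets(product(range(N), repeat=2))
             if is_subuniverse(r, pairs=True)]

def product_or_diagonal(r):
    p1, p2 = {a for a, _ in r}, {b for _, b in r}
    if p1 in subs and p2 in subs and r == set(product(p1, p2)):
        return True                                   # product of subalgebras
    return p1 in subs and r == {(a, a) for a in p1}   # diagonal of a subalgebra

assert all(product_or_diagonal(r) for r in pair_subs)
print(len(subs), "subalgebras;", len(pair_subs), "subuniverses of L^2: all OK")
```

Running this confirms the two subalgebras \(\{0,1\}\) and the whole chain, and that all six subuniverses of \(L^{2}\) are products or diagonals, as semi-primality of this chain predicts.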
The structure of semi-primal varieties was already well-studied in the original work by Foster and Pixley during the 1960s. To stay self-contained, we recall some results about these varieties which will be of use for us later.
**Proposition 2.4**.: _[_29_, Theorem 4.2]_ _The variety \(\mathcal{A}\) generated by a semi-primal algebra \(\mathbf{L}\) coincides with the quasi-variety generated by \(\mathbf{L}\), that is \(\mathcal{A}=\mathbb{ISP}(\mathbf{L})\)._
In addition to the characterizations above, there is a nice characterization of semi-primality of \(\mathbf{L}\) in terms of \(\mathcal{A}\). Recall that a variety is called _arithmetical_ if it is congruence distributive and congruence permutable.
**Proposition 2.5**.: _[_30_, Theorem 3.1]_ _A finite algebra \(\mathbf{L}\) is semi-primal if and only if the variety generated by \(\mathbf{L}\) is arithmetical, every subalgebra of \(\mathbf{L}\) is simple, and the only internal isomorphisms of \(\mathbf{L}\) are the identities of subalgebras._
_Remark 1_.: Together with Proposition 3.5 this implies that if \(\mathbf{L}\) is semi-primal, then the collection of subalgebras \(\mathbb{S}(\mathbf{L})\), considered as a _subcategory_ of the variety generated by \(\mathbf{L}\), forms a lattice.
The finite members of \(\mathcal{A}\) are particularly well-behaved. For notation, given a concrete category \(\mathsf{C}\), we use \(\mathsf{C}^{\omega}\) to denote the full subcategory of \(\mathsf{C}\) generated by its finite members. In particular, if \(\mathcal{A}\) is a variety, we use \(\mathcal{A}^{\omega}\) to denote the category of finite algebras in \(\mathcal{A}\).
**Proposition 2.6**.: _[_29_, Theorem 7.1]_ _Every finite algebra \(\mathbf{A}\in\mathcal{A}^{\omega}\) is isomorphic to a direct product of subalgebras of \(\mathbf{L}\)._
We add yet another characterization of semi-primality in our particular case of interest (in which the algebra is based on a bounded lattice) in the following subsection (see Proposition 2.8).
### 2.2. Semi-primal bounded lattice expansions
In this subsection we set the scene for the remainder of this paper. We aim to describe the relationship between the variety \(\mathsf{BA}\) of Boolean algebras and the variety generated by a semi-primal algebra _with underlying bounded lattice_.
Under the additional assumption that \(\mathbf{L}\) is based on a bounded lattice, there is another nice characterization of semi-primality of \(\mathbf{L}\) which will be particularly useful for our purposes. It relies on the following unary terms.
**Definition 2.7**.: Let \(\mathbf{L}\) be an algebra based on a bounded lattice \(\mathbf{L}^{\flat}=(L,\wedge,\vee,0,1)\). For all \(\ell\in L\) we define \(T_{\ell}\colon L\to L\) and \(\tau_{\ell}\colon L\to L\) to be the characteristic functions of \(\{\ell\}\) and \(\{\ell^{\prime}\in L\mid\ell^{\prime}\geq\ell\}\), respectively. That is,
\[T_{\ell}(x)=\begin{cases}1&\text{ if }x=\ell\\ 0&\text{ if }x\neq\ell\end{cases}\qquad\text{and}\qquad\tau_{\ell}(x)=\begin{cases} 1&\text{ if }x\geq\ell\\ 0&\text{ if }x\not\geq\ell.\end{cases}\]
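For instance, on the three-element chain \(L=\{0,\tfrac{1}{2},1\}\) we have \(T_{\frac{1}{2}}(0)=0\), \(T_{\frac{1}{2}}(\tfrac{1}{2})=1\) and \(T_{\frac{1}{2}}(1)=0\), while \(\tau_{\frac{1}{2}}(0)=0\) and \(\tau_{\frac{1}{2}}(\tfrac{1}{2})=\tau_{\frac{1}{2}}(1)=1\). Note that in general \(\tau_{0}\) is constantly \(1\) and \(\tau_{1}=T_{1}\).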
Even though the following result is essentially an instance of the more general [26, Theorem 4.1], we include an easy direct proof here.
**Proposition 2.8**.: _[_26_, Theorem 4.1]_ _Let \(\mathbf{L}\) be a finite algebra with an underlying bounded lattice. Then the following conditions are equivalent:_
1. \(\mathbf{L}\) _is semi-primal._
2. _For every_ \(\ell\in\mathbf{L}\)_, the function_ \(T_{\ell}\) _is term-definable in_ \(\mathbf{L}\)_._
3. \(T_{0}\) _is term-definable and for every_ \(\ell\in\mathbf{L}\)_, the function_ \(\tau_{\ell}\) _is term-definable in_ \(\mathbf{L}\)_._
Proof.: \((1)\Rightarrow(2)\): Since every subalgebra of \(\mathbf{L}\) contains the set \(\{0,1\}\), semi-primality of \(\mathbf{L}\) implies that all \(T_{\ell}\) are term-definable, since they preserve subalgebras.
\((2)\Rightarrow(1)\): First we show that the ternary discriminator is term-definable in \(\mathbf{L}\). Consider the term
\[c(x,y)=\bigvee_{\ell\in L}\big{(}T_{\ell}(x)\wedge T_{\ell}(y)\big{)},\]
which satisfies
\[c(x,y)=\begin{cases}1&\text{ if }x=y\\ 0&\text{ if }x\neq y\end{cases}\]
and define \(d(x,y):=T_{0}(c(x,y))\) (note that this is the discrete metric). The term
\[t(x,y,z)=(d(x,y)\wedge x)\vee(c(x,y)\wedge z)\]
yields the ternary discriminator on \(\mathbf{L}\). Now we show that the only internal isomorphisms of \(\mathbf{L}\) are the identities of subalgebras. Let \(\varphi:\mathbf{S}_{1}\to\mathbf{S}_{2}\) be an internal
isomorphism of \(\mathbf{L}\) and let \(s\in S_{1}\) be arbitrary. Then
\[1=T_{\varphi(s)}\big{(}\varphi(s)\big{)}=\varphi\big{(}T_{\varphi(s)}(s)\big{)}\]
Since \(\varphi(0)=0\) we necessarily have \(T_{\varphi(s)}(s)=1\), which is equivalent to \(\varphi(s)=s\). Altogether, due to Proposition 2.2, we showed that \(\mathbf{L}\) is semi-primal.
\((2)\Rightarrow(3)\): If the \(T_{\ell}\) are term-definable we can define
\[\tau_{\ell}(x)=\bigvee_{\ell^{\prime}\geq\ell}T_{\ell^{\prime}}(x).\]
\((3)\Rightarrow(2)\): If \(T_{0}\) and the \(\tau_{\ell}\) are term-definable we can define
\[T_{\ell}(x)=\tau_{\ell}(x)\wedge\bigwedge_{\ell^{\prime}>\ell}T_{0}\big{(}\tau _{\ell^{\prime}}(x)\big{)},\]
which concludes the proof.
_Remark 2_.: In light of this result, we can turn any finite bounded lattice into a semi-primal algebra by adding \(T_{\ell}\) as unary operation for every element \(\ell\in L\). One might wonder how this differs from adding a constant symbol (i.e., a nullary operation) for every element. The difference is that adding a constant imposes the requirement that every subalgebra needs to contain the element corresponding to this constant. Thus, the algebra that results after adding all constants does not have any proper subalgebras.
We now state our main assumption, which from now on holds for the remainder of this paper.
**Assumption 2.9**.: **The finite algebra \(\mathbf{L}\) is semi-primal and has an underlying bounded lattice.**
From now on, let \(\mathcal{A}:=\mathbb{HSP}(\mathbf{L})\) denote the variety generated by \(\mathbf{L}\). In Subsection 2.3 we provide various examples of algebras satisfying Assumption 2.9.
As noted in [43] (where the same assumption on \(\mathbf{L}\) is made), from the point of view of many-valued logic, semi-primal algebras make good candidates for algebras of truth-values. In this context the underlying bounded lattice is a natural minimal requirement.
### 2.3. Examples of semi-primal algebras
In this subsection we collect some examples of semi-primal algebras. All of them are bounded lattice expansions (since most of them stem from many-valued logic), thus they all fit the scope of this paper (see Assumption 2.9). For other examples we refer the reader to [10, 58, 45].
First, we describe several different semi-primal algebras based on finite chains. To get examples based on lattices which are not necessarily totally ordered, in Example 2.3.2 (and Appendix A) we discuss semi-primal residuated lattices. In particular we describe a systematic way to identify them among the \(\mathsf{FL}_{ew}\)-algebras. Similarly, Example 2.3.3 illustrates how to identify semi-primal algebras which need not be totally ordered among the pseudo-logics. At the end of this subsection we recall Murskii's Theorem which states that, in some sense, almost all finite lattice-based algebras are semi-primal.
#### 2.3.1. Chain-based algebras
We will describe several different ways of turning the \((n+1)\)-element chain \(\{0,\frac{1}{n},\ldots,\frac{n-1}{n},1\}\) with its usual lattice-order into a semi-primal algebra. We present the examples in decreasing order of their number of subalgebras.
First, turning a chain into a semi-primal algebra without any further impositions may be achieved as follows.
**Example 2.10**.: The \(n\)_-th general semi-primal chain_ is given by
\[\mathbf{T}_{n}=\big{(}\{0,\tfrac{1}{n},\ldots,\tfrac{n-1}{n},1\},\wedge, \vee,0,1,(T_{\frac{i}{n}})_{i=0}^{n}\big{)},\]
where the unary operations \(T_{\frac{i}{n}}\) are the ones from Definition 2.7. For all \(n\geq 1\) the algebra \(\mathbf{T}_{n}\) is semi-primal (this immediately follows from Proposition 2.8). Every subset of \(T_{n}\) which contains the set \(\{0,1\}\) defines a subalgebra of \(\mathbf{T}_{n}\).
Next we find examples among the _Łukasiewicz-Moisil algebras_, which were originally intended to give algebraic semantics for Łukasiewicz finitely-valued logic. It turns out, however, that they encompass a bit more than that (see [15]). The logic corresponding to these algebras is nowadays named after Moisil.
**Example 2.11**.: The \(n\)_-th Łukasiewicz-Moisil chain_ is given by
\[\mathbf{M}_{n}=\big{(}\{0,\tfrac{1}{n},\ldots,\tfrac{n-1}{n},1\},\wedge, \vee,\neg,0,1,(\tau_{\frac{i}{n}})_{i=1}^{n}\big{)},\]
where \(\neg x=1-x\) and the unary operations \(\tau_{\frac{i}{n}}\) are the ones from Definition 2.7. For all \(n\geq 1\), the algebra \(\mathbf{M}_{n}\) is semi-primal. This follows from characterization (3) of Proposition 2.8: we only have to check that \(T_{0}\) is term-definable. To see this note that we can define \(T_{1}(x)=\tau_{1}(x)\) and \(T_{0}(x)=T_{1}(\neg x)\).
We proceed with a classical example from many-valued logic among the finite _MV-algebras_ introduced by Chang (see [13, 14]). They give rise to the algebraic counterpart of Łukasiewicz finite-valued logic.
**Example 2.12**.: The \(n\)_-th Łukasiewicz chain_ is given by
\[\mathbf{L}_{n}=\big{(}\{0,\tfrac{1}{n},\ldots,\tfrac{n-1}{n},1\},\wedge, \vee,\oplus,\odot,\neg,0,1\big{)},\]
where \(x\oplus y=\min(x+y,1)\), \(x\odot y=\max(x+y-1,0)\) and \(\neg x=1-x\). For all \(n\geq 1\), the algebra \(\mathbf{L}_{n}\) is semi-primal. The proof of this fact can be found in [48, Proposition 2.1]. The subalgebras of \(\mathbf{L}_{n}\) correspond to the divisors \(d\) of \(n\) and are of the form
\[\mathbf{L}_{d}=\{0,\tfrac{k}{n},\ldots,\tfrac{(d-1)k}{n},1\}\text{ where }n=kd.\]
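For instance, the divisors \(1,2,3,6\) of \(n=6\) yield exactly four subalgebras of \(\mathbf{L}_{6}\), namely \(\{0,1\}\), \(\{0,\tfrac{1}{2},1\}\), \(\{0,\tfrac{1}{3},\tfrac{2}{3},1\}\) and \(\mathbf{L}_{6}\) itself.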
Other semi-primal chains are found among the _Cornish algebras_, which generalize Ockham algebras (see [18, 19]).
**Example 2.13**.: The \(n\)_-th semi-primal Cornish chain_ is given by
\[\mathbf{CO}_{n}=\big{(}\{0,\tfrac{1}{n},\ldots,\tfrac{n-1}{n},1\},\wedge, \vee,\neg,f,0,1\big{)},\]
where \(\neg x=1-x\), \(f(0)=0,f(1)=1\) and \(f(\tfrac{i}{n})=\tfrac{i+1}{n}\) for \(1\leq i\leq n-1\). For all \(n\geq 1\), the algebra \(\mathbf{CO}_{n}\) is semi-primal. The proof of this fact can be found in [19, Example 5.15]. The only proper subalgebra of \(\mathbf{CO}_{n}\) is \(\{0,1\}\).
Finally, among the _Post-algebras_ we find the well-known examples of chain-based algebras which are not only semi-primal, but even primal.
**Example 2.14**.: The _\(n\)-th Post chain_ is given by
\[\mathbf{P}_{n}=\left(\{0,\frac{1}{n},\ldots,\frac{n-1}{n},1\},\wedge,\vee,^{ \prime},0,1\right)\]
where \(1^{\prime}=0\) and \((\frac{i}{n})^{\prime}=(\frac{i+1}{n})\) for \(0\leq i<n\). For all \(n\geq 1\), the algebra \(\mathbf{P}_{n}\) is primal (see, e.g., [24, Theorem 35]).
#### 2.3.2. Residuated Lattices
For a general survey of residuated lattices we refer the reader to [32, 39]. We only consider bounded commutative residuated lattices here, with a particular focus on _\(\mathsf{FL}_{ew}\)-algebras_.
**Definition 2.15**.: A _(bounded commutative) residuated lattice_ is an algebra
\[\mathbf{R}=(R,\wedge,\vee,0,1,\odot,e,\rightarrow)\]
such that \((R,\wedge,\vee,0,1)\) is a bounded lattice, \((R,\odot,e)\) is a commutative monoid and the binary operation \(\rightarrow\) satisfies the residuation condition
\[x\odot y\leq z\Leftrightarrow x\leq y\to z.\]
We call \(\mathbf{R}\) a _\(\mathsf{FL}_{ew}\)-algebra_ if, in addition, it satisfies \(e=1\).
Our main tool to identify semi-primal \(\mathsf{FL}_{ew}\)-algebras is [42, Theorem 3.10], which implies that a \(\mathsf{FL}_{ew}\)-algebra \(\mathbf{R}\) is quasi-primal if and only if there is some \(n\geq 1\) such that
\[x\vee\neg(x^{n})=1\text{ for all }x\in R, \tag{1}\]
where, as usual, we define \(\neg x\) as \(x\to 0\) (and \(x^{n}\) refers to the \(n\)-th power with respect to \(\odot\)). For our purposes this theorem has the following practical consequence.
**Corollary 2.16**.: _Let \(\mathbf{R}\) be a finite \(\mathsf{FL}_{ew}\)-algebra. If \(\mathbf{R}\) does not contain any idempotent elements (that is, elements with \(x\odot x=x\)) other than \(0\) and \(1\), then \(\mathbf{R}\) is quasi-primal. If \(\mathbf{R}\) is based on a chain, the converse also holds._
Proof.: Let \(\mathbf{R}\) be a finite \(\mathsf{FL}_{ew}\)-algebra with no other idempotent elements than \(0\) and \(1\). Recall that, for any \(a\in R\), we have \(\neg a=a\to 0=\bigvee\{b\in R\mid a\odot b\leq 0\}\). Let \(a\in R\backslash\{0,1\}\). We show that there is some \(n_{a}\) such that \(a^{n_{a}}=0\). Since \(a\) is not idempotent we have \(a^{2}<a\). Either \(a^{2}=0\) and we are done or \(a^{2}\) is again not idempotent. In this case we have \(a^{4}<a^{2}\) and we repeat the argument. Since \(\mathbf{R}\) is finite, continuing this process we eventually need to find \(a^{2^{k}}=0\). Now \(\mathbf{R}\) satisfies equation (1) for \(n=\bigvee\{n_{a}\mid a\in R\backslash\{0,1\}\}\), since we always have
\[a\vee\neg(a^{n})=a\vee\neg 0=a\lor 1=1.\]
Thus \(\mathbf{R}\) is quasi-primal.
Now suppose that \(\mathbf{R}\) is based on a chain. If \(a\in R\backslash\{0,1\}\) is idempotent, then \(\neg a<a\) since for all \(b\geq a\) we have \(a\odot b\geq a\odot a=a\). Therefore, for all \(n\geq 1\) we have \(a\vee\neg(a^{n})=a\vee\neg a=a\neq 1\). Thus, \(\mathbf{R}\) does not satisfy equation (1) and is not quasi-primal.
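As a concrete illustration of equation (1), consider the Łukasiewicz chain \(\mathbf{L}_{n}\) from Example 2.12, viewed as an \(\mathsf{FL}_{ew}\)-chain: its only idempotents are \(0\) and \(1\), and indeed every \(x\leq\tfrac{n-1}{n}\) satisfies \(x^{n}=\max(nx-(n-1),0)=0\), so that \(x\vee\neg(x^{n})=x\vee\neg 0=1\), while the case \(x=1\) is trivial. Hence \(\mathbf{L}_{n}\) satisfies equation (1) with exponent \(n\).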
_Remark 3_.: The second part of the argument really requires \(\mathbf{R}\) to be based on a chain. For example, consider the \(4\)-element diamond lattice \(0\leq a,b\leq 1\) with \(a\wedge b=0\) and \(a\lor b=1\). We can define a \(\mathsf{FL}_{ew}\)-algebra based on this lattice by stipulating \(a^{2}=a\), \(b^{2}=b\) and \(a\odot b=0\). Even though \(a\) and \(b\) are idempotent, we have \(a\vee\neg a=a\lor b=1\) and \(b\vee\neg b=b\lor a=1\). Therefore, this algebra is quasi-primal (it is, however, not semi-primal, since it has the non-trivial automorphism swapping \(a\) and \(b\)).
In [31] Galatos and Jipsen provide a list of all finite residuated lattices of size up to 6. Corollary 2.16 enables us to find quasi-primal \(\mathsf{FL}_{ew}\)-algebras among them and thus, using Proposition 2.2, we can identify the semi-primal ones by ruling out the existence of non-trivial internal isomorphisms. For example, there is a total of six quasi-primal \(\mathsf{FL}_{ew}\)-chains with 5 elements (\(R_{1,17}^{5,1},R_{1,18}^{5,1},\ldots,R_{1,22}^{5,1}\) in [31]), five of which are semi-primal (all except \(R_{1,17}^{5,1}\)). Examples of semi-primal \(\mathsf{FL}_{ew}\)-algebras not based on a chain are, e.g., \(R_{1,11}^{6,2}\) and \(R_{1,9}^{6,3}\) in [31]. The algebras in question are depicted in Appendix A, where we also provide detailed proofs of these claims.
While until now we discussed how to identify semi-primal \(\mathsf{FL}_{ew}\)-algebras, we end this subsection with two examples of semi-primal algebras based on residuated lattices where \(1\neq e\).
Specifically, we consider the _bounded De Morgan monoids_ \(\mathbf{C_{4}^{01}}\) and \(\mathbf{D_{4}^{01}}\) depicted in Figure 1.
They are bounded commutative residuated lattices with an additional involution \(\sim\) which, in both examples, is defined by \(\sim\!e=a\) and \(\sim\!0=1\). Our names for these algebras are inspired by [46], where \(\mathbf{C_{4}}\) and \(\mathbf{D_{4}}\) are used for the corresponding De Morgan monoids with the bounds 0 and 1 excluded from the signature (in [46] it is shown that each of these two algebras generates a minimal subvariety of the variety of all De Morgan monoids).
**Proposition 2.17**.: _The algebras \(\mathbf{C_{4}^{01}}\) and \(\mathbf{D_{4}^{01}}\) are primal. Their reducts, obtained by removing the neutral element \(e\) from the signature, are semi-primal._
Proof.: Starting with \(\mathbf{C_{4}^{01}}\), we directly verify that it satisfies characterization (3) of Proposition 2.8. First we define \(T_{1}\), which then also yields \(T_{0}(x)=T_{1}(\sim\!x)\). As in [22], we do this by, for all \(\ell\in\{0,e,a\}\), defining unary terms \(u_{\ell}\) satisfying
\[u_{\ell}(x)=\begin{cases}1&\text{ if }x=1\\ 0&\text{ if }x=\ell\\ *&\text{ otherwise,}\end{cases}\]
where \(*\) indicates that any value is allowed. For instance, we can define such terms by
\[u_{0}(x)=x\wedge 1,\ \ u_{e}(x)=\sim\bigl{(}(\sim\!x)^{2}\bigr{)}\text{ and }u_{a}(x)=\sim\bigl{(}(\sim\!x)\odot 1 \bigr{)}.\]
Through these terms we can clearly define \(T_{1}(x)=u_{0}(x)\wedge u_{e}(x)\wedge u_{a}(x)\). Lastly, we need to define \(\tau_{\ell}\) for \(\ell\in\{e,a\}\). Again, it suffices to find terms \(\tau_{\ell}^{*}\) which satisfy
\[\tau_{\ell}^{*}(x)=\begin{cases}1&\text{if }x\geq\ell\\ \neq 1&\text{if }x\ngeq\ell,\end{cases}\]
since then we get \(\tau_{\ell}=T_{1}(\tau_{\ell}^{*})\). Our desired terms are given by
\[\tau_{e}^{*}(x)=\left(\left(\sim\!x\right)^{2}\odot x\right)\lor x^{2}\text{ and }\tau_{a}^{*}(x)=x^{2}.\]
This concludes the proof for \(\mathbf{C_{4}^{01}}\). The proof for \(\mathbf{D_{4}^{01}}\) is completely analogous, except that we use \(\tau_{e}^{*}(x)=\left(\left(\sim\!x\right)^{2}\odot x\right)\lor x\) instead. Thus we showed that these two algebras are semi-primal, and since they don't have any proper subalgebras they are primal. Since we never relied on the constant \(e\) in the above, the last part of the statement follows. Note that in both cases, if we exclude \(e\) from the signature then \(\{0,1\}\) becomes a proper subalgebra.
#### 2.3.3. Pseudo-logics
We illustrate how to generate more examples of semi-primal algebras which are based on a bounded lattice which is not necessarily a chain. The results and terminology are due to [17, 22]. A _pseudo-logic_
\[\mathbf{L}=(L,\wedge,\vee,^{\prime},0,1)\]
is a bounded lattice with an additional unary operation \({}^{\prime}\) which satisfies \(0^{\prime}=1\) and \(1^{\prime}=0\). In [22] it is shown that every subalgebra of \(\mathbf{L}^{2}\) which is not the graph of an internal isomorphism is a product of subalgebras if the following two properties are satisfied:
1. There is no \(a\in L\backslash\{0\}\) with \(a^{\prime}=1\),
2. For all \(a\in L\) there exists an \(n\geq 1\) with \(a\wedge a^{(2n)}=0\) (where \(a^{(k)}\) denotes the \(k\)-fold iteration of \({}^{\prime}\) on \(a\)).
Using this and the characterization of Proposition 2.3, we can find more examples of semi-primal algebras. Here, we only need to ensure that the above-mentioned conditions are satisfied and that there are no non-trivial internal isomorphisms. For example, the three algebras depicted in Figure 2 are semi-primal (the pseudo-negation \({}^{\prime}\) is indicated by dotted arrows).
#### 2.3.4. Murskii's Theorem
While semi-primal algebras may seem rare, quite the opposite is suggested by the following. In 1975, Murskii proved a surprising theorem about the proportion of semi-primal algebras of a fixed signature as the size of the algebras grows. The original paper [47] is in Russian; the version we recall here is due to [6, Section 6.2].
Figure 2. Some semi-primal pseudo-logics ([22, 17]).
**Theorem 2.18**.: _[_47_]_ _Let \(\sigma\) be an algebraic type which contains at least one operation symbol which is at least binary. Let \(A_{\sigma,n}\) be the number of algebras of type \(\sigma\) and size \(n\) and let \(SP_{\sigma,n}\) be the number of such algebras which are semi-primal. Then_
\[\lim_{n\to\infty}\frac{SP_{\sigma,n}}{A_{\sigma,n}}=1.\]
## 3. Semi-primal duality
One of the nice features of the variety of Boolean algebras \(\mathsf{BA}\) is the famous _Stone duality_ [56]. Categorically speaking, it asserts that there is a dual equivalence between \(\mathsf{BA}\) and the category \(\mathsf{Stone}\) of Stone spaces (that is, compact, Hausdorff and zero-dimensional topological spaces) with continuous maps.
The functor \(\Sigma\) assigns to a Boolean algebra \(\mathbf{B}\) its collection of ultrafilters and the functor \(\Pi\) assigns to a Stone space \(X\) the Boolean algebra of its clopen subsets with the usual set-theoretical Boolean operations. Note that these functors can be defined on objects by
\[\Sigma(\mathbf{B})=\mathsf{BA}(\mathbf{B},\mathbf{2})\text{ and }\Pi(X)= \mathsf{Stone}(X,2),\]
where in the latter equation 2 denotes the two-element discrete space.
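For finite objects this duality is the familiar atom correspondence: every ultrafilter of a finite Boolean algebra \(\mathbf{B}\) is principal, generated by an atom, so \(\Sigma(\mathbf{B})\) is the (discrete) set of atoms of \(\mathbf{B}\), while for a finite discrete space \(X\) we have \(\Pi(X)\cong\mathbf{2}^{X}\), the powerset algebra of \(X\).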
Stone duality has been extended to quasi-primal algebras by Keimel and Werner in [41]. This duality fits the general framework of _Natural Dualities_. For us, the Semi-primal Strong Duality Theorem [17, Theorem 3.3.14] is of particular importance; however, we present it in a self-contained way which suits our purpose. Furthermore, we will use categorical constructions to provide a new proof of this duality. Such a proof has, to the best of our knowledge, not appeared in the literature yet.
First we introduce the dual category of \(\mathcal{A}\). In the following, we always consider \(\mathbb{S}(\mathbf{L})\) as a complete lattice in its usual ordering.
**Definition 3.1**.: The category \(\mathsf{Stone_{L}}\) has objects \((X,\mathbf{v})\) where \(X\in\mathsf{Stone}\) and
\[\mathbf{v}\colon X\to\mathbb{S}(\mathbf{L})\]
assigns to every point \(x\in X\) a subalgebra \(\mathbf{v}(x)\leq\mathbf{L}\), such that for every subalgebra \(\mathbf{S}\leq\mathbf{L}\) the preimage \(\mathbf{v}^{-1}(\mathbf{S}\downarrow)\) is closed. A morphism \(m\colon(X,\mathbf{v})\to(Y,\mathbf{w})\) in \(\mathsf{Stone_{L}}\) is a continuous map \(X\to Y\) which, for all \(x\in X\), satisfies
\[\mathbf{w}(m(x))\leq\mathbf{v}(x).\]
_Remark 4_.: In the framework of natural dualities [17], the dual category of \(\mathcal{A}\) is defined slightly differently, using Stone spaces with unary relations (i.e., subsets). Let \(\mathcal{X}\) be the category with objects \((X,\{R_{\mathbf{S}}\mid\mathbf{S}\leq\mathbf{L}\})\), where \(X\in\mathsf{Stone}\) and \(R_{\mathbf{S}}\) is a closed subset of \(X\) for each subalgebra \(\mathbf{S}\leq\mathbf{L}\), satisfying \(R_{\mathbf{L}}=X\) and \(R_{\mathbf{S_{1}}}\cap R_{\mathbf{S_{2}}}=R_{\mathbf{S_{1}}\cap\mathbf{S_{2}}}\) for all \(\mathbf{S_{1}},\mathbf{S_{2}}\leq\mathbf{L}\). A morphism \(m\colon(X,\{R_{\mathbf{S}}\mid\mathbf{S}\leq\mathbf{L}\})\to(X^{\prime},\{R^{ \prime}_{\mathbf{S}}\mid\mathbf{S}\leq\mathbf{L}\})\) in \(\mathcal{X}\) is a continuous relation-preserving map \(X\to X^{\prime}\), i.e., it satisfies \(x\in R_{\mathbf{S}}\Rightarrow m(x)\in R^{\prime}_{\mathbf{S}}\) for all \(x\in X\) and \(\mathbf{S}\in\mathbb{S}(\mathbf{L})\).
The categories \(\mathcal{X}\) and \(\mathsf{Stone_{L}}\) are isomorphic, as witnessed by the following mutually inverse functors \(\phi\) and \(\psi\). The functor \(\phi:\mathcal{X}\to\mathsf{Stone_{L}}\) is given on objects by \((X,\{R_{\mathbf{S}}\mid\mathbf{S}\leq\mathbf{L}\})\mapsto(X,\mathbf{v})\), where
\[\mathbf{v}(x)=\bigcap\{\mathbf{S}\mid x\in R_{\mathbf{S}}\}.\]
The functor \(\psi\colon\mathsf{Stone_{L}}\to\mathcal{X}\) is given on objects by \((X,\mathbf{v})\mapsto(X,\{R_{\mathbf{S}}\mid\mathbf{S}\leq\mathbf{L}\})\) where
\[R_{\mathbf{S}}=\{x\in X\mid\mathbf{v}(x)\leq\mathbf{S}\}.\]
Both \(\phi\) and \(\psi\) map every morphism to itself.
We now describe the two contravariant functors \(\Sigma_{\mathbf{L}}\) and \(\Pi_{\mathbf{L}}\) which give rise to the duality between \(\mathcal{A}\) and \(\mathsf{Stone_{L}}\).
On objects \(\mathbf{A}\in\mathcal{A}\), let the functor \(\Sigma_{\mathbf{L}}\) be defined by
\[\Sigma_{\mathbf{L}}(\mathbf{A})=\big{(}\mathcal{A}(\mathbf{A},\mathbf{L}), \mathbf{im}\big{)}\]
where \(\mathbf{im}\) assigns to a homomorphism \(h\colon\mathbf{A}\to\mathbf{L}\) its image \(\mathbf{im}(h)=h(A)\in\mathbb{S}(\mathbf{L})\). A clopen subbasis for the topology on \(\mathcal{A}(\mathbf{A},\mathbf{L})\) is given by the collection of sets of the following form with \(a\in A\) and \(\ell\in L\):
\[[a:\ell]=\{h\in\mathcal{A}(\mathbf{A},\mathbf{L})\mid h(a)=\ell\}.\]
On morphisms \(f\in\mathcal{A}(\mathbf{A}_{1},\mathbf{A}_{2})\) the functor acts via composition
\[\Sigma_{\mathbf{L}}f\colon\mathcal{A}(\mathbf{A}_{2},\mathbf{L}) \to\mathcal{A}(\mathbf{A}_{1},\mathbf{L})\] \[h\mapsto h\circ f.\]
Note that this is a morphism in \(\mathsf{Stone_{L}}\) since \(\mathbf{im}(h\circ f)\leq\mathbf{im}(h)\).
Before we define the functor \(\Pi_{\mathbf{L}}\), we describe the canonical way to consider \(L\) as a member of \(\mathsf{Stone_{L}}\). Simply endow \(L\) with the discrete topology and with the map
\[\langle\cdot\rangle\colon L\to\mathbb{S}(\mathbf{L})\]
assigning to an element \(\ell\in L\) the subalgebra \(\langle\ell\rangle\leq\mathbf{L}\) it generates. Now, as expected, we can define the functor \(\Pi_{\mathbf{L}}\) on objects \((X,\mathbf{v})\in\mathsf{Stone_{L}}\) by
\[\Pi_{\mathbf{L}}(X,\mathbf{v})=\mathsf{Stone_{L}}\big{(}(X,\mathbf{v}),(L, \langle\cdot\rangle)\big{)}\]
with the algebraic operations defined pointwise. This means that the carrier-set of \(\Pi_{\mathbf{L}}(X,\mathbf{v})\) is the set of continuous maps \(g\colon X\to L\) which respect \(\mathbf{v}\) in the sense that, for all \(x\in X\), they satisfy
\[g(x)\in\mathbf{v}(x).\]
Again, on morphisms \(m\colon(X,\mathbf{v})\to(Y,\mathbf{w})\) the functor is defined via composition
\[\Pi_{\mathbf{L}}m\colon\mathsf{Stone_{L}}\big{(}(Y,\mathbf{w}),( L,\langle\cdot\rangle)\big{)} \to\mathsf{Stone_{L}}\big{(}(X,\mathbf{v}),(L,\langle\cdot\rangle) \big{)}\] \[g\mapsto g\circ m.\]
This is well-defined due to the condition on morphisms in \(\mathsf{Stone_{L}}\):
\[(g\circ m)(x)=g(m(x))\in\mathbf{w}(m(x))\subseteq\mathbf{v}(x).\]
It is also clearly a homomorphism since the operations are defined pointwise.
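As a simple sanity check, for a one-point space we find \(\Pi_{\mathbf{L}}(\{*\},\mathbf{v})\cong\mathbf{v}(*)\): a continuous map \(g\colon\{*\}\to L\) respecting \(\mathbf{v}\) amounts to a choice of an element of \(\mathbf{v}(*)\), and the pointwise operations are exactly those of \(\mathbf{v}(*)\).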
**Theorem 3.2**.: _[_41, 17_]_ _The functors \(\Pi_{\mathbf{L}}\) and \(\Sigma_{\mathbf{L}}\) are fully faithful and establish a dual equivalence between \(\mathcal{A}\) and \(\mathsf{Stone}_{\mathbf{L}}\)._
The remainder of this section is dedicated to an alternative proof of this theorem. The idea is to directly prove the duality on the finite level, and then lift it to the infinite level using the following categorical constructions.
**Definition 3.3**.: For a finitely complete and cocomplete category \(\mathsf{C}\), its completion under filtered colimits is denoted by \(\mathsf{Ind}(\mathsf{C})\) and, dually, its completion under cofiltered limits is denoted by \(\mathsf{Pro}(\mathsf{C})\).
For example, \(\mathsf{Ind}(\mathsf{BA}^{\omega})\simeq\mathsf{BA}\) and \(\mathsf{Pro}(\mathsf{Set}^{\omega})\simeq\mathsf{Stone}\). More material about these completions can be found in Johnstone's book [40, Chapter VI] (in particular, a more rigorous definition of the \(\mathsf{Ind}\)-completion is given in VI.1.2). We only recall the following lemma, which allows us to lift dualities between small categories (following Johnstone, dualities arising this way are called _Stone type_ dualities).
**Lemma 3.4**.: _[_40_, Lemma VI 3.1]_ _Let \(\mathsf{C}\) and \(\mathsf{D}\) be small categories which are dually equivalent. Then \(\mathsf{Ind}(\mathsf{C})\) is dually equivalent to \(\mathsf{Pro}(\mathsf{D})\)._
Our argument to prove Theorem 3.2 now has the following outline. The role of \(\mathsf{C}\) will be played by \(\mathcal{A}^{\omega}\). Since \(\mathcal{A}\) is locally finite (see, e.g., [17, Lemma 1.3.2]), it is well-known that \(\mathsf{Ind}(\mathcal{A}^{\omega})\simeq\mathcal{A}\) (see, e.g., [40, Corollary VI 2.2]). The role of \(\mathsf{D}\) will be played by \(\mathsf{Stone}_{\mathbf{L}}^{\omega}\). Since the topology doesn't matter here (because it is always discrete), we will denote this category by \(\mathsf{Set}_{\mathbf{L}}^{\omega}\) instead. To get the finite dual equivalence, we make the following observation.
**Proposition 3.5**.: _Let \(\mathbf{S_{1}},\ldots,\mathbf{S_{n}}\) be subalgebras of \(\mathbf{L}\). Then the set of homomorphisms \(\mathcal{A}(\prod_{i\leq n}\mathbf{S_{i}},\mathbf{L})\) consists exactly of the projections followed by inclusions_
\[\mathsf{pr}_{i}\colon\prod_{j\leq n}\mathbf{S_{j}}\to\mathbf{S_{i}}\hookrightarrow \mathbf{L}\]
_in each component \(i\leq n\)._
Proof.: Our proof is similar to that of [12, Theorem 2.5]. Let \(h\colon\prod_{i\leq n}\mathbf{S_{i}}\to\mathbf{L}\) be a homomorphism. Since \(\mathcal{A}\) is congruence distributive (Proposition 2.5), it has the Fraser-Horn property, meaning that the congruence \(\theta:=\mathsf{ker}(h)\) is a product of congruences \(\theta_{i}\) on \(\mathbf{S_{i}}\). By the isomorphism theorem we find
\[(\prod_{i\leq n}\mathbf{S_{i}})/\theta\cong\prod_{i\leq n}(\mathbf{S_{i}}/ \theta_{i})\cong\mathsf{im}(h).\]
Since \(\mathsf{im}(h)\) is a subalgebra of \(\mathbf{L}\) and thus directly irreducible, at most one factor of \(\prod_{i\leq n}(\mathbf{S_{i}}/\theta_{i})\) can be non-trivial. Since \(\mathsf{im}(h)\) contains at least two elements (that is, \(0\) and \(1\)), precisely one factor, say \(\mathbf{S_{j}}/\theta_{j}\), is non-trivial. Since \(\mathbf{S_{j}}\) is itself semi-primal, it is simple, so \(\mathbf{S_{j}}/\theta_{j}\cong\mathbf{S_{j}}\). So \(h\) induces an internal isomorphism \(\mathbf{S_{j}}\cong\mathsf{im}(h)\), but by Proposition 2.2 this can only be the identity on \(\mathbf{S_{j}}\), thus \(h\) coincides with \(\mathsf{pr}_{j}\).
**Corollary 3.6**.: _The (restrictions of the) functors \(\Pi_{\mathbf{L}}\) and \(\Sigma_{\mathbf{L}}\) establish a dual equivalence between the small categories \(\mathsf{Set}_{\mathbf{L}}^{\omega}\) and \(\mathcal{A}^{\omega}\)._
Proof.: Let \((X,\mathbf{v})\in\mathsf{Set}_{\mathbf{L}}^{\omega}\). Then
\[\Sigma_{\mathbf{L}}\Pi_{\mathbf{L}}(X,\mathbf{v})=\Big{(}\mathcal{A}\big{(} \prod_{x\in X}\mathbf{v}(x),\mathbf{L}\big{)},\mathbf{im}\Big{)}.\]
By Proposition 3.5 this is equal to \((\{\mathsf{pr}_{x}\mid x\in X\},\mathbf{im})\), which is clearly isomorphic to \((X,\mathbf{v})\).
On the other hand, starting with \(\mathbf{A}\in\mathcal{A}^{\omega}\), we know by Proposition 2.6 that it is a product of subalgebras \(\mathbf{A}=\prod_{i\leq n}\mathbf{S}_{\mathbf{i}}\). Now, again due to Proposition 3.5, we get \(\Sigma_{\mathbf{L}}(\mathbf{A})=(\{\mathsf{pr}_{i}\mid i\leq n\},\mathbf{im})\), and thus
\[\Pi_{\mathbf{L}}\Sigma_{\mathbf{L}}(\mathbf{A})\cong\prod_{i\leq n}\mathbf{im }(\mathsf{pr}_{i})\cong\prod_{i\leq n}\mathbf{S}_{\mathbf{i}}.\]
To see that \(\Pi_{\mathbf{L}}\) and \(\Sigma_{\mathbf{L}}\) form a dual adjunction we note that for \(\mathbf{A}=\prod_{i\leq n}\mathbf{S}_{\mathbf{i}}\in\mathcal{A}^{\omega}\) and \((X,\mathbf{v})\in\mathsf{Set}_{\mathbf{L}}^{\omega}\) we have
\[\mathcal{A}^{\omega}\big{(}\Pi_{\mathbf{L}}(X,\mathbf{v}),\mathbf{A}\big{)} \cong\prod_{i\leq n}\mathcal{A}^{\omega}\big{(}\Pi_{\mathbf{L}}(X,\mathbf{v} ),\mathbf{S}_{\mathbf{i}}\big{)}\]
and
\[\mathsf{Set}_{\mathbf{L}}^{\omega}\big{(}\Sigma_{\mathbf{L}}(\mathbf{A}),(X, \mathbf{v})\big{)}\cong\mathsf{Set}_{\mathbf{L}}^{\omega}\big{(}\coprod_{i\leq n}(\{ \mathsf{pr}_{i}\},\mathbf{im}),(X,\mathbf{v})\big{)}\cong\prod_{i\leq n}\mathsf{Set }_{\mathbf{L}}^{\omega}\big{(}(\{\mathsf{pr}_{i}\},\mathbf{im}),(X,\mathbf{v} )\big{)}\]
where the coproduct in \(\mathsf{Set}_{\mathbf{L}}^{\omega}\) is the obvious disjoint union. So we only need to show that
\[\mathcal{A}^{\omega}\big{(}\Pi_{\mathbf{L}}(X,\mathbf{v}),\mathbf{S}_{ \mathbf{i}}\big{)}\cong\mathsf{Set}_{\mathbf{L}}^{\omega}\big{(}(\{\mathsf{pr }_{i}\},\mathbf{im}),(X,\mathbf{v})\big{)}.\]
But this is obvious since the elements of the left-hand side are exactly the projections with image contained in \(\mathbf{S}_{\mathbf{i}}\), which are in bijective correspondence with the points of \(X\) with \(\mathbf{v}(x)\leq\mathbf{S}_{\mathbf{i}}\), that is, with elements of the right-hand side.
In order to successfully apply Lemma 3.4, it remains to show the following.
**Theorem 3.7**.: \(\mathsf{Pro}(\mathsf{Set}_{\mathbf{L}}^{\omega})\) _is categorically equivalent to \(\mathsf{Stone_{L}}\)._
Proof.: First we show that the category \(\mathsf{Stone_{L}}\) is complete. For an index set \(I\) (which we often omit), we claim that the product is computed as
\[\prod_{i\in I}(X_{i},\mathbf{v}_{\mathbf{i}})=(\prod_{i\in I}X_{i},\bigvee \mathbf{v}_{\mathbf{i}}),\]
where \(\big{(}\bigvee\mathbf{v}_{\mathbf{i}}\big{)}(p)=\bigvee_{i\in I}\mathbf{v}_{\mathbf{i}}(p_{i})\) for all \(p\in\prod X_{i}\). It follows from
\[(\bigvee\mathbf{v}_{\mathbf{i}})^{-1}(\mathbf{S}\downarrow)=\prod\mathbf{v}_ {\mathbf{i}}{}^{-1}(\mathbf{S}\downarrow)\]
that this defines a member of \(\mathsf{Stone_{L}}\). Note that the projections are morphisms in \(\mathsf{Stone_{L}}\) since
\[\mathbf{v}_{\mathbf{i}}(\pi_{i}(p))=\mathbf{v}_{\mathbf{i}}(p_{i})\leq\bigvee \limits_{j\in I}\mathbf{v}_{\mathbf{j}}(p_{j})=(\bigvee\mathbf{v}_{\mathbf{j}} )(p).\]
If \((\gamma_{i}\colon(Y,\mathbf{w})\to(X_{i},\mathbf{v}_{\mathbf{i}})\mid i\in I)\) is another cone, there is a unique continuous map \(f\colon Y\to\prod X_{i}\) with \(\pi_{i}\circ f=\gamma_{i}\). This map is a morphism in \(\mathsf{Stone_{L}}\) since
\[(\bigvee\mathbf{v}_{\mathbf{i}})(f(y))=\bigvee\mathbf{v}_{\mathbf{i}}\big{(} \pi_{i}(f(y))\big{)}=\bigvee\mathbf{v}_{\mathbf{i}}\big{(}\gamma_{i}(y)\big{)} \leq\mathbf{w}(y),\]
where the last inequality follows from \(\mathbf{v}_{\mathbf{i}}(\gamma_{i}(y))\leq\mathbf{w}(y)\), which is true since the \(\gamma_{i}\) are morphisms in \(\mathsf{Stone_{L}}\). The equalizer of \(f,g\colon(X,\mathbf{v})\to(Y,\mathbf{w})\) is simply given by \((Eq,\mathbf{v}|_{Eq})\) where \(Eq\subseteq X\) is the corresponding equalizer in \(\mathsf{Stone}\). It follows that \(\mathsf{Stone_{L}}\) has all limits. In particular, \(\mathsf{Stone_{L}}\) has all cofiltered limits, so the natural inclusion functor \(\iota\colon\mathsf{Set}_{\mathbf{L}}^{\omega}\hookrightarrow\mathsf{Stone_{L}}\) has a unique cofinitary (that is, cofiltered limit preserving) extension
\[\hat{\iota}\colon\mathsf{Pro}(\mathsf{Set}_{\mathbf{L}}^{\omega})\hookrightarrow \mathsf{Stone_{L}}.\]
Since \(\iota\) is fully faithful, to conclude that the functor \(\hat{\iota}\) is fully faithful as well it suffices to show that \(\iota\) maps all objects to finitely representable objects in \(\mathsf{Stone_{L}}\) (this is due to the analogue of [40, Theorem VI.1.8] for the \(\mathsf{Pro}\)-completion). So we need to show that any \((C,\mathbf{w})\in\mathsf{Stone_{L}}\) where \(C\) is a finite discrete space is finitely representable. In other words, we need to show that, whenever \((X,\mathbf{v})\cong\lim_{i\in I}(X_{i},\mathbf{v_{i}})\) is a cofiltered limit of a diagram \((f_{ij}\colon(X_{j},\mathbf{v_{j}})\to(X_{i},\mathbf{v_{i}})\mid i\leq j)\) in \(\mathsf{Stone_{L}}\) with limit morphisms \(p_{i}\colon(X,\mathbf{v})\to(X_{i},\mathbf{v_{i}})\), any morphism \(f\colon(X,\mathbf{v})\to(C,\mathbf{w})\) factors essentially uniquely through one of the \(p_{i}\). For this we can employ an argument similar to the one in the proof of [55, Lemma 1.1.16(b)]. On the underlying level of \(\mathsf{Stone}\), where finite discrete spaces are finitely representable, the continuous map \(f\) factors essentially uniquely through some \(p_{i}\), say via the continuous map \(g_{i}\colon X_{i}\to C\). However, \(g_{i}\) is not necessarily a morphism in \(\mathsf{Stone_{L}}\). Consider \(J=\{j\geq i\}\) and for each \(j\in J\) define \(g_{j}=f_{ij}\circ g_{i}\). Define the continuous maps \(\mu\colon X\to\mathbb{S}(\mathbf{L})^{2}\) and \(\mu_{j}\colon X_{j}\to\mathbb{S}(\mathbf{L})^{2}\) for all \(j\in J\) by
\[\mu(x)=\big{(}\mathbf{w}(f(x)),\mathbf{v}(x)\big{)}\text{ and }\mu_{j}(x)= \big{(}\mathbf{w}(g_{j}(x)),\mathbf{v_{j}}(x)\big{)}.\]
Since \(\mu(X)=\lim_{j\in J}\mu_{j}(X_{j})=\bigcap_{j\geq i}\mu_{j}(X_{j})\) is contained in the finite set \(\mathbb{S}(\mathbf{L})^{2}\) and \(J\) is directed, there is some \(k\in J\) such that
\[\mu(X)=\mu_{k}(X_{k}).\]
But now, since \(f\) is a morphism in \(\mathsf{Stone_{L}}\), we have that \(\mu(X)\subseteq\{(\mathbf{S},\mathbf{T})\mid\mathbf{S}\leq\mathbf{T}\}\), and thus the same holds for \(\mu_{k}(X_{k})\). Thus \(g_{k}\) is a morphism in \(\mathsf{Stone_{L}}\) which has the desired properties.
To finish the proof we show that \(\hat{\iota}\) is essentially surjective, in other words, we show that every element \((X,\mathbf{v})\) of \(\mathsf{Stone_{L}}\) is isomorphic to a cofiltered limit of elements of \(\mathsf{Set_{L}^{\omega}}\). We do this in a manner similar to [55, Theorem 1.1.12]. Let \(\mathcal{R}\) consist of all finite partitions of \(X\) into clopen sets. Together with the order \(R\leq R^{\prime}\) if and only if \(R^{\prime}\) refines \(R\) this forms a codirected set and in [55, Theorem 1.1.12] it is shown that \(X\cong\lim_{R\in\mathcal{R}}R\). We now turn every \(R\in\mathcal{R}\) into a member of \(\mathsf{Set_{L}^{\omega}}\) by endowing it with an appropriate \(v_{R}\colon R\to\mathbb{S}(\mathbf{L})\) and show that \((X,v)=\lim_{R\in\mathcal{R}}(R,v_{R})\). For \(R\in\mathcal{R}\), say \(R=\{\Omega_{1},\dots,\Omega_{k}\}\), we define
\[v_{R}^{-1}(\mathbf{S}\downarrow)=\{\Omega_{i}\mid\Omega_{i}\cap\mathbf{v}^{-1 }(\mathbf{S}\downarrow)\neq\varnothing\}.\]
The map \(p_{R}\colon X\to R\) defined by \(p_{R}(x)=\Omega_{i}\Leftrightarrow x\in\Omega_{i}\) is a morphism in \(\mathsf{Stone_{L}}\), since \(\mathbf{v}(x)=\mathbf{S}\) and \(x\in\Omega_{i}\) imply \(p_{R}(x)\in v_{R}^{-1}(\mathbf{S}\downarrow)\), that is, \(v_{R}(p_{R}(x))\leq\mathbf{S}\). It is easy to see that this defines a cone over the diagram \((R,v_{R})_{R\in\mathcal{R}}\), so there is a unique \(f\colon(X,\mathbf{v})\to\lim_{R\in\mathcal{R}}(R,v_{R})\) in \(\mathsf{Stone_{L}}\). As in \(\mathsf{Stone}\), the map \(f\) is a homeomorphism. To complete the proof it suffices to show that \(f^{-1}\) is a morphism in \(\mathsf{Stone_{L}}\) as well. Say \(\lim_{R\in\mathcal{R}}(R,v_{R})=(Y,\mathbf{w})\) and let \(\pi_{R}\colon(Y,\mathbf{w})\to(R,v_{R})\) denote the limit morphisms. Assuming \(\mathbf{w}(y)=\mathbf{S}\) we want to show \(f^{-1}(y)\in\mathbf{v}^{-1}(\mathbf{S}\downarrow)\). Let \(\Omega\subseteq X\) be an arbitrary clopen set containing \(f^{-1}(y)\). Then \(R=\{\Omega,X\backslash\Omega\}\in\mathcal{R}\) and
\[\Omega=p_{R}(f^{-1}(y))=\pi_{R}(y)\in v_{R}^{-1}(\mathbf{S}\downarrow).\]
By definition this means that \(\Omega\cap\mathbf{v}^{-1}(\mathbf{S}\downarrow)\neq\varnothing\). Since this holds for every \(\Omega\) containing \(f^{-1}(y)\), this implies that \(f^{-1}(y)\) is in the closure \(\overline{\mathbf{v}^{-1}(\mathbf{S}\downarrow)}\). However, this closure coincides with \(\mathbf{v}^{-1}(\mathbf{S}\downarrow)\), since by definition of \(\mathsf{Stone_{L}}\) this is a closed set already.
As discussed before, this yields our alternative proof of Theorem 3.2. In Section 5 we investigate the other dual equivalence which can be obtained from the finite dual equivalence of Corollary 3.6. More specifically, there we describe \(\mathsf{Ind}(\mathsf{Set}_{\mathbf{L}}^{\omega})\) and its dual, the category of profinite algebras \(\mathsf{Pro}(\mathcal{A}^{\omega})\). This is the 'semi-primal version' of the duality between \(\mathsf{Ind}(\mathsf{Set}^{\omega})\simeq\mathsf{Set}\) and \(\mathsf{Pro}(\mathsf{BA}^{\omega})\simeq\mathsf{CABA}\).
Before that, in the following section we investigate the relationship between \(\mathsf{Stone_{L}}\) and \(\mathsf{Stone}\) and, more interestingly, between \(\mathcal{A}\) and \(\mathsf{BA}\).
## 4. A chain of adjunctions
In this section we explore the relationship between Stone duality and the semi-primal duality discussed in the previous section. We start with the connection between \(\mathsf{Stone_{L}}\) and \(\mathsf{Stone}\), which will be expressed in terms of a chain of four adjoint functors (similar to one in [57]). Then we look at the duals of these functors and give them purely algebraic descriptions to gain insight into the structure of \(\mathcal{A}\) relative to that of \(\mathsf{BA}\). The entire situation is summarized in Figure 3, which will be fully described by the end of this section (note that left-adjoints on the topological side correspond to right-adjoints on the algebraic side and vice versa, since the functors \(\Pi_{\mathbf{L}},\Sigma_{\mathbf{L}}\) and \(\Sigma,\Pi\) which establish the two dualities are contravariant).
### 4.1. Four functors on the topological side
Let \(\mathsf{U}\colon\mathsf{Stone_{L}}\to\mathsf{Stone}\) be the obvious forgetful functor. This functor has a left-adjoint and a right-adjoint \(\mathsf{V}^{\top}\dashv\mathsf{U}\dashv\mathsf{V}^{\perp}\). The two functors \(\mathsf{V}^{\top},\mathsf{V}^{\perp}\colon\mathsf{Stone}\to\mathsf{Stone_{L}}\) are given on objects by
\[V^{\top}(X) =(X,\mathbf{v}^{\top})\text{ where }\forall x\in X:\mathbf{v}^{ \top}(x)=\mathbf{L},\] \[V^{\perp}(X) =(X,\mathbf{v}^{\perp})\text{ where }\forall x\in X:\mathbf{v}^{ \perp}(x)=\langle 0,1\rangle\]
and both assign every morphism to itself. Here \(\langle 0,1\rangle\) is the subalgebra generated by \(\{0,1\}\), that is, the (unique) smallest subalgebra of \(\mathbf{L}\).
To see \(V^{\top}\dashv\mathsf{U}\) note that by definition we have
\[m\in\mathsf{Stone_{L}}\big{(}(X,\mathbf{v}^{\top}),(Y,\mathbf{w})\big{)} \Leftrightarrow m\in\mathsf{Stone}(X,Y)\wedge\forall x\in X:\mathbf{w}(m(x)) \leq\mathbf{v}^{\top}(x),\]
and \(\mathbf{w}(m(x))\leq\mathbf{v}^{\top}(x)=\mathbf{L}\) is trivially satisfied for every \(m\in\mathsf{Stone}(X,Y)\).
Figure 3. The chain of adjunctions on the topological and the algebraic side.
Similarly we see \(\mathsf{U}\dashv\mathsf{V}^{\perp}\), since every \(m\in\mathsf{Stone}(X,Y)\) automatically satisfies \(\mathbf{v}^{\perp}(m(x))\leq\mathbf{w}(x)\) and, therefore, \(m\in\mathsf{Stone_{L}}\big{(}(X,\mathbf{w}),(Y,\mathbf{v}^{\perp})\big{)}\).
The functor \(\mathsf{V}^{\perp}\) also has a right-adjoint \(\mathsf{C}:\mathsf{Stone_{L}}\to\mathsf{Stone}\) defined by
\[\mathsf{C}(X,\mathbf{v})=\{x\in X\mid\mathbf{v}(x)=\langle 0,1\rangle\}\]
on objects. On morphisms \(m\colon(X,\mathbf{v})\to(Y,\mathbf{w})\) it acts via restriction \(m\mapsto m|_{\mathsf{C}(X,v)}\), which is well-defined since \(m\in\mathsf{Stone_{L}}\big{(}(X,\mathbf{v}),(Y,\mathbf{w})\big{)}\) and \(x\in\mathsf{C}(X,\mathbf{v})\) means
\[\mathbf{w}(m(x))\leq\mathbf{v}(x)=\langle 0,1\rangle\]
which is equivalent to \(m(x)\in\mathsf{C}(Y,\mathbf{w})\). Again \(\mathsf{V}^{\perp}\dashv\mathsf{C}\) is easy to see since
\[m\in\mathsf{Stone_{L}}\big{(}(X,\mathbf{v}^{\perp}),(Y,\mathbf{w})\big{)} \Leftrightarrow\forall x:\mathbf{w}(m(x))\leq\langle 0,1\rangle \Leftrightarrow m\in\mathsf{Stone}\big{(}X,\mathsf{C}(Y,\mathbf{w})\big{)}.\]
The functor \(\mathsf{V}^{\top}\) preserves almost all limits; however, there is one important exception. The terminal object (that is, the limit of the empty diagram) in \(\mathsf{Stone_{L}}\) is given by \((\{*\},\mathbf{v}^{\perp})\), implying that \(\mathsf{V}^{\top}\) does not preserve terminal objects. Therefore, contrary to a claim made in [57], no further left-adjoint of \(\mathsf{V}^{\top}\) exists.
It is obvious that both the unit \(\mathsf{id_{Stone}}\Rightarrow\mathsf{U}\circ\mathsf{V}^{\top}\) of the adjunction \(\mathsf{V}^{\top}\dashv\mathsf{U}\) and the counit \(\mathsf{U}\circ\mathsf{V}^{\perp}\Rightarrow\mathsf{id_{Stone}}\) of the adjunction \(\mathsf{U}\dashv\mathsf{V}^{\perp}\) are natural isomorphisms. We record this fact, which will also be interesting on the algebraic side.
**Proposition 4.1**.: _The category \(\mathsf{Stone}\) is categorically equivalent to_
1. _a coreflective subcategory of_ \(\mathsf{Stone_{L}}\)_, witnessed by the fully faithful functor_ \(\mathsf{V}^{\top}\)_;_
2. _a reflective and coreflective subcategory of_ \(\mathsf{Stone_{L}}\)_, witnessed by the fully faithful functor_ \(\mathsf{V}^{\perp}\)_._
The functors described in this subsection can be carried through the dualities, resulting in a corresponding chain of adjunctions between \(\mathcal{A}\) and \(\mathsf{BA}\). For example, the dual of \(\mathsf{U}\) is given by \(\Pi\mathsf{U}\Sigma_{\mathbf{L}}\colon\mathcal{A}\to\mathsf{BA}\). In the next subsection we show that this functor can be understood algebraically as the Boolean skeleton. Throughout the subsections that follow, we will give similar algebraic descriptions for all of these functors between \(\mathcal{A}\) and \(\mathsf{BA}\) in Figure 3.
### 4.2. The Boolean skeleton functor
In the theory of \(\mathsf{MV}_{n}\)-algebras (that is, the case where \(\mathbf{L}=\mathbf{L}_{n}\)), the Boolean skeleton is a well-known and useful tool (see, for example, [16]). An appropriate generalization of this concept to our setting was made by Maruyama in [43] (where it is called the Boolean core).
Due to Proposition 2.8 and [43, Lemma 3.11], the following definition is justified.
**Definition 4.2**.: Let \(\mathbf{A}\in\mathcal{A}\). The _Boolean skeleton_ of \(\mathbf{A}\) is the Boolean algebra \(\mathfrak{S}(\mathbf{A})=(\mathfrak{S}(A),\wedge,\vee,T_{0},0,1)\) on the carrier set
\[\mathfrak{S}(A)=\{a\in A\mid T_{1}(a)=a\},\]
where the lattice operations \(\wedge\) and \(\vee\) are inherited from \(\mathbf{A}\) and the unary operations \(T_{0}\) and \(T_{1}\) correspond to the ones from Definition 2.7 (which by Proposition 2.8 are term-definable in \(\mathbf{L}\)), interpreted in \(\mathbf{A}\).
For example, for each \(\mathbf{A}\in\mathcal{A},a\in\mathbf{A}\) and \(\ell\in\mathbf{L}\) we have \(T_{\ell}(a)\in\mathfrak{S}(\mathbf{A})\). This holds since the equation \(T_{1}(T_{\ell}(x))\approx T_{\ell}(x)\) holds in \(\mathbf{L}\), and therefore also in \(\mathbf{A}\).
_Remark 5_.: For \(\mathbf{A}\in\mathcal{A}\), suppose that \(A^{\prime}\subseteq A\) is a subset such that \((A^{\prime},\wedge,\vee,T_{0},0,1)\) forms a Boolean algebra. Then, for all \(a^{\prime}\in A^{\prime}\), we have \(T_{1}(a^{\prime})=T_{1}(T_{0}(T_{0}(a^{\prime})))=T_{0}(T_{0}(a^{\prime}))=a^{\prime}\) and thus \(a^{\prime}\in\mathfrak{S}(A)\) (the second equation always holds since
\(T_{1}(T_{0}(x))\approx T_{0}(x)\), which is easily checked in \(\mathbf{L}\)). Therefore, \(\mathfrak{S}(A)\) is the largest such subset.
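For example, \(\mathfrak{S}(\mathbf{L})=\{0,1\}\), since \(T_{1}\) only takes the values \(0\) and \(1\) and fixes both of them. More generally, since all operations of a power \(\mathbf{L}^{X}\) act coordinatewise, \(\mathfrak{S}(\mathbf{L}^{X})\) consists of the maps \(X\to\{0,1\}\), that is, \(\mathfrak{S}(\mathbf{L}^{X})\cong\mathbf{2}^{X}\).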
To extend the construction of the Boolean skeleton to a functor \(\mathfrak{S}\colon\mathcal{A}\to\mathsf{BA}\), on homomorphisms \(f\in\mathcal{A}(\mathbf{A}_{1},\mathbf{A}_{2})\) we define \(\mathfrak{S}f\) to be the restriction \(f|_{\mathfrak{S}(\mathbf{A}_{1})}\). This is well-defined since
\[a\in\mathfrak{S}(\mathbf{A}_{1})\Leftrightarrow T_{1}(a)=a\Rightarrow T_{1}(f( a))=f(T_{1}(a))=f(a)\Leftrightarrow f(a)\in\mathfrak{S}(\mathbf{A}_{2}).\]
The following is arguably the most important property of the Boolean skeleton.
**Proposition 4.3**.: _For all \(\mathbf{A}\in\mathcal{A}\), there is a homeomorphism between \(\mathsf{U}\Sigma_{\mathbf{L}}(\mathbf{A})=\mathcal{A}(\mathbf{A},\mathbf{L})\) and \(\Sigma\mathfrak{S}(\mathbf{A})=\mathsf{BA}(\mathfrak{S}(\mathbf{A}),\mathbf{ 2})\) given by \(h\mapsto h|_{\mathfrak{S}(\mathbf{A})}\)._
Proof.: First we show that the map is a bijection. For injectivity, suppose that \(g\) and \(h\) satisfy \(g|_{\mathfrak{S}(A)}=h|_{\mathfrak{S}(A)}\). Take an arbitrary element \(a\in\mathbf{A}\) and let \(g(a)=\ell\). Using that \(T_{\ell}(a)\in\mathfrak{S}(\mathbf{A})\) we get
\[1=T_{\ell}(g(a))=g(T_{\ell}(a))=h(T_{\ell}(a))=T_{\ell}(h(a)),\]
which implies \(h(a)=\ell\) and, since \(a\) was arbitrary, that \(g=h\). For surjectivity, let \(h\in\mathsf{BA}(\mathfrak{S}(\mathbf{A}),\mathbf{2})\) be arbitrary. Due to [43, Lemma 3.12] the following yields a well-defined homomorphism \(\bar{h}\in\mathcal{A}(\mathbf{A},\mathbf{L})\):
\[\bar{h}(a)=\ell\Leftrightarrow h(T_{\ell}(a))=1.\]
Since for \(a\in\mathfrak{S}(\mathbf{A})\) we have
\[h(T_{1}(a)) =1\Leftrightarrow h(a)=1\text{ and }\] \[h(T_{0}(a)) =1\Leftrightarrow T_{0}(h(a))=1\Leftrightarrow h(a)=0,\]
we conclude that \(\bar{h}|_{\mathfrak{S}(\mathbf{A})}=h\).
We now have a bijection between two Stone spaces, so it only remains to show that it is continuous. But this is easy to see since the preimage of an open subbasis element \([a:i]\subseteq\mathsf{BA}(\mathfrak{S}(\mathbf{A}),\mathbf{2})\) is the open subbasis element \([a:i]\subseteq\mathcal{A}(\mathbf{A},\mathbf{L})\).
**Corollary 4.4**.: _There is a natural isomorphism between the functor \(\mathfrak{S}\) and the dual \(\Pi\mathsf{U}\Sigma_{\mathbf{L}}\) of the forgetful functor \(\mathsf{U}\)._
Proof.: By Proposition 4.3, for every \(\mathbf{A}\in\mathcal{A}\), setting
\[\phi_{\mathbf{A}}\colon\mathsf{U}\Sigma_{\mathbf{L}}(\mathbf{A}) \to\Sigma\mathfrak{S}(\mathbf{A})\] \[h\mapsto h|_{\mathfrak{S}(\mathbf{A})}\]
defines a natural isomorphism \(\phi\colon\mathsf{U}\Sigma_{\mathbf{L}}\Rightarrow\Sigma\mathfrak{S}\) (naturality is easy to check using the definitions of \(\Sigma,\Sigma_{\mathbf{L}}\) and \(\mathfrak{S}\) on morphisms). Applying \(\Pi\) and using the fact that \(\Pi\Sigma\) is naturally isomorphic to \(\mathsf{id}_{\mathsf{BA}}\), we get the natural isomorphism \(\Pi\phi\colon\mathfrak{S}\Rightarrow\Pi\mathsf{U}\Sigma_{\mathbf{L}}\).
In the next subsection we explain the right-adjoint of the Boolean skeleton functor.
### The Boolean power functor
In this subsection we give an algebraic description of a functor naturally isomorphic to the dual \(\Pi_{\mathbf{L}}\mathsf{V}^{\top}\Sigma\) of the functor \(\mathsf{V}^{\top}\). This functor, which we call \(\mathfrak{P}\), turns out to be an instance of the well-known _Boolean power_ (or _Boolean extension_), which was introduced for arbitrary finite algebras in Foster's first paper on primal algebras [24]. Boolean powers are special instances of Boolean products (see, e.g., [11, Chapter IV]), but for our purposes it is more convenient to work with the following equivalent definition found in [9].
**Definition 4.5**.: Given a Boolean algebra \(\mathbf{B}\in\mathsf{BA}\) and a finite algebra \(\mathbf{M}\), the _Boolean power_\(\mathbf{M}[\mathbf{B}]\) is defined on the carrier set
\[M[B]\subseteq B^{M}\]
consisting of all maps \(\xi\colon M\to B\) which satisfy
1. If \(\ell\) and \(\ell^{\prime}\) are distinct elements of \(M\), then \(\xi(\ell)\wedge\xi(\ell^{\prime})=0\),
2. \(\bigvee\{\xi(\ell)\mid\ell\in M\}=1\).
If \(o^{\mathbf{M}}\colon M^{k}\to M\) is a \(k\)-ary operation of \(\mathbf{M}\), we define a corresponding operation \(o^{\mathbf{M}[\mathbf{B}]}\colon M[B]^{k}\to M[B]\) by
\[o^{\mathbf{M}[\mathbf{B}]}(\xi_{1},\dots,\xi_{k})(\ell)=\bigvee_{o^{\mathbf{ M}}(\ell_{1},\dots,\ell_{k})=\ell}(\xi_{1}(\ell_{1})\wedge\dots\wedge\xi_{k}( \ell_{k})).\]
The resulting algebra \(\mathbf{M}[\mathbf{B}]=(M[B],o^{\mathbf{M}[\mathbf{B}]})\) is a member of the variety \(\mathbb{HSP}(\mathbf{M})\) generated by \(\mathbf{M}\) (since it satisfies the same equations as \(\mathbf{M}\)).
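To see how Definition 4.5 behaves in the simplest case (anticipating Lemma 4.7 below), take \(\mathbf{B}=\mathbf{2}\). Conditions (1) and (2) then say that \(\xi\colon M\to\{0,1\}\) takes the value \(1\) at exactly one element \(m_{\xi}\in M\), and the operation formula gives

\[o^{\mathbf{M}[\mathbf{2}]}(\xi_{1},\dots,\xi_{k})(\ell)=1\iff o^{\mathbf{M}}(m_{\xi_{1}},\dots,m_{\xi_{k}})=\ell,\]

so \(\xi\mapsto m_{\xi}\) defines an isomorphism \(\mathbf{M}[\mathbf{2}]\cong\mathbf{M}\).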
There is a straightforward way to extend this construction to a functor.
**Definition 4.6**.: Given a finite algebra \(\mathbf{M}\), we define the functor \(\mathfrak{P}_{\mathbf{M}}\colon\mathsf{BA}\to\mathbb{HSP}(\mathbf{M})\) as follows. On objects \(\mathbf{B}\in\mathsf{BA}\) we define
\[\mathfrak{P}_{\mathbf{M}}(\mathbf{B})=\mathbf{M}[\mathbf{B}].\]
For a Boolean homomorphism \(\varphi\colon\mathbf{B}\to\mathbf{B}^{\prime}\), the homomorphism \(\mathfrak{P}_{\mathbf{M}}\varphi\colon\mathbf{M}[\mathbf{B}]\to\mathbf{M}[ \mathbf{B}^{\prime}]\) is defined via composition \(\xi\mapsto\varphi\circ\xi\) (this is a homomorphism because operations in \(\mathbf{M}[\mathbf{B}]\) are defined by Boolean expressions, which commute with \(\varphi\)).
In particular, we will use the shorthand notation \(\mathfrak{P}\) for \(\mathfrak{P}_{\mathbf{L}}\). In the remainder of this subsection we aim to show that \(\mathfrak{P}\) is indeed the right-adjoint of the Boolean skeleton functor \(\mathfrak{S}\). For this, we need the following well-known properties of the Boolean power.
**Lemma 4.7**.: _[_9_, Proposition 2.1]_ _The functor \(\mathfrak{P}_{\mathbf{M}}\) has the following properties:_
1. \(\mathfrak{P}_{\mathbf{M}}(\mathbf{2})\cong\mathbf{M}\)_,_
2. \(\mathfrak{P}_{\mathbf{M}}\) _preserves products._
_In particular, \(\mathfrak{P}_{\mathbf{M}}(\mathbf{2}^{\kappa})\cong\mathbf{M}^{\kappa}\) holds for all index sets \(\kappa\)._
In the next proposition we describe the interplay between the functors \(\mathfrak{S}\) and \(\mathfrak{P}\). Again, the terms \(T_{\ell}\) from Proposition 2.8 play an important role.
**Proposition 4.8**.: _For every \(\mathbf{A}\in\mathcal{A}\) there is an embedding \(\mathcal{T}_{(\cdot)}\colon\mathbf{A}\hookrightarrow\mathfrak{P}(\mathfrak{S }(\mathbf{A}))\) given by \(a\mapsto\mathcal{T}_{a}\) where_
\[\mathcal{T}_{a}(\ell)=T_{\ell}(a).\]
_The restriction to \(\mathfrak{S}(\mathbf{A})\) yields an isomorphism \(\mathfrak{S}(\mathbf{A})\cong\mathfrak{S}\big{(}\mathfrak{P}(\mathfrak{S}( \mathbf{A}))\big{)}\)._
Proof.: The map is well-defined, that is, \(\mathcal{T}_{a}\) is in \(\mathfrak{P}(\mathfrak{S}(\mathbf{A}))\), since the equations \(T_{\ell}(x)\wedge T_{\ell^{\prime}}(x)\approx 0\) (for distinct \(\ell,\ell^{\prime}\)) and \(\bigvee\{T_{\ell}(x)\mid\ell\in L\}\approx 1\) hold in \(\mathbf{L}\).
We now fix an embedding \(\mathbf{A}\hookrightarrow\mathbf{L}^{I}\). It is easy to see that \(\mathcal{T}_{(\cdot)}\) is injective since, for distinct \(a,a^{\prime}\in\mathbf{A}\), there is some component \(i\in I\) with \(a(i)=\ell\neq a^{\prime}(i)\), thus \(\mathcal{T}_{a}(\ell)\neq\mathcal{T}_{a^{\prime}}(\ell)\). To conclude that \(\mathcal{T}_{(\cdot)}\) is an embedding we need to show that it is a homomorphism, that is, we want to show that for any \(k\)-ary operation \(o\colon L^{k}\to L\) of \(\mathbf{L}\) we have
\[\mathcal{T}_{o^{\mathbf{A}}(a_{1},\dots,a_{k})}=o^{\mathbf{L}[\mathfrak{S}(\mathbf{A})]}(\mathcal{T}_{a_{1}},\dots,\mathcal{T}_{a_{k}}).\]
By definition the \(i\)-th component of the left-hand side is given by
\[\mathcal{T}_{o^{\mathbf{A}}(a_{1},\ldots,a_{k})}(\ell)(i)=T_{\ell}\big{(}o^{\mathbf{L}}(a_{1}(i),\ldots,a_{k}(i))\big{)}=\begin{cases}1&\text{ if }o^{\mathbf{L}}(a_{1}(i),\ldots,a_{k}(i))=\ell\\ 0&\text{ otherwise.}\end{cases}\]
The right-hand side is given by
\[o^{\mathbf{L}[\mathfrak{S}(\mathbf{A})]}(\mathcal{T}_{a_{1}},\ldots,\mathcal{T}_{a_{k}})(\ell)=\bigvee_{o^{\mathbf{L}}(\ell_{1},\ldots,\ell_{k})=\ell}(\mathcal{T}_{a_{1}}(\ell_{1})\wedge\cdots\wedge\mathcal{T}_{a_{k}}(\ell_{k})).\]
In its \(i\)-th component this again corresponds to
\[\bigvee_{o^{\mathbf{L}}(\ell_{1},\ldots,\ell_{k})=\ell}\big{(}T_{\ell_{1}}(a_ {1}(i))\wedge\cdots\wedge T_{\ell_{k}}(a_{k}(i))\big{)}=\begin{cases}1&\text{ if }o^{\mathbf{L}}(a_{1}(i),\ldots,a_{k}(i))=\ell\\ 0&\text{ otherwise.}\end{cases}\]
Thus \(\mathcal{T}_{(\cdot)}\) is an embedding, which concludes the proof of the first statement.
For the second statement, note that, since \(\mathfrak{S}\) preserves injectivity of homomorphisms, it suffices to show that the restriction of \(\mathcal{T}_{(\cdot)}\) to \(\mathfrak{S}(\mathbf{A})\) is a surjection onto \(\mathfrak{S}\big{(}\mathfrak{P}(\mathfrak{S}(\mathbf{A}))\big{)}\). So consider an element \(\xi\in\mathfrak{S}\big{(}\mathfrak{P}(\mathfrak{S}(\mathbf{A}))\big{)}\), that is \(\xi\in\mathfrak{P}(\mathfrak{S}(\mathbf{A}))\) and \(T_{1}^{\mathbf{L}[\mathfrak{S}(\mathbf{A})]}(\xi)=\xi.\) The latter by definition means
\[T_{1}^{\mathbf{L}[\mathfrak{S}(\mathbf{A})]}(\xi)(1)=\xi(1),\] \[T_{1}^{\mathbf{L}[\mathfrak{S}(\mathbf{A})]}(\xi)(0)=\bigvee\{ \xi(\ell)\mid\ell\in L,\ell\neq 1\}=\xi(0)\text{ and }\] \[T_{1}^{\mathbf{L}[\mathfrak{S}(\mathbf{A})]}(\xi)(\ell)=\bigvee \varnothing=0=\xi(\ell)\text{ for all }\ell\in L\backslash\{0,1\}.\]
We claim that \(\xi=\mathcal{T}_{\xi(1)}.\) Indeed, we know that \(\xi(1)\in\mathfrak{S}(\mathbf{A})\) so \(\xi(1)=T_{1}(\xi(1))\). Furthermore, in the component \(i\in I\), we have \(\xi(0)(i)=1\) if and only if \(\xi(1)(i)=0\), so \(T_{0}(\xi(1))=T_{1}(\xi(0))=\xi(0)\) since \(\xi(0)\in\mathfrak{S}(\mathbf{A})\). Finally, for \(\ell\not\in\{0,1\}\) we have \(T_{\ell}(\xi(1))=0\) since for all \(i\in I\) we have \(\xi(1)(i)\in\{0,1\}\). This concludes the proof.
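As an illustration, take \(\mathbf{A}=\mathbf{L}\). Then \(\mathfrak{S}(\mathbf{L})\cong\mathbf{2}\), and hence \(\mathfrak{P}(\mathfrak{S}(\mathbf{L}))\cong\mathbf{L}[\mathbf{2}]\cong\mathbf{L}\) by Lemma 4.7(1). Under these identifications the embedding sends \(a\in L\) to

\[\mathcal{T}_{a}=\big{(}T_{\ell}(a)\big{)}_{\ell\in L},\]

the characteristic function of \(\{a\}\), so \(\mathcal{T}_{(\cdot)}\) recovers exactly the isomorphism \(\mathbf{L}\cong\mathbf{L}[\mathbf{2}]\) discussed after Definition 4.5.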
Since \(\mathfrak{S}\) is dual to the essentially surjective functor \(\mathsf{U}\), we know that every \(\mathbf{B}\in\mathsf{BA}\) is isomorphic to \(\mathfrak{S}(\mathbf{A})\) for some \(\mathbf{A}\in\mathcal{A}\). Therefore, the following is a direct consequence of the second part of Proposition 4.8.
**Corollary 4.9**.: _Every Boolean algebra \(\mathbf{B}\in\mathsf{BA}\) is isomorphic to \(\mathfrak{S}(\mathfrak{P}(\mathbf{B}))\)._
Another immediate consequence of Proposition 4.8 is the following.
**Corollary 4.10**.: _For every Boolean algebra \(\mathbf{B}\in\mathsf{BA}\), the algebra \(\mathfrak{P}(\mathbf{B})\) is the largest algebra in \(\mathcal{A}\) which has \(\mathbf{B}\) as Boolean skeleton. That is, for every algebra \(\mathbf{A}\in\mathcal{A}\) with \(\mathfrak{S}(\mathbf{A})\cong\mathbf{B}\) there exists an embedding \(\mathbf{A}\hookrightarrow\mathfrak{P}(\mathbf{B})\)._
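For example, taking \(\mathbf{B}=\mathbf{2}\) we obtain the equivalence

\[\mathfrak{S}(\mathbf{A})\cong\mathbf{2}\iff\mathbf{A}\cong\mathbf{S}\text{ for some }\mathbf{S}\in\mathbb{S}(\mathbf{L}):\]

one direction is Corollary 4.10 together with \(\mathfrak{P}(\mathbf{2})\cong\mathbf{L}\), and conversely every subalgebra \(\mathbf{S}\leq\mathbf{L}\) contains \(0\) and \(1\) and satisfies \(\mathfrak{S}(\mathbf{S})=\{0,1\}\cong\mathbf{2}\). An analogous equivalence reappears as (4) in the proof of Theorem 4.19.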
We now have everything at hand to prove the main theorem of this subsection.
**Theorem 4.11**.: \(\mathfrak{P}\) _is naturally isomorphic to the dual of \(\mathsf{V}^{\top}\) and, therefore,_
\[\mathfrak{S}\dashv\mathfrak{P}.\]
Proof.: First we prove the statement on the finite level. In other words, we want to show that, in \(\mathsf{Stone_{L}}\),
\[\Sigma_{\mathbf{L}}\mathfrak{P}(\mathbf{B})\cong\mathsf{V}^{\top}\Sigma( \mathbf{B})\]
holds for every finite Boolean algebra \(\mathbf{B}\). More explicitly, after spelling out the definitions of the functors involved, we want to show
\[\big{(}\mathcal{A}(\mathfrak{P}(\mathbf{B}),\mathbf{L}),\mathbf{im}\big{)} \cong\big{(}\mathsf{BA}(\mathbf{B},\mathbf{2}),\mathbf{v}^{\top}\big{)} \tag{2}\]
for every finite Boolean algebra \(\mathbf{B}\). First, since \(\mathbf{B}\) is finite there is some positive integer \(k\) such that \(\mathbf{B}\cong\mathbf{2}^{k}\). We combine the following isomorphisms in \(\mathsf{Stone}\). Due to Proposition 4.3 we know

\[\mathcal{A}(\mathfrak{P}(\mathbf{B}),\mathbf{L})\cong\mathsf{BA}\big{(}\mathfrak{S}(\mathfrak{P}(\mathbf{B})),\mathbf{2}\big{)}.\]
And due to Corollary 4.9 we know
\[\mathfrak{S}(\mathfrak{P}(\mathbf{B}))\cong\mathbf{B}.\]
Putting these together, we get
\[\mathcal{A}(\mathfrak{P}(\mathbf{B}),\mathbf{L})\cong\mathsf{BA}(\mathbf{B}, \mathbf{2}).\]
In fact, this even yields an isomorphism in \(\mathsf{Stone_{L}}\) as desired in (2), because
\[\big{(}\mathcal{A}(\mathfrak{P}(\mathbf{B}),\mathbf{L}),\mathbf{im}\big{)} \cong\big{(}\mathcal{A}(\mathbf{L}^{k},\mathbf{L}),\mathbf{im}\big{)}\cong \big{(}\mathcal{A}(\mathbf{L}^{k},\mathbf{L}),\mathbf{v}^{\top}\big{)}\]
where the last equality holds due to Proposition 3.5.
So we know that the restriction of \(\mathfrak{P}\) to the category of finite Boolean algebras \(\mathfrak{P}^{\omega}\colon\mathsf{BA}^{\omega}\to\mathcal{A}\) is dual to the restriction \((\mathsf{V}^{\top})^{\omega}\) of \(\mathsf{V}^{\top}\) to the category \(\mathsf{Set}^{\omega}_{\mathbf{L}}\). There is a unique (up to natural iso) finitary (i.e., filtered colimit preserving) extension of \(\mathfrak{P}^{\omega}\) to \(\mathsf{Ind}(\mathsf{BA}^{\omega})\simeq\mathsf{BA}\), and this extension is naturally isomorphic to the dual of \(\mathsf{V}^{\top}\) (since \(\mathsf{V}^{\top}\) preserves all limits except for the terminal object, it is the unique cofinitary extension of \((\mathsf{V}^{\top})^{\omega}\)). To show that \(\mathfrak{P}\) coincides with this unique extension (up to natural isomorphism), it suffices to show that \(\mathfrak{P}\) is finitary as well. Since \(\mathfrak{P}\) preserves monomorphisms (it is easy to see by definition that if \(\varphi\in\mathsf{BA}(\mathbf{B}_{1},\mathbf{B}_{2})\) is injective, then \(\mathfrak{P}\varphi\) is injective as well), we can apply [2, Theorem 3.4], which states that \(\mathfrak{P}\) is finitary if and only if the following holds.
_For every Boolean algebra \(\mathbf{B}\in\mathsf{BA}\), for every finite subalgebra \(\mathbf{A}\hookrightarrow\mathfrak{P}(\mathbf{B})\) the inclusion factors through the image of the inclusion of some finite subalgebra \(\mathbf{B}^{\prime}\hookrightarrow\mathbf{B}\) under \(\mathfrak{P}\)._
To see this, write \(\mathbf{A}\cong\prod_{i\leq n}\mathbf{S}_{i}\) as a product of finite subalgebras of \(\mathbf{L}\). Then, by Corollary 4.9, we know that \(\mathfrak{S}(\mathbf{A})\cong\mathbf{2}^{n}\) embeds into \(\mathbf{B}\). Now by Lemma 4.7 we have \(\mathfrak{P}(\mathbf{2}^{n})\cong\mathbf{L}^{n}\), and the natural inclusion \(\prod_{i\leq n}\mathbf{S}_{i}\hookrightarrow\mathbf{L}^{n}\) yields the desired factorization.
This concludes the proof.
In particular, if \(\mathbf{L}\) is primal, we get an explicit categorical equivalence witnessing Hu's theorem.
**Corollary 4.12**.: _[_36_]_ _If \(\mathbf{L}\) is primal, then \(\mathfrak{S}\dashv\mathfrak{P}\) yields a categorical equivalence between \(\mathcal{A}\) and \(\mathsf{BA}\)._
We also get an algebraic analogue of Proposition 4.1(i).
**Corollary 4.13**.: _The functor \(\mathfrak{P}\) is fully faithful and identifies \(\mathsf{BA}\) with a reflective subcategory of \(\mathcal{A}\)._
By now we have found detailed descriptions of most of the functors appearing in Figure 3. What we are still missing is an algebraic understanding of the adjunction \(\mathsf{Q}\dashv\mathsf{I}\). This
gap is filled in the next subsection. As we will see, it is closely connected to the adjunction \(\mathfrak{S}\dashv\mathfrak{P}\).
### The subalgebra adjunctions
For every subalgebra \(\mathbf{S}\leq\mathbf{L}\), there is an adjunction
\[\mathsf{V}^{\mathbf{S}}\colon\mathsf{Stone}\rightleftarrows\mathsf{Stone}_{\mathbf{L}}\colon\mathsf{C}^{\mathbf{S}} \tag{3}\]
which we explore in this subsection.
The functor \(\mathsf{V}^{\mathbf{S}}\colon\mathsf{Stone}\to\mathsf{Stone}_{\mathbf{L}}\) is given on objects by
\[\mathsf{V}^{\mathbf{S}}(X)=(X,\mathbf{v}^{\mathbf{S}})\text{ where }\forall x \in X:\mathbf{v}^{\mathbf{S}}(x)=\mathbf{S},\]
and assigns every morphism to itself.
The functor \(\mathbf{C}^{\mathbf{S}}\colon\mathsf{Stone}_{\mathbf{L}}\to\mathsf{Stone}\) is given on objects by
\[\mathsf{C}^{\mathbf{S}}(X,\mathbf{v})=\{x\in X\mid\mathbf{v}(x)\leq\mathbf{S}\}.\]
On morphisms it acts via restriction, that is, given a morphism \(m\colon(X,\mathbf{v})\to(Y,\mathbf{w})\), define \(m\mid_{\mathsf{C}^{\mathbf{S}}(X)}\colon\mathsf{C}^{\mathbf{S}}(X)\to\mathsf{ C}^{\mathbf{S}}(Y)\). This is well-defined since
\[x\in\mathsf{C}^{\mathbf{S}}(X,\mathbf{v})\Leftrightarrow\mathbf{v}(x)\leq\mathbf{S}\Rightarrow\mathbf{w}(m(x))\leq\mathbf{v}(x)\leq\mathbf{S}\Rightarrow m(x)\in\mathsf{C}^{\mathbf{S}}(Y,\mathbf{w}).\]
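Note that, by the above, a continuous map \(m\colon X\to Y\) is a \(\mathsf{Stone}_{\mathbf{L}}\)-morphism \(\mathsf{V}^{\mathbf{S}}(X)\to(Y,\mathbf{w})\) precisely when \(\mathbf{w}(m(x))\leq\mathbf{S}\) for all \(x\in X\), that is, precisely when \(m\) corestricts to \(\mathsf{C}^{\mathbf{S}}(Y,\mathbf{w})\). This gives a natural bijection

\[\mathsf{Stone}_{\mathbf{L}}\big{(}\mathsf{V}^{\mathbf{S}}(X),(Y,\mathbf{w})\big{)}\cong\mathsf{Stone}\big{(}X,\mathsf{C}^{\mathbf{S}}(Y,\mathbf{w})\big{)}.\]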
This verifies \(\mathsf{V}^{\mathbf{S}}\dashv\mathsf{C}^{\mathbf{S}}\), in line with Subsection 4.1. Indeed, the adjunction \(\mathsf{V}^{\mathbf{S}}\dashv\mathsf{C}^{\mathbf{S}}\) generalizes the following adjunctions in Figure 3:
* \(\mathsf{V}^{\top}\dashv\mathsf{U}\) in the case where \(\mathbf{S}=\mathbf{L}\) is the largest subalgebra of \(\mathbf{L}\),
* \(\mathsf{V}^{\perp}\dashv\mathsf{C}\) in the case where \(\mathbf{S}=\langle 0,1\rangle\) is the smallest subalgebra of \(\mathbf{L}\).
What is special about these two extreme cases is the additional adjunction \(\mathsf{U}\dashv\mathsf{V}^{\top}\), which 'glues' the two adjunctions into the chain described in Subsection 4.1.
To better understand the adjunction corresponding to a subalgebra \(\mathbf{S}\leq\mathbf{L}\), we dissect it into two parts as follows:

\[\mathsf{Stone}\ \overset{\mathsf{V}^{\top}}{\underset{\mathsf{U}}{\rightleftarrows}}\ \mathsf{Stone}_{\mathbf{S}}\ \overset{\iota^{\mathbf{S}}}{\underset{(\mathsf{C}^{\mathbf{S}},-)}{\rightleftarrows}}\ \mathsf{Stone}_{\mathbf{L}}\]
Here, \(\iota^{\mathbf{S}}\) is the natural inclusion and the functor \((\mathsf{C}^{\mathbf{S}},-)\) is defined by
\[(X,\mathbf{v})\mapsto(\mathsf{C}^{\mathbf{S}}(X),\mathbf{v}|_{\mathsf{C}^{ \mathbf{S}}(X)})\]
on objects and, exactly like \(\mathsf{C}^{\mathbf{S}}\), acts via restriction on morphisms. It is easy to see that this really is a decomposition of the adjunction (3), that is,
\[\mathsf{V}^{\mathbf{S}}=\iota^{\mathbf{S}}\circ\mathsf{V}^{\top}\text{ and }\mathsf{C}^{\mathbf{S}}=\mathsf{U}\circ(\mathsf{C}^{\mathbf{S}},-).\]
As before, we want to carry everything over to the algebraic side, where the dissection takes place through the subvariety
\[\mathcal{A}_{\mathbf{S}}:=\mathbb{H}\mathbb{S}\mathbb{P}(\mathbf{S}).\]
We illustrate the entire situation in Figure 4. Note that \(\mathbf{S}\leq\mathbf{L}\) is itself semi-primal, so the semi-primal duality given by \(\Sigma_{\mathbf{S}}\) and \(\Pi_{\mathbf{S}}\) as well as the adjunction \(\mathfrak{S}\dashv\mathfrak{P}_{\mathbf{S}}\) make sense in this context.

Figure 4. Dissecting the subalgebra adjunction of \(\mathbf{S}\leq\mathbf{L}\).
Again, \(\iota_{\mathbf{S}}\) denotes the natural inclusion. Although it may seem obvious, it is not immediate that \(\iota_{\mathbf{S}}\) really is the dual of \(\iota^{\mathbf{S}}\). To prove it, we make use of the following unary term, which will play an important role for the remainder of the subsection:
\[\chi_{S}(x)=\bigvee_{s\in S}T_{s}(x).\]
On \(\mathbf{L}\), this simply corresponds to the characteristic function of \(S\subseteq L\). It is, furthermore, characteristic for the subvariety \(\mathcal{A}_{\mathbf{S}}\) in the following sense.
**Lemma 4.14**.: _An algebra in \(\mathcal{A}\) is a member of \(\mathcal{A}_{\mathbf{S}}\) if and only if it satisfies the equation \(\chi_{S}(x)\approx 1\)._
Proof.: Clearly every member of \(\mathcal{A}_{\mathbf{S}}\) satisfies the equation since \(\mathbf{S}\) satisfies it. For the other direction, let \(\mathbf{A}\in\mathcal{A}\) satisfy \(\chi_{S}(a)=1\) for all \(a\in A\). We know that \(\mathbf{A}\) can be embedded into some \(\mathbf{L}^{I}\) and for each \(a\in\mathbf{A}\) and \(i\in I\), we have \(\chi_{\mathbf{S}}(\pi_{i}(a))=1\) which implies that \(\pi_{i}(a)\in\mathbf{S}\). Therefore, \(\mathbf{A}\) can be embedded into \(\mathbf{S}^{I}\).
Now, let \(\mathbf{A}\in\mathcal{A}_{\mathbf{S}}\) and let \(h\in\mathcal{A}(\iota_{\mathbf{S}}(\mathbf{A}),\mathbf{L})\) be a homomorphism. Since \(h\) preserves equations, for every \(a\in A\) we get
\[\chi_{S}(a)=1\Rightarrow\chi_{S}(h(a))=1\]
and, therefore, \(h\in\mathcal{A}(\mathbf{A},\mathbf{S})\). So we showed \(\mathcal{A}(\mathbf{A},\mathbf{L})=\mathcal{A}_{\mathbf{S}}(\mathbf{A}, \mathbf{S})\) for \(\mathbf{A}\in\mathcal{A}_{\mathbf{S}}\), which immediately implies the following.
**Corollary 4.15**.: _The inclusion functor \(\iota_{\mathbf{S}}\) is the dual of the inclusion functor \(\iota^{\mathbf{S}}\)._
To complete the picture, we only need to describe the functor \(\mathsf{Q}_{\mathbf{S}}\) from Figure 4. Let \(\alpha\colon\mathsf{id}_{\mathcal{A}}\Rightarrow\iota_{\mathbf{S}}\circ \mathsf{Q}_{\mathbf{S}}\) be the unit of the adjunction \(\mathsf{Q}_{\mathbf{S}}\dashv\iota_{\mathbf{S}}\). For any \(\mathbf{A}\in\mathcal{A}\), the algebra \(\mathsf{Q}_{\mathbf{S}}(\mathbf{A})\) is universal for \(\mathcal{A}_{\mathbf{S}}\) in the following sense:
_For every \(\mathbf{B}\in\mathcal{A}_{\mathbf{S}}\) and every homomorphism \(f\colon\mathbf{A}\to\mathbf{B}\), there is a unique \(\hat{f}\colon\mathsf{Q}_{\mathbf{S}}(\mathbf{A})\to\mathbf{B}\) such that \(\hat{f}\circ\alpha_{\mathbf{A}}=f\)._
Therefore, the functor \(\mathsf{Q_{S}}\) may be understood as a quotient. There is a well-known connection between quotients and equations introduced by Banaschewski and Herrlich in [4]. Not surprisingly, the equation corresponding to \(\mathsf{Q_{S}}\) is given by \(\chi_{S}(x)\approx 1\), which is an easy consequence of the above discussion together with Lemma 4.14. We summarize the results of this subsection as follows.
**Theorem 4.16**.: _For every subalgebra \(\mathbf{S}\leq\mathbf{L}\), there is an adjunction_

\[\mathfrak{S}\circ\mathsf{Q}_{\mathbf{S}}\dashv\iota_{\mathbf{S}}\circ\mathfrak{P}_{\mathbf{S}}\]

_between \(\mathcal{A}\) and \(\mathsf{BA}\), which can be dissected as_

\[\mathsf{Q}_{\mathbf{S}}\dashv\iota_{\mathbf{S}}\quad\text{between }\mathcal{A}\text{ and }\mathcal{A}_{\mathbf{S}},\qquad\mathfrak{S}\dashv\mathfrak{P}_{\mathbf{S}}\quad\text{between }\mathcal{A}_{\mathbf{S}}\text{ and }\mathsf{BA},\]

_where \(\iota_{\mathbf{S}}\) is the natural inclusion functor of the subvariety \(\mathbb{HSP}(\mathbf{S})\hookrightarrow\mathbb{HSP}(\mathbf{L})\) and \(\mathsf{Q}_{\mathbf{S}}\) is the quotient functor corresponding to the equation \(\chi_{S}(x)\approx 1\)._
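As a sanity check, consider the extreme case \(\mathbf{S}=\mathbf{L}\). The equation \(\chi_{L}(x)\approx 1\) reads

\[\bigvee_{\ell\in L}T_{\ell}(x)\approx 1,\]

which holds in every member of \(\mathcal{A}\) (compare the proof of Proposition 4.8), so \(\mathcal{A}_{\mathbf{L}}=\mathcal{A}\), both \(\iota_{\mathbf{L}}\) and \(\mathsf{Q}_{\mathbf{L}}\) are (isomorphic to) the identity, and the dissected adjunction collapses to \(\mathfrak{S}\dashv\mathfrak{P}\).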
In particular, in the case where \(\mathbf{S}\) is the smallest subalgebra of \(\mathbf{L}\), we can recover the functors \(\mathsf{I}=\iota_{\mathbf{S}}\circ\mathfrak{P_{S}}\) and \(\mathsf{Q}\) from Figure 3.
**Corollary 4.17**.: _The functor \(\mathsf{I}\colon\mathsf{BA}\to\mathcal{A}\) is, up to categorical equivalence, an inclusion. The functor \(\mathsf{Q}\colon\mathcal{A}\to\mathsf{BA}\) is, up to categorical equivalence, the quotient by the equation_
\[\chi_{E}(x)\approx 1,\]
_where \(\mathbf{E}=\langle 0,1\rangle\) is the smallest subalgebra of \(\mathbf{L}\)._
Proof.: Being the smallest subalgebra of a semi-primal algebra, \(\mathbf{E}\) is primal. Therefore, by Corollary 4.12, the adjunction \(\mathfrak{S}\dashv\mathfrak{P_{E}}\) is an equivalence of categories. The statement follows from Theorem 4.16.
Clearly Corollary 4.13 holds not only for \(\mathfrak{P}\), but for all the functors \(\mathsf{I_{S}}\). Among them, \(\mathsf{I}\) is special since it also has a right-adjoint. This yields the following algebraic version of Proposition 4.1(ii).
**Corollary 4.18**.: _The functor \(\mathsf{I}\) is fully faithful and identifies \(\mathsf{BA}\) with a reflective and coreflective subcategory of \(\mathcal{A}\)._
We showed that, if a finite lattice-based algebra \(\mathbf{M}\) is semi-primal, then there is an adjunction \(\mathfrak{P_{E}}\dashv\mathfrak{S}\dashv\mathfrak{P_{M}}\), where \(\mathbf{E}\) is the smallest subalgebra of \(\mathbf{M}\). In the next subsection we show that, conversely, the existence of an adjunction resembling this one fully characterizes semi-primality of a finite lattice-based algebra \(\mathbf{M}\).
### Characterizing semi-primality via adjunctions
The aim of this subsection is to find sufficient conditions for an adjoint of \(\mathfrak{P_{M}}\) to imply semi-primality of the algebra \(\mathbf{M}\). We will then show that, in particular, these conditions are consequences of \(\mathsf{U}\) and \(\mathfrak{S}\) from Figure 3 being (essentially) _topological functors_.
Recall that, in Definition 4.6, the Boolean power functor \(\mathfrak{P_{M}}\colon\mathsf{BA}\to\mathbb{HSP}(\mathbf{M})\) was defined for arbitrary finite algebras \(\mathbf{M}\). Of course, if \(\mathbf{S}\) is a subalgebra of \(\mathbf{M}\), then \(\mathfrak{P_{S}}\) can also be seen as a functor into \(\mathbb{HSP}(\mathbf{M})\), and in the following there is no need to distinguish between these two functors in our notation. The functor
\(\mathfrak{P}_{\mathbf{M}}\) is faithful (unless \(\mathbf{M}\) is trivial), but it is usually not full. In fact, it is easy to see that \(\mathfrak{P}_{\mathbf{M}}\) can only be full if \(\mathbf{M}\) does not have any non-trivial automorphisms.
In the main theorem of this subsection we show that, if \(\mathfrak{P}_{\mathbf{M}}\) is full and has a left-adjoint resembling \(\mathfrak{S}\), then a lattice-based algebra \(\mathbf{M}\) is semi-primal.
**Theorem 4.19**.: _Let \(\mathbf{M}\) be a finite lattice-based algebra. Then \(\mathbf{M}\) is semi-primal if and only if \(\mathfrak{P}_{\mathbf{M}}\) is full and there is a faithful functor \(\mathfrak{s}\colon\mathbb{HSP}(\mathbf{M})\to\mathsf{BA}\) which satisfies_
\[\mathfrak{P}_{\mathbf{E}}\dashv\mathfrak{s}\dashv\mathfrak{P}_{\mathbf{M}},\]
_where \(\mathbf{E}=\langle 0,1\rangle\) is the smallest subalgebra of \(\mathbf{M}\)._
Proof.: If \(\mathbf{M}\) is semi-primal, then \(\mathfrak{P}_{\mathbf{M}}\) is full since it is dual to the full functor \(\mathsf{V}^{\top}\), the functor \(\mathfrak{s}=\mathfrak{S}\) is faithful since it is dual to the faithful functor \(\mathsf{U}\) and \(\mathfrak{P}_{\mathbf{E}}\dashv\mathfrak{S}\dashv\mathfrak{P}_{\mathbf{M}}\) was shown in the last two subsections.
Now for the converse, assume that \(\mathfrak{P}_{\mathbf{M}}\) is full and there is a faithful functor \(\mathfrak{s}\colon\mathbb{HSP}(\mathbf{M})\to\mathsf{BA}\) with \(\mathfrak{P}_{\mathbf{E}}\dashv\mathfrak{s}\dashv\mathfrak{P}_{\mathbf{M}}\). For abbreviation we write \(\mathcal{V}\) for \(\mathbb{HSP}(\mathbf{M})\). We will make use of the following properties of \(\mathfrak{s}\):
1. The unit \(\eta\colon\mathsf{id}_{\mathcal{V}}\Rightarrow\mathfrak{P}_{\mathbf{M}} \circ\mathfrak{s}\) is a monomorphism in each component,
2. \(\mathfrak{s}\) preserves monomorphisms and finite products.
Condition (i) follows from \(\mathfrak{s}\) being faithful and (ii) follows from \(\mathfrak{s}\) being a right-adjoint.
Our first goal is to prove the equivalence
\[\mathfrak{s}(\mathbf{A})\cong\mathbf{2}\Leftrightarrow\exists\mathbf{S}\in \mathbb{S}(\mathbf{M}):\mathbf{A}\cong\mathbf{S}. \tag{4}\]
If \(\mathfrak{s}(\mathbf{A})\cong\mathbf{2}\), use that by (i) there is an embedding \(\mathbf{A}\hookrightarrow\mathfrak{P}_{\mathbf{M}}(\mathfrak{s}(\mathbf{A}))\). Since \(\mathfrak{P}_{\mathbf{M}}(\mathfrak{s}(\mathbf{A}))\cong\mathbf{M}\), it follows that \(\mathbf{A}\) is isomorphic to a subalgebra of \(\mathbf{M}\). Conversely, first note that \(\mathfrak{s}(\mathbf{M})\cong\mathbf{2}\) since, using that \(\mathfrak{P}_{\mathbf{M}}\) is full and \(\mathfrak{s}\dashv\mathfrak{P}_{\mathbf{M}}\), we have
\[1=|\mathsf{BA}(\mathbf{2},\mathbf{2})|=|\mathcal{V}(\mathbf{M},\mathbf{M})|=|\mathcal{V}\big{(}\mathbf{M},\mathfrak{P}_{\mathbf{M}}(\mathbf{2})\big{)}|=|\mathsf{BA}(\mathfrak{s}(\mathbf{M}),\mathbf{2})|,\]
which is only possible for \(\mathfrak{s}(\mathbf{M})\cong\mathbf{2}\). Now if \(\mathbf{A}\cong\mathbf{S}\in\mathbb{S}(\mathbf{M})\) then, due to (ii), the natural embedding \(\mathbf{S}\hookrightarrow\mathbf{M}\) induces an embedding \(\mathfrak{s}(\mathbf{S})\hookrightarrow\mathfrak{s}(\mathbf{M})\). Therefore \(\mathfrak{s}(\mathbf{S})\cong\mathbf{2}\) since \(\mathfrak{s}(\mathbf{M})\cong\mathbf{2}\) does not have any proper subalgebras.
Next we show that \(\mathbf{M}\) does not have any non-trivial internal isomorphisms. For every subalgebra \(\mathbf{S}\in\mathbb{S}(\mathbf{M})\), there is a bijection between the set of Boolean homomorphisms \(\mathfrak{s}(\mathbf{S})\to\mathbf{2}\) and the set of homomorphisms \(\mathbf{S}\to\mathfrak{P}_{\mathbf{M}}(\mathbf{2})\). Due to (4) we have \(\mathfrak{s}(\mathbf{S})\cong\mathbf{2}\), so the former only has one element. Since \(\mathfrak{P}_{\mathbf{M}}(\mathbf{2})\cong\mathbf{M}\) this means that there is only one homomorphism \(\mathbf{S}\to\mathbf{M}\), namely the identity on \(\mathbf{S}\). Every non-trivial internal isomorphism with domain \(\mathbf{S}\) would define another such homomorphism, resulting in a contradiction.
We now show that \(\mathbf{M}\) is semi-primal, using the characterization of semi-primality in Proposition 2.3. That is, we want to show that \(\mathbf{M}\) has a majority term and every subalgebra of \(\mathbf{M}^{2}\) is either a product of subalgebras or the diagonal of a subalgebra of \(\mathbf{M}\). Since \(\mathbf{M}\) is based on a lattice, a majority term is given by the median (see the paragraph before Proposition 2.3). Let \(\mathbf{A}\leq\mathbf{M}^{2}\) be a subalgebra and let \(\iota\colon\mathbf{A}\hookrightarrow\mathbf{M}^{2}\) be its natural embedding. Due to (ii), this embedding induces an embedding \(\mathfrak{s}(\mathbf{A})\hookrightarrow\mathfrak{s}(\mathbf{M}^{2})\) into \(\mathfrak{s}(\mathbf{M}^{2})\cong\mathbf{2}^{2}\). Therefore, either \(\mathfrak{s}(\mathbf{A})\cong\mathbf{2}^{2}\) or \(\mathfrak{s}(\mathbf{A})\cong\mathbf{2}\). Let \(p_{1}\colon\mathbf{A}\to\mathbf{M}\) and \(p_{2}\colon\mathbf{A}\to\mathbf{M}\) be \(\iota\) followed by the respective projections \(\mathbf{M}^{2}\to\mathbf{M}\).
First assume that \(p_{1}\) and \(p_{2}\) coincide. Then clearly \(\mathbf{A}\) embeds into \(\mathbf{M}\), and therefore it is isomorphic to some subalgebra \(\mathbf{S}\) of \(\mathbf{M}\). Since \(\mathbf{M}\) has no non-trivial internal isomorphisms, \(\mathbf{A}\) needs to coincide with the diagonal of \(\mathbf{S}\).
If \(p_{1}\) and \(p_{2}\) are distinct then, using that \(\mathfrak{s}\) is faithful, the morphisms \(\mathfrak{s}p_{1}\colon\mathfrak{s}(\mathbf{A})\to\mathbf{2}\) and \(\mathfrak{s}p_{2}\colon\mathfrak{s}(\mathbf{A})\to\mathbf{2}\) are distinct as well. This implies that \(\mathfrak{s}(\mathbf{A})\cong\mathbf{2}^{2}\). Using the adjunction \(\mathfrak{P}_{E}\dashv\mathfrak{s}\) we get
\[4=|\mathsf{BA}(\mathbf{2}^{2},\mathfrak{s}(\mathbf{A}))|=|\mathcal{V}( \mathbf{E}^{2},\mathbf{A})|\text{ and }4=|\mathsf{BA}(\mathbf{2}^{2},\mathfrak{s}(\mathbf{M}^{2}))|=| \mathcal{V}(\mathbf{E}^{2},\mathbf{M}^{2})|.\]
So there are exactly four distinct homomorphisms \(\mathbf{E}^{2}\to\mathbf{A}\) and, since \(\iota\) is a monomorphism, their compositions with \(\iota\) are also four distinct homomorphisms \(\mathbf{E}^{2}\to\mathbf{M}^{2}\). Therefore, every homomorphism \(\mathbf{E}^{2}\to\mathbf{M}^{2}\) arises in such a way. In particular, the natural embedding \(\mathbf{E}^{2}\hookrightarrow\mathbf{M}^{2}\) arises in this way, which implies \((0,1)\in\mathbf{A}\) and \((1,0)\in\mathbf{A}\). As noted in [22], this leads to \(\mathbf{A}=p_{1}(\mathbf{A})\times p_{2}(\mathbf{A})\), since whenever \((a,b),(c,d)\in\mathbf{A}\) we also have
\[(a,d)=\big{(}(a,b)\wedge(1,0)\big{)}\vee\big{(}(c,d)\wedge(0,1)\big{)}\in \mathbf{A}.\]
This concludes the proof.
In the remainder of this subsection we show how this theorem relates to the theory of _topological functors_ (see, e.g., [1, Chapter VI.21] or [8, Chapter 7]). Intuitively speaking, topological functors behave similarly to the forgetful functor \(\mathsf{Top}\to\mathsf{Set}\) out of the category of all topological spaces. Still, the definitions involved are rather technical and the reader not familiar with this topic may skip this part.
**Definition 4.20**.: We call a functor \(\mathsf{F}\colon\mathcal{C}\to\mathcal{D}\)
1. _topological_ if it is faithful and every \(\mathsf{F}\)-structured source has an initial lift,
2. _essentially topological_ if it is topological up to categorical equivalence of \(\mathcal{C}\) and \(\mathcal{D}\).
The need for this distinction arises because certain properties of topological functors, e.g. _amnesticity_[1, Definition 3.27], are not preserved under categorical equivalence (this issue is addressed in [49]).
The following is our key observation for the last part of this subsection.
**Proposition 4.21**.: _The forgetful functor \(\mathsf{U}\colon\mathsf{Stone_{L}}\to\mathsf{Stone}\) is topological and the Boolean skeleton functor \(\mathfrak{S}\colon\mathcal{A}\to\mathsf{BA}\) is essentially topological._
Proof.: We only need to show that \(\mathsf{U}\) is topological, which immediately implies that \(\mathfrak{S}\) is essentially topological due to [1, Theorem 21.9] together with the fact that \(\mathfrak{S}\) is naturally isomorphic to the dual of \(\mathsf{U}\).
Of course \(\mathsf{U}\) is faithful since it is the identity on morphisms. Now let \(X\in\mathsf{Stone}\) and let \((f_{i}\colon X\to\mathsf{U}(X_{i},\mathbf{v}_{i}))_{i\in I}\) be a \(\mathsf{U}\)-structured source (i.e., a collection of continuous maps) indexed by a class \(I\). We define \(\mathbf{v}\colon X\to\mathbb{S}(\mathbf{L})\) by
\[\mathbf{v}(x)=\bigvee_{i\in I}\mathbf{v}_{i}(f_{i}(x)).\]
Note that this is well-defined, since \(\mathbb{S}(\mathbf{L})\) is finite, and that \((X,\mathbf{v})\) is a member of \(\mathsf{Stone_{L}}\), since \(\mathbf{v}^{-1}(\mathbf{S}\downarrow)=\bigcap_{i\in I}f_{i}^{-1}(\mathbf{v}_{i}^{-1}(\mathbf{S}\downarrow))\) is closed for every \(\mathbf{S}\in\mathbb{S}(\mathbf{L})\). Every \(f_{i}\) is now also a morphism in \(\mathsf{Stone_{L}}\), which defines a lift of the source. To show that it is initial, assume there are \(\mathsf{Stone_{L}}\)-morphisms \((g_{i}\colon(Y,\mathbf{w})\to(X_{i},\mathbf{v}_{i}))_{i\in I}\) and a continuous
map \(g\colon Y\to X\) with \(f_{i}\circ g=g_{i}\). All we need to show is that \(g\) defines a \(\mathsf{Stone}_{\mathbf{L}}\)-morphism \((Y,\mathbf{w})\to(X,\mathbf{v})\). To see this simply note that
\[\mathbf{v}(g(y))=\bigvee_{i\in I}\mathbf{v}_{i}\big{(}f_{i}(g(y))\big{)}= \bigvee_{i\in I}\mathbf{v}_{i}(g_{i}(y))\leq\mathbf{w}(y),\]
which concludes the proof.
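As a small illustration of the construction, consider the empty source on \(X\in\mathsf{Stone}\). Its initial lift is given by the empty join,

\[\mathbf{v}(x)=\bigvee\varnothing=\langle 0,1\rangle\quad\text{for all }x\in X,\]

that is, by \(\mathsf{V}^{\perp}(X)\) from Subsection 4.1.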
We can now easily show the following.
**Corollary 4.22**.: _Let \(\mathbf{M}\) be a finite lattice-based algebra. Then \(\mathbf{M}\) is semi-primal if and only if there is an essentially topological functor \(\mathfrak{s}\colon\mathbb{HSP}(\mathbf{M})\to\mathsf{BA}\) which satisfies_
\[\mathfrak{P}_{\mathbf{E}}\dashv\mathfrak{s}\dashv\mathfrak{P}_{\mathbf{M}},\]
_where \(\mathbf{E}=\langle 0,1\rangle\) is the smallest subalgebra of \(\mathbf{M}\)._
Proof.: In the previous proposition we showed that if \(\mathbf{M}\) is semi-primal, then \(\mathfrak{S}\) is essentially topological.
Conversely, if such an essentially topological \(\mathfrak{s}\) exists, it is faithful by definition and both its adjoints \(\mathfrak{P}_{\mathbf{M}}\) and \(\mathfrak{P}_{\mathbf{E}}\) are full by [1, Proposition 21.12]. Therefore, due to Theorem 4.19, \(\mathbf{M}\) is semi-primal.
In this section we gained an algebraic understanding of all the functors between \(\mathcal{A}\) and \(\mathsf{BA}\) appearing on the right-hand side of Figure 3. Furthermore, we now showed how properties of the Boolean skeleton functor \(\mathfrak{S}\) characterize semi-primality. In the next section we investigate how canonical extensions of algebras in \(\mathcal{A}\) behave under these functors. One of the main results is that the Boolean skeleton functor \(\mathfrak{S}\) may be used to identify canonical extensions of algebras in \(\mathcal{A}\).
## 5. Discrete duality and canonical extensions
In this section we describe a semi-primal discrete duality similar to the well-known discrete duality between \(\mathsf{Set}\) and \(\mathsf{CABA}\), the category of complete atomic Boolean algebras with complete homomorphisms. It can be obtained from the finite duality in a similar way to the one of Section 3, except that now we lift it to the level of \(\mathsf{Ind}(\mathsf{Set}_{\mathbf{L}}^{\omega})\) and \(\mathsf{Pro}(\mathcal{A}^{\omega})\). The members of the latter category are known to be precisely the _canonical extensions_[33] of members of \(\mathcal{A}\) (see [21]), and we will provide two new characterizations of this category (Corollary 5.8 and Theorem 5.10). Lastly we show that, as in the primal case \(\mathbf{L}=\mathbf{2}\), the topological duality from Section 3 can be connected to its discrete version via an analogue of the _Stone-Čech compactification_.
Our first goal is to identify \(\mathsf{Ind}(\mathsf{Set}_{\mathbf{L}}^{\omega})\). Although it may not be surprising, it will still take some work to prove that it can be identified with the following category.
**Definition 5.1**.: The category \(\mathsf{Set}_{\mathbf{L}}\) has objects of the form \((X,v)\) where \(X\in\mathsf{Set}\) and \(v\colon X\to\mathbb{S}(\mathbf{L})\) is an arbitrary map. A morphism \(m\colon(X,v)\to(Y,w)\) is a map \(X\to Y\) which, for every \(x\in X\), satisfies
\[w(m(x))\leq v(x).\]
_Remark 6_.: In the context of fuzzy sets, Goguen [34, 35] initiated the study of such categories. This research was continued, e.g., in [5, 57]. In this remark we stick to the notation of [35]. Given a complete lattice \(\mathcal{V}\), the category \(\mathsf{Set}(\mathcal{V})\) of \(\mathcal{V}\)-fuzzy sets has objects \((X,A)\) where \(A\colon X\to\mathcal{V}\). Morphisms \((X,A)\to(X^{\prime},A^{\prime})\) are maps \(m\colon X\to X^{\prime}\) which satisfy \(A^{\prime}(m(x))\geq A(x)\) for all \(x\in X\). In the context of fuzzy set theory, the case \(\mathcal{V}=[0,1]\) has received the most attention. However, we retrieve \(\mathsf{Set_{L}}\) in the case where \(\mathcal{V}\) is the order-dual of \(\mathbb{S}(\mathbf{L})\).
Since we are interested in the \(\mathsf{Ind}\)-completion of \(\mathsf{Set_{L}^{\omega}}\), we will first discuss (filtered) colimits in this category.
**Lemma 5.2**.: _The category \(\mathsf{Set_{L}}\) is cocomplete. The colimit \(\operatorname{colim}_{i\in I}(X_{i},v_{i})\) of a filtered diagram \(\big{(}f_{ij}\colon(X_{i},v_{i})\to(X_{j},v_{j})\mid i\leq j\big{)}\) is realized by \(\big{(}(\coprod_{i\in I}X_{i})/{\sim},\bar{v}\big{)}\). Here, for \(x_{i}\in X_{i}\) and \(x_{j}\in X_{j}\),_
\[x_{i}\sim x_{j}\iff\exists k\geq i,j:f_{ik}(x_{i})=f_{jk}(x_{j})\]
_and_
\[\bar{v}([x_{i}])=\bigwedge_{x_{i}\sim x_{j}\in X_{j}}v_{j}(x_{j}).\]
Proof.: The proof that \(\mathsf{Set_{L}}\) is cocomplete is completely analogous to the one in [57]. For filtered colimits, on the underlying level of \(\mathsf{Set}\) we know that \(X:=\coprod_{i\in I}(X_{i})/{\sim}\) with the canonical inclusions \(\rho_{i}\colon X_{i}\to X\) is the colimit of the diagram. To see that all the \(\rho_{i}\) are morphisms in \(\mathsf{Set_{L}}\) note
\[\bar{v}(\rho_{i}(x_{i}))=\bigwedge_{x_{i}\sim x_{j}\in X_{j}}v_{j}(x_{j})\leq v _{i}(x_{i}).\]
Given another cocone \(\gamma_{i}\colon(X_{i},v_{i})\to(Z,u)\), the unique map \(g\colon X\to Z\) is a morphism in \(\mathsf{Set_{L}}\) since, for \(x_{i}\in X_{i}\) and \(x_{i}\sim x_{j}\in X_{j}\) we have \(u\big{(}g(\rho_{j}(x_{j}))\big{)}=u(\gamma_{j}(x_{j}))\leq v_{j}(x_{j})\) and thus
\[u\big{(}g([x_{i}])\big{)}\leq\bigwedge_{x_{i}\sim x_{j}\in X_{j}}v_{j}(x_{j}) =\bar{v}([x_{i}]),\]
which concludes the proof.
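For example, every \((X,v)\in\mathsf{Set}_{\mathbf{L}}\) is the filtered colimit of the diagram of its finite subsets \((F,v|_{F})\), \(F\subseteq X\) finite, with inclusions as connecting maps: the relation \(\sim\) simply identifies each \(x\in X\) across all finite subsets containing it, and

\[\bar{v}([x])=\bigwedge_{F\ni x}v|_{F}(x)=v(x).\]

This is the observation used for essential surjectivity in the proof of Theorem 5.5 below.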
We will also make use of the following general result.
**Lemma 5.3**.: _Let \(\mathsf{F}\colon\mathcal{C}\to\mathcal{D}\) be a functor between categories \(\mathcal{C}\) and \(\mathcal{D}\) which both admit filtered colimits. If \(\mathsf{F}\) has a right-adjoint \(\mathsf{G}\) which preserves filtered colimits, then \(\mathsf{F}\) preserves finitely presentable objects._
Proof.: Let \(C\in\mathcal{C}\) be finitely presentable. We want to show that \(\mathsf{F}(C)\) is finitely presentable in \(\mathcal{D}\). Let \(\operatorname{colim}_{i}D_{i}\) be a filtered colimit in \(\mathcal{D}\). Then

\[\mathcal{D}\big{(}\mathsf{F}(C),\operatorname{colim}_{i}D_{i}\big{)}\cong\operatorname{colim}_{i}\mathcal{C}\big{(}C,\mathsf{G}(D_{i})\big{)}\cong\operatorname{colim}_{i}\mathcal{D}\big{(}\mathsf{F}(C),D_{i}\big{)},\]

where the first isomorphism combines the adjunction with the facts that \(\mathsf{G}\) preserves filtered colimits and \(C\) is finitely presentable, and the second isomorphism is the adjunction again.
**Corollary 5.4**.: _If \(X\) is a finite set, then \((X,v)\) is finitely presentable in \(\mathsf{Set_{L}}\) for every \(v\colon X\to\mathbb{S}(\mathbf{L})\)._
Proof.: Let \(X=\{x_{1},\ldots,x_{n}\}\) and let \(v(x_{i})=\mathbf{S_{i}}\). Then we can clearly identify
\[(X,v)\cong\coprod_{1\leq i\leq n}(\{x_{i}\},v^{\mathbf{S_{i}}}).\]
Since filtered colimits commute with finite limits in \(\mathsf{Set}\), it now suffices to show that all \((\{x_{i}\},v^{\mathbf{S_{i}}})\) are finitely presentable. Just like in Subsection 4.4 we can define the adjunction \(\mathsf{V^{S}}\dashv\mathsf{C^{S}}\) between \(\mathsf{Set_{L}}\) and \(\mathsf{Set}\) for every subalgebra \(\mathbf{S}\leq\mathbf{L}\). By Lemma
5.3 it now suffices to show that \(\mathsf{C}^{\mathbf{S}}\) preserves filtered colimits. So let \((X,\bar{v})\) be a filtered colimit as in Lemma 5.2. Since the diagram is filtered, the values \(v_{j}(x_{j})\) over an equivalence class form a downward directed subset of the finite lattice \(\mathbb{S}(\mathbf{L})\), so the meet defining \(\bar{v}([x_{i}])\) is attained at some representative. Hence \(\mathsf{C}^{\mathbf{S}}(X)=\{[x_{i}]\mid\exists x_{i}\sim x_{j}\in X_{j}:v_{j}(x_{j})\leq\mathbf{S}\}\), and for all \([x_{i}]\in\mathsf{C}^{\mathbf{S}}(X)\) we can choose representatives with \(x_{i}\in\mathsf{C}^{\mathbf{S}}(X_{i},v_{i})\). This yields a bijection between \(\mathsf{C}^{\mathbf{S}}(X)\) and \(\operatorname{colim}\mathsf{C}^{\mathbf{S}}(X_{i},v_{i})\).
We now have everything at hand to easily prove the following.
**Theorem 5.5**.: \(\mathsf{Ind}(\mathsf{Set}^{\omega}_{\mathbf{L}})\) _is categorically equivalent to \(\mathsf{Set}_{\mathbf{L}}\)._
Proof.: Since \(\mathsf{Set}_{\mathbf{L}}\) is cocomplete, the inclusion \(\iota\colon\mathsf{Set}^{\omega}_{\mathbf{L}}\to\mathsf{Set}_{\mathbf{L}}\) has a unique finitary extension \(\hat{\iota}\colon\mathsf{Ind}(\mathsf{Set}^{\omega}_{\mathbf{L}})\to\mathsf{ Set}_{\mathbf{L}}\). Since \(\iota\) is fully faithful and, by the above corollary, maps all objects to finitely presentable objects in \(\mathsf{Set}_{\mathbf{L}}\), this extension is also fully faithful. To see that it is essentially surjective note that, just like in \(\mathsf{Set}\), every member of \(\mathsf{Set}_{\mathbf{L}}\) is the filtered colimit of its finite subsets.
We now take a closer look at the category \(\mathsf{Pro}(\mathcal{A}^{\omega})\). It is well-known that it consists of the _canonical extensions_[33] of algebras in \(\mathcal{A}\). In [21] a description of these canonical extensions as topological algebras can be found. But, as in the case of complete atomic Boolean algebras \(\mathsf{CABA}\simeq\mathbb{IP}(\mathbf{2})\), this need not be the only description. In the following we apply results of Section 4 to find two easy alternatives. The first one is in terms of (arbitrary) products of subalgebras of \(\mathbf{L}\) with complete homomorphisms.
**Definition 5.6**.: Let \(\hat{\mathcal{A}}\) be the category with algebras from \(\mathbb{IPS}(\mathbf{L})\) as objects and complete homomorphisms as morphisms.
We can essentially repeat our proof of the finite duality from Corollary 3.6, once we prove the following result analogous to Proposition 3.5.
**Proposition 5.7**.: _Let \(\mathbf{A}=\prod_{i\in I}\mathbf{S}_{i}\in\hat{\mathcal{A}}\). Then the complete homomorphisms \(\mathbf{A}\to\mathbf{L}\) are precisely the projections (followed by inclusions) in each component._
Proof.: By Proposition 4.3 there is a bijection between \(\mathcal{A}(\mathbf{A},\mathbf{L})\) and \(\mathsf{BA}(\mathfrak{S}(\mathbf{A}),\mathbf{2})\) given by \(h\mapsto h\big{|}_{\mathfrak{S}(\mathbf{A})}\). In particular, if \(h\) is complete, then so is its restriction. Since \(\mathfrak{S}(\mathbf{A})=\mathbf{2}^{I}\), the only complete homomorphisms \(\mathfrak{S}(\mathbf{A})\to\mathbf{2}\) are the projections, and they are the restrictions of the respective projections \(\mathbf{A}\to\mathbf{L}\).
**Corollary 5.8**.: \(\mathsf{Pro}(\mathcal{A}^{\omega})\) _is categorically equivalent to \(\hat{\mathcal{A}}\)._
Proof.: By Theorem 5.5 it suffices to show that \(\mathsf{Set}_{\mathbf{L}}\) is dually equivalent to \(\hat{\mathcal{A}}\). This is done completely analogously to the proof of Corollary 3.6.
The second description of \(\mathsf{Pro}(\mathcal{A}^{\omega})\) is in terms of the Boolean skeleton.
**Definition 5.9**.: The category \(\mathsf{CA}\mathcal{A}\) has as objects algebras \(\mathbf{A}\in\mathcal{A}\) which have a complete lattice-reduct and which satisfy \(\mathfrak{S}(\mathbf{A})\in\mathsf{CABA}\). The morphisms in \(\mathsf{CA}\mathcal{A}\) are the complete homomorphisms.

**Theorem 5.10**.: \(\mathsf{Pro}(\mathcal{A}^{\omega})\) _is categorically equivalent to \(\mathsf{CA}\mathcal{A}\)._

Proof.: Using Corollary 5.8 we show that \(\mathsf{CA}\mathcal{A}\) is categorically equivalent to \(\hat{\mathcal{A}}\). Clearly there is a fully faithful inclusion functor \(\hat{\mathcal{A}}\hookrightarrow\mathsf{CA}\mathcal{A}\). So it suffices to show that this functor is essentially surjective. In other words, we want to show that every object of \(\mathsf{CA}\mathcal{A}\) is isomorphic to a product of subalgebras of \(\mathbf{L}\).
So consider \(\mathbf{A}\in\mathsf{CA}\mathcal{A}\). Since the adjunction \(\mathfrak{S}\dashv\mathfrak{P}\) restricts to \(\mathsf{CABA}\) and \(\mathsf{CA}\mathcal{A}\), we can use Corollary 4.10 to get a _complete_ embedding \(\eta_{\mathbf{A}}\colon\mathbf{A}\hookrightarrow\mathfrak{P}(\mathfrak{S}(\mathbf{A}))\). Since \(\mathfrak{S}(\mathbf{A})\) is in \(\mathsf{CABA}\) it is isomorphic to \(\mathbf{2}^{I}\) for some index set \(I\). Thus \(\mathfrak{P}(\mathfrak{S}(\mathbf{A}))\cong\mathfrak{P}(\mathbf{2}^{I})\cong\mathbf{L}^{I}\). We show that \(\mathbf{A}\) is isomorphic to the direct product of subalgebras \(\prod_{i\in I}\mathsf{pr}_{i}(\eta_{\mathbf{A}}(A))\). For this it suffices to show that the injective homomorphism \(\eta_{\mathbf{A}}\) maps onto it. So let \(\alpha\) be an element of this product. For each \(i\in I\) choose \(a_{i}\in\mathbf{A}\) such that \(\mathsf{pr}_{i}(\eta_{\mathbf{A}}(a_{i}))=\alpha(i)\). Since \(\mathbf{2}^{I}\cong\mathfrak{S}(\mathbf{A})\subseteq\mathbf{A}\) all atoms \(b_{i}\in\mathbf{2}^{I}\) (defined by \(b_{i}(j)=1\) iff \(j=i\)) can be considered as members of \(\mathbf{A}\). Now define
\[a=\bigvee\{a_{i}\wedge b_{i}\mid i\in I\}.\]
Since \(\mathbf{A}\) is complete, we have \(a\in\mathbf{A}\). And since \(\eta_{\mathbf{A}}\) is a complete homomorphism we have \(\eta_{\mathbf{A}}(a)=\alpha\) (because \(\mathsf{pr}_{i}(\eta_{\mathbf{A}}(a))=\mathsf{pr}_{i}(\eta_{\mathbf{A}}(a_{i}))=\alpha(i)\)).
With the results from this section thus far, it is clear that the chains of adjunctions from Section 4 (summarized in Figure 3) have their discrete counterparts, defined in the same way, between \(\mathsf{Set}_{\mathbf{L}}\) and \(\mathsf{Set}\) and between \(\mathsf{CA}\mathcal{A}\) and \(\mathsf{CABA}\), respectively. To make the connection between Figure 3 and its discrete counterpart, we finish this section by connecting the respective dualities as indicated in Figure 5.

Figure 5. Compactification and canonical extension.
Here \((-)^{\flat}\colon\mathsf{Stone}_{\mathbf{L}}\to\mathsf{Set}_{\mathbf{L}}\) is the forgetful functor with respect to topology and \(\iota_{c}\colon\mathsf{CA}\mathcal{A}\to\mathcal{A}\) is the obvious inclusion functor (note that both these functors are not full). The functor \((-)^{\delta}\) takes an algebra to its canonical extension. In the primal case \(\mathbf{L}=\mathbf{2}\), it is well-known that \(\beta_{\mathbf{2}}=:\beta\) is the _Stone-Čech compactification_ (see, e.g., [40, Section IV.2]). This has been generalized to the _Bohr compactification_ in a (much broader) framework which includes ours in [20]. However, since things are particularly simple in our setting, we directly show how to define \(\beta_{\mathbf{L}}\).
Given \((X,v)\in\mathsf{Set}_{\mathbf{L}}\), there is a natural way to extend \(v\) to the Stone-Čech compactification \(\beta(X)\) of \(X\). Indeed, since \(v\colon X\to\mathbb{S}(\mathbf{L})\) can be thought of as a continuous map between discrete spaces, by the universal property of \(\beta\) it has a unique continuous extension \(\widetilde{\mathbf{v}}\colon\beta(X)\to\mathbb{S}(\mathbf{L})\). Here, \(\widetilde{\mathbf{v}}^{-1}(\mathbf{S}\downarrow)\) is given by the topological closure of \(v^{-1}(\mathbf{S}\downarrow)\) in \(\beta(X)\) for every \(\mathbf{S}\in\mathbb{S}(\mathbf{L})\). Thus, for every morphism \(f\colon(X,v)\to(Y,w)\) in \(\mathsf{Set}_{\mathbf{L}}\), the continuous map \(\beta f\) defines a morphism \((\beta(X),\widetilde{\mathbf{v}})\to(\beta(Y),\widetilde{\mathbf{w}})\) in \(\mathsf{Stone}_{\mathbf{L}}\). This is due to the observation that whenever \(x\in\widetilde{\mathbf{v}}^{-1}(\mathbf{S}\downarrow)=\overline{v^{-1}(\mathbf{S}\downarrow)}\), by continuity of \(\beta f\) and the morphism property of \(f\), we have \(\beta f(x)\in\overline{w^{-1}(\mathbf{S}\downarrow)}=\widetilde{\mathbf{w}}^{-1}(\mathbf{S}\downarrow)\).
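Concretely, identifying the points of \(\beta(X)\) with ultrafilters on \(X\), the extension admits the explicit description

\[\widetilde{\mathbf{v}}(p)=\mathbf{S}\iff v^{-1}(\mathbf{S})\in p,\]

which is well-defined because the sets \(v^{-1}(\mathbf{S})\), \(\mathbf{S}\in\mathbb{S}(\mathbf{L})\), form a finite partition of \(X\) (some blocks possibly empty), so every ultrafilter contains exactly one of them.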
**Proposition 5.11**.: _The functor \(\beta_{\mathbf{L}}\colon\mathsf{Set}_{\mathbf{L}}\to\mathsf{Stone}_{\mathbf{L}}\) defined on objects by_
\[\beta_{\mathbf{L}}(X,v)=(\beta(X),\widetilde{\mathbf{v}})\]
_and by \(f\mapsto\beta f\) on morphisms is the dual of the canonical extension functor \((-)^{\delta}\colon\mathcal{A}\to\mathsf{CA}\mathcal{A}\)._
Proof.: It suffices to show that \(\beta_{\mathbf{L}}\) satisfies the following universal property. Given \((Y,\mathbf{w})\in\mathsf{Stone_{L}}\), every \(\mathsf{Set_{L}}\)-morphism \(f\colon(X,v)\to(Y,\mathbf{w})\) extends uniquely to a \(\mathsf{Stone_{L}}\)-morphism \(\tilde{f}\colon(\beta(X),\tilde{\mathbf{v}})\to(Y,\mathbf{w})\). On the levels of \(\mathsf{Set}\) and \(\mathsf{Stone}\) we get a unique continuous extension \(\tilde{f}\). To show it is a \(\mathsf{Stone_{L}}\)-morphism, similarly to before, note that if \(x\in\overline{v^{-1}(\mathbf{S}\downarrow)}\), then by continuity
\[\tilde{f}(x)\in\overline{f\big{(}v^{-1}(\mathbf{S}\downarrow)\big{)}}\subseteq \overline{\mathbf{w}^{-1}(\mathbf{S}\downarrow)}.\]
Since \(\mathbf{w}^{-1}(\mathbf{S}\downarrow)\) is closed, it equals its own closure. This concludes the proof.
This nicely wraps up this paper by connecting all of its main sections. In the last section we give a quick summary and discuss some possible directions of future research along similar lines.
## 6. Concluding Remarks and Further Research
We explored semi-primality by means of category theory, showing how a variety generated by a semi-primal lattice expansion relates to the variety of Boolean algebras. Various adjunctions provide insight into the many similarities between these varieties. A schematic summary of our results can be found in Figure 6, which also emphasizes once more how close \(\mathsf{BA}\) and \(\mathcal{A}\) really are.

Figure 6. Summary of our results.
We plan to follow up this research by developing a coalgebraic framework for modal extensions of the many-valued logic corresponding to a semi-primal variety. As mentioned before, from this point of view it is reasonable to assume that \(\mathbf{L}\) is based on a lattice. However, it seems plausible that our results generalize to the slightly more general case of semi-primal algebras which possess a _coupling_ in the sense of [26], essentially since Proposition 2.8 and Theorem 3.2 still apply to this case.
We will now sketch some more potential ways to follow up this research. In general, we hope to have set an example in exploring concepts in universal algebra through the lens of (mostly elementary) category theory.
For example, other variants of primality could be investigated in a similar manner.
**Definition 6.1**.: A finite algebra \(\mathbf{M}\) is called
1. _demi-semi-primal_ if it is quasi-primal and every internal isomorphism of \(\mathbf{M}\) can be extended to an automorphism of \(\mathbf{M}\) (see [53]).
2. _demi-primal_ if it is quasi-primal and has no proper subalgebras (see [53]).
3. _infra-primal_ if it is demi-semi-primal and every internal isomorphism is an automorphism on its domain (see [27]).
4. _hemi-primal_ if every operation on \(\mathbf{M}\) which preserves congruences is term-definable in \(\mathbf{M}\) (see [28]).
_Question 1_.: What is the categorical relationship between \(\mathsf{BA}\) and the variety generated by an algebra which is quasi-primal or which satisfies one of the properties of Definition 6.1? What about the relationship between distinct variations of primality to each other?
For quasi-primal algebras (and thus, in particular, for algebras satisfying (1), (2) or (3)), there is the duality theorem by Keimel-Werner [41] (which is also a natural duality [17]), possibly a good starting point for a discussion similar to the one presented here.
Hemi-primality seems to have received less attention. To the best of the authors' knowledge, no duality for varieties generated by hemi-primal algebras is known thus far.
_Question 2_.: Is it possible to obtain a duality for hemi-primal varieties, for example one which stems from a finite dual equivalence using methods similar to our proof of semi-primal duality in Section 3?
The Boolean power functor \(\mathfrak{P}_{\mathbf{M}}\colon\mathsf{BA}\to\mathbb{HSP}(\mathbf{M})\) was defined for an arbitrary finite algebra \(\mathbf{M}\). In the light of our results from Section 4, the following question arises.
_Question 3_.: Under which circumstances does the functor \(\mathfrak{P}_{\mathbf{M}}\) have a left-adjoint? Which information about \(\mathbf{M}\) can be retrieved from properties of the functors of the form \(\mathfrak{P}_{\mathbf{S}}\) with \(\mathbf{S}\leq\mathbf{M}\)?
If we consider this work as not only comparing varieties but _comparing dualities_, another range of questions appears.
_Question 4_.: What is the category theoretical relationship between different dual equivalences? For example, one could consider _Priestley duality_[52] or _Esakia duality_[23].
Lastly, another category theoretical approach to universal algebra, which has not been discussed in this paper, is given by Lawvere theories. For example, Hu's theorem has been analyzed from this angle in [51]. Of course, one could also try to find out more about other variants of primality in this context.
_Question 5_.: How can semi-primality and other variants of primality be expressed in terms of Lawvere theories?
## Appendix A Some semi-primal \(\mathsf{FL}_{ew}\)-algebras
Here we go into more detail in some claims made in Subsection 2.3.2. We provide examples of semi-primal \(\mathsf{FL}_{ew}\)-algebras, both chain-based and non chain-based, including the proof of semi-primality for each one of them. All of the examples and their labels are taken from the list [31] by Galatos and Jipsen. For simplicity we only discuss \(\mathsf{FL}_{ew}\)-algebras without any idempotent elements other than \(0\) and \(1\). Due to Corollary 2.16 they are all quasi-primal. To prove semi-primality, by Proposition 2.2, it suffices to describe all subalgebras and argue why there cannot be any non-trivial isomorphisms between them.
We begin with the quasi-primal \(\mathsf{FL}_{ew}\)-chains of five elements \(R_{1,17}^{5,1}\) to \(R_{1,22}^{5,1}\) in [31, p.2, row 2] depicted in Figure 7.

Figure 7. The quasi-primal \(\mathsf{FL}_{ew}\)-chains of order five.
_Claim 1_.: Except for the first one, all algebras depicted in Figure 7 are semi-primal.
Proof.: \(R_{1,17}^{5,1}\) is not semi-primal because it has isomorphic subalgebras \(\{0,1,a,c\}\) and \(\{0,1,a,d\}\).
In the following we show why the other ones are semi-primal by describing the subalgebras other than the obvious ones \(\{0,1\}\) and \(\{0,1,a,b,c\}\). Since isomorphisms need to be order-preserving, it suffices to note that there are never two subalgebras of the same size in the examples below.
\(R_{1,18}^{5,1}\): There are no other subalgebras since \(\{\neg a,a^{2}\}=\{b,c\}\subseteq\langle a\rangle\) and \(\neg b=\neg c=a\), thus \(a\in\langle b\rangle\) and \(a\in\langle c\rangle\).
\(R_{1,19}^{5,1}\): There is the subalgebra \(\langle a\rangle=\langle b\rangle=\{0,1,a,b\}\) since \(a\to b=a\), \(\neg a=b\) and \(\neg b=a\). Since \(a=\neg c\) we have \(a\in\langle c\rangle\), so \(c\) generates the entire algebra.
\(R_{1,20}^{5,1}\): There are two subalgebras of different sizes, \(\langle a\rangle=\langle c\rangle=\{0,1,a,c\}\) (since \(\neg a=c,\neg c=a\) and \(a\to c=a\)) and \(\langle b\rangle=\{0,1,b\}\) (since \(\neg b=b\to b=b\)).
\(R_{1,21}^{5,1}\): Note that this algebra corresponds to the Łukasiewicz chain \(\mathbf{L}_{4}\). As expected, there is the subalgebra \(\langle b\rangle=\{0,1,b\}\), while \(b\in\langle a\rangle\cap\langle c\rangle\) since \(a=\neg c,c=\neg a\) and \(b=a^{2}\).
\(R_{1,22}^{5,1}\): There is the subalgebra \(\langle a\rangle=\langle c\rangle=\{0,1,a,c\}\) (since \(\neg a=c\), \(\neg c=a\) and \(a\to c=a\)). Since \(\neg b=c\) and \(\neg c=a\) we find that \(b\) generates the entire algebra.
To also provide non chain-based examples, we examine the \(\mathsf{FL}_{ew}\)-algebras \(R_{1,11}^{6,2}\) ([31, p.18, row 4]) and \(R_{1,9}^{6,3}\) ([31, p.20, row 1]) depicted in Figure 8.
_Claim 2_.: The two \(\mathsf{FL}_{ew}\)-algebras depicted in Figure 8 are semi-primal.
Proof.: \(R_{1,11}^{6,2}\): The only possible candidate for an automorphism of this algebra is the bijection \(f\) exchanging \(c\) and \(d\) (since it needs to be order-preserving). This map, however, is not a homomorphism, as witnessed by the fact that \(f(a^{2})=f(c)=d\) while \(f(a)^{2}=a^{2}=c\). The only subalgebra other than \(\{0,1\}\) and the whole algebra is \(\langle a\rangle=\{0,1,a,b,c\}\), since we have \(\neg a=b\), \(a^{2}=c\), \(\neg c=a\), \(a\to b=a\), \(a\to c=a\) and \(b\to c=a\). Since this subalgebra is a chain, it does not have any non-trivial isomorphisms. Since \(\neg d=a\) we know that \(d\) generates the entire algebra, so there are no more subalgebras to consider.
\(R_{1,9}^{6,3}\): Again, there is only one possible candidate for an automorphism of this algebra, namely the bijection \(g\) exchanging \(b\) and \(c\). This is not a homomorphism because \(g(b^{2})=g(0)=0\) while \(g(b)^{2}=c^{2}=d\). The only subalgebra other than \(\{0,1\}\) and the whole algebra is \(\langle a\rangle=\{0,1,a,b,d\}\), since \(\neg a=b\), \(\neg b=\neg d=a\) and \(a\to b=a\to d=b\to d=a\). This subalgebra has no non-trivial isomorphisms because it is a chain. Since \(c^{2}=d\), the element \(c\) generates the entire algebra.
## Acknowledgments
The second author is supported by the Luxembourg National Research Fund under the project PRIDE17/12246620/GPS.
While the previous approaches have been successful in dealing with the volatility in energy systems and markets, the pricing methods available are still mostly deterministic [16, 17]. One of the more popular schemes is uniform marginal-cost pricing or "IP pricing", which may result in losses for some generators [18, 19]. Therefore, side-payments or uplifts are provided to these generators to make them whole, which may modify their incentives. Alternative mechanisms have been proposed, including removing the negative uplifts, such that no generator incurs a loss, and raising the commodity price above marginal cost to reduce uplifts [20, 21]. One principled version of that is the "convex hull" pricing scheme, which minimizes the uplifts by langragifying the energy balance constraint and maximizing over its dual price [22, 23]. Possibly only [24] and [25] discuss pricing in robust UC. However, they do not use ARO and [25] does not offer extended theoretical results. For a detailed review of the pricing methods, see [26].
Current pricing approaches used by Independent System Operators (ISOs) (marginal pricing) and proposed in the literature (convex hull pricing) have two major disadvantages: a) they are deterministic in nature, that is they do not adapt to the load and capacity uncertainty, and b) require uplift payments to the generators that are typically determined by ad hoc procedures and create inefficiencies that motivate self-scheduling. In contrast, we extend pay-as-bid and uniform pricing mechanisms to propose the first adaptive pricing method in electricity markets that adapts to the load and capacity uncertainty, eliminates post-market uplifts and deters self-scheduling, addressing both disadvantages.
### _Contributions_
In this work, we propose the first pricing method for ARO in energy markets with load and capacity uncertainty by offering contracts contingent to the uncertainty in the data of the day-ahead problem. We summarize the main contributions:
* We introduce adaptive pay-as-bid and marginal pricing contracts for UC problems featuring load and capacity uncertainty. We specify fully the day-ahead payments and provide an upper-bound on the next-day or intra-day payments based on the day-ahead commitments. The payments are functions of the load and capacity uncertainty.
* We show that the adaptive day-ahead pay-as-bid scheme is equivalent to the adaptive uniform pricing scheme. Also, if the worst-case uncertainty is realized in the next day, the forecasted intra-day payments are optimal and the generators are indifferent between the market schedule and their optimal schedule, eliminating self-scheduling.
* We display the adaptive pricing in detail on the adaptive robust version of the Scarf example, see [18], and on realistic UC problems featuring ramp constraints. We compare it to deterministic marginal and convex hull pricing and find that adaptive pricing eliminates the corrections that are necessary in deterministic day-ahead problems.
The paper is organized as follows: in Section II, we introduce the ARO formulations for UC. In Section III, we present the adaptive pricing and its theoretical properties on UC with load and capacity uncertainty. In Section IV and in Section V we provide adaptive pricing on examples with load and capacity uncertainty. In Section VI, we demonstrate our method on a realistic example with ramp constraints and compare it to convex hull pricing. Finally, Section VII summarizes our conclusions.
The notation that we use is as follows: we use bold faced characters such as \(\mathbf{p}\) to represent vectors and capital letters such as \(\mathbf{V}\) to represent matrices. Also, \(\mathbf{p}^{T}\) denotes the transpose of the column vector \(\mathbf{p}\) and \(\mathbf{e}_{i}\) is the \(i\)th unit vector. We define \([I]=\{1,\ldots,I\}\). The \(\mathcal{L}_{1}\) norm of a vector refers to the norm \(\|\mathbf{x}\|_{1}=\sum_{i=1}^{I}|x_{i}|\), the \(\mathcal{L}_{2}\) norm refers to \(\|\mathbf{x}\|_{2}=\sqrt{\sum_{i=1}^{I}x_{i}^{2}}\) and \(\mathcal{L}_{\infty}\) refers to \(\|\mathbf{x}\|_{\infty}=\max_{i\in[I]}|x_{i}|\).
## II Adaptive Robust UC
In this section, we introduce the ARO formulation for an energy market robust to load and capacity uncertainty. The formulation is based on the popular Scarf example, see [18], but considers uncertainty in the load and capacity parameters. Reserves, transmission and ramp constraints can be added using linear constraints with small changes.
Consider the following example which tries to minimize the total cost of meeting a fixed level of demand.
\[\min_{x,p} \sum_{i=1}^{I}F_{i}x_{i}+C_{i}p_{i}\] s.t. \[\sum_{i=1}^{I}p_{i}=\sum_{j=1}^{J}\bar{q}_{j},\] \[p_{i}\leq p_{i}^{\max}x_{i},\quad\forall i\in[I],\] \[p_{i}\geq 0,\quad\forall i\in[I],\] \[x_{i}\in\{0,1\},\quad\forall i\in[I].\]
We have \(I\) generators, each with a turn-on cost of \(F_{i}\) and a unit production cost of \(C_{i}\). The expected maximum production level of generator \(i\) is \(\bar{p}_{i}^{\max}\). We also have \(J\) demand nodes, with expected demand or load \(\bar{q}_{j}\) at each node \(j\). The binary variable \(x_{i}\) equals \(1\) if generator \(i\) is turned on and \(0\) otherwise. Note that the variable \(p_{i}\) represents the dispatch of generator \(i\).
We can expand on this model by considering uncertainty in the load \(\mathbf{q}\) and in the capacity \(\mathbf{p}^{\max}\). In ARO, we describe the uncertainty in the parameters with an uncertainty set that contains all scenarios against which we want to be robust. Essentially, the constraints in our problem should be satisfied for all possible realizations of \(\mathbf{q}\) and \(\mathbf{p}^{\max}\) in their uncertainty sets [27, 28, 29]. We consider uncertainty sets where the norm of the residuals from the expected load \(\mathbf{d}=\mathbf{q}-\bar{\mathbf{q}}\) is at most \(\Gamma_{q}\) and where the norm of residuals \(\mathbf{r}=\mathbf{p}^{\max}-\bar{\mathbf{p}}^{\max}\) from the expected capacity is at most \(\Delta_{p}\).
\[\mathcal{D}=\{\|\mathbf{d}\|_{\ell}\leq\Gamma_{q}\},\ \ \mathcal{U}=\{\|\mathbf{r}\|_{\ell}\leq \Delta_{p}\}.\]
with \(\ell\in\{1,2,\infty\}\) corresponding to the budget, ellipsoidal and box uncertainty sets. The values of \(\Gamma_{q}\) and \(\Delta_{p}\) control the conservativeness of our formulation [30].
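As an illustration of how these sets enter the formulas below, the following sketch evaluates the margin term \(\gamma\,\|v\|_{\ell^{*}}\) that the robust counterparts attach to each constraint, where \(\ell^{*}\) is the dual norm of \(\ell\) (so \(\ell_{1}\leftrightarrow\ell_{\infty}\) and \(\ell_{2}\leftrightarrow\ell_{2}\)). The numerical row \(\mathbf{V}_{i}\) is taken from the example of Section IV; everything else is generic.

```python
import numpy as np

# Dual-norm pairs used in the robust counterparts below.
DUAL = {1: np.inf, 2: 2, np.inf: 1}

def robust_margin(v, gamma, ell):
    """gamma * ||v||_{ell*} = max of v @ d over the set ||d||_ell <= gamma."""
    return gamma * np.linalg.norm(v, ord=DUAL[ell])

# Row of the recourse matrix V taken from the example of Section IV.
V_i = np.array([0.175, 0.175, 0.175, 0.175, 0.175])
print(robust_margin(V_i, gamma=20, ell=1))   # budget set: 20 * max_j |V_ij| = 3.5
```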
In ARO, the second-stage decisions \(\mathbf{p}\) are functions of both uncertain parameters \(\mathbf{d},\mathbf{r}\) and the problem that minimizes the worst-case commitment and dispatch cost is
\[\min_{\mathbf{x},\mathbf{p}}\max_{\mathbf{d}\in\mathcal{D},\mathbf{r}\in\mathcal{U}} \sum_{i=1}^{I}F_{i}x_{i}+C_{i}p_{i}(\mathbf{d},\mathbf{r})\] s.t. \[\sum_{i=1}^{I}p_{i}(\mathbf{d},\mathbf{r})\geq\sum_{j=1}^{J}(d_{j}+\bar{q}_{j}),\quad\forall\mathbf{d}\in\mathcal{D},\] \[p_{i}(\mathbf{d},\mathbf{r})\leq(r_{i}+\bar{p}_{i}^{\max})x_{i},\ \forall i\in[I],\ \ \mathbf{r}\in\mathcal{U},\] \[p_{i}(\mathbf{d},\mathbf{r})\geq 0,\quad\forall i\in[I],\] \[x_{i}\in\{0,1\},\quad\forall i\in[I].\]
The optimization variable \(p_{i}=p_{i}(\mathbf{d},\mathbf{r})\), called a decision rule, is in fact a vector function [29]. In this paper, to make the problem tractable, we restrict \(p_{i}(\mathbf{d},\mathbf{r})\) to linear functions or linear decision rules (LDR). Such a decision rule may not be optimal, because of the restriction to a certain class, but LDR have shown very good performance in practice and are optimal in many settings [29, 31, 32]. Alternatively, we can use decomposition schemes to learn the decision rule implicitly [33, 10].
Using LDR, the second-stage decisions \(\mathbf{p}\) are linear functions of the uncertain parameters \(\mathbf{d},\mathbf{r}\). So, we have an \(I\)-dimensional vector \(\mathbf{u}\), an \(I\times J\) matrix \(\mathbf{V}\), an \(I\times I\) matrix \(\mathbf{Z}\) and \(\mathbf{p}(\mathbf{d},\mathbf{r})=\mathbf{u}+\mathbf{V}\mathbf{d}+\mathbf{Z}\mathbf{r}\) or, for each \(i\), \(p_{i}(\mathbf{d},\mathbf{r})=u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}+\sum_{k=1}^{I}Z_{ik}r_{k}\). The dispatch \(\mathbf{p}\) contains a
non-adaptive part \(\mathbf{u}\) and an adaptive part that depends on \(\mathbf{d}\) and \(\mathbf{r}\). We will use these terms for the rest of the paper. If we set \(\mathbf{V}=\mathbf{0}\) and \(\mathbf{Z}=\mathbf{0}\), the problem becomes an RO problem. Using LDR, the previous formulation is equivalent to
\[\min_{\mathbf{x},\mathbf{u},\mathbf{V},\mathbf{Z}} \max_{\mathbf{d}\in\mathcal{D},\mathbf{r}\in\mathcal{U}} \sum_{i=1}^{I}F_{i}x_{i}+C_{i}(u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}+ \sum_{k=1}^{I}Z_{ik}r_{k})\] (1) s.t. \[\sum_{i=1}^{I}(u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}+\sum_{k=1}^{I}Z_{ik }r_{k})\geq\sum_{j=1}^{J}(d_{j}+\bar{q}_{j}),\] \[\sum_{i=1}^{I}u_{i}=\sum_{j=1}^{J}\bar{q}_{j},\] \[u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}+\sum_{k=1}^{I}Z_{ik}r_{k}\leq(r_ {i}+\bar{p}_{i}^{\max})x_{i},\quad\forall i,\] \[u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}+\sum_{k=1}^{I}Z_{ik}r_{k}\geq 0,\quad\forall i,\] \[x_{i}\in\{0,1\},\quad\forall i,\]
where constraints involving \(\mathbf{d},\mathbf{r}\) hold for all \(\mathbf{d}\in\mathcal{D}\) and \(\mathbf{r}\in\mathcal{U}\).
Let the optimal value of the objective function of Problem (1) be \(\xi^{*}\). The solution \(\mathbf{x}^{*}\) and \(\mathbf{p}^{*}(\mathbf{d},\mathbf{r})=\mathbf{u}^{*}+\mathbf{V}^{*}\mathbf{d}+\mathbf{Z}^{*}\mathbf{r}\) satisfies the constraints for any realization of \(\mathbf{d}\in\mathcal{D}\) and of \(\mathbf{r}\in\mathcal{U}\). For example, the second constraint ensures that the total production will be at least the total demand for all scenarios in the uncertainty sets. In addition, to make pricing more intuitive, we have included the third constraint, which ensures that the total non-adaptive dispatch \(\sum_{i=1}^{I}u_{i}\) is equal to the expected load. The previous formulation is equivalent to:
\[\min_{\mathbf{x},\mathbf{u},\mathbf{V},\mathbf{Z},\eta} \sum_{i=1}^{I}F_{i}x_{i}+\eta\] (2) s.t. \[\sum_{i=1}^{I}C_{i}u_{i}+\max_{\mathbf{d}\in\mathcal{D}}\sum_{i=1}^{I}C_{i}\sum_{j=1}^{J}V_{ij}d_{j}+\max_{\mathbf{r}\in\mathcal{U}}\sum_{i=1}^{I}C_{i}\sum_{k=1}^{I}Z_{ik}r_{k}\leq\eta,\] \[\max_{\mathbf{d}\in\mathcal{D}}\sum_{j=1}^{J}\Big(1-\sum_{i=1}^{I}V_{ij}\Big)d_{j}+\max_{\mathbf{r}\in\mathcal{U}}\sum_{k=1}^{I}\Big(-\sum_{i=1}^{I}Z_{ik}\Big)r_{k}\leq 0,\] \[\sum_{i=1}^{I}u_{i}=\sum_{j=1}^{J}\bar{q}_{j},\] \[u_{i}+\max_{\mathbf{d}\in\mathcal{D}}\sum_{j=1}^{J}V_{ij}d_{j}+\max_{\mathbf{r}\in\mathcal{U}}\Big\{-x_{i}r_{i}+\sum_{k=1}^{I}Z_{ik}r_{k}\Big\}\leq\bar{p}_{i}^{\max}x_{i},\quad\forall i,\] \[-u_{i}+\max_{\mathbf{d}\in\mathcal{D}}\sum_{j=1}^{J}(-V_{ij})d_{j}+\max_{\mathbf{r}\in\mathcal{U}}\sum_{k=1}^{I}(-Z_{ik})r_{k}\leq 0,\quad\forall i,\] \[x_{i}\in\{0,1\},\quad\forall i,\]
or, by using the robust counterpart with \(\mathcal{D}=\{\|\mathbf{d}\|_{\ell}\leq\Gamma_{q}\}\) and \(\mathcal{U}=\{\|\mathbf{r}\|_{\ell}\leq\Delta_{p}\}\), where \(\ell^{*}\) is the dual norm of \(\ell\), and \(\mathbf{V}_{i}\) is the \(i\)-th row of \(\mathbf{V}\) and \(\mathbf{Z}_{i}\) is the \(i\)-th row of \(\mathbf{Z}\),
\[\min_{\mathbf{x},\mathbf{u},\mathbf{V},\mathbf{Z},\eta} \sum_{i=1}^{I}F_{i}x_{i}+\eta\] s.t. \[\sum_{i=1}^{I}C_{i}u_{i}+\Gamma_{q}\Big\|\sum_{i=1}^{I}C_{i}\mathbf{V}_{i}\Big\|_{\ell^{*}}+\Delta_{p}\Big\|\sum_{i=1}^{I}C_{i}\mathbf{Z}_{i}\Big\|_{\ell^{*}}\leq\eta,\] \[\Gamma_{q}\Big\|\mathbf{1}-\sum_{i=1}^{I}\mathbf{V}_{i}\Big\|_{\ell^{*}}+\Delta_{p}\Big\|-\sum_{i=1}^{I}\mathbf{Z}_{i}\Big\|_{\ell^{*}}\leq 0,\] \[\sum_{i=1}^{I}u_{i}=\sum_{j=1}^{J}\bar{q}_{j},\] \[u_{i}+\Gamma_{q}\|\mathbf{V}_{i}\|_{\ell^{*}}+\Delta_{p}\|x_{i}\mathbf{e}_{i}-\mathbf{Z}_{i}\|_{\ell^{*}}\leq p_{i}^{\max}x_{i},\quad\forall i,\] \[-u_{i}+\Gamma_{q}\|\mathbf{V}_{i}\|_{\ell^{*}}+\Delta_{p}\|\mathbf{Z}_{i}\|_{\ell^{*}}\leq 0,\quad\forall i,\] \[x_{i}\in\{0,1\},\quad\forall i,\]
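To make the robust counterpart concrete, here is a minimal cvxpy sketch of a load-uncertainty-only instance (\(\Delta_{p}=0\)) with a budget (\(\ell_{1}\)) set, so every dual-norm margin becomes an \(\ell_{\infty}\) norm. The cost and capacity numbers are placeholders rather than data from this paper, and a mixed-integer-capable solver (e.g. GLPK_MI, CBC or SCIP) must be available for `solve()`.

```python
import cvxpy as cp
import numpy as np

# Placeholder data (not the paper's): 2 generators, 2 demand nodes.
F = np.array([53.0, 30.0])       # fixed costs
C = np.array([3.0, 2.0])         # marginal costs
pmax = np.array([16.0, 7.0])     # capacities
qbar = np.array([8.0, 8.0])      # expected loads
Gamma_q = 4.0                    # budget of load uncertainty (ell_1 ball)
G, J = len(F), len(qbar)

x = cp.Variable(G, boolean=True)
u = cp.Variable(G)
V = cp.Variable((G, J))
eta = cp.Variable()

# Dual of ell_1 is ell_inf, so every margin is an infinity norm.
cons = [C @ u + Gamma_q * cp.norm(V.T @ C, 'inf') <= eta,
        Gamma_q * cp.norm(np.ones(J) - cp.sum(V, axis=0), 'inf') <= 0,
        cp.sum(u) == qbar.sum()]
for i in range(G):
    cons += [u[i] + Gamma_q * cp.norm(V[i], 'inf') <= pmax[i] * x[i],
             u[i] - Gamma_q * cp.norm(V[i], 'inf') >= 0]

prob = cp.Problem(cp.Minimize(F @ x + eta), cons)
prob.solve()   # requires a mixed-integer-capable solver
print(prob.value, x.value, u.value)
```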
We are going to work with the following version of the previous formulation, because we want to use the dual problem as well. Following the example of [18], we also set the binary variables to their optimal values \(\mathbf{x}^{*}\).
\[\min_{\mathbf{x},\mathbf{u},\mathbf{V},\mathbf{Z},\eta} \sum_{i=1}^{I}F_{i}x_{i}+\eta\] s.t. \[\sum_{i=1}^{I}C_{i}u_{i}+\Gamma_{q}\|\mathbf{\omega}\|_{\ell^{*}}+\Delta_{p}\|\bar{\mathbf{\omega}}\|_{\ell^{*}}\leq\eta, \qquad\nu\] \[\mathbf{\omega}=\sum_{i=1}^{I}C_{i}\mathbf{V}_{i},\quad\bar{\mathbf{\omega}}=\sum_{i=1}^{I}C_{i}\mathbf{Z}_{i}, \qquad\mathbf{\alpha},\bar{\mathbf{\alpha}}\] \[\Gamma_{q}\|\mathbf{\tau}\|_{\ell^{*}}+\Delta_{p}\|\bar{\mathbf{\tau}}\|_{\ell^{*}}\leq 0, \qquad\lambda\] \[\mathbf{\tau}=\mathbf{1}-\sum_{i=1}^{I}\mathbf{V}_{i},\quad\bar{\mathbf{\tau}}=-\sum_{i=1}^{I}\mathbf{Z}_{i}, \qquad\mathbf{\theta},\bar{\mathbf{\theta}}\] \[\sum_{i=1}^{I}u_{i}=\sum_{j=1}^{J}\bar{q}_{j}, \qquad\mu\] \[u_{i}+\Gamma_{q}\|\mathbf{\psi}_{i}\|_{\ell^{*}}+\Delta_{p}\|\bar{\mathbf{\psi}}_{i}\|_{\ell^{*}}\leq p_{i}^{\max}x_{i},\quad\forall i, \qquad\sigma_{i}\] \[\mathbf{\psi}_{i}=\mathbf{V}_{i},\quad\bar{\mathbf{\psi}}_{i}=x_{i}\mathbf{e}_{i}-\mathbf{Z}_{i}, \qquad\mathbf{\beta}_{i},\bar{\mathbf{\beta}}_{i}\] \[-u_{i}+\Gamma_{q}\|\mathbf{\phi}_{i}\|_{\ell^{*}}+\Delta_{p}\|\bar{\mathbf{\phi}}_{i}\|_{\ell^{*}}\leq 0,\quad\forall i, \qquad\zeta_{i}\] \[\mathbf{\phi}_{i}=\mathbf{V}_{i},\quad\bar{\mathbf{\phi}}_{i}=\mathbf{Z}_{i}, \qquad\mathbf{\gamma}_{i},\bar{\mathbf{\gamma}}_{i}\] \[x_{i}=x_{i}^{*},\quad\forall i, \qquad\rho_{i}\] \[x_{i}\geq 0,\quad\forall i,\]
where the right column contains the dual variables for the corresponding constraints. The Lagrangian of the previous problem is
\[\mathcal{L}=\mu\sum_{j=1}^{J}\bar{q}_{j}+\sum_{j=1}^{J}\theta_{j}+\sum_{i=1}^{I}x_{i}^{*}\rho_{i}\] (3) \[+(1-\nu)\eta\] \[+\sum_{i=1}^{I}(F_{i}-\sigma_{i}p_{i}^{\max}-\rho_{i}+\bar{\beta}_{ii})x_{i}\] \[+\sum_{i=1}^{I}(\nu C_{i}-\mu+\sigma_{i}-\zeta_{i})u_{i}\] \[+\sum_{i=1}^{I}(C_{i}\mathbf{\alpha}-\mathbf{\theta}+\mathbf{\beta}_{i}+\mathbf{\gamma}_{i})^{T}\mathbf{V}_{i}\] \[+\sum_{i=1}^{I}(C_{i}\bar{\mathbf{\alpha}}-\bar{\mathbf{\theta}}-\bar{\mathbf{\beta}}_{i}+\bar{\mathbf{\gamma}}_{i})^{T}\mathbf{Z}_{i}\] \[-(\mathbf{\alpha}^{T}\mathbf{\omega}-\nu\Gamma_{q}\|\mathbf{\omega}\|_{\ell^{*}})-(\bar{\mathbf{\alpha}}^{T}\bar{\mathbf{\omega}}-\nu\Delta_{p}\|\bar{\mathbf{\omega}}\|_{\ell^{*}})\] \[-(\mathbf{\theta}^{T}\mathbf{\tau}-\lambda\Gamma_{q}\|\mathbf{\tau}\|_{\ell^{*}})-(\bar{\mathbf{\theta}}^{T}\bar{\mathbf{\tau}}-\lambda\Delta_{p}\|\bar{\mathbf{\tau}}\|_{\ell^{*}})\] \[-\sum_{i=1}^{I}(\mathbf{\beta}_{i}^{T}\mathbf{\psi}_{i}-\sigma_{i}\Gamma_{q}\|\mathbf{\psi}_{i}\|_{\ell^{*}})-\sum_{i=1}^{I}(\bar{\mathbf{\beta}}_{i}^{T}\bar{\mathbf{\psi}}_{i}-\sigma_{i}\Delta_{p}\|\bar{\mathbf{\psi}}_{i}\|_{\ell^{*}})\] \[-\sum_{i=1}^{I}(\mathbf{\gamma}_{i}^{T}\mathbf{\phi}_{i}-\zeta_{i}\Gamma_{q}\|\mathbf{\phi}_{i}\|_{\ell^{*}})-\sum_{i=1}^{I}(\bar{\mathbf{\gamma}}_{i}^{T}\bar{\mathbf{\phi}}_{i}-\zeta_{i}\Delta_{p}\|\bar{\mathbf{\phi}}_{i}\|_{\ell^{*}}).\]
The corresponding dual problem is
\[\begin{array}{llll}\max&\mu\sum_{j=1}^{J}\bar{q}_{j}+\sum_{j=1}^{J}\theta_{j}+\sum_{i=1}^{I}x_{i}^{*}\rho_{i},\\ \text{s.t.}&F_{i}\geq\rho_{i}+\sigma_{i}p_{i}^{\max}-\bar{\beta}_{ii},\quad\forall i,&x_{i}\\ &\nu=1,&\eta\\ &\nu C_{i}=\mu-\sigma_{i}+\zeta_{i},\quad\forall i,&u_{i}\\ &C_{i}\alpha_{j}=\theta_{j}-\beta_{ij}-\gamma_{ij},\quad\forall i,j,&V_{ij}\\ &C_{i}\bar{\alpha}_{k}=\bar{\theta}_{k}+\bar{\beta}_{ik}-\bar{\gamma}_{ik},&\forall i,k,&Z_{ik}\\ &\|\mathbf{\alpha}\|_{\ell}\leq\Gamma_{q}\nu,\quad\|\bar{\mathbf{\alpha}}\|_{\ell}\leq\Delta_{p}\nu,\\ &\|\mathbf{\theta}\|_{\ell}\leq\Gamma_{q}\lambda,\quad\|\bar{\mathbf{\theta}}\|_{\ell}\leq\Delta_{p}\lambda,\\ &\|\mathbf{\beta}_{i}\|_{\ell}\leq\Gamma_{q}\sigma_{i},\quad\|\bar{\mathbf{\beta}}_{i}\|_{\ell}\leq\Delta_{p}\sigma_{i},&\forall i,\\ &\|\mathbf{\gamma}_{i}\|_{\ell}\leq\Gamma_{q}\zeta_{i},\quad\|\bar{\mathbf{\gamma}}_{i}\|_{\ell}\leq\Delta_{p}\zeta_{i},&\forall i,\\ &\lambda,\sigma_{i},\zeta_{i}\geq 0,\quad\forall i.\end{array} \tag{7}\]
We use \((\cdot)^{*}\) to denote the optimal solutions to Problem (2) and Problem (7). We will use this notation for the rest of this work.
## III Adaptive Pricing
In this section, we introduce adaptive pricing and provide its theoretical properties.
We suggest that the day-ahead payments include only the commitment and non-adaptive dispatch costs. So, the day-ahead payments reflect the cost of meeting the expected load while planning for the worst-case scenario. The adaptive part of the dispatch serves as an upper bound on the intra-day payments, which take place the following day based on the economic dispatch problems. This upper bound is also equal to the optimal intra-day payments, when the worst-case uncertainty is realized.
_Pay-as-bid pricing._ The day-ahead payments to generator \(i\) are based on their bids \(F_{i},C_{i}\) and the non-adaptive part of the dispatch. Specifically, each generator is paid
\[F_{i}x_{i}^{*}+C_{i}u_{i}^{*}.\]
_Marginal pricing._ The price for the non-adaptive dispatch is \(\mu\) and each generator is paid some uplift in a discriminatory way. So, each generator \(i\) is paid
\[\begin{array}{llll}&\mu^{*}u_{i}^{*}+(\rho_{i}^{*}-\bar{\beta}_{ii}^{*})x_{i}^{*}\\ &+\sigma_{i}^{*}\ (\Gamma_{q}\|\mathbf{V}_{i}^{*}\|_{\ell^{*}}+\Delta_{p}\|x_{i}^{*}\mathbf{e}_{i}-\mathbf{Z}_{i}^{*}\|_{\ell^{*}})\\ &+\zeta_{i}^{*}\ (\Gamma_{q}\|\mathbf{V}_{i}^{*}\|_{\ell^{*}}+\Delta_{p}\|\mathbf{Z}_{i}^{*}\|_{\ell^{*}}).\end{array}\]
In contrast to deterministic pricing, we also price the uncertainty by introducing payments based on sizes \(\Gamma_{q}\) and \(\Delta_{p}\) of the uncertainty sets. Note that if we set \(\Delta_{p}=0\) in the uncertainty set \(\mathcal{U}\), we do not consider the capacity uncertainty. Table I summarizes the payments when there is only load uncertainty or \(\Delta_{p}=0\). Similarly, if we set \(\Gamma_{q}=0\), we do not consider uncertainty in the load.
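For reference, here is a small helper that evaluates the uniform-price payment above from the optimal primal/dual quantities of Problems (2) and (7); all arguments are assumed to be those optimal values, and the function is a direct transcription of the formula rather than part of any solver.

```python
import numpy as np

def dual_norm(v, ell):
    return np.linalg.norm(v, ord={1: np.inf, 2: 2, np.inf: 1}[ell])

def uniform_price_payment(i, x, u, V, Z, mu, rho, beta_bar_diag,
                          sigma, zeta, Gamma_q, Delta_p, ell=1):
    """Day-ahead uniform-price payment to generator i, transcribing
    the formula above; inputs are optimal values of (2) and (7)."""
    e_i = np.eye(Z.shape[0])[i]
    return (mu * u[i] + (rho[i] - beta_bar_diag[i]) * x[i]
            + sigma[i] * (Gamma_q * dual_norm(V[i], ell)
                          + Delta_p * dual_norm(x[i] * e_i - Z[i], ell))
            + zeta[i] * (Gamma_q * dual_norm(V[i], ell)
                         + Delta_p * dual_norm(Z[i], ell)))
```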
### _Pay-As-Bid and Uniform Pricing Equivalence_
One important theoretical property of our approach is that the day-ahead pay-as-bid and marginal pricing payments are the same. This result is presented more formally in the following theorem.
**Theorem 1**.: _The pay-as-bid payment \(F_{i}x_{i}^{*}+C_{i}u_{i}^{*}\) and the uniform price payment to each generator \(i\)_
\[\begin{array}{llll}&\mu^{*}u_{i}^{*}+(\rho_{i}^{*}-\bar{\beta}_{ii}^{*})x_ {i}^{*}\\ &+\sigma_{i}^{*}\ (\Gamma_{q}\|V_{i}^{*}\|_{\ell^{*}}+\Delta_{p}\|x_{i}^{*}\mathbf{e}_{i}- \mathbf{Z}_{i}^{*}\|_{\ell^{*}})\\ &+\zeta_{i}^{*}\ (\Gamma_{q}\|\mathbf{V}_{i}^{*}\|_{\ell^{*}}+\Delta_{p}\|\mathbf{Z}_{i}^{*} \|_{\ell^{*}})\end{array}\]
_are equal._
Proof.: Using complementary slackness between Problems (2) and (7), for all \(i\in[I]\),
\[F_{i}x_{i}^{*}=\rho_{i}^{*}x_{i}^{*}+\sigma_{i}^{*}p_{i}^{\max}x_{i}^{*}-\bar{ \beta}_{ii}^{*}x_{i}^{*},\]
\[C_{i}u_{i}^{*}=\mu^{*}u_{i}^{*}-\sigma_{i}^{*}u_{i}^{*}+\zeta_{i}^{*}u_{i}^{*},\]
\[\zeta_{i}^{*}u_{i}^{*}-\zeta_{i}^{*}\Gamma_{q}\|\mathbf{\phi}_{i}^{*}\|_{\ell^{*}} -\zeta_{i}^{*}\Delta_{p}\|\mathbf{\bar{\phi}}_{i}^{*}\|_{\ell^{*}}=0,\]
\[\sigma_{i}^{*}p_{i}^{\max}x_{i}^{*}-\sigma_{i}^{*}u_{i}^{*}-\sigma_{i}^{*} \Gamma_{q}\|\mathbf{\psi}_{i}^{*}\|_{\ell^{*}}-\sigma_{i}^{*}\Delta_{p}\|\mathbf{\bar{ \psi}}_{i}^{*}\|_{\ell^{*}}=0.\]
So, for all \(i\in[I]\),
\[\begin{array}{llll}&F_{i}x_{i}^{*}+C_{i}u_{i}^{*}\\ &=\rho_{i}^{*}x_{i}^{*}+\sigma_{i}^{*}p_{i}^{\max}x_{i}^{*}-\bar{\beta}_{ii}^{*} x_{i}^{*}+\mu^{*}u_{i}^{*}-\sigma_{i}^{*}u_{i}^{*}+\zeta_{i}^{*}u_{i}^{*}\\ &=(\rho_{i}^{*}-\bar{\beta}_{ii}^{*})x_{i}^{*}+\mu u_{i}^{*}+\sigma_{i}^{*}(p_{i} ^{\max}x_{i}^{*}-u_{i}^{*})+\zeta_{i}^{*}u_{i}^{*}\\ &=(\rho_{i}^{*}-\bar{\beta}_{ii}^{*})x_{i}^{*}+\mu u_{i}^{*}\\ &+\sigma_{i}^{*}\Gamma_{q}\|\mathbf{\psi}_{i}^{*}\|_{\ell^{*}}+\sigma_{i}^{*}\Delta_{p} \|\mathbf{\bar{\psi}}_{i}^{*}\|_{\ell^{*}}+\zeta_{i}^{*}\Gamma_{q}\|\mathbf{\phi}_{i}^{*} \|_{\ell^{*}}+\zeta_{i}^{*}\Delta_{p}\|\mathbf{\bar{\phi}}_{i}^{*}\|_{\ell^{*}}\\ &=\mu^{*}u_{i}^{*}+(\rho_{i}^{*}-\bar{\beta}_{ii}^{*})x_{i}^{*}\\ &+\sigma_{i}^{*}\Gamma_{q}\|\mathbf{\psi}_{i}^{*}\|_{\ell^{*}}+\sigma_{i}^{*}\Delta_{p} \|x_{i}^{*}\mathbf{e}_{i}-\mathbf{Z}_{i}^{*}\|_{\ell^{*}}+\zeta_{i}^{*}\Gamma_{q}\|\mathbf{V}_{ i}^{*}\|_{\ell^{*}}+\zeta_{i}^{*}\Delta_{p}\|\mathbf{Z}_{i}^{*}\|_{\ell^{*}}. \end{array}\]
The last equality follows from the constraints \(\mathbf{\psi}_{i}=\mathbf{V}_{i}\), \(\mathbf{\phi}_{i}=\mathbf{V}_{i}\), \(\bar{\mathbf{\psi}}_{i}=x_{i}\mathbf{e}_{i}-\mathbf{Z}_{i}\) and \(\bar{\mathbf{\phi}}_{i}=\mathbf{Z}_{i}\) of Problem (2).
### _Total Worst-Case Cost Equivalence_
In the previous section, we considered only the day-ahead payments. In this section, we show that when the worst-case uncertainty is realized, the pay-as-bid and marginal pricing payments are the same. Let \(\mathbf{d}^{*}=\arg\max_{\mathbf{d}\in\mathcal{D}}\sum_{i=1}^{I}C_{i}\sum_{j=1}^{J}V_{ij}^{*}d_{j}\) and \(\mathbf{r}^{*}=\arg\max_{\mathbf{r}\in\mathcal{U}}\sum_{i=1}^{I}C_{i}\sum_{k=1}^{I}Z_{ik}^{*}r_{k}\) be the worst-case realizations of the residual load \(\mathbf{d}\) and the residual capacity \(\mathbf{r}\), respectively, for the objective function of Problem (1).
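For the budget (\(\ell_{1}\)) set, these maximizers have a closed form: the whole budget is placed on a coordinate of largest \(|c_{j}|\) (for the box set one would instead take \(\Gamma\,\mathrm{sign}(\mathbf{c})\)). A small sketch, with placeholder numbers:

```python
import numpy as np

def worst_case_budget(c, gamma):
    """argmax of c @ d over {||d||_1 <= gamma}: spend the whole budget
    on one coordinate with the largest |c_j|."""
    d = np.zeros_like(c, dtype=float)
    j = int(np.argmax(np.abs(c)))
    d[j] = gamma * np.sign(c[j])
    return d

# d* for the objective of Problem (1), with placeholder C and V:
C = np.array([3.0, 2.0])
V = np.array([[0.4, 0.6], [0.6, 0.4]])
print(worst_case_budget(V.T @ C, gamma=20.0))   # c_j = sum_i C_i V_ij
```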
**Theorem 2**.: _If the worst-case uncertainty \(\mathbf{d}^{*},\mathbf{r}^{*}\) is realized, then the total pay-as-bid payment to generator \(i\)_
\[f_{i}:=F_{i}x_{i}^{*}+C_{i}u_{i}^{*}+C_{i}\sum_{j=1}^{J}V_{ij}^{*}d_{j}^{*}+C_ {i}\sum_{k=1}^{I}Z_{ik}^{*}r_{k}^{*},\]
_and the uniform price payment to generator \(i\)_
\[g_{i}:=\rho_{i}^{*}x_{i}^{*}+\mu^{*}u_{i}^{*}+\sum_{j=1}^{J}\theta_{j}^{*}V_{ ij}^{*}+\sum_{k=1}^{I}\bar{\theta}_{k}^{*}Z_{ik}^{*}.\]
_are equal and satisfy \(\sum_{i=1}^{I}f_{i}=\sum_{i=1}^{I}g_{i}=\xi^{*}\), the optimal solution value of Problem (1)._
Proof.: By complementary slackness between Problems (2) and (7), for all \(i\in[I],j\in[J]\), we have
\[C_{i}\alpha_{j}^{*}V_{ij}^{*} =(\theta_{j}^{*}-\beta_{ij}^{*}-\gamma_{ij}^{*})V_{ij}^{*},\] \[C_{i}\bar{\alpha}_{k}^{*}Z_{ik}^{*} =(\bar{\theta}_{k}^{*}+\bar{\beta}_{ik}^{*}-\bar{\gamma}_{ik}^{*})Z_{ik}^{*},\] \[C_{i}\sum_{j=1}^{J}\alpha_{j}^{*}V_{ij}^{*} =\sum_{j=1}^{J}(\theta_{j}^{*}-\beta_{ij}^{*}-\gamma_{ij}^{*})V_{ij}^{*},\] \[C_{i}\sum_{k=1}^{I}\bar{\alpha}_{k}^{*}Z_{ik}^{*} =\sum_{k=1}^{I}(\bar{\theta}_{k}^{*}+\bar{\beta}_{ik}^{*}-\bar{\gamma}_{ik}^{*})Z_{ik}^{*}.\]
Also, using equations (5) and (6) and the constraints \(\mathbf{\psi}_{i}=\mathbf{V}_{i}\), \(\mathbf{\phi}_{i}=\mathbf{V}_{i}\), \(\bar{\mathbf{\psi}}_{i}=x_{i}\mathbf{e}_{i}-\mathbf{Z}_{i}\) and \(\bar{\mathbf{\phi}}_{i}=\mathbf{Z}_{i}\) of Problem (2), for all \(i\),
\[(\mathbf{\beta}_{i}^{*})^{T}\mathbf{V}_{i}^{*}=(\mathbf{\beta}_{i}^{*})^{T}\mathbf{\psi}_{i}^{*}=\sigma_{i}^{*}\Gamma_{q}\|\mathbf{\psi}_{i}^{*}\|_{\ell^{*}},\] \[(\mathbf{\gamma}_{i}^{*})^{T}\mathbf{V}_{i}^{*}=(\mathbf{\gamma}_{i}^{*})^{T}\mathbf{\phi}_{i}^{*}=\zeta_{i}^{*}\Gamma_{q}\|\mathbf{\phi}_{i}^{*}\|_{\ell^{*}},\] \[(\bar{\mathbf{\beta}}_{i}^{*})^{T}(x_{i}^{*}\mathbf{e}_{i}-\mathbf{Z}_{i}^{*})=(\bar{\mathbf{\beta}}_{i}^{*})^{T}\bar{\mathbf{\psi}}_{i}^{*}=\sigma_{i}^{*}\Delta_{p}\|\bar{\mathbf{\psi}}_{i}^{*}\|_{\ell^{*}},\] \[(\bar{\mathbf{\gamma}}_{i}^{*})^{T}\mathbf{Z}_{i}^{*}=(\bar{\mathbf{\gamma}}_{i}^{*})^{T}\bar{\mathbf{\phi}}_{i}^{*}=\zeta_{i}^{*}\Delta_{p}\|\bar{\mathbf{\phi}}_{i}^{*}\|_{\ell^{*}}.\]
So, for all \(i\in[I]\), using the same complementary slackness conditions as Theorem 1,
\[F_{i}x_{i}^{*}+C_{i}u_{i}^{*}+C_{i}\sum_{j=1}^{J}\alpha_{j}^{*}V_{ij}^{*}+C_{i}\sum_{k=1}^{I}\bar{\alpha}_{k}^{*}Z_{ik}^{*}\] \[=\rho_{i}^{*}x_{i}^{*}+\sigma_{i}^{*}p_{i}^{\max}x_{i}^{*}-\bar{\beta}_{ii}^{*}x_{i}^{*}+\mu^{*}u_{i}^{*}-\sigma_{i}^{*}u_{i}^{*}+\zeta_{i}^{*}u_{i}^{*}\] \[\quad+\sum_{j=1}^{J}(\theta_{j}^{*}-\beta_{ij}^{*}-\gamma_{ij}^{*})V_{ij}^{*}+\sum_{k=1}^{I}(\bar{\theta}_{k}^{*}+\bar{\beta}_{ik}^{*}-\bar{\gamma}_{ik}^{*})Z_{ik}^{*}\] \[=\rho_{i}^{*}x_{i}^{*}+\mu^{*}u_{i}^{*}+\sum_{j=1}^{J}\theta_{j}^{*}V_{ij}^{*}+\sum_{k=1}^{I}\bar{\theta}_{k}^{*}Z_{ik}^{*}\] \[\quad+(\zeta_{i}^{*}u_{i}^{*}-\zeta_{i}^{*}\Gamma_{q}\|\mathbf{\phi}_{i}^{*}\|_{\ell^{*}}-\zeta_{i}^{*}\Delta_{p}\|\bar{\mathbf{\phi}}_{i}^{*}\|_{\ell^{*}})\] \[\quad+(\sigma_{i}^{*}p_{i}^{\max}x_{i}^{*}-\sigma_{i}^{*}u_{i}^{*}-\sigma_{i}^{*}\Gamma_{q}\|\mathbf{\psi}_{i}^{*}\|_{\ell^{*}}-\sigma_{i}^{*}\Delta_{p}\|\bar{\mathbf{\psi}}_{i}^{*}\|_{\ell^{*}})\] \[=\rho_{i}^{*}x_{i}^{*}+\mu^{*}u_{i}^{*}+\sum_{j=1}^{J}\theta_{j}^{*}V_{ij}^{*}+\sum_{k=1}^{I}\bar{\theta}_{k}^{*}Z_{ik}^{*}=g_{i}.\]
Next, we show that
\[F_{i}x_{i}^{*}+C_{i}u_{i}^{*}+C_{i}\sum_{j=1}^{J}\alpha_{j}^{*}V_{ ij}^{*}+C_{i}\sum_{k=1}^{I}\bar{\alpha}_{k}^{*}Z_{ik}^{*}\] \[=F_{i}x_{i}^{*}+C_{i}u_{i}^{*}+C_{i}\sum_{j=1}^{J}V_{ij}^{*}d_{j}^{ *}+C_{i}\sum_{k=1}^{I}Z_{ik}^{*}r_{k}^{*}=f_{i}.\]
Using equation (3) and the constraints \(\mathbf{\omega}=\sum_{i=1}^{I}C_{i}\mathbf{V}_{i}\) and \(\mathbf{\bar{\omega}}=\sum_{i=1}^{I}C_{i}\mathbf{Z}_{i}\) of Problem (2) and \(\nu=1\) of Problem (7),
\[\sum_{i=1}^{I}C_{i}\sum_{j=1}^{J}\alpha_{j}^{*}V_{ij}^{*}=(\mathbf{ \alpha}^{*})^{T}\mathbf{\omega}^{*}\] \[=\nu^{*}\Gamma_{q}\|\mathbf{\omega}^{*}\|_{\ell^{*}}=\Gamma_{q}\|\mathbf{ \omega}^{*}\|_{\ell^{*}}\] \[=\Gamma_{q}\|\sum_{i=1}^{I}C_{i}\mathbf{V}_{i}^{*}\|_{\ell^{*}}=\max_{ \mathbf{d}\in\mathcal{D}}\sum_{i=1}^{I}C_{i}\sum_{j=1}^{J}V_{ij}^{*}d_{j}\] \[=\sum_{i=1}^{I}C_{i}\sum_{j=1}^{J}V_{ij}^{*}d_{j}^{*},\]
and
\[\sum_{i=1}^{I}C_{i}\sum_{k=1}^{I}\bar{\alpha}_{k}^{*}Z_{ik}^{*}=(\bar{\mathbf{\alpha}}^{*})^{T}\bar{\mathbf{\omega}}^{*}\] \[=\nu^{*}\Delta_{p}\|\bar{\mathbf{\omega}}^{*}\|_{\ell^{*}}=\Delta_{p}\|\bar{\mathbf{\omega}}^{*}\|_{\ell^{*}}\] \[=\Delta_{p}\|\sum_{i=1}^{I}C_{i}\mathbf{Z}_{i}^{*}\|_{\ell^{*}}=\max_{\mathbf{r}\in\mathcal{U}}\sum_{i=1}^{I}C_{i}\sum_{k=1}^{I}Z_{ik}^{*}r_{k}\] \[=\sum_{i=1}^{I}C_{i}\sum_{k=1}^{I}Z_{ik}^{*}r_{k}^{*}.\]

Hence \(f_{i}=g_{i}\) for all \(i\); summing over \(i\) and using strong duality between Problems (2) and (7) yields \(\sum_{i=1}^{I}f_{i}=\sum_{i=1}^{I}g_{i}=\xi^{*}\).
"centralized" Problem (2) and sets prices \(\mu\) for the non-adaptive dispatch, \(\mathbf{\theta}\) and \(\mathbf{\tilde{\theta}}\) for the adaptive dispatch, and \(\mathbf{\rho}\) for the commitment. Also, each generator \(i\) has commitment costs \(F_{i}\) and dispatch costs \(C_{i}\). The cost of the adaptive dispatch is a linear function of \(\mathbf{d}^{*}\) and \(\mathbf{r}^{*}\), as defined earlier and specified by the ISO. So, generator \(i\) has revenue \(\mu u_{i}+\sum_{j=1}^{J}\theta_{j}V_{ij}+\sum_{k=1}^{I}\tilde{\theta}_{k}Z_{ik }+\rho_{i}x_{i}\), while it has a cost \(F_{i}x_{i}+C_{i}u_{i}+C_{i}\sum_{j=1}^{J}V_{ij}d_{j}^{*}+C_{i}\sum_{k=1}^{I}Z_ {ik}r_{k}^{*}\). Each generator \(i\) decides if they will self-schedule by maximizing the difference between the revenue and the costs. They solve the following "decentralized" problem which maximizes their individual profit:
\[\begin{array}{ll}\max_{\mathbf{x},\mathbf{u},\mathbf{V},\mathbf{Z}}&\mu u_{i}+\sum_{j=1}^{J }\theta_{j}V_{ij}+\sum_{k=1}^{I}\tilde{\theta}_{k}Z_{ik}+\rho_{i}x_{i}\\ &-(F_{i}x_{i}+C_{i}u_{i}+C_{i}\sum_{j=1}^{J}V_{ij}d_{j}^{*}+C_{i}\sum_{k=1}^{I }Z_{ik}r_{k}^{*})\\ \text{s.t.}&u_{i}+\Gamma_{q}\|\mathbf{V}_{i}\|_{\ell^{*}}+\Delta_{p}\|x_{i}\mathbf{e}_{ i}-\mathbf{Z}_{i}\|_{\ell^{*}}\leq p_{i}^{\max}x_{i},\\ &u_{i}-\Gamma_{q}\|\mathbf{V}_{i}\|_{\ell^{*}}-\Delta_{p}\|\mathbf{Z}_{i}\|_{\ell^{*}} \geq 0,\\ &x_{i}\in\{0,1\},\end{array} \tag{8}\]
The constraints are the robust counterparts of the capacity and non-negativity constraints for each \(i\), which means that the dispatch of the decentralized problem will be non-negative and less than the maximum capacity of generator \(i\) for all \(\mathbf{d}\in\mathcal{D}\) and \(\mathbf{r}\in\mathcal{U}\).
Let \(h_{i}(x_{i},u_{i},\mathbf{V}_{i},\mathbf{Z}_{i})\) be the objective of the decentralized problem (8) for commitment \(x_{i}\) and dispatch \(u_{i},\mathbf{V}_{i},\mathbf{Z}_{i}\). Also, let \(\mathbf{x}^{*},\mathbf{u}^{*},\mathbf{V}^{*},\mathbf{Z}^{*}\) be an optimal solution to the centralized problem (2) and \(x_{i}^{**},u_{i}^{**},\mathbf{V}_{i}^{**},\mathbf{Z}_{i}^{**}\) be a solution to the decentralized problem (8) for generator \(i\). Generator \(i\) has no incentive to self-schedule only if
\[h_{i}(x_{i}^{**},u_{i}^{**},\mathbf{V}_{i}^{**},\mathbf{Z}_{i}^{**})\leq h_{i}(x_{i}^{ *},u_{i}^{*},\mathbf{V}_{i}^{*},\mathbf{Z}_{i}^{*}).\]
**Theorem 3**.: _Suppose generator \(i\) solves the decentralized problem (8) using the prices of the centralized problem (2). Then, they cannot obtain a schedule giving greater profit than the centralized market schedule or_
\[h_{i}(x_{i}^{**},u_{i}^{**},\mathbf{V}_{i}^{**},\mathbf{Z}_{i}^{**})\leq h_{i}(x_{i}^{ *},u_{i}^{*},\mathbf{V}_{i}^{*},\mathbf{Z}_{i}^{*}).\]
Proof.: Let \(\mu^{*},\mathbf{\theta}^{*},\mathbf{\rho}^{*}\) be the prices determined by the dual variables of Problem (7). Using the results of Theorem 2, \(h_{i}(x_{i}^{*},u_{i}^{*},\mathbf{V}_{i}^{*},\mathbf{Z}_{i}^{*})=0\), because \(f_{i}=g_{i}\).
Consider \(h_{i}(x_{i}^{**},u_{i}^{**},\mathbf{V}_{i}^{**},\mathbf{Z}_{i}^{**})\):
\[\begin{array}{l}h_{i}(x_{i}^{**},u_{i}^{**},\mathbf{V}_{i}^{**},\mathbf{Z}_{i}^{**})=\mu^{*}u_{i}^{**}+\sum_{j=1}^{J}\theta_{j}^{*}V_{ij}^{**}+\sum_{k=1}^{I}\bar{\theta}_{k}^{*}Z_{ik}^{**}+\rho_{i}^{*}x_{i}^{**}\\ -(F_{i}x_{i}^{**}+C_{i}u_{i}^{**}+C_{i}\sum_{j=1}^{J}V_{ij}^{**}d_{j}^{*}+C_{i}\sum_{k=1}^{I}Z_{ik}^{**}r_{k}^{*})\\ \leq(\mu^{*}-C_{i})u_{i}^{**}+(\rho_{i}^{*}-F_{i})x_{i}^{**}\\ +\sum_{j=1}^{J}(\theta_{j}^{*}-C_{i}\alpha_{j}^{*})V_{ij}^{**}+\sum_{k=1}^{I}(\bar{\theta}_{k}^{*}-C_{i}\bar{\alpha}_{k}^{*})Z_{ik}^{**}\\ +\sigma_{i}^{*}\ (p_{i}^{\max}x_{i}^{**}-u_{i}^{**}-\Gamma_{q}\|\mathbf{V}_{i}^{**}\|_{\ell^{*}}-\Delta_{p}\|x_{i}^{**}\mathbf{e}_{i}-\mathbf{Z}_{i}^{**}\|_{\ell^{*}})\\ +\zeta_{i}^{*}\ (u_{i}^{**}-\Gamma_{q}\|\mathbf{V}_{i}^{**}\|_{\ell^{*}}-\Delta_{p}\|\mathbf{Z}_{i}^{**}\|_{\ell^{*}})\\ \leq(\mu^{*}-\sigma_{i}^{*}+\zeta_{i}^{*}-C_{i})u_{i}^{**}+(\rho_{i}^{*}+\sigma_{i}^{*}p_{i}^{\max}-\bar{\beta}_{ii}^{*}-F_{i})x_{i}^{**}\\ +\sum_{j=1}^{J}(\theta_{j}^{*}-\beta_{ij}^{*}-\gamma_{ij}^{*}-C_{i}\alpha_{j}^{*})V_{ij}^{**}\\ +\sum_{k=1}^{I}(\bar{\theta}_{k}^{*}+\bar{\beta}_{ik}^{*}-\bar{\gamma}_{ik}^{*}-C_{i}\bar{\alpha}_{k}^{*})Z_{ik}^{**}\leq 0.\end{array}\]
The first inequality is valid because \(p_{i}^{\max}x_{i}^{**}-u_{i}^{**}-\Gamma_{q}\|\mathbf{V}_{i}^{**}\|_{\ell^{*}}-\Delta_{p}\|x_{i}^{**}\mathbf{e}_{i}-\mathbf{Z}_{i}^{**}\|_{\ell^{*}}\geq 0\) and \(u_{i}^{**}-\Gamma_{q}\|\mathbf{V}_{i}^{**}\|_{\ell^{*}}-\Delta_{p}\|\mathbf{Z}_{i}^{**}\|_{\ell^{*}}\geq 0\) by the constraints of Problem (8), and \(\sigma_{i}^{*}\geq 0\) and \(\zeta_{i}^{*}\geq 0\) by Problem (7). The third inequality is valid because \((\mu^{*}-\sigma_{i}^{*}+\zeta_{i}^{*}-C_{i})=0\), \((\theta_{j}^{*}-\beta_{ij}^{*}-\gamma_{ij}^{*}-C_{i}\alpha_{j}^{*})=0\), \((\bar{\theta}_{k}^{*}+\bar{\beta}_{ik}^{*}-\bar{\gamma}_{ik}^{*}-C_{i}\bar{\alpha}_{k}^{*})=0\) and \((\rho_{i}^{*}+\sigma_{i}^{*}p_{i}^{\max}-\bar{\beta}_{ii}^{*}-F_{i})\leq 0\) by the constraints of Problem (7). Also, the second inequality is valid, because
\[(\mathbf{\beta}_{i}^{*})^{T}\mathbf{V}_{i}^{**}\leq\|\mathbf{\beta}_{i}^{*}\|_{\ell}\,\|\mathbf{V}_{i}^{**}\|_{\ell^{*}}\leq\sigma_{i}^{*}\Gamma_{q}\|\mathbf{V}_{i}^{**}\|_{\ell^{*}},\] \[(\mathbf{\gamma}_{i}^{*})^{T}\mathbf{V}_{i}^{**}\leq\|\mathbf{\gamma}_{i}^{*}\|_{\ell}\,\|\mathbf{V}_{i}^{**}\|_{\ell^{*}}\leq\zeta_{i}^{*}\Gamma_{q}\|\mathbf{V}_{i}^{**}\|_{\ell^{*}},\] \[(\bar{\mathbf{\beta}}_{i}^{*})^{T}(x_{i}^{**}\mathbf{e}_{i}-\mathbf{Z}_{i}^{**})\leq\|\bar{\mathbf{\beta}}_{i}^{*}\|_{\ell}\,\|x_{i}^{**}\mathbf{e}_{i}-\mathbf{Z}_{i}^{**}\|_{\ell^{*}}\leq\sigma_{i}^{*}\Delta_{p}\|x_{i}^{**}\mathbf{e}_{i}-\mathbf{Z}_{i}^{**}\|_{\ell^{*}},\] \[(\bar{\mathbf{\gamma}}_{i}^{*})^{T}\mathbf{Z}_{i}^{**}\leq\|\bar{\mathbf{\gamma}}_{i}^{*}\|_{\ell}\,\|\mathbf{Z}_{i}^{**}\|_{\ell^{*}}\leq\zeta_{i}^{*}\Delta_{p}\|\mathbf{Z}_{i}^{**}\|_{\ell^{*}},\] so that \(h_{i}(x_{i}^{**},u_{i}^{**},\mathbf{V}_{i}^{**},\mathbf{Z}_{i}^{**})\leq 0=h_{i}(x_{i}^{*},u_{i}^{*},\mathbf{V}_{i}^{*},\mathbf{Z}_{i}^{*})\), which completes the proof.
## IV Example with load uncertainty

Suppose we have two generators of Type 1 and six generators of Type 2. We also have five consumers with expected load \([8,8,3,5,16]\), so the total expected load is \(\sum_{j=1}^{J}\bar{q}_{j}=40\).
The results for the deterministic problem are summarized in Table III.
* Objective: $ 260,
* Dual price of the load: 2 $ / unit.
Table IV summarizes the pay-as-bid payments, which are \(F_{i}x_{i}^{*}+C_{i}p_{i}^{*}\) for each \(i\).
The uniform price payments are \(2p_{i}^{*}+\rho_{i}^{*}x_{i}^{*}\) for each \(i\), where \(\rho_{i}^{*}\) is the dual variable of the \(x_{i}=x_{i}^{*}\) constraint. They are summarized in Table V. In general, \(F_{i}x_{i}^{*}\) is not the same as \(\rho_{i}^{*}x_{i}^{*}\).
In the adaptive problem we also need to select the budget of uncertainty. Suppose we use a budget uncertainty set with \(\Gamma_{q}=20\), which means that we are protected from an increase of the expected demand from 40 to 60 or a decrease from 40 to 20. In this case, \(\Delta_{p}=0\). The results for the ARO problem are summarized in Table VI.
* Objective: $ 378,
* Dual price of the load: 3 $ / unit.
The LDR for each generator \(i\) is \(p_{i}(\mathbf{d})=u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}\). The matrix \(\mathbf{V}^{*}\) is
\[\begin{bmatrix}0.4&0.4&0.4&0.4&0.4\\ 0.075&0.075&0.075&0.075&0.075\\ 0.0&0.0&0.0&0.0&0.0\\ 0.0&0.0&0.0&0.0&0.0\\ 0.0&0.0&0.0&0.0&0.0\\ 0.175&0.175&0.175&0.175&0.175\\ 0.175&0.175&0.175&0.175&0.175\\ 0.175&0.175&0.175&0.175&0.175\end{bmatrix}.\]
For each \(i\), \(V_{ij}^{*}\) is the same for all consumers \(j\), because the load \(\sum_{j=1}^{J}\bar{q}_{j}\) bundles consumers together. This can change if we add different coefficients for each \(j\) in the uncertainty set.
Table VII summarizes the day-ahead payments in an adaptive pay-as-bid scheme, where each generator \(i\) is paid \(F_{i}x_{i}^{*}+C_{i}u_{i}^{*}\) and the total payments are $ 328.5.
Table VIII summarizes the day-ahead payments in the adaptive marginal pricing scheme. The payments are the same as in the pay-as-bid scheme. Note that the uplifts are all smaller than in the deterministic case for each generator of Type 2.
Both the deterministic problem and the commitments and non-adaptive dispatch in the ARO problem try to meet a load of 40. However, the deterministic cost is $ 280, while the non-adaptive ARO cost is $ 328.5. This is because the ARO commitment is also feasible for an increase in the load up to 20, which is the worst-case scenario in our uncertainty set. The deterministic problem cannot meet such an increase, because Type 2 generators are used at capacity, except the first one, which can provide only two more units of power. To meet the demand of \(40+20\), we would need to turn on more generators the following day, which would be very costly and would require large payments.
### _Intra-day dispatch and payments_
In this section, we consider the intra-day economic dispatch problem and the pricing implications of uncertain scenarios.
Suppose that the following day the uncertainty \(\mathbf{d}\geq 0\) is realized, so the realized load is \(\mathbf{q}=\bar{\mathbf{q}}+\mathbf{d}\). We have made some commitments in the day-ahead market, so we can solve the following linear optimization problem (LP) to find the optimal dispatch, based on the new data.
\[\min_{\mathbf{p}} \sum_{i=1}^{I}C_{i}p_{i}\] s.t. \[\sum_{i=1}^{I}p_{i}=\sum_{j=1}^{J}d_{j},\] \[\begin{array}{l}p_{i}\leq p_{i}^{\max}x_{i}^{*}-u_{i}^{*},\quad \forall i\in[I],\\ p_{i}\geq 0,\quad\forall i\in[I],\end{array}\]
where \(x_{i}^{*}\) and \(u_{i}^{*}\) are the solution for generator \(i\) based on the ARO problem. So, we want to meet only the additional realized load \(\mathbf{d}\), while we have already committed to produce \(u_{i}^{*}\) to meet the expected load.
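A sketch of this intra-day LP with `scipy.optimize.linprog` is given below; the commitments, dispatch and cost numbers are placeholders standing in for the day-ahead ARO solution, not values from the tables above.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder day-ahead results x*, u* and data for three generators.
C = np.array([3.0, 2.0, 2.0])        # marginal costs
pmax = np.array([16.0, 7.0, 7.0])    # capacities
x = np.array([1, 1, 1])              # commitments x*
u = np.array([10.0, 5.0, 5.0])       # non-adaptive dispatch u*
d_total = 6.0                        # realized additional load sum_j d_j

res = linprog(c=C,
              A_eq=np.ones((1, len(C))), b_eq=[d_total],
              bounds=list(zip(np.zeros(len(C)), pmax * x - u)))
print(res.fun, res.x)   # optimal intra-day cost and dispatch
```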
The intra-day electricity price is 2 for loads smaller than 10 and 3 for loads larger than 10, which is equal to the ARO price. Using ARO, we can provide the intra-day electricity price for all realizations of the uncertain parameters. As a result, the pricing is more transparent and predictable and the market participants can plan their bids.
In Figure 1, we plot the cost of the LP and compare it to the adaptive part of the ARO cost \(\sum_{i=1}^{I}C_{i}\sum_{j=1}^{J}V_{ij}^{*}d_{j}\). We gradually increase the total realized load \(\sum_{j=1}^{J}d_{j}\) from zero to \(\Gamma_{q}=20\), which is the worst-case scenario. The adaptive dispatch is an upper bound to the optimized intra-day dispatch. However, we have optimized for the worst-case uncertainty, so the adaptive problem and the LP will have the same cost, if the worst-case load is realized.
## V Example with load and capacity uncertainty
We use the example of Section IV with uncertain load and consider the uncertainty in the capacity of each generator. Again, we set \(\Gamma_{q}=20\), which means that we are protected from a load increase from 40 to 60. Also, we choose \(\Delta_{p}=0.5\), which means that we are protected from a decrease of the expected capacity from \(\bar{p}_{i}^{\max}\) to \(\bar{p}_{i}^{\max}-0.5\) for each generator \(i\). The results for the ARO problem are summarized in Table IX.
* Objective: $ 402.25,
* Dual price of the load: 3 $ / unit.
The LDR is \(p_{i}(\mathbf{d},\mathbf{r})=u_{i}+\sum_{j=1}^{J}V_{ij}d_{j}+\sum_{k=1}^{I}Z_{ik}r_{k}\). The matrix \(\mathbf{V}^{*}\) is
\[\begin{bmatrix}0.25625&0.25625&0.25625&0.25625&0.25625\\ 0.25625&0.25625&0.25625&0.25625\\ 0.1625&0.1625&0.1625&0.1625&0.1625\\ 0.0&0.0&0.0&0.0&0.0\\ 0.0&0.0&0.0&0.0&0.0\\ 0.0&0.0&0.0&0.0&0.0\\ 0.1625&0.1625&0.1625&0.1625&0.1625\\ 0.0&0.0&0.0&0.0&0.0\\ \end{bmatrix}.\]
The matrix \(\mathbf{Z}^{*}\) is
\[\begin{bmatrix}5.75&4.75&5.75&-5.75&-5.75&-5.75&5.75&-5.75\\ -5.75&-4.75&-5.75&5.75&5.75&-5.75&5.75\\ 0.0&-0.5&0.5&-0.5&0.0&0.5&0.5\\ 0.5&0.0&0.0&0.5&0.0&0.5&-0.5\\ -0.5&-0.5&-0.5&0.0&0.5&-0.5&0.5\\ -0.5&0.5&-0.5&-0.5&0.5&-0.5&-0.5\\ 0.5&0.5&-0.5&0.5&-0.5&0.5&0.0\\ 0.0&0.0&0.0&0.0&0.0&0.0\\ \end{bmatrix}.\]
Note that the sum of each column of \(\mathbf{V}^{*}\) is one and of \(\mathbf{Z}^{*}\) is zero, because of the robust constraint \(\Gamma_{q}\|\mathbf{1}-\sum_{i=1}^{I}\mathbf{V}_{i}\|_{\ell^{*}}+\Delta_{p}\|-\sum_{i=1}^{I}\mathbf{Z}_{i}\|_{\ell^{*}}\leq 0\). The norms are non-negative, so \(\|\mathbf{1}-\sum_{i=1}^{I}\mathbf{V}_{i}\|_{\ell^{*}}=0\) and \(\|\sum_{i=1}^{I}\mathbf{Z}_{i}\|_{\ell^{*}}=0\).
Table X summarizes the day-ahead payments in a pay-as-bid scheme, where the total payments are $ 352.
Table XI summarizes the day-ahead payments in the adaptive marginal pricing scheme. The payments are the same as the pay-as-bid scheme. The uplifts for Type 2 generators are still smaller than the deterministic uplifts, while they are close to the adaptive uplifts with load uncertainty.
In this case, we also protect against drops in the available capacity, so we are more conservative. As a result, the prices and payments increase from $ 328.5 to $ 352. We turn on all generators, as the commitments in the deterministic problem and in the ARO problem with only load uncertainty are not robust to a decrease of 0.5 in the capacity of the generators.

Fig. 1: Comparison of the optimal intra-day dispatch cost and the adaptive part of the day-ahead cost. The adaptive part of the day-ahead cost is an upper bound and is equal to the optimal intra-day dispatch for the worst-case scenario.
## VI Multiperiod Pricing
In this section, we present our method on a realistic multi-period formulation with ramp constraints and compare it to deterministic marginal and convex hull pricing. Again, the ARO commitments and dispatch are more conservative, so there is an increase in the payments and the prices compared to the deterministic case. However, there are no uplifts related to optimality gaps that are present in convex hull pricing.
Consider the following example by Chen et al. [35], which includes two generators and a 3-hour horizon with expected load \(\bar{q}_{t}\) 95, 100, and 130 MW. G1 has a maximum of 100 MW, and energy offer \(\$10\) /MWh. G2 has a 20 MW minimum, 35 MW maximum, energy offer \(\$50\) /MWh, start-up cost \(\$1000\), no-load cost \(\$30\), ramp rate 5 MW/hour, start-up rate 22.5 MW/hour, shut-down rate 35 MW/hour, minimum up/down times of one hour, and is initially offline. The UC formulation is provided below.
\[\begin{array}{ll}\min_{\mathbf{x},\mathbf{p}}&\sum_{t=1}^{3}10p_{1,t}+30x_{2,t}^{ON}+50p_{2,t}+1000x_{2,t}^{RU}\\ \text{s.t.}&p_{1,t}+p_{2,t}=\bar{q}_{t},&1\leq t,\\ &0\leq p_{1,t}\leq 100,&1\leq t,\\ &20x_{2,t}^{ON}\leq p_{2,t}\leq 35x_{2,t}^{ON},&1\leq t,\\ &p_{2,t}-p_{2,t-1}\leq 5x_{2,t-1}^{ON}+22.5x_{2,t}^{RU},&1\leq t,\\ &p_{2,t-1}-p_{2,t}\leq 5x_{2,t}^{ON}+35x_{2,t}^{RD},&2\leq t,\\ &x_{2,t}^{ON}-x_{2,t-1}^{ON}=x_{2,t}^{RU}-x_{2,t}^{RD},&1\leq t,\\ &x_{2,t}^{RU}\leq x_{2,t}^{ON},\quad x_{2,t}^{RU}\leq 1-x_{2,t-1}^{ON},&1\leq t,\end{array}\]
with \(\mathbf{p}\geq 0\) and \(x_{2,t}^{ON},x_{2,t}^{RU},x_{2,t}^{RD}\in\{0,1\}\)\ \(\forall t\) representing the status, start-up and shut-down variables respectively. The results for the deterministic problem are summarized in Table XII. We turn on G2 at \(t=1\) and both generators are on for all time periods. The objective is $ 7340 and the electricity price is \(\mathbf{\mu}=[10,10,90]\) $ / MW.
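A minimal cvxpy encoding of this deterministic instance (our own transcription of the constraints above, not the authors' code) is given below; solved with an exact mixed-integer solver, it should reproduce the $ 7340 objective.

```python
import cvxpy as cp
import numpy as np

T, load = 3, np.array([95.0, 100.0, 130.0])
p1 = cp.Variable(T)                    # G1 dispatch
p2 = cp.Variable(T)                    # G2 dispatch
on = cp.Variable(T, boolean=True)      # x^ON
su = cp.Variable(T, boolean=True)      # x^RU (start-up)
sd = cp.Variable(T, boolean=True)      # x^RD (shut-down)

cons = [p1 >= 0, p1 <= 100, p2 >= 20 * on, p2 <= 35 * on, p1 + p2 == load]
for t in range(T):
    prev_on = 0 if t == 0 else on[t - 1]   # G2 is initially offline
    prev_p2 = 0 if t == 0 else p2[t - 1]
    cons += [p2[t] - prev_p2 <= 5 * prev_on + 22.5 * su[t],
             on[t] - prev_on == su[t] - sd[t],
             su[t] <= on[t], su[t] <= 1 - prev_on]
    if t > 0:
        cons += [p2[t - 1] - p2[t] <= 5 * on[t] + 35 * sd[t]]

cost = cp.sum(10 * p1 + 50 * p2 + 30 * on + 1000 * su)
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()   # requires a mixed-integer solver
print(prob.value, p2.value)
```

At the optimum, G2 starts at \(t=1\) and ramps up along \(p_{2}=(20,25,30)\), the cheapest feasible path to meeting the 130 MW peak.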
The deterministic pay-as-bid payments are summarized in Table XIII. They are the same as the convex hull payments but do not feature a duality gap. In addition, these payments are equivalent to a marginal pricing scheme by [18].
The deterministic marginal price payments are summarized in Table XIV. They, too, match the convex hull payments without featuring a duality gap.
In the ARO problem we use budget uncertainty sets with \(\Gamma_{q}=[10,10,2]\) and \(\Delta_{p}=[0,7.5,0.5]\) at each time period. The objective increases to $ 7860 and the electricity price is \(\mathbf{\mu}^{*}=[10,10,130]\) $ / MW. Again, we turn on G2 at \(t=1\) and both generators are on for all time periods.
The LDR is \(p_{i,t}(\mathbf{d},\mathbf{r})=u_{it}+\sum_{j=1}^{J}V_{ijt}d_{jt}+\sum_{k=1}^{I}Z_{ikt }r_{kt}\). The non-adaptive dispatch results are summarized in Table XV. The matrix \(\mathbf{V}_{t}^{*}\) at each time period is
\[\begin{bmatrix}1.0&1.0&1.0&1.0\\ 0.0&0.0&0.0&0.0\end{bmatrix}\quad t=1,2,3\]
with the rows corresponding to the G1 and G2 and the columns corresponding to three consumers that share the load. Also, \(\mathbf{Z}^{*}\) contains only zeros.
The day-ahead payments to the generators are $ 7640. The pay-as-bid payments are summarized in Table XVI and each generator is paid \(F_{i}^{ON}x_{i,t}^{ON}+F_{i}^{RU}x_{i,t}^{RU}+F_{i}^{RD}x_{i,t}^{RD}+C_{i,t}u_{i,t}^{*}\), where the coefficients correspond to the commitment and dispatch costs.
The adaptive marginal price payments are summarized in Table XVII.
The non-adaptive dispatch in the ARO problem and the dispatch in the deterministic problem try to meet the same level of demand, namely [95, 100, 130]. However, the cost is higher in the ARO problem, because we protect against uncertainty in the load and in the capacity. For example, the commitments and dispatch are still feasible for a drop of [0, 7.5, 0.5] in the capacity of each generator and an increase of [10, 10, 2] in the load. So, the payments to the generators and the price \(\mathbf{\mu}^{*}\) of electricity are higher compared to the deterministic case. In addition, the adaptive payment mechanisms we use do not feature a duality gap.
## VII Conclusion
In this work, we introduce the first pay-as-bid and marginal pricing methods for energy markets with non-convexities under uncertainty. We consider ARO formulations that protect against uncertainty in the load and capacity parameters and we provide the corresponding adaptive pricing schemes. We apply our method to realistic examples with increasing degrees of complexity and show, both theoretically and empirically, that it eliminates uplifts and corrections that are necessary in deterministic approaches.
|
2309.11433 | A Systematic Review of Few-Shot Learning in Medical Imaging | The lack of annotated medical images limits the performance of deep learning
models, which usually need large-scale labelled datasets. Few-shot learning
techniques can reduce data scarcity issues and enhance medical image analysis,
especially with meta-learning. This systematic review gives a comprehensive
overview of few-shot learning in medical imaging. We searched the literature
systematically and selected 80 relevant articles published from 2018 to 2023.
We clustered the articles based on medical outcomes, such as tumour
segmentation, disease classification, and image registration; anatomical
structure investigated (i.e. heart, lung, etc.); and the meta-learning method
used. For each cluster, we examined the papers' distributions and the results
provided by the state-of-the-art. In addition, we identified a generic pipeline
shared among all the studies. The review shows that few-shot learning can
overcome data scarcity in most outcomes and that meta-learning is a popular
choice to perform few-shot learning because it can adapt to new tasks with few
labelled samples. In addition, following meta-learning, supervised learning and
semi-supervised learning stand out as the predominant techniques employed to
tackle few-shot learning challenges in medical imaging and also best
performing. Lastly, we observed that the primary application areas
predominantly encompass cardiac, pulmonary, and abdominal domains. This
systematic review aims to inspire further research to improve medical image
analysis and patient care. | Eva Pachetti, Sara Colantonio | 2023-09-20T16:10:53Z | http://arxiv.org/abs/2309.11433v2 | # A Systematic Review of Few-Shot Learning in Medical Imaging
###### Abstract
The lack of annotated medical images limits the performance of deep learning models, which usually need large-scale labelled datasets. Few-shot learning techniques can reduce data scarcity issues and enhance medical image analysis, especially with meta-learning. This systematic review gives a comprehensive overview of few-shot learning in medical imaging. We searched the literature systematically and selected 80 relevant articles published from 2018 to 2023. We clustered the articles based on medical outcomes, such as tumour segmentation, disease classification, and image registration; anatomical structure investigated (i.e. heart, lung, etc.); and the meta-learning method used. For each cluster, we examined the papers' distributions and the results provided by the state-of-the-art. In addition, we identified a generic pipeline shared among all the studies. The review shows that few-shot learning can overcome data scarcity in most outcomes and that meta-learning is a popular choice to perform few-shot learning because it can adapt to new tasks with few labelled samples. In addition, following meta-learning, supervised learning and semi-supervised learning stand out as the predominant techniques employed to tackle few-shot learning challenges in medical imaging and also best performing. Lastly, we observed that the primary application areas predominantly encompass cardiac, pulmonary, and abdominal domains. This systematic review aims to inspire further research to improve medical image analysis and patient care.
Few-shot learning Medical imaging Systematic review
## 1 Introduction
### Rationale
The demand for deep learning (DL) models that can generalize well and achieve high performance with limited data is constantly increasing. Few-Shot Learning (FSL) plays a crucial role in addressing this challenge by enabling models to learn from only a few examples, mimicking the way humans naturally learn. In contrast to the typical practice in DL, which involves pre-training models on large datasets and fine-tuning them on specific tasks, FSL allows models to learn effectively with minimal labelled examples. Among the most prominent models that have successfully addressed this limitation is GPT-3 [1]. Unlike traditional models, GPT-3 does not require fine-tuning on specific tasks. Instead, it leverages FSL during inference by being exposed, for each task, to a few demonstrations for conditioning without updating its parameters [1]. This approach allows GPT-3 to perform various tasks with just a few examples, showcasing the power of FSL in natural language processing.
FSL finds one of its most crucial applications in medical image analysis for several compelling reasons. Firstly, medical datasets are often limited in size due to privacy concerns, high data acquisition costs, and the laborious process of expert annotation. FSL enables models to achieve robust generalization with minimal labelled examples, making it
possible to develop effective medical imaging solutions even with scarce data. Secondly, FSL alleviates the burden of manual annotation by requiring only a few annotated examples for each new task or medical condition. This capability streamlines the annotation process and supports clinicians in their time-consuming tasks. Moreover, FSL proves particularly valuable for handling rare medical conditions where acquiring sufficient data for traditional DL approaches may be impractical. Leveraging knowledge from more prevalent diseases, FSL empowers models to adapt to new and rare cases with limited examples. Furthermore, the medical field constantly encounters new diseases, conditions, and imaging modalities. FSL enables medical imaging models to swiftly adapt and learn from a few examples of these novel tasks, facilitating their seamless integration into clinical practice. Finally, FSL holds potential in personalized medicine, where models must rapidly adapt to analyze images from individual patients. With just a few examples from each patient, FSL allows the model to tailor its analysis based on specific patient characteristics, enhancing the precision of medical diagnoses and treatments.
While existing reviews have primarily focused on FSL in computer vision as a whole [2, 3, 4], the ones specific to FSL in medical imaging have often focused on particular aspects such as Neural Architecture Search [5] or have examined only a subset of published studies [6]. In contrast, we believe that a comprehensive review in the field of FSL for medical imaging can provide a global understanding of the current state of the art (SOTA). Specifically, this review will discuss how FSL is instantiated in segmentation, classification, and registration tasks in medical image analysis. In the FSL setting, all three applications are characterized by a limited number of annotations in the training phase, whether class labels in the case of classification or annotations on image data in the case of segmentation and registration. To address this challenge, some works, for example, cope with the lack of annotated data by exploiting a large amount of unannotated data in the pre-training phase [7, 8, 9]. Others, instead, generate data artificially [10]. However, one thing most works share is the use of meta-learning techniques. Indeed, meta-learning presents one promising direction for FSL by extracting and propagating transferable knowledge from a set of tasks to avoid overfitting [2]. Therefore, we will analyze the methods used in the FSL for medical imaging literature by paying particular attention to whether meta-learning methods were applied. Researchers can use this as a guide in developing new techniques and exploring uncharted territory. In conclusion, we will offer the reader an overview of the most frequently employed approaches, aside from meta-learning, for tackling FSL in the medical imaging domain, and we will also propose a comprehensive pipeline that encompasses all the studies we have reviewed.
### Objectives
This systematic review aims to provide a comprehensive overview of the SOTA in FSL techniques applied to medical imaging and to offer readers insight into the most valuable works in this field. Alongside the theoretical background, we aim to collect and highlight papers that, in the authors' opinion, make substantial and genuine contributions to this domain. Specifically, we focus on the primary applications of DL in medical imaging, namely segmentation, classification, and registration. The objective is to present innovative techniques that have demonstrated tangible results, catalyzing advancements for each outcome and, specifically, in each medical application. A particular emphasis is placed on meta-learning, as it is a common approach used to tackle FSL problems. Below, we provide a detailed breakdown of the specific objectives of this study:
* **Present a distribution of studies by outcome.** The aim is to highlight the distribution of studies across the three outcomes: segmentation, classification and registration. This analysis will provide insight into the emphasis placed on each task in the field of FSL for medical imaging.
* **Present a distribution of studies and their results based on the anatomical structures investigated.** For each outcome, we analyze the most commonly addressed tasks w.r.t. the anatomical structures investigated and examine the average performance achieved by the SOTA methods.
* **Offer an analysis of the distribution of studies and their results w.r.t. the meta-learning methods employed.** We provide a distribution analysis of the meta-learning methods used for each outcome. This analysis will reveal which meta-learning techniques are predominantly employed and highlight cases where meta-learning methods are not utilized. Additionally, for each meta-learning set of techniques, we present the average performance achieved by SOTA.
* **Provide distributions for training data, imaging modalities, and evaluations of robustness.** Further to the above analyses, we provide data usage information for each study, examine the most commonly used imaging modalities and explore the model robustness assessment methods employed by the reviewed studies.
* **Identify a standard pipeline among the studies.** In conclusion, we identify a generic pipeline shared among all the studies we reviewed. This pipeline illustrates the most frequently used methodologies across all studies that aim to conduct FSL. For each study, we explicitly indicate which elements of the pipeline are adopted and offer an indication of the prevalence of various techniques across all the reviewed studies.
By accomplishing these objectives, our systematic review aims to offer a comprehensive and up-to-date understanding of the current landscape of FSL in medical imaging analysis. This review will serve as a valuable resource, providing researchers and practitioners with an overview of the SOTA techniques and approaches in FSL applied to medical imaging. By synthesizing the existing literature and highlighting key findings, the review will facilitate progress in the field by identifying gaps, challenges, and opportunities for future research. Furthermore, our work will aid in identifying the best practices and effective methodologies in FSL for medical imaging tasks, enabling researchers and practitioners to make informed decisions when designing and implementing FSL-based solutions in their work.
In the following, we explain our manuscript's organization:
* We begin with a theoretical introduction to the concepts of FSL and meta-learning, followed by a discussion of the key SOTA works in the field of meta-learning for FSL.
* Next, we outline the methods employed to perform the literature search, including the eligibility criteria and the databases utilized. We also detail the key aspects examined for each work and the synthesis methods employed.
* We present the obtained results by providing a comprehensive overview of the main characteristics of each selected paper and reporting the analyses conducted according to the review objectives. Additionally, we present the results of the risk of bias assessment and applicability analysis for each study and draw a synthesis of the employed methods.
* Finally, we discuss the findings regarding each objective of the review and draw conclusions based on the evidence presented.
## 2 Theoretical background
FSL has been gaining significant attention, particularly with the rise of meta-learning. Meta-learning, a.k.a. _learning-to-learn_, is a powerful paradigm that empowers models to rapidly adapt and generalize to new tasks with minimal training examples. Unlike the traditional training scheme where models are trained on data, meta-learning operates on a higher level by training models on _tasks_ or **episodes**. Thus, this form of training is often referred to as _episodic training_. During training, the meta-learning model is exposed to multiple episodes, each comprising a few examples of a specific task. As a result, the model acquires transferable knowledge and learns to identify common patterns. Consequently, when faced with a new episode during the testing phase, the model can efficiently leverage its acquired meta-knowledge to make accurate predictions, even with limited examples. The combination of FSL and meta-learning has shown remarkable results, especially where data availability is limited or when handling novel tasks. Below, we provide a more formal formulation of the meta-learning framework, as outlined in [11]. The inner algorithm (\(f\)) solves task \(i\) by updating the model parameters \(\theta\) to \(\theta^{\prime}_{i}\): this phase is called _base learning_. During the _meta-learning_ phase, an outer algorithm updates the model parameters \(\theta\) across all the tasks according to an outer objective; the magnitude of this update is regulated by a meta-step hyperparameter \(\beta\). As pointed out by Hospedales et al. [11], several classic algorithms, such as hyperparameter optimization, can match this definition; however, what actually defines a modern meta-learning algorithm is the definition of an explicit outer objective with the simultaneous optimization of the inner algorithm w.r.t. this objective.
A meta-learning training procedure consists of a _meta-training_ and a _meta-testing_ stage. During meta-training, a set of _source_ tasks is sampled from the distribution of tasks \(P(\tau)\). Each source task is composed of a _support_ set (\(S=\{(x_{j},y_{j})\}_{j=1}^{k}\)) and a _query_ set (\(Q=\{(\hat{x}_{j},\hat{y}_{j})\}_{j=1}^{k}\)), which correspond to training and validation data in a classical training paradigm, respectively. The goal is to minimize a loss function \(\mathcal{L}\) on the query samples conditioned on the support set. During the meta-testing stage, several _target_ tasks are sampled as well. In this phase, the base learner is trained on the previously unseen tasks by exploiting the _meta-knowledge_ learned during the meta-training phase. For a problem to qualify as FSL, the number of examples for each class within the support set is typically fewer than 10. Figure 1 illustrates the meta-learning training process based on the N-way K-shot paradigm in a generic context where the model's task involves classifying medical images according to the depicted organ.
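To make the episodic protocol concrete, the following minimal sketch shows how an N-way K-shot episode could be sampled from a labelled dataset; the function name `sample_episode` and its parameters are illustrative and not part of any reviewed framework.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5):
    """Draw one N-way K-shot episode from a list of (image, label) pairs:
    a support set of n_way * k_shot examples and a query set of
    n_way * q_queries examples over the same n_way classes."""
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)

    # Pick N classes, then K support and Q query examples per class
    # (each class is assumed to contain at least k_shot + q_queries images).
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        examples = random.sample(by_class[cls], k_shot + q_queries)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query
```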
Meta-knowledge can manifest in various forms, such as initial parameters, optimization strategy, and learning algorithm [11]. Accordingly, we adopt the taxonomy proposed by [2] to categorize meta-learning algorithms for FSL into three categories: _Initialization-based_ methods, _Metric learning-based_ methods, and _Hallucination-based_ methods. Figure 2 illustrates this taxonomy. In the subsequent paragraphs, we provide an overview of the most renowned algorithms developed within each category.
Figure 1: N-way K-shot paradigm representation.
Figure 2: Meta-learning methods taxonomy.
### Initialization-based methods
Initialization-based methods refer to a class of approaches that focus on learning effective initializations for model parameters, i.e. _learning to initialize_. The model learns to adjust its parameters or initialization to better adapt to each task during the meta-training phase. The goal is to find parameter initializations that can be readily fine-tuned with only a few examples from a new episode, facilitating rapid generalization. The following are some of the most relevant SOTA algorithms that belong to the category of initialization-based methods in meta-learning.
#### 2.1.1 Model-Agnostic Meta-Learning
In their paper, Finn et al. [12] present Model-Agnostic Meta-Learning (MAML), a meta-learning framework applicable to any model trained with gradient descent. The objective of MAML is to enable the model \(f_{\theta}\) to adapt quickly to new tasks \(\tau_{i}\) by finding an initialization whose parameters are highly sensitive to task-specific changes, so that a few gradient steps yield large improvements on a new episode. In particular, the model's parameters are updated to \(\theta^{\prime}_{i}\) for a new task \(\tau_{i}\) as follows:
\[\theta^{\prime}_{i}=\theta-\alpha\nabla_{\theta}\mathcal{L}_{\tau_{i}}(f_{\theta}) \tag{1}\]
where \(\alpha\) is the step size of the gradient descent and \(\mathcal{L}\) the loss function. The overall meta-objective is to minimize the loss across tasks sampled from \(P(\tau)\):
\[\min_{\theta}\sum_{\tau_{i}\sim P(\tau)}\mathcal{L}_{\tau_{i}}(f_{\theta^{\prime}_{i}}) \tag{2}\]
The model parameters are updated through stochastic gradient descent (SGD) as follows:
\[\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{\tau_{i}\sim P(\tau)}\mathcal{L}_{\tau_{i}}(f_{\theta^{\prime}_{i}}) \tag{3}\]
Since the meta-update requires differentiating through the inner gradient steps, which involves second derivatives and can be computationally expensive, the authors also explored a first-order approximation (FOMAML) that omits the second derivatives. Surprisingly, their results showed that FOMAML performed almost as well as the original MAML. A possible explanation for this observation is that certain ReLU neural networks are nearly linear locally, causing the second derivatives to be close to zero in practice.
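As an illustration of the update scheme in Equations 1-3, the sketch below implements one first-order (FOMAML-style) meta-update using PyTorch 2.x; `fomaml_step` and its arguments are illustrative names, and a full implementation would add task batching, multiple inner-loop steps, and the second-order terms of exact MAML.

```python
import torch
from torch.func import functional_call

def fomaml_step(model, loss_fn, tasks, inner_lr=1e-2, meta_lr=1e-3):
    """One first-order meta-update: Eq. (1) in the inner loop, Eq. (3) in the
    outer loop, with the second-derivative terms of full MAML omitted."""
    meta_grad = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

    for (xs, ys), (xq, yq) in tasks:  # support / query batches of each task
        # Inner loop (Eq. 1): adapt a copy of the parameters on the support set.
        params = {n: p.detach().clone().requires_grad_(True)
                  for n, p in model.named_parameters()}
        support_loss = loss_fn(functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(support_loss, list(params.values()))
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}

        # Outer objective (Eq. 2): evaluate the adapted parameters on the query
        # set and accumulate its gradient, taken w.r.t. the adapted parameters.
        query_loss = loss_fn(functional_call(model, adapted, (xq,)), yq)
        grads = torch.autograd.grad(query_loss, list(adapted.values()))
        for n, g in zip(adapted, grads):
            meta_grad[n] += g

    # Outer update (Eq. 3), here as plain SGD on the accumulated task gradients.
    with torch.no_grad():
        for n, p in model.named_parameters():
            p -= meta_lr * meta_grad[n]
```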
#### 2.1.2 Reptile
In their work, Nichol, Achiam and Schulman [13] propose a variant of FOMAML called Reptile. Similar to MAML and FOMAML, Reptile first adapts the global parameters to obtain task-specific parameters. However, instead of following Equation 3, Reptile uses the following update rule for \(N\) tasks:
\[\theta\leftarrow\theta+\beta\frac{1}{N}\sum_{i=1}^{N}(\theta^{\prime}_{i}-\theta) \tag{4}\]
Here, the difference \((\theta^{\prime}_{i}-\theta)\) is treated as a gradient: instead of simply moving \(\theta\) towards the task-adapted parameters, this pseudo-gradient can be fed to an adaptive algorithm like Adam for the final update. This update rule is computationally more efficient than the complex second-order differentiation used in MAML. This efficiency makes Reptile easier to implement and can lead to faster training times.
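A minimal sketch of the Reptile update of Equation 4 could look as follows, assuming `adapt_fn` is some routine that runs a few SGD or Adam steps on a task's support set; all names are illustrative.

```python
import copy
import torch

def reptile_step(model, tasks, adapt_fn, meta_step=0.1):
    """One Reptile meta-update (Eq. 4): move theta towards the average of the
    task-adapted parameters theta'_i."""
    deltas = [torch.zeros_like(p) for p in model.parameters()]
    for task in tasks:
        adapted = copy.deepcopy(model)
        adapt_fn(adapted, task)  # a few SGD/Adam steps on the task's support set
        for d, p_new, p_old in zip(deltas, adapted.parameters(),
                                    model.parameters()):
            d += p_new.detach() - p_old.detach()
    with torch.no_grad():
        for p, d in zip(model.parameters(), deltas):
            p += meta_step * d / len(tasks)  # beta * mean(theta'_i - theta)
```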
#### 2.1.3 Optimization as Long Short-Term Memory network cell update
In their work, Ravi and Larochelle [14] propose a meta-learning approach based on Long Short-Term Memory (LSTM) networks, aiming to learn an optimization algorithm for training another model in an FSL manner. The main idea stems from the observation that the parameter update rule of generic gradient-descent training is similar to the update equation of the cell state in an LSTM [15]:
\[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\widetilde{c}_{t} \tag{5}\]
where \(f_{t}=1\), \(c_{t-1}=\theta_{t-1}\), \(i_{t}=\alpha\), and \(\widetilde{c}_{t}=-\nabla_{\theta}\mathcal{L}\). Exploiting this relationship, the learning rate can be formulated as a function of the current parameter value \(\theta\), the current gradient \(\nabla_{\theta}\mathcal{L}\), the current loss \(\mathcal{L}\), and the previous learning rate \(\alpha_{t-1}\). By doing so, the meta-learner can effectively control the learning rate value, enabling the model to learn quickly. During training, while iterating on the episode's training set, the LSTM meta-learner receives the values (\(\nabla_{\theta}\mathcal{L}_{\tau_{i}}\), \(\mathcal{L}_{\tau_{i}}\)) from the model for each task \(\tau_{i}\). Subsequently, it generates the updated parameters \(\theta^{\prime}_{i}\) as its output. This process is repeated for a predefined number of steps, and at the end of these steps, the model's parameters are evaluated on the test set to compute the loss, which is then used for training the meta-learner.
#### 2.1.4 Optimization with Markov decision process and Reinforcement Learning
In their paper, Li and Malik [16] propose a novel approach to learning an optimization algorithm using guided policy search through reinforcement learning in the form of a Markov decision process (MDP) [17]. The goal is to learn an optimization algorithm, represented by a _policy_\(\pi\), that can efficiently update the current location in an iterative optimization process. The optimization algorithm under consideration performs updates to the current location using a step vector computed by a generic function \(\pi\) of the objective function, the current location, and past locations. Each value of \(\pi\) corresponds to a different optimization algorithm, so by learning \(\pi\), one can effectively learn multiple optimization algorithms. However, learning a generic function \(\pi\) is challenging, so the authors restrict the dependence of \(\pi\) to the objective values and gradients evaluated at the present and past locations. Consequently, \(\pi\) can be modelled as a function that takes the objective values and gradients along the optimizer's trajectory and outputs the next step vector for the optimization.
The authors observe that executing an optimization algorithm can be seen as executing a policy in an MDP, where the current location serves as the state, the step vector as the action, and the transition probability is similar to the location update formula (\(x^{(i)}\gets x^{(i-1)}+\Delta x\)). The implemented policy corresponds to the choice of \(\pi\) used by the optimization algorithm. By searching over policies, they effectively explore a range of possible first-order optimization algorithms. To learn the policy \(\pi\), they use reinforcement learning, with the speed of convergence serving as the cost function (policies that lead to slow convergence are penalized). Since \(\pi\) could be stochastic in general, the authors use a neural network to parameterize the mean of \(\pi\). The current state in the MDP corresponds to the parameters of the neural network, and the system updates these parameters (takes an action from the policy) and receives a reward based on how the loss function changes.
#### 2.1.5 Memory-augmented Neural Networks
In their paper, Santoro et al. [18] propose a solution to the FSL task using a differentiable version of Memory-augmented Neural Networks (MANNs) known as Neural Turing Machines (NTMs). An NTM consists of a controller, which can be a feed-forward network or a Long Short-Term Memory (LSTM) network, that interacts with an external memory module through reading and writing heads. The NTM's memory reading and writing operations are fast, making it suitable for meta-learning and few-shot predictions. It can store information for both short-term and long-term durations, making it capable of handling tasks with limited data. During training, the model is fed an input while its label is provided one time step later. Specifically, at time step \(t\), the model receives the input \(x_{t}\) and the label \(y_{t-1}\), the label at the previous time step. This approach prevents the model from simply learning to map the label to the output. To further ensure this, inputs and their corresponding labels are shuffled in each episode so that the model cannot learn the input sequence directly. The external memory is utilized to store the input-label pairs discovered by the model during the training process. When a previously encountered input shows up again, the corresponding label is retrieved from the external memory, effectively making it a prediction for the current input. This retrieval process is performed using a key \(k_{t}\) associated with the input \(x_{t}\), produced by the controller and stored in a memory matrix \(M_{t}\). The retrieval is done by computing the cosine similarity between the key \(k_{t}\) and the contents of the memory matrix \(M_{t}\). Once the label is retrieved, the error is backpropagated, and the model's weights are updated to improve the input-label binding strategy.
### Metric learning-based methods
The metric-learning-based category comprises all the algorithms that enable the model to _learn to compare_. The main idea is to train the model to understand the similarity between images, allowing it to classify a new instance based on its distance w.r.t. the seen categories. Below, we report some of the most relevant SOTA metric-learning-based algorithms.
#### 2.2.1 Siamese Neural Networks
Bromley et al. [19] first introduced Siamese Neural Networks for signature verification. In 2015, Koch, Zemel and Salakhutdinov [20] revisited them, exploiting Convolutional Neural Networks (CNNs) to perform one-shot image classification. A Siamese Network consists of two identical networks that accept different inputs and share (tied) weights, ensuring that similar images are mapped close together in the feature space. As the network undergoes training, it learns to differentiate between pairs of images that belong to the same class and those that belong to different classes. In the inference phase, a test image is compared with one image per novel class and a similarity score is computed. The network then assigns the highest probability to the pair with the highest score. Because the model is trained on an extensive set of training classes, it becomes proficient at general data discrimination during the training process.
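A minimal PyTorch sketch of this verification setup, in the spirit of [20], might look as follows; the weighted \(L_{1}\) comparison head and the `encoder` argument are assumptions made for illustration.

```python
import torch

class SiameseNet(torch.nn.Module):
    """Twin encoders with tied weights; outputs the probability that the two
    inputs belong to the same class."""
    def __init__(self, encoder, embed_dim):
        super().__init__()
        self.encoder = encoder                    # shared by both branches
        self.head = torch.nn.Linear(embed_dim, 1)

    def forward(self, x1, x2):
        h1, h2 = self.encoder(x1), self.encoder(x2)
        # Component-wise L1 distance between the embeddings, followed by a
        # learned weighting and a sigmoid, yields a same-class score in [0, 1].
        return torch.sigmoid(self.head(torch.abs(h1 - h2)))
```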
#### 2.2.2 Triplet Networks
Triplet Networks, introduced by Hoffer et al. [21], were inspired by Siamese Networks and share the same architectural criterion. Here, the model is composed of three identical networks with shared parameters, which are trained on triplets composed of an anchor, a positive and a negative sample (positive examples belong to the same class as the anchor, while negative examples belong to a different class). The network outputs the \(L_{2}\) distances between the anchor and the positive and negative examples. The objective is to determine which of the two examples belongs to the same class as the anchor. At inference time, the model is fed two inputs and assesses whether they belong to the same class by applying a threshold to their distance in the embedding space.
#### 2.2.3 Matching Networks
Matching Networks, proposed by Vinyals et al. [22], differently from Siamese and Triplet Networks, work in a multi-class way instead of a pair-wise one. Matching Networks aim to map a support set to a classifier which, given a query example, produces a probability distribution over the output according to the following equation:
\[P(\hat{y}_{j}\,|\,\hat{x}_{j})=\sum_{i=1}^{k}a(\hat{x}_{j},x_{i})\,y_{i} \tag{6}\]
where \(a\) acts as an attention mechanism. In the simplest implementation, \(a\) consists of a softmax over the cosine distances between the query and support embeddings. At each iteration, a training episode composed of a support and a query set is constructed. Based on the support set, the network predicts the query labels and the resulting error is minimized.
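A minimal sketch of this attention-based classifier (Equation 6 with cosine-softmax attention) is given below; `model` is any embedding network, and the one-hot support labels are assumed to be precomputed.

```python
import torch
import torch.nn.functional as F

def matching_predictions(model, support_x, support_y_onehot, query_x):
    """Eq. (6) with the simplest attention: a softmax over the cosine
    similarities between query and support embeddings."""
    z_s = F.normalize(model(support_x), dim=1)   # (k, D) support embeddings
    z_q = F.normalize(model(query_x), dim=1)     # (q, D) query embeddings
    attention = F.softmax(z_q @ z_s.t(), dim=1)  # (q, k): a(x_hat, x_i)
    return attention @ support_y_onehot          # (q, n_classes): P(y_hat|x_hat)
```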
#### 2.2.4 Prototypical Networks
Prototypical Networks, proposed by Snell, Swersky, and Zemel [23], compute a representation or _prototype_ of each class using an embedding function with trainable parameters. Given a class \(c\), its prototype is computed by averaging the embeddings of the support samples belonging to that class:
\[p_{c}=\frac{1}{|S_{c}|}\sum_{(x_{j},y_{j})\in S_{c}}f_{\theta}(x_{j}) \tag{7}\]
Given a generic distance function \(d\), the prototypical network provides an output distribution based on the distance between the query embeddings and the prototypes of each class:
\[P(\hat{y}_{j}=c\,|\,\hat{x}_{j})=\frac{\exp(-d(f_{\theta}(\hat{x}_{j}),p_{c}))}{\sum_{c^{\prime}}\exp(-d(f_{\theta}(\hat{x}_{j}),p_{c^{\prime}}))} \tag{8}\]
As for Matching Networks, training episodes are built by sampling a set of classes from the training set and choosing two groups of examples for each class as the support and query set, respectively. While the original paper on Matching Networks used cosine distance as the distance function, here the authors employ the negative squared Euclidean distance (greater distances provide smaller values). As pointed out by the authors, Prototypical Networks differ from Matching Networks in the few-shot scenario, but the two become equivalent in the One-Shot Learning (OSL) setting.
It is also possible to use this architecture for Zero-Shot Learning (ZSL). Here, instead of having training points, we have a class meta-data vector for each class, which can be already known or learned, for example, from raw text [24]. In this case, the prototype becomes an embedding of the meta-data vector.
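The few-shot variant of Equations 7 and 8 can be sketched in a few lines of PyTorch; the helper below is illustrative and uses the negative squared Euclidean distance, as in the original paper.

```python
import torch

def prototypical_logits(model, support_x, support_y, query_x, n_way):
    """Eqs. (7)-(8): prototypes are mean support embeddings per class; class
    scores are negative squared Euclidean distances to the prototypes."""
    z_support = model(support_x)                 # (N*K, D)
    z_query = model(query_x)                     # (Q, D)

    # Eq. (7): one prototype per class, averaging its support embeddings.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_way)
    ])                                           # (N, D)

    # Eq. (8): a softmax over these logits gives P(y_hat = c | x_hat).
    return -torch.cdist(z_query, prototypes) ** 2   # (Q, N)
```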
#### 2.2.5 Relation Networks
Relation Networks were introduced by Santoro et al. in their paper [25], and they were initially employed in the FSL and ZSL domains in [26]. In contrast to Matching and Prototypical Networks, which use predefined distance functions, a relation network is trained end-to-end, including the metric to compare support and query embeddings. This part of the network is called _relation module_. In a one-shot setting, embeddings from support and query samples are first produced and concatenated in depth through an operator \(Z(\cdot,\cdot)\). Concatenated embeddings are provided to the relation module \(g_{\phi}\), which outputs a scalar representing the similarity between the support and query embeddings:
\[r=g_{\phi}(Z(f_{\theta}(x_{j}),f_{\theta}(\hat{x}_{j}))) \tag{9}\]
In the generic few-shot setting, a class feature map is calculated by element-wise summing the embedding module outputs of all support samples of that class. The class-level feature map is then combined with the query image feature map as in the one-shot scenario.
Relation Networks can be employed in a ZSL setting as well. In this case, a semantic class embedding vector is provided for each class. Since support and query vectors belong to different modalities (attributes and images, respectively), two embedding modules are employed. The relation module instead works as before.
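A simplified sketch of such a relation module is shown below; the published architecture uses a deeper convolutional block, so this is an illustrative reduction rather than the exact model of [26].

```python
import torch

class RelationHead(torch.nn.Module):
    """Learned similarity g_phi of Eq. (9), applied to a depth-concatenated
    pair of support/query feature maps (the operator Z)."""
    def __init__(self, in_channels):
        super().__init__()
        self.g = torch.nn.Sequential(
            torch.nn.Conv2d(2 * in_channels, 64, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(64, 1),
            torch.nn.Sigmoid(),               # relation score r in [0, 1]
        )

    def forward(self, support_feat, query_feat):
        # Z(., .): concatenate the two feature maps along the channel axis.
        return self.g(torch.cat([support_feat, query_feat], dim=1))
```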
### Hallucination-based methods
The hallucination-based methods directly address the scarcity of data by _learning to augment_. These methods focus on generating additional data to overcome the limitations of the available dataset. In the following, we describe in detail the most prominent hallucination-based methods.
#### 2.3.1 Hallucinating with Intra-class Analogies
Hariharan and Girshick [27] propose to exploit intra-class analogies to augment the dataset when few examples are available. Their framework involves a learner, two training phases and a testing phase. In the first training phase, known as the _representation learning_ phase, the learner is fed several base classes (\(C_{base}\)), for which many examples are available per class. The learner uses these data to set the parameters of its feature extractor. During the second phase (the _low-shot_ phase), the learner needs to distinguish a set of classes, both base and novel ones. For the novel classes, the learner has access only to a few examples, while for the base classes, it has access to the same dataset used for learning the feature extractor. During the test phase, the model predicts labels from both kinds of classes. For the categories with few examples, the idea is to hallucinate additional data using the many examples seen for the base classes to improve the model's performance. The goal is to learn a transformation that maps between two images belonging to the same base class (e.g., a bird on grass and a bird in the sky) and apply this transformation to a novel class image. To achieve this, a function \(G\) is trained that takes the concatenated feature vectors of three examples and outputs a "hallucinated" feature vector. As \(G\), they exploited an MLP with three fully connected layers.
#### 2.3.2 Classificator and Hallucinator End-to-End Model
Wang et al. [28] extended the previously described method by combining a generator of "hallucinated" examples with a meta-learning framework, optimizing the two models jointly. The "hallucinator" \(G\) takes as input an example \(x\) and a noise vector \(z\) and produces a hallucinated example as output, according to the hallucinator parameters \(\theta_{G}\). During meta-testing, several hallucinated examples are computed by sampling from the initial training set \(S_{train}\), producing a new training set \(S_{train}^{G}\). The final training set \(S_{train}^{aug}\) is obtained by combining the two datasets and is then used to train the classification algorithm. During the meta-training phase, the hallucinator is trained jointly with the classification algorithm, exploiting a meta-learning paradigm. From the set of all classes, \(m\) classes are sampled, with \(n\) examples for each. The generator \(G\) is exploited to produce \(n\) additional augmented examples to add to the training set. This new dataset is employed to train the classification algorithm. This training process is agnostic w.r.t. the specific meta-learning algorithm used.
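As a sketch, the hallucinator \(G\) can be written as a small MLP over the concatenation of a seed example's features and a noise vector, mirroring the three-layer design mentioned above; dimensions and names are illustrative.

```python
import torch

class Hallucinator(torch.nn.Module):
    """G(x, z; theta_G): maps a seed example's feature vector plus noise to a
    hallucinated feature vector, with three fully connected layers."""
    def __init__(self, feat_dim, noise_dim):
        super().__init__()
        self.g = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + noise_dim, feat_dim), torch.nn.ReLU(),
            torch.nn.Linear(feat_dim, feat_dim), torch.nn.ReLU(),
            torch.nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, x_feat, z):
        return self.g(torch.cat([x_feat, z], dim=1))
```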
After categorizing and describing the main meta-learning methods for FSL in the literature, the following chapter outlines the methods used for searching, selecting, and analyzing SOTA works in the field of FSL for medical image analysis.
## 3 Methods
### Study Design
We conducted a systematic review in accordance with the "Preferred reporting items for systematic reviews and meta-analyses" (PRISMA) 2020 checklist [29]. The review has five main objectives. Firstly, it aims to analyze the distribution of studies among the three outcomes (segmentation, classification, and registration) in the field of FSL for medical imaging. Secondly, for each outcome, it examines the most commonly addressed tasks concerning the anatomical structures studied. Thirdly, it provides a distribution of the meta-learning methods used for the classification, segmentation, and registration tasks, shedding light on prevalent meta-learning techniques and cases where meta-learning methods are not utilized. In addition, the review offers insights into data usage, including the most commonly used imaging modalities, and explores the model robustness assessment techniques employed in the reviewed studies. Finally, it offers an overview of the most commonly used methods among the selected studies while also outlining a general pipeline for conducting FSL in the field of medical imaging.
### Eligibility criteria
We established the inclusion criteria for paper selection in this systematic review based on three primary aspects:
* **Implementation of FSL techniques**: We selected papers that claimed to implement FSL in their work.
* **Application in medical imaging domain**: We considered papers that performed at least one experiment applied to the medical imaging domain.
* **Low data usage in training**: We included only papers that demonstrated using a small amount of data during training. In particular, we considered all the studies that employed a maximum of 20 training examples per class.
In addition, during the selection process, we excluded abstracts, non-peer-reviewed papers, papers written in languages other than English, and papers deemed to have significant theoretical errors. Furthermore, we did not include papers dealing with few-shot domain adaptation (FSDA) methods, such as [30, 31, 32]. As highlighted by Li et al. [32], FSL focuses on adapting pre-trained models to perform well on novel tasks with limited training examples, whereas FSDA involves adapting models across different domains. Therefore, we considered FSDA papers outside the scope of this systematic review. By applying these inclusion and exclusion criteria, we aimed to ensure the selection of relevant and high-quality papers that specifically addressed the application of FSL techniques in medical imaging with limited training data.
### Information sources
We searched for papers using the following databases:
* Web of Science
* Scopus
* IEEE Xplore
* ACM Digital Library
To ensure comprehensive coverage and include recent studies in our analysis, we performed a two-step search, the first on September 7, 2022, and the second on January 25, 2023. In cases where we did not have full access to the papers, we took advantage of the Network Inter-Library Document Exchange (NILDE) platform, a web-based document delivery service through which we requested the missing PDF files, enabling us to obtain the complete papers for inclusion in our review.
### Research strategies
For each of the mentioned databases, we listed the queries used in the study search in Table 1.
### Selection process
During the review process, a single reviewer examined each record, including titles, abstracts, and any accompanying reports obtained during the search. No machine learning algorithms were employed to aid in eliminating records or to streamline the screening process. Additionally, no crowdsourcing or pre-screened datasets were employed for record screening.
### Data collection process
For data collection, a single reviewer was responsible for collecting the relevant information from each report. No automated processes were employed for data collection. During the review, all articles were examined in their original language. The selection of articles was based on the predefined eligibility criteria described above. No software or automated tools were used to extract data from the figures or graphical representations in the articles. Finally, the data collection process entailed a manual analysis of the articles to extract the pertinent information for the review.
### Data items
In our study, we examined three primary outcomes: segmentation, classification, and registration. All of the reviewed studies were compatible with these three outcome domains. We did not alter or introduce any changes to the outcome domains or their significance in the review. Likewise, there were no modifications made to the selection processes within these eligible outcome domains. Beyond the three outcomes previously mentioned, we also explored data pertaining to the utilization of FSL, OSL, and ZSL techniques, as well as their applications within the field of medical imaging.
### Assessment of bias in studies
To evaluate the potential risk of bias (ROB) or concerns regarding applicability in each study, we utilized the PROBAST tool [33], designed for assessing the risk of bias and applicability of prediction model studies. For each outcome, we created a table denoting studies with low risk or concerns using a green checkmark symbol (✓) and those with high risk or concerns using a red cross symbol (✗).
### Effect measures
In the segmentation studies included in our review, we evaluated the performance using two commonly used metrics: the Dice score and the Intersection over Union (IoU). These metrics provide quantitative measures of the overlap between the predicted segmentation and the ground truth. For the classification outcome, we evaluated its effectiveness through various measures. One of the metrics employed was Accuracy, which determines the proportion of correctly classified samples. Additionally, we considered the F1-score and Recall. Moreover, we investigated the Area Under the Receiver Operating Characteristic (AUROC) curve as a performance metric, particularly for binary classification tasks. In the registration domain, we investigated different metrics to evaluate the effectiveness: the Dice score, the average landmark distance (ALD), and the target registration error (TRE). These metrics were used to quantify the performance of the models in their respective outcome domains, providing objective measures of effectiveness and allowing for comparisons between different approaches.
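For reference, the two segmentation metrics can be computed for binary masks as in the following sketch (the registration metrics ALD and TRE additionally require landmark correspondences and are omitted here); `dice_and_iou` is an illustrative name.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice score and IoU for a pair of binary masks (arrays of 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    iou = intersection / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```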
### Synthesis methods
In our systematic review, we structured the results of each study within dedicated tables for each outcome category, including segmentation, classification, and registration. The tables included the following information: first author, year of publication, the algorithm or framework used, the number of training data, the best performance achieved by the model, and whether the study utilized the meta-learning paradigm. To provide a visual summary of the results, we used forest plots. We generated these plots by grouping the studies based on the anatomical structure investigated and the meta-learning method employed in each outcome. We created separate forest plots for each performance metric (accuracy, AUROC, etc.), considering, for each study, the highest performance achieved (across various experiments and image modalities). In each forest plot, we reported the mean and the 95% confidence interval (CI) across all the studies within the corresponding group, whether organized by organ or by meta-learning algorithm. It is important to note that we did not conduct a meta-analysis of the collected results, because the studies included in our review encompassed various clinical applications, making direct comparisons between the results inappropriate. The forest plots therefore serve as a visual representation of the individual study findings rather than a quantitative synthesis of the data. In conclusion, we provide a comprehensive overview by means of a unified pipeline that encompasses all the papers reviewed within each outcome. For each outcome, we present a table that delineates the specific elements of the core pipeline utilized by each study.

\begin{table}
\begin{tabular}{c l} \hline \hline
**Database** & **Query** \\ \hline
Web of Science & (TS=("few-shot") OR TS=("low-shot") OR TS=("one-shot") OR TS=("zero-shot")) AND (TS=("medical imag*")) AND (TS=("classif*") OR TS=("segment*") OR TS=("regist*")) \\ \hline
Scopus & TITLE-ABS-KEY ( few-shot ) OR TITLE-ABS-KEY ( low-shot ) OR TITLE-ABS-KEY ( one-shot ) OR TITLE-ABS-KEY ( zero-shot ) AND TITLE-ABS-KEY ( medical imaging ) OR TITLE-ABS-KEY ( medical image ) OR TITLE-ABS-KEY ( medical images ) AND TITLE-ABS-KEY ( classif* ) OR TITLE-ABS-KEY ( segment* ) OR TITLE-ABS-KEY ( regist* ) \\ \hline
IEEE Xplore & ((("Abstract":"few-shot" OR "Abstract":"low-shot" OR "Abstract":"one-shot" OR "Abstract":"zero-shot") AND "Abstract":"medical imag*" AND ("Abstract":classification OR "Abstract":segmentation OR "Abstract":registration)) OR (("Document Title":"few-shot" OR "Document Title":"low-shot" OR "Document Title":"one-shot" OR "Document Title":"zero-shot") AND "Document Title":"medical imag*" AND ("Document Title":classification OR "Document Title":segmentation OR "Document Title":registration)) OR (("Author Keywords":"few-shot" OR "Author Keywords":"low-shot" OR "Author Keywords":"one-shot" OR "Author Keywords":"zero-shot") AND "Author Keywords":"medical imag*" AND ("Author Keywords":classif* OR "Author Keywords":segment* OR "Author Keywords":regist*))) \\ \hline
ACM Digital Library & (Abstract:("few-shot" OR "low-shot" OR "one-shot" OR "zero-shot") OR Keyword:("few-shot" OR "low-shot" OR "one-shot" OR "zero-shot") OR Title:("few-shot" OR "low-shot" OR "one-shot" OR "zero-shot")) AND (Abstract:("medical imaging" OR "medical images" OR "medical image") OR Keyword:("medical imaging" OR "medical images" OR "medical image") OR Title:("medical imaging" OR "medical images" OR "medical image")) AND (Abstract:(classif* OR segment* OR regist*) OR Title:(classif* OR segment* OR regist*) OR Keyword:(classif* OR segment* OR regist*)) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Research queries employed for each database.
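The per-group summary statistics reported in the forest plots (mean and 95% CI over the per-study best scores) can be computed as in the sketch below; the review does not prescribe a specific interval formula, so a t-based interval is assumed here, and `mean_and_ci` is an illustrative name.

```python
import numpy as np
from scipy import stats

def mean_and_ci(scores, confidence=0.95):
    """Mean and two-sided CI of per-study best scores within one group."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    half_width = stats.sem(scores) * stats.t.ppf((1 + confidence) / 2,
                                                 df=len(scores) - 1)
    return mean, (mean - half_width, mean + half_width)
```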
## 4 Results
### Study selection
In Figure 3, we show the PRISMA diagram where we summarize the data selection flow. In total, we retrieved 314 studies and included 80 studies in the final analysis.
### Studies characteristics
In this section, we present the findings resulting from our analysis of the selected research papers. Figure 4 displays the distribution of studies across the three primary outcomes, while Figure 5 illustrates the proposed unified pipeline for the various methods employed in performing FSL. Below, we present the results of our analysis grouped by the primary outcome: segmentation, classification, and registration. It is worth noting that several studies, namely [34], [35], [36], [37], and [38], are included multiple times, as they address multiple outcomes simultaneously.
#### 4.2.1 Segmentation
We selected 50 relevant studies, each focusing on medical image segmentation as its primary task. All pertinent information from the selected studies is provided in Table 2. In addition, we present the ROB and applicability analyses of each study in Table 3.
Figure 3: PRISMA flow diagram.
Figure 4: Studies distribution by outcome.
Figure 5: Summary diagram.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning** \\
**ID** & & & & & **type** \\ \hline \multirow{4}{*}{1} & Blendowski, & \multirow{4}{*}{Siamese Network + SSL} & \multirow{4}{*}{1-shot 9-shot 0.657 (Spleen) 0.663 (Kidney) 0.665 (Psoas) 0.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning** \\
**ID** & & & & & **type** \\ \hline \hline \multirow{9}{*}{12} & \multirow{9}{*}{Hansen et al. [49]} & \multirow{9}{*}{Anomaly detection-inspired model + SSL} & \multirow{9}{*}{1-shot 2-shot 3-shot 4-shot 5-Splenet 6-Splenet 75 (LV-BP)} & \multirow{9}{*}{Metric learning} \\ & & & & 0.875 (LV-BP) & \\ & & Joint Model & & 0.773 (RV) & \\ & & & 2-shot 3-shot 4-Splenet 6-Splenet 75 (Splenet 8-Dex) & & \\ & & & 0.808 (Liver) & \\ \hline \multirow{9}{*}{13} & \multirow{9}{*}{He et al. [34]} & \multirow{9}{*}{Deep Complementary Joint Model (Segmentation model + Pixel-wise discriminator + Registration model)} & \multirow{9}{*}{4-shot 0.870 (MYO)} & \\ & & & 0.800 (PA) & \\ & & & 0.800 (RA) & \\ & & & 0.810 (RV) & \\ \hline \multirow{9}{*}{14} & \multirow{9}{*}{He et al. [35]} & \multirow{9}{*}{Knowledge Consistency Constraint strategy + Space-style Sampling Program + Mix Misalignment Regularization} & \multirow{9}{*}{1-shot 5-shot 0.872 (Brain, MAS)} & \multirow{9}{*}{Hallocation} \\ & & & & 0.870 (MYO) & \\ \hline \multirow{9}{*}{15} & \multirow{9}{*}{Jenssen et al. [50]} & \multirow{9}{*}{Anomaly detection-inspired model (Segmentation model + SSL} & \multirow{9}{*}{1-shot 0.840 (LV) 0.585 (MYO) 0.697 (RV)} & \multirow{9}{*}{Metric learning} \\ & & & & 0.870 (Brain, MAS) & \\ & & & & \\ \hline \multirow{9}{*}{16} & \multirow{9}{*}{Joyce and Kozerke [51]} & \multirow{9}{*}{Anatomical model + SSL} & \multirow{9}{*}{1-shot 3-shot 0.630 (Heart)} & \multirow{9}{*}{None} \\ & & & & 0.840 (LV) & \\ & & & 0.585 (MYO) & \\ & & & 0.697 (RV) & \\ \hline \multirow{9}{*}{17} & \multirow{9}{*}{Khadka et al. [52]} & \multirow{9}{*}{Multi-stage GAN} & \multirow{9}{*}{10-shot 0.940 (Brain, MAS)} & \multirow{9}{*}{None} \\ & & & & 0.690 (Brain, MAS) & \\ \hline \multirow{9}{*}{19} & \multirow{9}{*}{Khandelwal and Yushkevich [54]} & \multirow{9}{*}{domain generalization + 3D U-Net + Fine-tuning} & \multirow{9}{*}{2-shot 4-shot 6-shot 0.823 (Spine, MAS)} & \multirow{9}{*}{Initialization} \\ & & & & 0.870 (Brain, MAS) & \\ \hline \multirow{9}{*}{20} & \multirow{9}{*}{Kim et al. [55]} & \multirow{9}{*}{VGG16 + Bidirectional gated recurrent unit + U-Net + Fine-tuning} & \multirow{9}{*}{5-shot 0.887 (Liver)} & \multirow{9}{*}{Metric learning} \\ & & & & 0.771 (Bladder) & \\ \hline \multirow{9}{*}{21} & \multirow{9}{*}{Li et al. [56]} & \multirow{9}{*}{3D U-Net + Prototypical learning + Image alignment module} & \multirow{9}{*}{1-shot 0.417 (Prostate, MAS)} & \multirow{9}{*}{Metric learning} \\ \cline{1-1} \cline{5-5} & & & & 0.973 (Knee) & \\ \cline{1-1} \cline{5-5} & & & 0.948 (Lung) & \\ \cline{1-1} \cline{5-5} & & & 0.970 (Phalanx) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: _(continued)._
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** &
\begin{tabular}{c} **Meta-learning** \\ **type** \\ \end{tabular} \\ \hline
23 & Lu and Ye [58] & \begin{tabular}{c} TractSeg + \\ Knowledge transfer \\ with warmup \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Dice: \\ 0.812 \\ (Brain, WM) \\ \end{tabular} & None \\ \hline
24 & Ma et al. [59] & \begin{tabular}{c} Segmentation network + \\ Zero-shot segmentation \\ network + \\ Spatial Context \\ Attention module \\ \end{tabular} & 0-shot &
\begin{tabular}{c} Dice: \\ 0.882 \\ (Brain, tumour) \\ \end{tabular} & None \\ \hline
25 & Niu et al. [60] & \begin{tabular}{c} Conditioner + Segmenter + \\ Symmetrical Supervision \\ Mechanism + \\ Transformer-based Global \\ Feature Alignment module \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Dice: \\ 0.870 (LV-BP) \\ 0.815 (Kidney) \\ 0.738 (Spleen) \\ 0.729 (Liver) \\ \end{tabular} & Metric learning \\ \hline
26 & Ouyang et al. [7] & \begin{tabular}{c} Self-Supervised Adaptive \\ Local Prototype \\ Pooling Network \\ \end{tabular} & 1-shot & \begin{tabular}{c} Dice: \\ 0.862 (Kidney) \\ 0.5-shot \\ \end{tabular} &
\begin{tabular}{c} Dice: \\ 0.757 (Spleen) \\ 0.821 (Liver) \\ 0.870 (LV) \\ 0.721 (MYO) \\ 0.860 (RV) \\ \end{tabular} & Metric learning \\ \hline
27 & Pham et al. [61] & Few-Sample-Fitting & \begin{tabular}{c} 1-shot to \\ 20-shot \\ \end{tabular} &
\begin{tabular}{c} Dice: \\ 0.990 (Femur) \\ \end{tabular} & None \\ \hline
28 & Pham, Dovletov & \begin{tabular}{c} 3D U-Net + \\ Imitating encoder + \\ Prior encoder + \\ Joint decoder \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Dice: \\ 0.776 (Liver) \\ \end{tabular} & None \\ \hline
29 & Roy et al. [63] & \begin{tabular}{c} Conditioner arm + \\ Segmenter arm + \\ Channel Squeeze \& 1-shot \\ Spatial Excitation \\ blocks \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Dice: \\ 0.700 (Liver) \\ 0.607 (Spleen) \\ 0.464 (Kidney) \\ 0.499 (Posa) \\ \end{tabular} & None \\ \hline
30 & Roychowdhury et al. [36] & \begin{tabular}{c} Echo state network + \\ augmented U-Net \\ \end{tabular} & 5-shot &
\begin{tabular}{c} Dice: \\ 0.640 (Eye, IC) \\ \end{tabular} & None \\ \hline
31 & Rutter, Lagergren \\ and Flores [64] & \begin{tabular}{c} CNN for \\ Boundary Optimization \\ \end{tabular} & \begin{tabular}{c} 1-shot \\ 3-shot \\ 5-shot \\ \end{tabular} &
\begin{tabular}{c} Dice: \\ 0.931 (Cells) \\ \end{tabular} & None \\ \hline
32 & Shen et al. [65] & \begin{tabular}{c} Large Deformation \\ Diffeomorphic Metric \\ Mapping model + \\ Sample transformations + \\ Interpolation \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Dice: \\ 0.883 (Knee) \\ \end{tabular} & None \\ \hline
33 & Shen et al. [66] & \begin{tabular}{c} VGG-16 + \\ Poisson learning + \\ Spatial Consistency \\ Calibration \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Dice: \\ 0.619 (Skin, MAD) \\ 0.610 (Liver) \\ 0.536 (Kidney) \\ 0.529 (Spleen) \\ \end{tabular} & None \\ \hline
34 & Shi et al. [37] & \begin{tabular}{c} Joint Registration \\ and Segmentation \\ Self-training \\ Framework (JRSS) \\ \end{tabular} & 5-shot &
\begin{tabular}{c} Dice: \\ 0.795 \\ (Brain, MAS) \\ 0.753 \\ (Abdomen, MAS) \\ \end{tabular} & None \\ \hline \hline \end{tabular}
\end{table}
Table 2: _(continued)._
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning type** \\ \hline \multirow{4}{*}{35} & & 2-branch CNN + & & & \\ & & Spatial Squeeze & & Dice: & \\ & & Excite module + & & 0.495 (Liver) & \\ & & Global Correlation module + & & 0.606 (Spleen) & \\ & & Discriminative Embedding & & 0.830 (Kidney) & \\ & & module & & & \\ \hline \multirow{4}{*}{36} & & Recurrent Prototypical & & Dice: & \\ & & Networks (U-Net + & & 0.788 (Spleen) & \\ & & Contex Relation Encoder + & & 0.851 (Kidney) & \\ & & Prototypical Network) & & 0.819 (Liver) & \\ \hline \multirow{4}{*}{37} & & Generative Style Transfer & & & \\ & & (Appearance model + & & \\ & & Style encoder + & & & \\ & & Flow model + & & & \\ & & Flow Adversarial & & & \\ & & Autoencoder) & & & \\ \hline \multirow{4}{*}{38} & & Label Transfer & & & \\ & & Network & & & \\ & & (Atlas-based & & 0.823 (Brain, MAS) & None \\ & & segmentation + & & & \\ & & Forward-backward & & & \\ & & correspondance) & & & \\ \hline \multirow{4}{*}{39} & & & & Dice: & \\ & & & 0.862 (Brain, MAS) & & \\ & & Siamese model and & & 0.803 (Spleen) & \\ & & Individual-Difference- & & 0.884 (Kidney) & \\ & & Aware model (Encoders + & & 0.916 (Liver) & None \\ & & Forward-backward & & 0.684 (Stonach) & \\ & & & 0.511 (Pancreas) & \\ & & & & 0.485 (Doudenum) & \\ & & & & 0.519 (Esophagus) & \\ \hline \multirow{4}{*}{40} & & & & Dice: & \\ & & & & 0.937 (LV) & \\ & & V-Net + & & 0.890 (RV) & \\ & & Init-crop + & & 0.872 (LA) & \\ & & Self-down + & & 0.909 (RA) & \\ & & Self-crop & & 0.831 (MYO) & \\ & & & & 0.943 (AO) & \\ & & & & 0.798 (PA) & \\ \hline \multirow{4}{*}{41} & & & Prototype learning + & & Dice: & \\ & & & Self-reference + & & 0.756 (Liver) & \\ & & Zheng [72] & Contrastive learning & & 0.737 (Spleen) & \\ & & & & 0.842 (Kidney) & \\ \hline \multirow{4}{*}{42} & & & Alternating Union Network & & Dice: & \\ & & (Image Sub-Network + & & 0.873 (LV) & \\ & & Label Sub-Network & & 0.637 (MYO) & None \\ \hline \multirow{4}{*}{43} & & & Dual Contrastive Learning + & & \\ & & Anatomical Auxiliary & & Dice: & \\ & & Supervision + & & 0.699 (Liver) & \\ & & & Constrained Iterative & & 0.838 (Kidney) & None \\ & & Prediction module & & 0.749 (Spleen) & \\ \hline \multirow{4}{*}{44} & & & Self-Learning + & & Dice: & \\ & & One-Shot & & 0.850 (Spleen) & None \\ & & Learning & & 0.930 (Liver) & \\ \hline \hline \end{tabular}
\end{table}
Table 2: _(continued)._
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning type** \\ \hline \multirow{4}{*}{45} & & DeepAtlas & & & \\ & & (Semi-Supervised & 1-shot & Dice: & \\ & Xu and & Learning + & 5-shot & 0.892 (Knee, MAS) & None \\ & Niethammer [38] & Segmentation network + & 10-shot & 0.612 (Brain) & \\ & & Registration network) & & & \\ \hline \multirow{4}{*}{46} & & Location-Sensitive & & Dice: & \\ & & Location-Sensitive & & 0.793 (Liver) & \\ & & Local Prototype & 1-shot & 0.733 (Spleen) & Metric learning \\ & & Network & & 0.765 (Kidney) & \\ & & & 0.524 (Posa) & \\ \hline \multirow{4}{*}{47} & & & IoU: & \\ & & MetaHistoSeg & & 0.326 (Cells) & \\ & & (U-Net + MAML) & & 0.682 (Cells nuclei) & Initialization \\ & & & 0.557 (Gland) & \\ & & & 0.632 (Colon, tumour) & \\ \hline \multirow{4}{*}{48} & & Spatial and appearance & & & \\ & & transform models + & & & Dice: & \\ & & Semi-supervised learning + & & 0.815 (Brain, MAS) & None \\ & & Supervised learning & & & \\ \hline \multirow{4}{*}{49} & & & & Dice: & \\ & & & 0.756 (AO) & & \\ & & & 4-shot & 0.751 (LA) & Initialization and \\ & & & 0.823 (LV) & Hallucination- & \\ & & & 0.696 (MYO) & based \\ \hline \multirow{4}{*}{50} & & & Dice: & & \\ & & OrganNet & & 0.891 (Spleen) & \\ & & (3 encoders + & & 0.860 (Kidney) & \\ & & Pyramid Reasoning & & 0.770 (Aorta) & \\ & & Modules) & & 0.728 (Pancreas) & \\ & & & 0.826 (Stomach) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: _(continued)._
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{5}{c}{**Risk of Bias**} & \multicolumn{4}{c}{**Applicability**} \\ \cline{3-11}
**Study** & **Pub. ref.** & **Part.** & **Pred.** & **Out.** & **Analysis** & **Overall** & **Part.** & **Pred.** & **Out.** & **Overall** \\
**ID** & & & & & & & & & & & \\ \hline
1 & Blendowski, Nickisch, and Heinrich [39] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
2 & Chan et al. [40] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
3 & Chen et al. [10] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
4 & Cui et al. [41] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
5 & Ding, Wangbin et al. [42] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
6 & Ding, Yu and Yang [43] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
7 & Farshad et al. [44] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
8 & Feng et al. [45] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
9 & Gama, Oliveira and dos & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & Santos [46] & & & & & & & & & \\ \hline
10 & Gama et al. [47] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
11 & Guo, Odu and Pedrosa [48] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
12 & Hansen et al. [49] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
13 & He et al. [34] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
14 & He et al. [35] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
15 & Jenssen et al. [50] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
16 & Joyce and Kozerke [51] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
17 & Khadka et al. [52] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
18 & Khaled, Han and Ghaleb [53] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 3: ROB of FSL studies for medical image segmentation.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{5}{c}{**Risk of Bias**} & \multicolumn{4}{c}{**Applicability**} \\ \cline{3-11}
**Study** & **Pub. ref.** & **Part.** & **Pred.** & **Out.** & **Analysis** & **Overall** & **Part.** & **Pred.** & **Out.** & **Overall** \\
**ID** & & & & & & & & & & & \\ \hline
19 & Khandelwal and & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ & Yushkevich [54] & & & & & & & & & & \\ \hline
20 & Kim et al. [55] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
21 & Li et al. [56] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
22 & Lu et al. [57] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
23 & Lu and Ye [58] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
24 & Ma et al. [59] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
25 & Niu et al. [60] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
26 & Ouyang et al. [7] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
27 & Pham et al. [61] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
28 & Pham, Dovletov and Pauli [62] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
29 & Roy et al. [63] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
30 & Roychowdhury et al. [36] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
31 & Rutter, Lagergren and & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & Flores [64] & & & & & & & & & \\ \hline
32 & Shen et al. [65] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
33 & Shen et al. [66] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
34 & Shi et al. [37] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
35 & Sun et al. [67] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
36 & Tang et al. [68] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
37 & Tomar et al. [69] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
38 & Wang et al. [70] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
39 & Wang et al. [71] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
40 & Wang et al. [8] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
41 & Wang, Zhou and Zheng [72] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
42 & Wang et al. [73] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
43 & Wu, Xiao and Liang [74] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
44 & Wu et al. [75] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
45 & Xu and Niethammer [38] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
46 & Yu et al. [76] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
47 & Yuan, Esteva and Xu [77] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
48 & Zhao et al. [78] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
49 & Zhao et al. [79] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
50 & Zhou et al. [80] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 3: _(continued)._
Here, we present the findings derived from our comprehensive analysis of the segmentation papers.
**Medical application.** The segmentation papers within the field of FSL address various anatomical structures and regions, as well as specific lesions such as polyps or tumours. Here is a breakdown of the papers categorized by the anatomical structure(s) investigated. Eighteen papers (36%) focus on liver segmentation; 18 studies (36%) concentrate on kidney segmentation; 17 papers (34%) centre around spleen segmentation; three papers (6%) pertain to psoas segmentation; four (8%) are related to prostate segmentation; three works (6%) involve bladder segmentation; four papers (8%) deal with breast segmentation; one paper (2%) addresses colon segmentation; six (12%) are concerned with stomach segmentation; 12 (24%) are dedicated to brain segmentation; 14 papers (28%) revolve around heart segmentation; three (6%) involve pancreas segmentation; three (6%) pertain to cell segmentation; two papers (4%) are related to lung segmentation; one (2%) focuses on eye segmentation; two papers (4%) involve mandible segmentation; one (2%) addresses duodenum segmentation; two papers (4%) deal with skin segmentation; three papers (6%) are related to knee segmentation; one (2%) concerns phalanx segmentation; one (2%) deals with hip segmentation; one paper (2%) is dedicated to spine segmentation. For a visual representation of the distribution, please refer to Figure 6.
**Meta-learning methods.** Out of the 50 studies we selected in the realm of FSL for medical image segmentation, the distribution of their meta-learning methods is as follows: six studies (12%) leverage initialization-based methods; 14 studies (28%) utilize metric learning-based techniques; three studies (6%) employ hallucination-based methods; one study (2%) combines both initialization-based and hallucination-based methods. The remaining 28 studies (56%) do not incorporate meta-learning techniques. For a visual representation of the distribution, refer to Figure 7.
**K-shot.** Among the 50 selected papers, here is the distribution of training-shot configurations: 15 studies (30%) utilize k-shot training with k ranging from 2 to 20; 14 studies (28%) perform both OSL and FSL; 20 works (40%) exclusively use 1-shot training; one paper (2%) employs 0-shot training.
**Image modalities.** In terms of the imaging modalities utilized in the selected papers, here is the distribution: 26 (52%) used CT images; 30 papers (60%) utilized MRI; four (8%) relied on X-ray images; two (4%) involved dermoscopic images; one paper (2%) made use of endoscopic images; one (2%) used histopathology images; two (4%) employed microscopic images; one paper (2%) utilized OCT images.
**Model evaluation.** To examine the behaviour and robustness of the models, the selected studies used different evaluation techniques as follows: 21 studies (42%) exclusively conducted ablation studies; 11 studies (22%) utilized both ablation studies and cross-validation; five studies (10%) relied solely on cross-validation; 13 studies (26%) did not employ any specific model evaluation technique.
**Model performance grouped by organ and meta-learning method.** In Figure 8 and Figure 9, we present a summary of the model performance in forest plots, categorized by anatomical structure, w.r.t. Dice score and IoU, respectively. Conversely, in Figure 10 and Figure 11, we depict the performance in terms of Dice and IoU, respectively, by grouping the studies according to the employed meta-learning methods.
Figure 6: Segmentation studies grouped by the anatomical structure investigated.

**Overall pipeline.** In Table 4, we outline which steps of the main pipeline are adopted by each segmentation study. Here are the distributions of studies based on their utilization of pre-training, training, and data augmentation techniques: two out of 50 studies (4%) employed meta-learning for pre-training; two studies (4%) utilized self-supervised learning and 13 studies (26%) relied on supervised learning. The majority, 33 out of 50 studies (66%), did not employ any pre-training stage. For their main training stage, 20 studies (40%) utilized meta-learning methods; 12 (24%) employed semi-supervised approaches; four studies (8%) employed self-supervised methods; 16 studies (32%) used traditional supervised techniques; one study (2%) employed a zero-shot learning method. Finally, concerning the data augmentation techniques, 16 studies (32%) exploited classical data augmentation techniques; five studies (10%) utilized generative methods for data augmentation; six studies (12%) relied on registration-based augmentation. The remaining 24 out of 50 studies (48%) did not employ data augmentation.
Figure 7: Segmentation studies grouped by meta-learning method employed.
Figure 8: Forest plot of segmentation studies performance based on Dice metric. Studies are grouped by the anatomical structure investigated. AA = Ascending Aorta; IC = Intracranial Cyst; LA = Left Atrium; LV = Left Ventricle; MAS = Mean Across Structures; MYO = Myocardium; PA = Pulmonary Artery; PZ = Peripheral Zone; RA = Right Atrium; RV = Right Ventricle; TZ = Transitional Zone; WM = White Matter.
Figure 11: Forest plot of segmentation studies performance based on IoU metric. Studies are grouped by the meta-learning method employed.
Figure 10: Forest plot of segmentation studies performance based on Dice metric. Studies are grouped by the meta-learning method employed.
Figure 9: Forest plot of segmentation studies performance based on IoU metric. Studies are grouped by the anatomical structure investigated. IE = Intracranial Ematoma; MAS = Mean Across Structures; PZ = Peripheral Zone; TZ = Transitional Zone.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Study ID** & **Pub. ref.** & **Pre-training** & **Training** & **Data augmentation** \\ \hline
1 & Blendowski, Nickisch, & Self-supervised & None & None \\ & and Heinrich [39] & None & Supervised & Registration-based \\
2 & Chan et al. [40] & None & Supervised & Generative \\
3 & Chen et al. [10] & None & Supervised & Generative \\
4 & Cui et al. [41] & None & Meta & Classical \\
5 & Ding, Wangbin et al. [42] & None & Meta & Classical \\
6 & Ding, Yu and Yang [43] & None & Semi-supervised & Generative \\ & & and Meta & & \\
7 & Farshad et al. [44] & Meta & Supervised & None \\
8 & Feng et al. [45] & None & Meta & None \\
9 & Gama, Oliveira and dos & None & Meta & None \\ & Santos [46] & & & \\
10 & Gama et al. [47] & None & Meta & None \\
11 & Guo, Odu and Pedrosa [48] & None & Supervised & Classical \\
12 & Hansen et al. [49] & Supervised & Self-supervised and & None \\ & & & Meta & \\
13 & He et al. [34] & None & Supervised & Registration-based \\
14 & He et al. [35] & None & Meta & Registration-based \\
15 & Jenssen et al. [50] & Supervised & Self-supervised and Meta & None \\
16 & Joyce and Kozerke [51] & None & Self-supervised & Classical \\
17 & Khadka et al. [52] & Supervised & Meta & None \\
18 & Khaled, Han and Ghaleb & None & Semi-supervised & None \\
19 & Khandelwal and Yushkevich [54] & None & Meta & Classical \\
20 & Kim et al. [55] & Meta & Meta & Classical \\
21 & Li et al. [56] & None & Meta & Classical \\
22 & Lu et al. [57] & Supervised & Semi-supervised & None \\
23 & Lu and Ye [58] & Supervised & Supervised & None \\
24 & Ma et al. [59] & None & Zero-shot & None \\
25 & Niu et al. [60] & None & Meta & None \\
26 & Ouyang et al. [7] & Supervised & Self-supervised and Meta & None \\
27 & Pham et al. [61] & None & Supervised & Classical \\
28 & Pham, Dovletov and Pauli & None & Supervised & None \\
29 & Roy et al. [63] & None & Supervised & Classical \\
30 & Roychowdhury et al. [36] & None & Supervised & Classical \\
31 & Rutter, Lagergren and Flores [64] & None & Semi-supervised & Classical \\
32 & Shen et al. [65] & None & Semi-supervised & Registration-based \\
33 & Shen et al. [66] & Supervised & Semi-supervised & None \\
34 & Shi et al. [37] & Supervised & Semi-supervised & Registration-based \\
35 & Sun et al. [67] & Supervised & Meta & None \\
36 & Tang et al. [68] & None & Meta & None \\
37 & Tomar et al. [69] & Self-supervised & Supervised & Generative \\
38 & Wang et al. [70] & None & Semi-supervised & None \\
39 & Wang et al. [71] & None & Supervised & Classical \\
40 & Wang et al. [8] & None & Semi-supervised & Classical \\
41 & Wang, Zhou and Zheng [72] & Supervised & Meta & None \\
42 & Wang et al. [73] & None & Supervised & None \\
43 & Wu, Xiao and Liang [74] & None & Supervised & Classical \\
44 & Wu et al. [75] & None & Semi-supervised & None \\
45 & Xu and Niethammer [38] & Supervised & Semi-supervised & Classical and Registration-based \\
46 & Yu et al. [76] & Supervised & Meta & Classical \\
47 & Yuan, Esteva and Xu [77] & None & Supervised and Meta & Classical \\
48 & Zhao et al. [78] & None & Semi-supervised & Generative \\ \hline \hline \end{tabular}
\end{table}
Table 4: Main pipeline steps adopted by segmentation studies.
#### 4.2.2 Classification
We identified 27 relevant studies, each focusing on medical classification as its primary task. To enhance clarity and facilitate easy reference, we present all the relevant information from these selected studies in Table 5. In addition, we provide information concerning ROB and the applicability of each study in Table 6.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Study ID** & **Pub. ref.** & **Pre-training** & **Training** & **Data augmentation** \\ \hline
49 & Zhao et al. [79] & None & Meta & Classical and Generative \\
50 & Zhou et al. [80] & Supervised & Supervised & None \\ \hline \hline \end{tabular}
\end{table}
Table 4: _(continued)._
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Study ID** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning type** \\ \hline
51 & Ali et al. [81] & Prototypical & 5-shot & Accuracy: 0.906 (Endoscopic images, MAO) & Metric learning \\ \hline
52 & Cai, Hu, and Zheng [82] &
\begin{tabular}{c} Prototypical \\ network + \\ Attention (CBAM) \\ \end{tabular} & 20-shot & Accuracy: 0.924 (Brain, MAT) & Metric learning \\ \hline
53 & Cai et al. [83] &
\begin{tabular}{c} Pre-Moco \\ Diagnosis Network (Pre-training+ \\ Contrastive learning) \\ \end{tabular} & 1-shot & Accuracy: 0.832 (Skin, MAD) & Metric learning \\ \hline
54 & Cano and Cruz-Roa [84] &
\begin{tabular}{c} Siamese \\ Neural Network \\ \end{tabular} & 1-shot & 0.908 (Breast, MAT) & Metric learning \\ \hline
55 & Chen et al. [85] &
\begin{tabular}{c} 2D CNN ranking + \\ 2D CNN classification + \\ Heatmap for segmentation \\ \end{tabular} & 2-shot & 0.883 (Breast, LN metastases) & None \\ \hline
56 & Chou et al. [86] &
\begin{tabular}{c} Siamese Neural Network \\ (Triple encoder + \\ Triple loss) \\ \end{tabular} & 1-shot & 0.986 (Brain, \\ & & & & classification into contrast \\ & & & & type) \\ \hline
57 & Dai et al. [87] & \begin{tabular}{c} Prior Guided Feature \\ Enhancement for \\ Few-shot Medical \\ Image Classification \\ \end{tabular} & \begin{tabular}{c} 3-shot \\ 5-shot \\ 10-shot \\ \end{tabular} &
\begin{tabular}{c} Accuracy: \\ 0.851 (Brain, MAT) \\ 0.960 (Skin, MAT) \\ 0.803 (Cervix, MAT) \\ AUROC: \\ 0.961 (Eye, MAD) \\ 0.955 (Lung, COVID) \\ \end{tabular} & Metric learning \\ \hline
59 & Jiang et al. [89] &
\begin{tabular}{c} Autoencoder + \\ Metric learner + \\ Task learner (Transfer \\ learning phase + \\ Meta-learning phase) \\ \end{tabular} & 1-shot & 10-shot & 0.762 (Colon, MAD) & Metric learning \\ \hline
60 & Jin et al. [90] & \begin{tabular}{c} ViT-L/16 + \\ ResNet50 + \\ Metric-learning \\ \end{tabular} & 1-shot &
\begin{tabular}{c} Accuracy: \\ 0.346 (Lungs, MAD) \\ \end{tabular} & Metric learning \\ \hline \hline \end{tabular}
\end{table}
Table 5: FSL studies for medical image classification.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning type** \\ \hline \multirow{4}{*}{61} & & & & Accuracy: & \\ & & Self-Supervised & & 0.921 (Breast, & \\ & & Clustering Based & & LN metastases) & \\ & & Generalized Zero-shot & & 0.909 (Lungs, MAD) & \\ & & Learning & & 0.942 (Eye, DE) & \\ & & & 0.911 (Prostate, tumour) & \\ \hline \multirow{4}{*}{62} & & Pre and post-hoc & & & \\ & & diagnosis and & & & \\ & & interpretation + & & 0.910 (Breast, tumour) & Initialization \\ & & 3D DenseNet & & & \\ \hline \multirow{4}{*}{63} & & & & Accuracy: & \\ & & Siamese Network + & & 0.930 & \\ & & Classifier & & (Lung, \\ & & Classifier & & COVID and & \\ & & & Pneumonia) & \\ \hline \multirow{4}{*}{64} & & DeepVoro & & & \\ & [36] & Multi-label & & 10-shot & 0.679 (Lung, MAD) & Initialization-based \\ & [36] & ensemble & & & \\ \hline \multirow{4}{*}{65} & & 8 block VGG + & 1-shot to & Accuracy: & \\ & & Wang [20] & MAML++ & 5-shot & 0.857 (Lung, COVID) & Initialization \\ \hline \multirow{4}{*}{66} & & & & Accuracy: & \\ & & Self-attention & 3-shot & 0.703 (Lungs, MAD) & \\ & & augmented MAML & 5-shot & & AUROC: & \\ & & & & 0.843 (Skin,MAD) & \\ & & & & 0.734 (Lungs, MAD) & \\ \hline \multirow{4}{*}{67} & & DenseNet-121 & & & F1-score: & \\ & & (feature extractor) + & & 0.440 (Lung, MAD) & \\ & & Autoencoder ensemble & & Recall: & \\ & & (classification) & & 0.490 (Lung,MAD) & \\ \hline \multirow{4}{*}{68} & & DenseNet + & & F1-score: & \\ & & Vanilla & 5-shot & 0.470 (Lungs, MAD) & \\ & & autoencoder ) & & AUROC: & \\ & & & 0.647 (Lungs, MAD) & \\ \hline \multirow{4}{*}{69} & & DenseNet + & & & \\ & & MVSE network + & & Recall: & \\ & & Self-training & & 0.454 (Lungs, MAD) & None \\ \hline \multirow{4}{*}{30} & & Echo state network & & & \\ & & (ParESN) + & & & \\ & & Target label & & 0.970 (Eye, IE) & None \\ & & & selection algorithm & (TLSA) & \\ \hline \multirow{4}{*}{70} & & & 3-shot & Accuracy: & \\ & & & 5-shot & 0.864 (Breast, MAD) & \\ & & & 10-shot & 0.843 (Skin, MAD) & Initialization \\ & & & 0.934 (Cervix, MAT) & \\ \hline \multirow{4}{*}{71} & & VAE + & & \\ & & Distribution & 0-shot & AUROC: & \\ & & learning & 15-shot & 0.789 (Pancreas) & None \\ \hline \hline \end{tabular}
\end{table}
Table 5: _(continued)._
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{1}{c}{**Study**} & \multicolumn{1}{c}{**Pub. ref.**} & \multicolumn{1}{c}{**Algorithm/Pipeline**} & \multicolumn{1}{c}{**K-shot**} & \multicolumn{1}{c}{**Best performance**} & \multicolumn{1}{c}{**Meta-learning type**} \\ \cline{3-10} \multicolumn{1}{c}{**ID**} & & CNN & & & & & \\ & & feature extractor + & & & & & \\ & & classification & & & & & \\
72 & Xiao et al. [102] & prototype + & 5-shot & Accuracy: & & & \\ & & similarity & 10-shot & 0.874 (Skin, MAD) & Metric learning & & \\ & & module + & & & & \\ & & rectified & & & & \\ & & corruption function & & & & \\ \hline \multirow{2}{*}{73} & \multirow{2}{*}{Yan et al. [103]} & Siamese- & \multirow{2}{*}{1-shot} & Accuracy: & & & \\ & & Prototypical Network & & 5-shot & 0.608 (Liver, MAD) & Metric learning & & \\ & & & & & 0.626 (Colon, MAD) & Metric learning & & \\ \hline \multirow{2}{*}{74} & \multirow{4}{*}{Yarlagadda et al. [104]} & Region proposal network + & & & & & \\ & & Inception-ResNet-v2 + & & & & & \\ & & Memory module with & & & & & \\ & & regional maximum activation & & & 0.946 (Cells) & None \\ & & & of convolutions global & & & \\ & & descriptors & & & & \\ \hline \multirow{2}{*}{75} & \multirow{4}{*}{Zhang, Cui and Ren [105]} & \multirow{4}{*}{MAML} & \multirow{4}{*}{1-shot} & Accuracy: & & & \\ & & & 1-shot & 0.788 (VQA- & & \\ & & & 3-shot & 0.840, MAS) & Initialization & \\ & & & 5-shot & 0.614 (PathVQA, & \\ & & & & & MAS) & \\ \hline \multirow{2}{*}{76} & \multirow{4}{*}{Zhu et al. [106]} & Query-Relative & & & & & \\ & & Loss + Adaptive & 1-shot & Accuracy: & & \\ & & Hard Margin + & 5-shot & 0.719 (Skin, MAD) & Metric learning & & \\ & & Prototypical Network/ & & & & \\ & & Matching Network & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: _(continued)._
Here, we present the findings derived from our comprehensive analysis of the classification papers.
**Medical application.** The classification papers within the FSL domain cover a wide range of anatomical structures and regions, as well as specific lesions. Here's a breakdown of the number of papers categorized by the anatomical structure(s) investigated: two out of 27 studies (7%) perform brain image classification, focusing on different types of tumours and MRI contrast types; six studies (22%) address breast image classification, with four concentrating on breast tumours and two on breast metastases involving nearby lymph nodes; two studies (7%) investigate cell image classification; two studies (7%) focus on cervix image classification; three studies (11%) pertain to colon image classification; four studies (15%) are dedicated to fundus eye image classification, with 2 investigating different diseases; one study (4%) deals with liver disease classification; 11 studies (41%) involve lung image classification; one study (4%) is concerned with pancreas image classification; one study (4%) classifies prostate tumour images; 7 studies (26%) address skin image classification, covering different diseases; one study (4%) investigates esophagus image classification; one study (4%) focuses on stomach image classification. Note that [105] is not included in this analysis as it did not specify which anatomical structures were part of their study. For a visual representation of the distribution, refer to Figure 12.
**Meta-learning methods.** In the context of classification studies employing FSL, the distribution of meta-learning methods is as follows: six out of 27 studies (22%) utilize initialization-based methods; 10 studies (37%) opt for metric-learning-based algorithms; the remaining 11 studies (41%) do not incorporate any meta-learning techniques. For a visual representation of the distribution, refer to Figure 13.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{3}{*}{**Study ID**} & \multirow{3}{*}{**Pub. ref.**} & \multicolumn{6}{c}{**Risk of Bias**} & \multicolumn{6}{c}{**Applicability**} \\ \cline{3-11} & & **Part.** & **Pred** & **Out.** & **Analysis** & **Overall** & **Part.** & **Pred.** & **Out.** & **Overall** \\ \hline
58 & Huang, Huang and Tang [88] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
59 & Jiang et al. [89] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
60 & Jin et al. [90] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
61 & Mahapatra, Ge and Reyes [91] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
62 & Maicas et al. [92] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
63 & Mohan et al. [93] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
64 & Moukheiber et al. [94] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
65 & Naren, Zhu and Wang [95] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
66 & Ouahab, Ben-Ahmed and Fernandez-Maloigne [96] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
67 & Paul, Tang and Summers [97] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\
68 & Paul et al. [98] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
69 & Paul et al. [99] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
30 & Roychowdhury et al. [36] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
70 & Singh et al. [100] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
71 & Vetil et al. [101] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
72 & Xiao et al. [102] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
73 & Yan et al. [103] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
74 & Yarlagadda et al. [104] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
75 & Zhang, Cui and Ren [105] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ \hline
76 & Zhu et al. [106] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 6: ROB of FSL studies for medical image classification.
**K-shot.** Among the 27 selected studies in the classification domain using FSL, the training configurations are distributed as follows: 13 studies (48%) employ k-shot training with k ranging from 2 to 20; six studies (22%) utilize both OSL and FSL; one study (4%) uses both FSL and ZSL; five studies (19%) exclusively perform 1-shot training; two studies (7%) solely employ 0-shot training.
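For context, the "k" in k-shot denotes the number of labelled support examples per class in each training episode. The following is a minimal sketch of generic N-way K-shot episode sampling as used by episodic FSL methods; the data layout and names are illustrative and not taken from any reviewed study, and each sampled class is assumed to hold at least `k_shot + q_queries` items:

```python
import random

def sample_episode(data_by_class, n_way=2, k_shot=5, q_queries=15, rng=None):
    """Sample one N-way K-shot episode from a dict {class_label: [samples]}.

    Returns (support, query) lists of (sample, episode_label) pairs.
    """
    rng = rng or random.Random()
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        items = rng.sample(data_by_class[cls], k_shot + q_queries)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query
```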
**Image modalities.** In the context of classification studies within the FSL domain, the distribution of imaging modalities is as follows: three studies out of 27 (11%) use CT images; three studies (11%) employ MRI images; seven studies (26%) utilize dermoscopic images; 11 studies (41%) rely on X-ray images; three studies (11%) involve fundus images; two studies (7%) make use of microscopic images; nine studies (33%) employ histopathological images; one study (4%) utilizes endoscopy images; one study (4%) involves cytological images; one study (4%) uses OCT images.
**Model evaluation.** To assess the behaviour and robustness of the models in the selected studies, various evaluation techniques were employed as follows: nine studies (33%) utilized ablation studies; one study (4%) conducted both ablation studies and cross-validation; one study (4%) solely relied on cross-validation; two studies (7%) repeated experiments multiple times for evaluation. The remaining 14 studies (52%) did not employ any specific model evaluation technique.
**Model performance grouped by organ and meta-learning method.** In Figure 14, Figure 15, Figure 16, and Figure 17, we present a summary of the model performance in forest plots, categorized by anatomical structure, in terms of Accuracy, AUROC, F1-score, and Recall, respectively. Conversely, in Figure 18, Figure 19, Figure 20, and Figure 21, we depict the performance in terms of Accuracy, AUROC, F1-score, and Recall, respectively, by grouping the studies according to the employed meta-learning methods.
Figure 12: Classification studies grouped by the anatomical structure investigated.
Figure 13: Classification studies grouped by meta-learning method employed.
For each study, we have considered the highest performance achieved (across different experiments and image modalities). In each forest plot, we provide the mean and the 95% CI across all the studies within the corresponding group. Note that the results of [81] and [105] are not included in the forest plots since they provide average results across different anatomical structures.
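The review does not state how the 95% CI is constructed; the sketch below assumes a two-sided Student-t interval over the study-level best scores (the example values are illustrative only):

```python
import numpy as np
from scipy import stats

def mean_ci95(scores):
    """Mean and 95% CI half-width for a group of study-level scores."""
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    if scores.size < 2:
        return m, 0.0  # a single study gives no spread estimate
    sem = scores.std(ddof=1) / np.sqrt(scores.size)
    half = stats.t.ppf(0.975, df=scores.size - 1) * sem
    return m, half

# e.g. best scores of one anatomical group (illustrative values)
m, h = mean_ci95([0.86, 0.92, 0.88])
print(f"mean = {m:.2f}, 95% CI = [{m - h:.2f}, {m + h:.2f}]")
```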
**Overall pipeline.** In Table 7, we delineate which stages of the defined pipeline are employed by each study. Here are the distributions of studies based on their utilization of pre-training, training, and data augmentation techniques: one out of 27 studies (4%) employed a meta-learning algorithm for pre-training; 13 studies (48%) employed classical supervised pre-training; one study (4%) used unsupervised pre-training; 12 studies (44%) did not employ any pre-training stage.
Figure 16: Forest plot of classification studies performance based on F1-score metric. Studies are grouped by the anatomical structure investigated. MAD = Mean Across Diseases.
Figure 14: Forest plot of classification studies performance based on Accuracy metric. Studies are grouped by the anatomical structure investigated. DR = Diabetic Retinopathy; IC = Intraretinal Cyst; LN = Lymph Nodes; MAD = Mean Across Diseases; MAS = Mean Across Structures; MAT = Mean Across Tumours.
Figure 15: Forest plot of classification studies performance based on AUROC metric. Studies are grouped by the anatomical structure investigated. MAD = Mean Across Diseases.
For their main training, fifteen studies (56%) utilized meta-learning; one study (4%) employed semi-supervised training; one study (4%) employed self-supervised training; nine studies (33%) used traditional supervised training; two studies (7%) employed zero-shot learning methods. Finally, concerning the data augmentation techniques,
Figure 19: Forest plot of classification studies performance based on AUROC metric. Studies are grouped by the meta-learning method employed.
Figure 17: Forest plot of classification studies performance based on Recall metric. Studies are grouped by the anatomical structure investigated. MAD = Mean Across Diseases.
Figure 20: Forest plot of classification studies performance based on F1-score metric. Studies are grouped by the meta-learning method employed.
10 out of 27 studies (37%) relied on classical data augmentation techniques; two studies (7%) utilized generative methods for data augmentation. The remaining 15 studies (56%) did not employ data augmentation.
Figure 21: Forest plot of classification studies performance based on Recall metric. Studies are grouped by the meta-learning method employed.
#### 4.2.3 Registration
We included six relevant studies, each focusing on medical registration as its primary task. Table 8 summarizes all the essential information from these selected studies. In addition, we provide information concerning ROB and the applicability of each study in Table 9.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Study ID** & **Pub. ref.** & **Pre-training** & **Training** & **Data augmentation** \\ \hline
51 & Ali et al. [81] & Supervised & Meta & None \\
52 & Cai, Hu, and Zheng [82] & None & Meta & Classical \\
53 & Cai et al. [83] & Supervised & Meta & Classical \\
54 & Cano and Cruz-Roa [84] & None & Meta & None \\
55 & Chen et al. [85] & Unsupervised & Supervised & None \\
56 & Chou et al. [86] & None & Supervised & None \\
57 & Dai et al. [87] & Supervised & Meta & None \\
58 & Huang, Huang and Tang [88] & None & Supervised & Generative \\
59 & Jiang et al. [89] & Supervised & Meta & Classical \\
60 & Jin et al. [90] & None & Meta & Classical \\
61 & Mahapatra, Ge and Reyes [91] & Supervised & Self-supervised and Supervised & Generative \\
62 & Maícas et al. [92] & Meta & Supervised & None \\
63 & Mohan et al. [93] & Supervised & Supervised & Classical \\
64 & Moukheiber et al. [94] & Supervised & Meta & None \\
65 & Naren, Zhu and Wang [95] & None & Meta & None \\
66 & Ouahab, Ben-Ahmed and Fernandez-Maloigne [96] & Supervised & Meta & Classical \\
67 & Paul, Tang and Summers [97] & Supervised & Supervised & None \\
68 & Paul et al. [99] & Supervised & Zero-shot and Semi-supervised & None \\
69 & Paul et al. [98] & Supervised & Supervised & None \\
30 & Roychowdhury et al. [36] & None & Supervised & Classical \\
70 & Singh et al. [100] & None & Meta & Classical \\
71 & Vetil et al. [101] & None & Zero-shot and Supervised & Classical \\
72 & Xiao et al. [102] & None & Meta & None \\
73 & Yan et al. [103] & Supervised & Meta & Classical \\
74 & Yarlagadda et al. [104] & Supervised & Supervised & None \\
75 & Zhang, Cui and Ren [105] & None & Meta & None \\
76 & Zhu et al. [106] & None & Meta & None \\ \hline \hline \end{tabular}
\end{table}
Table 7: Main pipeline steps adopted by classification studies.
Here, we present the findings derived from our comprehensive analysis of the registration papers.
**Medical application.** The selected registration papers address a range of anatomical regions. Here's the breakdown of the number of studies categorized by the anatomical structure investigated: three out of six studies (50%) explore brain registration; one study (17%) focuses on the registration of knee bones and cartilages; three studies (50%) delve into heart registration; three studies (50%) concentrate on lung registration; one study (17%) pertains to abdominal registration; one study (17%) deals with cervical vertebra registration. For a visual representation of the distribution, refer to Figure 22.
**Meta-learning methods.** In the domain of registration studies, all of the selected papers (100%) do not employ the meta-learning paradigm. For a visual representation of the distribution of these studies according to the meta-learning method employed, refer to Figure 23.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Study** & **Pub. ref.** & **Algorithm/Pipeline** & **K-shot** & **Best performance** & **Meta-learning type** \\ \hline \multirow{3}{*}{77} & \multirow{3}{*}{Fechter and Baltas [107]} & U-net + & & Landmark distance: & \\ & & Differential spatial & 1-shot & 1.49 (Lungs) & None \\ & & [107] & transformer module & & Dice: & \\ & & & & 0.860 (Heart) & & \\ \hline \multirow{3}{*}{78} & \multirow{3}{*}{Ferrante et al. [108]} & U-net + & & Dice: & \\ & & Unsupervised & 1-shot & 0.920 (Heart) & None \\ & [108] & learning & & 0.890 (Lungs) & \\ \hline \multirow{3}{*}{79} & \multirow{3}{*}{He et al. [109]} & Perception- & & & Dice: & \\ & & & & 0.857 (Heart, MAS) & & \\ & & & 0.867 (Cervical & & \\ & & Registration & & vertebra, MAS) & \\ & & & 0.800 (Brain, MAS) & & \\ \hline \multirow{3}{*}{34} & \multirow{3}{*}{Shi et al. [37]} & Joint Registration & & Dice: & \\ & & and Segmentation & 5-shot & 0.759 (Brain, MAS) & None \\ & & Self-training Framework & & 0.539 (Abdomen, MAS) & \\ \hline \multirow{3}{*}{45} & \multirow{3}{*}{Xu and Niethammer [38]} & Semi-Supervised & 1-shot & Dice: & \\ & & Learning + & 5-shot & 0.759 (Brain, MAS) & None \\ & & Segmentation network + & 10-shot & 0.539 Abdomen (MAS) & \\ \hline \multirow{3}{*}{80} & \multirow{3}{*}{Zhang et al. [110]} & CNN + & & & \\ & & Spatial transformer + & & & \\ & & similarity loss + & 1-shot & TRE: & \\ & & smooth loss + & & 1.03 (Lung) & None \\ & & cyclic loss & & & \\ \hline \hline \end{tabular}
\end{table}
Table 8: FSL studies for medical image registration.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & & & & **Risk of Bias** & & & **Applicability** & \\ \cline{2-10}
**Study** & **Pub. ref.** & **Part.** & **Pred** & **Out.** & **Analysis** & **Overall** & **Part.** & **Pred.** & **Out.** & **Overall** \\ \hline
77 & Fechter, Baltas [107] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
78 & Ferrante et al. [108]. & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
79 & He et al. [109] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
34 & Shi et al. [37] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
45 & Xu and Niethammer [38] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
80 & Zhang et al. [110] & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 9: ROB of FSL studies for medical image registration.
**K-shot.** Among the six selected studies, the distribution of training strategies is as follows: two studies (33%) solely employ FSL; three studies (50%) exclusively investigate OSL; one study (17%) performs both FSL and OSL.
**Image modalities.** In the context of registration studies, the distribution of imaging modalities used among the selected papers is as follows: four out of six studies (67%) employ CT acquisitions; five out of six studies (83%) utilize MRI images; one out of six studies (17%) involves X-ray images.
**Model evaluation.** To examine the behaviour and robustness of the models in the selected registration studies, various evaluation techniques were employed as follows: two studies (33%) utilized only ablation studies; one study (17%) used cross-validation. The remaining studies (50%) did not employ any specific model evaluation technique.
**Model performance grouped by organ and meta-learning method.** In Figure 24, Figure 25, and Figure 26, we provide a summary of the model performance in forest plots, categorized by anatomical structure, in terms of Dice score, Average Landmark Distance (ALD), and Target Registration Error (TRE), respectively. Conversely, in Figure 27, Figure 28, and Figure 29, we depict the performance in terms of Dice score, ALD, and TRE, respectively, by grouping the studies according to the employed meta-learning methods. For each study, we considered the highest performance achieved (across different experiments and image modalities). In each forest plot, we provided the mean and the 95% CI across all the studies within the corresponding group.
Figure 23: Registration studies grouped by the meta-learning method employed.
Figure 22: Registration studies grouped by the anatomical structure investigated.
**Overall pipeline.** In Table 10, we delineate which stages of the defined pipeline are employed by each study. Here are the distributions of studies based on their utilization of pre-training, training, and data augmentation techniques: two out of six studies (33%) employed classical supervised pre-training. The remaining four studies did not employ any pre-training at all. Five out of six studies (83%) utilized supervised training as their primary training approach.
Figure 27: Forest plot of registration studies performance based on Dice metric. Studies are grouped by the meta-learning method employed.
Figure 24: Forest plot of registration studies performance based on Dice metric. Studies are grouped by the anatomical structure investigated.
Figure 25: Forest plot of registration studies performance based on ALD metric. Studies are grouped by the anatomical structure investigated.
One study (17%) employed unsupervised learning for the main training stage. Two out of six studies (33%) used classical data augmentation techniques. The other four studies (67%) did not exploit data augmentation.
Figure 28: Forest plot of registration studies performance based on ALD metric. Studies are grouped by the meta-learning method employed.
Figure 29: Forest plot of registration studies performance based on TRE metric. Studies are grouped by the meta-learning method employed.
## 5 Discussion
This review assessed 80 FSL studies applied to the field of medical imaging. We organized these studies into three distinct categories: segmentation, classification, and registration, according to their main outcome. For each category, we collected essential information such as the algorithm or pipeline used, the employed meta-learning methods, the quantity of labelled data utilized during training, and the highest achieved performance. Furthermore, we summarized the outcomes of each category, classifying them based on both the specific anatomical structures investigated and the presence or absence of meta-learning techniques. In addition, we applied the PROBAST method to evaluate both ROB and the applicability of each study in the context of each outcome. Finally, we defined a generic pipeline enclosing all the techniques shared among the selected papers.
Below, we delve into the results derived from our analysis according to the objectives outlined in Sec 1.2.
**Studies distribution per outcome.** Figure 4 clearly illustrates the predominant focus of FSL studies in medical imaging. Segmentation tasks are the most prominent, constituting the majority at 61%, followed by classification tasks at 32% and registration tasks at 7%. In the following paragraphs, we provide a more detailed exploration of how these studies are further distributed, considering both the specific anatomical structures investigated and the meta-learning methods employed.
**Studies distribution and results per anatomical structure investigated.** Figure 6 provides an insightful overview of the distribution of anatomical structures studied in segmentation tasks in medical imaging. Notably, the heart emerges as the most extensively investigated anatomical structure, comprising 34% of the studies. Following closely are the kidney, spleen, and liver, each accounting for 13% of the research. The brain also features significantly, representing 10% of the studies. In Figure 12, we shift our focus to the distribution of anatomical structures in classification studies. Here, the lungs take the lead, constituting the primary focus in 36% of the research. The skin follows closely with 21%, while the breast and eye account for 15% and 10%, respectively. Lastly, Figure 22 highlights the distribution of anatomical structures studied in registration tasks. The heart emerges as the most commonly examined organ in this category, representing 52% of the investigations, followed by lungs and brain, accounting for 14% of the studies and the knee for 10%. Finally, the cervical vertebra and abdomen are the main application in 5% of the studies.
Figure 8 provides valuable insights into the performance of segmentation tasks in terms of the Dice score across various anatomical structures. Notably, femur segmentation demonstrates the highest Dice score, although it's worth mentioning that only one study addresses this task, making the result partially reliable. In contrast, AA and LV segmentation exhibit consistently good average results across multiple peer studies, achieving Dice scores of 0.89 and 0.88, respectively. The worst-performing segmentation task appears to be the prostate, with a mean Dice score of 0.42 across different structures. Shifting our focus to IoU, Figure 9 highlights that hip, knee, and phalanx segmentation provide the best IoU results. However, it's important to note that these results are based on a limited number of studies, which may affect their reliability. On the other hand, lung segmentation demonstrates a high IoU of 0.91, with a small CI, across several studies, indicating robust and consistent performance. Conversely, prostate segmentation consistently yields lower IoU scores, with a 0.23 IoU for the segmentation of the peripheral zone, indicating room for improvement in this specific task.
Shifting to classification studies, some of them evaluate performance utilizing the F1-score and Recall metrics. However, the results in both cases are poor, scoring below 0.5, as shown in Figure 16 and Figure 17.
Lastly, turning our attention to the registration task, we examine the key findings. In terms of the Dice score (Figure 24), our analysis suggests that FSL registration achieves the highest result in the registration of knee bones. However, it's important to note that this result should be interpreted cautiously due to its reliance on a single study. On the other hand, when it comes to the registration of whole heart images, there is a consistently high mean Dice score of 0.88, with a small CI derived from several studies. Regarding the ALD and TRE metrics (Figure 25 and Figure 26), it's noteworthy that these metrics are employed only in the context of lung image registration.
**Studies distribution and results per meta-learning method employed.** Concerning segmentation papers, according to Figure 7, most studies do not employ meta-learning methods for segmentation tasks, accounting for 55% of the investigations. Among the studies that do use meta-learning methods, metric-learning-based approaches are the most commonly utilized, constituting 26% of the selected studies, followed by initialization-based studies (13%) and hallucination-based ones (6%). Shifting the focus to the classification tasks, Figure 13 reveals that 40% of the studies do not use any meta-learning algorithm. Among the studies that do, 40% of all studies employ metric-learning-based methods and 20% initialization-based methods. No classification study employs hallucination-based methods. Finally, as for registration purposes, no study employs meta-learning methods.
Regarding the models' performance, Figure 10 reveals that both no-meta-learning methods and hallucination-based methods yield the highest mean Dice scores (0.84). Notably, no-meta-learning methods exhibit a wider CI, which is expected given their application across a greater number of studies. In contrast, metric-learning-based methods, while being the most commonly employed among all the meta-learning methods in segmentation studies, yield slightly lower results, with a mean Dice score of 0.79 and a larger CI. For the IoU metric, as demonstrated in Figure 11, only initialization-based methods and no-meta-learning methods utilize this metric. Remarkably, no-meta-learning methods outperform the others significantly, delivering a notably better performance with a smaller CI. In the context of classification tasks, as illustrated in Figure 18, it becomes evident that studies opting not to utilize meta-learning methods consistently yield the most impressive accuracy results, coupled with a notably narrow CI. Conversely, metric-learning-based methods exhibit notably poorer performance, with a mean accuracy of just 0.81 and a larger CI. Regarding AUROC, as emphasized in Figure 19, the absence of meta-learning methods consistently delivers the most remarkable performance, boasting a mean AUROC of 0.84. Interestingly, initialization-based and metric-learning-based methods both yield a mean AUROC of 0.79 despite metric-learning techniques exhibiting a wider CI. In the context of registration tasks, Figures 27, 28, and 29 illustrate the mean performance metrics for Dice, ALD, and TRE, respectively. In this specific case, all the studies opted for utilising non-meta-learning methods.
**Training data, imaging modalities and robustness evaluation distributions.** When analyzing the training set size in studies focused on the segmentation task, it's worth highlighting that most of these studies, comprising 58%, incorporate one or more labelled data samples into their training phase. A significant proportion, 40%, also utilize OSL, while only 2% employ ZSL. A similar trend is observed in classification studies, where a substantial 70% of the investigations involve at least one labelled example during their training process, while 19% exclusively rely on OSL. In this context, just 7% of the examined studies make use of ZSL. Registration studies, however, unlike those in segmentation and classification, mainly rely on OSL (50%). Among the remaining studies, 40% opt for FSL with more than one labelled image in the training set, and only 17% perform ZSL.
In terms of imaging modalities, MRI data are the most commonly utilized in both segmentation and registration studies, constituting 60% and 83% of the cases, respectively. On the other hand, when it comes to classification studies, X-ray imaging takes the lead, being employed in 41% of all studies.
In terms of evaluating the robustness of models, it's worth noting that 74% of the segmentation studies incorporate some form of model robustness evaluation. This typically involves conducting ablation studies and/or employing cross-validation techniques. However, considering classification and registration studies, only half of them incorporate robustness evaluation for the models, with ablation studies being the most common approach in these cases.
**Identification of a standard pipeline.** We present our comprehensive analysis of the examined studies in Figure 5, illustrating the components of the pipelines common to each study. We summarized all the steps into three main clusters: pre-training, training, and data augmentation. We found that pre-training is performed using four paradigms: supervised, unsupervised, self-supervised, or meta-learning. The final training phase encompasses these four methods as well as semi-supervised learning and zero-shot learning approaches. Regarding data augmentation techniques, we emphasized three methods that are mainly employed: classical data augmentation, which includes geometric or image channel transformations, generative-based augmentation, and registration-based augmentation. The latter approach, notably, found extensive application in segmentation studies, where segmentation and registration models are trained jointly.
As anticipated, meta-learning emerged as the prevailing approach for addressing FSL tasks, particularly within segmentation and classification studies. Likewise, in the realm of segmentation and classification, supervised learning is the most commonly employed method immediately following meta-learning. It's worth noting that the interpretation of classic supervised learning varies based on the specific modules introduced in each study. In segmentation studies, semi-supervised learning is also a commonly employed method involving the joint utilization of labelled and unlabeled data. In contrast, when examining registration studies, almost all employed traditional supervised training, with only one study opting for unsupervised training.
Concerning the application of data augmentation techniques, the predominant practice involves augmenting datasets through standard geometric or colour channel transformations. In segmentation studies, a common approach is to integrate a registration network, trained in conjunction with the segmentation network, to provide additional training data--a technique not observed in classification studies, where classical data augmentation is prevalent, and only two studies explored generative techniques. Finally, among registration studies, those incorporating data augmentation exclusively rely on classical augmentation techniques.
**Concluding statements.** Our comprehensive analysis offers several valuable insights into the methods employed for addressing FSL tasks. To begin with, although meta-learning methods are widely adopted and successful, they are implemented in a variety of ways. Metric learning-based methods have garnered substantial attention, whereas hallucination-based techniques have not been as extensively explored. Secondly, our analysis highlights that the heart, abdomen, and lungs have been the primary areas of focus in FSL studies. This is likely due to the availability of well-established benchmark datasets such as CHAOS [111], MS-CMRSeg [112], and NIH Chest X-ray [113]. However, there exists untapped potential for researchers to delve into relatively less-explored medical applications, including the prostate, digestive organs, and various bones. Furthermore, we observed that some studies, particularly those in classification and registration tasks, may not conduct comprehensive model investigation analyses. This gap can potentially lead to incomplete and unreliable performance assessments. Lastly, we noted issues related to ROB in some studies. Many studies lack clarity in explaining how they address the FSL task, even when claiming to use reduced amounts of labelled data. In light of these findings, we encourage future researchers in the field to consider the following actions:
* Explore and invest in hallucination-based methods, given their promising performance potential.
* Expand the scope of medical applications investigated, especially in less-explored areas.
* Prioritize thorough model validation and comprehensive analyses to facilitate fair comparisons and the practical implementation of FSL models in clinical settings.
In addition, our analysis underscores the prevalence of meta-learning as a commonly used approach for FSL tasks. However, it also highlights the versatility of alternative methods, including supervised learning with innovative modules and semi-supervised learning, which have proven effective, particularly in segmentation tasks and registration studies. These diverse strategies, coupled with appropriate data augmentation techniques, demonstrate the adaptability of FSL methodologies to address the challenges posed by limited data.
## 6 Conclusions
In our extensive systematic review, we conducted a thorough examination of the application of FSL in medical image analysis. We categorized the selected studies based on their intended outcome domains, i.e. segmentation, classification, and registration. Our analysis entailed a detailed investigation of these studies, focusing on the specific anatomical structures targeted and the meta-learning methods employed. Moreover, we provided a comprehensive performance summary by grouping the studies according to the anatomical structures studied and the chosen meta-learning techniques. This summary included mean performance values along with a 95% CI. Additionally, we explored supplementary aspects, such as the quantity of training data used, the imaging modalities employed, and the methods used to assess the robustness of the models. We meticulously evaluated each study for its ROB and applicability to ensure the credibility of the findings presented. Finally, we introduced a general pipeline that all the studies in our analysis either partially or fully adopted.
The key findings from our systematic review are as follows. Concerning the outcome domain, segmentation tasks are the most prominently addressed outcome in FSL applied to medical image analysis, and among the various anatomical structures investigated, the abdomen and heart receive the most attention. In terms of the training data, most of the studies demonstrate the effectiveness of FSL by utilizing more than one example during the training phase. Notably, MRI and X-ray imaging modalities are the most frequently employed. Regarding the robustness evaluation, our review unveils a significant gap: indeed, a considerable number of studies, particularly those in the classification and registration domains, lack proper robustness assessment, underscoring the need for improved evaluation practices in these areas.
Concerning the meta-learning techniques employed, metric-learning-based approaches are the predominant choice among meta-learning methods, despite providing poorer results w.r.t. other meta-learning and non-meta-learning methods. Among non-meta-learning approaches, classical supervised learning with custom modules, as well as semi-supervised learning, are commonly applied. In general, non-meta-learning methods perform better w.r.t. meta-learning ones. Finally, in terms of data augmentation, most studies address the challenge of limited data by incorporating data augmentation techniques. Classical augmentation methods are the most widely employed for this purpose. Our systematic review is intended to serve as a valuable resource for future researchers in the field, offering guidance on areas of anatomical interest and methodological exploration that warrant further investigation. Our ultimate goal is to promote the advancement and broader adoption of FSL techniques within the medical imaging domain by addressing identified gaps, emphasizing robustness evaluation and providing an overview of the methods currently used in the SOTA.
|
2309.04770 | After-Fatigue Condition: A Novel Analysis Based on Surface EMG Signals | This study introduces a novel muscle activation analysis based on surface
electromyography (sEMG) signals to assess the muscle's after-fatigue condition.
Previous studies have mainly focused on the before-fatigue and fatigue
conditions. However, a comprehensive analysis of the after-fatigue condition
has been overlooked. The proposed method analyzes muscle fatigue indicators at
various maximal voluntary contraction (MVC) levels to compare the
before-fatigue, fatigue, and after-fatigue conditions using amplitude-based,
spectral-based, and muscle fiber conduction velocity (CV) parameters. In
addition, the contraction time of each MVC level is also analyzed with the same
indicators. The results show that in the after-fatigue condition, the muscle
activation changes significantly in the ways such as higher CV, power spectral
density shifting to the right, and longer contraction time until exhaustion
compared to the before-fatigue and fatigue conditions. The results can provide
a comprehensive and objective evaluation of muscle fatigue and recovery, which
can be helpful in clinical diagnosis, rehabilitation, and sports performance. | Van Hieu Nguyen, Gia Thien Luu, Thien Van Luong, Mai Xuan Trang, Philippe Ravier, Olivier Buttelli | 2023-09-09T12:09:40Z | http://arxiv.org/abs/2309.04770v1 | # After-Fatigue Condition: A Novel Analysis Based on Surface EMG Signals
###### Abstract
This study introduces a novel muscle activation analysis based on surface electromyography (sEMG) signals to assess the muscle's after-fatigue condition. Previous studies have mainly focused on the before-fatigue and fatigue conditions. However, a comprehensive analysis of the after-fatigue condition has been overlooked. The proposed method analyzes muscle fatigue indicators at various maximal voluntary contraction (MVC) levels to compare the before-fatigue, fatigue, and after-fatigue conditions using amplitude-based, spectral-based, and muscle fiber conduction velocity (CV) parameters. In addition, the contraction time of each MVC level is also analyzed with the same indicators. The results show that in the after-fatigue condition, the muscle activation changes significantly in the ways such as higher CV, power spectral density shifting to the right, and longer contraction time until exhaustion compared to the before-fatigue and fatigue conditions. The results can provide a comprehensive and objective evaluation of muscle fatigue and recovery, which can be helpful in clinical diagnosis, rehabilitation, and sports performance.
Surface electromyography, maximal voluntary contraction, conduction velocity, EMG, power spectral density.
## I Introduction
Surface electromyography (sEMG) is a non-invasive technique to measure the electrical activity of skeletal muscles. Surface EMG signals can provide valuable information about muscle fatigue, defined as the decrease of maximal force output to maintain or repeat tasks by muscles [1]. Muscle fatigue can affect the performance and health of muscles in various fields such as medicine, sports, rehabilitation, ergonomics, and human-machine interaction [2, 3, 4]. Therefore, assessing muscle fatigue based on sEMG signals is of great significance and especially interesting to researchers and health professionals.
Surface EMG signals are a valuable tool to evaluate muscle fatigue, and their applications in life and industrial fields are promising and diverse. However, assessing muscle fatigue based on sEMG is complex because it involves many complex factors, such as muscle type, contraction type, electrode placement, signal processing methods, and fatigue indices [5]. In addition, surface EMG signals are often affected by noise from various sources, such as motion artifacts, skin perspiration, skin impedance, and electromagnetic interference [6, 7]. Therefore, robust and reliable methods are in demand to extract meaningful features from sEMG signals and to classify the states of muscles before, during, and after muscle fatigue. Feature extraction of sEMG signals is a method of extracting useful information from signals under different muscle conditions. Muscle fatigue is usually identified by the decrease of frequency domain indices and time-frequency domain indices and the increase of time domain indices. The traditional approach to acquiring sEMG signals is still commonly used in physiological and clinical studies, based on a pair of electrodes placed on the skin over the muscle. However, the acquired signal depends on the location, the distance between electrodes, the size of the electrode pair, and the area along the muscle fiber, which can lead to very different spectral characteristics and amplification [5, 6].
Initially, sEMG signal features were divided into three domains: time domain, frequency domain, and time-frequency domain. However, over the past 40 years, a large number of parameters extracted from sEMG signals to evaluate muscle fatigue have been developed. Currently, sEMG features are divided into amplitude-based parameters, spectral-based parameters, non-linear parameters, and muscle fiber conduction velocity (CV) estimation [8, 9]. Compared with the classical bipolar measurement, methods based on more than two electrodes arranged in series allow collecting sEMG signals along the vertical or horizontal axis of the muscle [10]. A relatively recent approach includes using multiple electrode groups arranged in one- or two-dimensional arrays. In addition, the two-dimensional electrode array can determine the distribution of sEMG amplitude and describe the spectrum over the entire skin area covering the muscle [11]. Multi-channel sEMG signals also allow for more accurate and reliable estimation of CV [12]. Previous studies have investigated the changes in power spectral density (PSD) of sEMG signals during different muscle contraction and load levels. For example, [8, 13] examined the PSD of sEMG signals at various maximal voluntary contraction (MVC) levels and found that the PSD shifted to lower frequencies with increasing MVC. And [9] used a linear array sEMG to study the relationship between CV and the shift of PSD over time with different load levels. They observed that the CV decreased, and the PSD shifted to lower frequencies as the load increased.
lower frequencies as the load increased.
Most studies mentioned above have mainly focused on the before-fatigue and fatigue conditions, but the after-fatigue condition has not been well explored. Moreover, previous studies exploited only a few separate parameters, which did not provide a fully objective and comprehensive evaluation of the muscle in different conditions. The main contributions and findings in this work are summarized as follows:
* A novel analysis that analyzes all muscle fatigue indicators, such as Root mean square, Mean frequency, Power spectral density, and CV at various maximal voluntary contraction (MVC) levels. In contrast, the previous works analyzed only a few fatigue indicators at a few MVC levels and muscle conditions [14].
* The first attempt to analyze after-fatigue conditions compared to before-fatigue and fatigue conditions. Note that the previous studies only focused on before-fatigue and fatigue conditions as shown in [8, 9, 13]. Such analysis and comparison showed a significantly increasing contraction time of the after-fatigue condition until exhaustion, and this phenomenon was partially explained.
The rest of this work is constructed as follows. Section II presents the signal acquisition protocol and our analysis method, and then Section III discusses the experimental results. Finally, Section IV concludes the paper.
## II Materials and methods
### _Signal acquisition protocol and data pre-processing_
A monopolar sEMG signal was collected by a two-dimensional 64-electrode sEMG sensor, divided into thirteen rows and five columns, with an inter-electrode distance of eight millimeters and a sampling frequency of 2048 hertz. The sensor provides five groups of differential signals with thirteen electrodes along the Biceps Brachii muscle group. A force signal over time is measured with a force gauge for each MVC level, as shown in Fig. 1 (A). In addition, the sensor matrix and the direction of signal propagation are depicted in Fig. 1 (B). The sEMG signals from all electrodes obtained at the MVC levels were digitally filtered (20 Hz-400 Hz) to reduce noise.
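A sketch of this filtering step, assuming a zero-phase fourth-order Butterworth band-pass (the paper specifies only the 20-400 Hz band and the 2048 Hz sampling rate; the filter family, order, and zero-phase application are our assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2048  # Hz, sampling frequency reported for the electrode grid

def bandpass_semg(x: np.ndarray, low: float = 20.0, high: float = 400.0,
                  order: int = 4) -> np.ndarray:
    """Zero-phase 20-400 Hz band-pass, applied along the last axis."""
    sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x, axis=-1)
```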
The data was collected on ten healthy subjects: three women and seven men (mean age 24, standard deviation 1.5 years). The MVC of each subject was defined as the average of three maximal-force trials, separated by five minutes. After determining the MVC, tests can be conducted at different force levels, expressed as a percentage of MVC. The subject should hold the force for at least ten seconds if possible. The experiment starts with a test at 10% of MVC so the subject can learn to maintain a certain force level. The process then continues with tests in random order at 20%, 40%, 60%, and 90% of MVC, an exhaustion test at 70% of MVC, and a final test at 10% of MVC. Each subject rests for five minutes between two consecutive tests. As such, the states before muscle fatigue include 10%MVC, 20%MVC, 40%MVC, 60%MVC, and 90%MVC, while the state during muscle fatigue is 70%MVC and the state after muscle fatigue is the final 10%MVC.
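The MVC reference and the absolute force targets for the %MVC levels follow directly from this protocol; a minimal sketch (function names are ours):

```python
import numpy as np

def mvc_reference(max_force_trials):
    """MVC as the average of the three maximal-force trials."""
    return float(np.mean([np.max(t) for t in max_force_trials]))

def target_forces(mvc, percents=(10, 20, 40, 60, 70, 90)):
    """Absolute force targets for the %MVC test levels."""
    return {p: p * mvc / 100.0 for p in percents}
```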
### _Amplitude-based parameters_
Root mean square (RMS) is the main parameter used to investigate the amplitude of the sEMG signal. The formula for RMS is as follows:
\[RMS=\sqrt{\frac{1}{N}\sum_{n=1}^{N}x_{n}^{2}}, \tag{1}\]
where \(x_{n}\) are the values of the sEMG signal, and \(N\) is the number of data samples. It is believed that during maximal isometric contractions, the RMS amplitude falls progressively, in parallel with the decrease in force [15].
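A direct NumPy rendering of Eq. (1) for a single epoch:

```python
import numpy as np

def rms(x) -> float:
    """Root mean square of an sEMG epoch, Eq. (1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))
```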
### _Spectral-based parameters_
Two characteristic frequencies that have been used to quantify changes in spectral content based on the Fourier transform are the mean frequency (MNF) and the median frequency (MDF) of the power spectrum [16]. MDF is calculated as follows:
\[\int_{f_{1}}^{MDF}PS(f)df=\int_{MDF}^{f_{2}}PS(f)df, \tag{2}\]
while MNF is calculated as follows:
\[MNF=\frac{\int_{f_{1}}^{f_{2}}f\,PS(f)df}{\int_{f_{1}}^{f_{2}}PS(f)df}, \tag{3}\]
where \(PS(f)\) is the power spectrum calculated from the Fourier transform, and \(f_{1}\) and \(f_{2}\) determine the lowest and highest frequencies of the signal bandwidth, usually ranging from 20 hertz to 400 hertz. MDF and MNF are related to the change in CV, and it has been shown in isometric contractions that MNF shifts to lower frequencies during fatigue [17].
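A sketch of discrete versions of Eqs. (2)-(3) computed on a Welch estimate of the power spectrum restricted to the 20-400 Hz band (the use of Welch's method and the segment length are our assumptions; the paper specifies only the band):

```python
import numpy as np
from scipy.signal import welch

def mnf_mdf(x, fs=2048, band=(20.0, 400.0)):
    """Mean and median frequency of the sEMG power spectrum, Eqs. (2)-(3)."""
    f, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
    keep = (f >= band[0]) & (f <= band[1])
    f, psd = f[keep], psd[keep]
    mnf = np.sum(f * psd) / np.sum(psd)
    # MDF: first frequency bin where cumulative power reaches half the total.
    cum = np.cumsum(psd)
    mdf = f[np.searchsorted(cum, 0.5 * cum[-1])]
    return float(mnf), float(mdf)
```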
### _Conduction velocity estimation_
The muscle's CV was initially calculated by placing electrodes along the muscle. However, this method can be biased in estimating CV when muscle fibers are not placed in a parallel plane. Moreover, electrodes placed on an undefined muscle domain can lead to misleading information [18]. Recently, multi-channel sEMG signals allowed more accurate estimation of CV both at the global level of the muscle and at the level of individual motor units [15].
Fig. 1: (A) Representation of the longer force's contraction time until exhaustion. (B) The matrix sensor used in signal acquisition.
Therefore, this study conducted CV estimation based on the multi-channel maximum likelihood estimation algorithm [19]. First, three single-differential sEMG signals were selected and their mean values removed. Then, the algorithm calculates two double-differential sEMG signals and maximizes the likelihood function of the delay between the two signals. CV was obtained after calculating the delay time by dividing the electrode distance by the delay time. The formula for calculating CV is as follows:
\[CV=\frac{d}{\theta}, \tag{4}\]
where \(d\) is the distance between two electrodes and \(\theta\) is the delay time between them. The algorithm is described in more detail in [19].
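As a simplified stand-in for the multi-channel maximum likelihood estimator of [19] (which we do not reproduce here), the sketch below estimates the delay \(\theta\) between two adjacent double-differential channels from the cross-correlation peak and applies Eq. (4); the 8 mm inter-electrode distance comes from the setup above.

```python
import numpy as np

def conduction_velocity(sig_a, sig_b, fs=2048, d=0.008):
    """CV (m/s) from two adjacent double-differential signals, Eq. (4).

    Simplified illustration: the delay theta is taken from the
    cross-correlation peak, refined by parabolic interpolation;
    d is the inter-electrode distance in meters (8 mm here).
    """
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(b, a, mode="full")
    k = int(np.argmax(xcorr))
    if 0 < k < len(xcorr) - 1:  # sub-sample refinement of the peak
        y0, y1, y2 = xcorr[k - 1], xcorr[k], xcorr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    theta = (k - (len(a) - 1)) / fs  # delay in seconds
    return d / theta if theta != 0 else float("nan")
```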
The CV parameter is related to the properties of the fiber membrane, the fiber diameter, and the contraction properties of the fiber; a decrease in CV is therefore considered the strongest indicator of muscle fatigue [20]. In addition, changes in CV during muscle fatigue have a profound impact on the motor unit action potential waveform and therefore affect both the amplitude and spectral parameters extracted from sEMG, as analyzed in [21, 22]. Previous studies [23, 24] have also shown that CV is reduced as a consequence of local metabolic changes in active muscles, mainly the distribution of H+ and K+ across the sarcolemma.
### _The proposed after-fatigue analysis methods_
One of the novel aspects of this analysis method is that it calculates all indicators of the sEMG signal as functions of both MVC level and contraction time, whereas [2] reported only a few indicators. The amplitude parameter (RMS) and spectral parameter (MNF) were obtained from the entire sEMG signal at each MVC level, excluding the noisy parts at the beginning and end, and were averaged over all signals. The electrodes' signals were visualized through a single differential filter to determine the innervation zone, as shown in Fig. 2.

Fig. 2: Representation of the innervation zone and signal selection.
The multi-channel algorithm mentioned earlier estimated the CV from the selected sEMG signals. A CV value was accepted only if the correlation coefficient between adjacent signals exceeded 0.75. The estimated CV at each MVC level was averaged over channels.
Furthermore, for the analysis over contraction time, each selected signal was divided into intervals 500 ms long and spaced 1000 ms apart, over which the signal is assumed to be stationary. The MNF and RMS obtained at each MVC level were averaged over all selected signals. For the PSD representation, the signal was divided into three intervals, each 1000 ms long: "Onset" taken at the beginning, "Middle" in the middle, and "End" at the end of the signal. These intervals capture the changes in sEMG parameters over time and allow comparison across MVC levels.
Finally, CV was calculated on each selected signal using segments 500 ms long and spaced 1000 ms apart. A CV value was accepted if the correlation coefficient between adjacent signals exceeded 0.75, and the estimated CV at each MVC level was averaged over channels. Together, these techniques provide a comprehensive and accurate evaluation of muscle fatigue and recovery.
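A sketch of the epoching and slope estimation described above, assuming a one-dimensional filtered signal; the epoch length (500 ms) and spacing (1000 ms) follow the text, while the least-squares slope mirrors the per-parameter slopes reported in Table I.

```python
import numpy as np

def epochs(x, fs=2048, length_ms=500, spacing_ms=1000):
    """Yield 500 ms epochs spaced 1000 ms apart from a 1-D signal."""
    width = int(fs * length_ms / 1000)
    step = int(fs * spacing_ms / 1000)
    for start in range(0, len(x) - width + 1, step):
        yield x[start:start + width]

def parameter_slope(values, spacing_s=1.0):
    """Least-squares slope of a per-epoch parameter series (MNF, RMS, or CV)."""
    t = np.arange(len(values)) * spacing_s
    return np.polyfit(t, values, 1)[0]

# Example: slope of MNF over the contraction, reusing mnf_mdf from above.
# mnf_series = [mnf_mdf(e)[0] for e in epochs(channel)]
# mnf_slope = parameter_slope(mnf_series)
```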
## III Experimental Results
### _Results from analysis of each MVC level_
In this section, the PSD, RMS, CV, and MNF of the sEMG signals recorded from the biceps brachii muscle at different MVC levels are analyzed and compared with previous studies. The implications of the findings for the 10%MVC after-fatigue condition for muscle fatigue assessment are also discussed. The results are interpreted as follows.
The PSD in normalized units for each MVC level is shown in Fig. 3 (A). The PSD at 70%MVC fatigue is more concentrated in the lower frequency range, indicating a higher level of muscle activation, consistent with the report in [13]. On the other hand, the PSD at the other MVC levels tends to shift to lower frequencies as the MVC level increases, suggesting a decrease in muscle activation.
Fig. 3 (B) compares the PSD at 10%MVC before-fatigue and 10%MVC after-fatigue. A significant difference between the two conditions was found here for the first time: the PSD at 10%MVC after-fatigue carries more power in the higher frequency range, whereas the PSD at 10%MVC before-fatigue carries less power in the same range. This implies that the 10%MVC after-fatigue signal reflects higher muscle activation than the 10%MVC before-fatigue signal. Note that this phenomenon has not been mentioned in any previous work and needs further analysis at higher MVC levels.
Fig. 3 (C) shows the general trend of RMS, which increases linearly with MVC level, in agreement with previous studies such as [21, 23]. The RMS at 10%MVC after-fatigue and 10%MVC before-fatigue is almost unchanged; however, the RMS indicator needs to be combined with other indicators, such as MNF and CV, for a comprehensive understanding of muscle fatigue. In addition, 70%MVC fatigue has a higher RMS amplitude than the other MVC levels because of the higher muscle activation in the fatigue stage, as investigated in [17, 20].
As shown in Fig. 3 (D), the general trend of CV decreases linearly with MVC level, in agreement with previous studies such as [9]. When the CV at 10%MVC after-fatigue was compared with the other MVC levels, we found it to be the highest; note that this phenomenon has been overlooked in previous works. We therefore suggest that the robust increase in CV affects the PSD at 10%MVC after-fatigue, as mentioned above. In addition, 70%MVC fatigue has the lowest CV of all MVC levels because of the muscle fatigue stage. Finally, Fig. 3 (E) shows the general trend of MNF, which increases linearly with MVC level, again in agreement with previous works such as [13, 16]. The MNF at 10%MVC after-fatigue, observed here for the first time, is higher than at 10%MVC before-fatigue; combined with the indicators mentioned above, this suggests a first evaluation of the recovery condition of the muscle. In addition, 70%MVC has the lowest MNF of all MVC levels, as expected from the indicators above.
In summary, novel findings were obtained for 10%MVC after-fatigue compared with the other MVC levels. In particular, a rightward shift of the PSD at 10%MVC after-fatigue, relative to 10%MVC before-fatigue, was found for the first time; this phenomenon comes with a robust increase in MNF and CV compared with the other MVC levels. Based on these findings, we suggest a novel evaluation of the after-fatigue condition of the muscle, where the rightward shift of the PSD and the increase in MNF denote higher muscle activation, and the robust increase in CV denotes the reabsorption and redistribution of ions such as H+ and K+ in the muscle membrane, as mentioned earlier. This further explains the longer maintenance of contraction time in the 10%MVC after-fatigue condition.

Fig. 3: (A) Normalized PSD for each MVC level. (B) Normalized PSD at 10%MVC before-fatigue and after-fatigue. (C) RMS for each MVC level. (D) CV for each MVC level. (E) MNF for each MVC level.

Fig. 4: Representation of the PSD shift during the contraction time.
### _Results from analysis of contraction time_
This section presents an analysis of the PSD indicator over time and evaluates the slope change of each parameter (MNF, RMS, and CV). Moreover, findings for 10%MVC after-fatigue are compared with the other MVC levels, and their implications for muscle fatigue assessment are discussed. The results are interpreted as follows.
Fig. 4 presents the analysis of the PSD indicator over time. The compression trend of the PSD is generalized here for the first time across MVC levels and conditions over time, whereas [13] only examined the PSD at different MVC levels and in some conditions. At 10%MVC and 20%MVC before-fatigue, the PSD is broad and the mean frequency is almost unchanged over time. However, at higher MVC levels (60%MVC and 90%MVC), the PSD visibly compresses toward lower frequencies over time. In addition, at 70%MVC, when the muscle is in the fatigue stage, the PSD not only compresses toward lower frequencies but also shows a robust increase in power. Note that this finding has been overlooked in previous studies.
Table I summarizes the slope changes of the MNF, RMS, and CV parameters at each MVC level during contraction time. Firstly, we observed a general trend in the MNF slope at the before-fatigue MVC levels: we found, for the first time, that the higher the MVC level, the more steeply the MNF slope decreases, meaning that the muscle tends to fatigue faster as higher force is required. In addition, the MNF slope at 70%MVC fatigue seems unaffected by the fatigue condition. The MNF slope at 10%MVC after-fatigue is nearly zero, i.e., almost unchanged over time, and needs further analysis to be fully understood.
In contrast to the general trend of the MNF slope, the RMS slope tends to increase as higher force is required in the before-fatigue conditions, as shown in Table I. In addition, the RMS slope at 70%MVC fatigue is nearly equal to that at 90%MVC, meaning the muscle needs more power to retain force than at the other MVC levels in the fatigue condition. In the 10%MVC after-fatigue condition, the slope is nearly zero, indicating that lower power is required to retain force; this phenomenon has been overlooked and needs further analysis. Finally, Table I shows the general CV slope, which tends to decrease as higher force is required in the before-fatigue conditions. The CV slope at 10%MVC after-fatigue, reported here for the first time, shows a contrasting trend, increasing slightly compared with the other MVC levels. Note that this phenomenon has not been reported in any previous work. We suggest that it is related to the reabsorption and redistribution of ions such as H+ and K+ in the muscle membrane, which leads to an increase in CV and the rightward shift of the PSD, as mentioned earlier.
In summary, a novel analysis of the PSD indicator was proposed, which generalized the compression trend of the PSD across MVC levels and conditions over time and evaluated the slope change of each parameter (MNF, RMS, and CV). In addition, findings for 10%MVC after-fatigue were presented in comparison with the other MVC levels.
## IV Conclusions
In this study, we proposed a method that analyzes muscle fatigue indicators at various MVC levels to fully evaluate muscle condition before, during, and after fatigue; the contraction time at each MVC level is also analyzed with the same indicators to better understand how they change over time. The decline of CV and MNF, the increase of RMS, and the shift of the power spectrum to lower frequencies across MVC levels before and during muscle fatigue all match the expected trend of muscle fatigue. Regarding the significant increase in muscle contraction maintenance time at the 10%MVC level after muscle fatigue, we believe this increase is related to the reabsorption and redistribution of ions after muscle fatigue. As mentioned earlier, the change in CV is strongly related to metabolic changes at the cell membrane. The CV also significantly impacts the amplitude and spectral parameters, shown in the increased power spectral density at higher frequencies and a slight increase in RMS and muscle force. However, this finding needs further investigation of the properties of individual motor units, such as firing rate, recruitment threshold, and synchronization. |
2309.07279 | Levi-Equivariant Restriction of Spherical Perverse Sheaves | We study the equivariant cohomology of spherical perverse sheaves on the
affine Grassmannian of a connected reductive group $G$ with support in the
affine Grassmannian of any Levi subgroup $L$ of $G$. In doing so, we extend the
work of Ginzburg and Riche on the $T$-equivariant cofibers of spherical
perverse sheaves. We obtain a description of this cohomology in terms of the
Langlands dual group $\check{G}$. More precisely, we identify the cohomology of
the regular sheaf on $\mathrm{Gr}_G$ with support along $\mathrm{Gr}_L$ with
the algebra of functions on a hyperspherical Hamiltonian $\check{G}$-variety
$T^*(\check{G}/(\check{U}, \psi_L))$, where the $\textit{Whittaker datum}$
$\psi_L$ is an additive character (determined by $L$) of the maximal unipotent
subgroup $\check{U}$. | Mark Macerato | 2023-09-13T19:56:39Z | http://arxiv.org/abs/2309.07279v2 | # Levi-equivariant restriction of spherical perverse sheaves
###### Abstract.
We study the equivariant cohomology of spherical perverse sheaves on the affine Grassmannian of a connected reductive group \(G\) with support in the affine Grassmannian of any Levi subgroup \(L\) of \(G\). In doing so, we extend the work of Ginzburg and Riche on the \(T\)-equivariant cofibers of spherical perverse sheaves. We obtain a description of this cohomology in terms of the Langlands dual group \(\check{G}\). More precisely, we identify the cohomology of the regular sheaf on \(\operatorname{Gr}_{G}\) with support along \(\operatorname{Gr}_{L}\) with the algebra of functions on a hyperspherical Hamiltonian \(\check{G}\)-variety \(T^{*}(\check{G}/(\check{U},\psi_{L}))\), where the _Whittaker datum_\(\psi_{L}\) is an additive character (determined by \(L\)) of the maximal unipotent subgroup \(\check{U}\).
###### Contents
* 1 Introduction
* 1.1 Geometric Satake
* 1.2 Levi subgroups
* 1.3 Equivariant corestriction
* 1.4 The case of a torus and the work of Ginzburg-Riche
* 1.5 Our results
* 1.6 Notation and conventions
* 1.7 Acknowledgements
* 2 Automorphic Side
* 2.1 Equivariant localization
* 2.2 Monoidal structure on corestriction
* 2.3 Fusion
* 2.4 Parabolic restriction
* 2.5 Action of equivariant homology
* 2.6 Summary
* 3 Spectral Side
* 3.1 Partial Kostant-Whittaker reduction
* 3.2 Action of the regular centralizer
* 3.3 Generic comparison of Hamiltonian and Kostant-Whittaker reductions
* 3.4 Anti-generic comparison of Hamiltonian and Kostant-Whittaker reductions
* 3.5 Affine closure
* 4 Comparison
* 4.1 Parabolic restriction
* 4.2 The Theorem of Ginzburg-Riche
* 4.3 The Theorem of Yun-Zhu
* 4.4 Proof of Theorem 1.5.2
## 1. Introduction
### Geometric Satake
Let \(G\) denote a connected reductive group over \(\mathbb{C}\). Following Lusztig, we associate to \(G\) its _affine Grassmannian_ \(\operatorname{Gr}_{G}\) [25, §11], an ind-projective ind-variety1 over \(\mathbb{C}\) (we refer the reader to [3, §1.2], [2, §9.1], [35, §1], and [5, §3] for its basic properties). The topology of \(\operatorname{Gr}_{G}\) gives rise to the Langlands dual group \(\check{G}\) through the _geometric Satake equivalence_ of Ginzburg [18] and Mirkovic-Vilonen [28]. More precisely, let \(G(\mathcal{O})=G(\mathbb{C}[[t]])\) denote the arc-group2 of \(G\). The pro-algebraic group \(G(\mathcal{O})\) acts naturally on \(\operatorname{Gr}_{G}\), and we may consider the \(G(\mathcal{O})\)-equivariant constructible derived category3 \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) of sheaves of complex vector spaces4 on \(\operatorname{Gr}_{G}\). The triangulated category \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) carries a natural monoidal structure \(\star\) given by the _convolution product_ of sheaves [28, §4]. Lusztig's "miraculous theorem" [6, §5.3.6] (reproven conceptually by Gaitsgory [16, Proposition 6]) asserts that the convolution product \(-\star-\) is \(t\)-exact, and therefore restricts to an exact bifunctor on the heart of the perverse \(t\)-structure \(\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\subseteq D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\), the abelian category of \(G(\mathcal{O})\)-equivariant perverse sheaves on \(\operatorname{Gr}_{G}\).
Footnote 1: We will not distinguish here between \(\operatorname{Gr}_{G}\) and its reduction \(\operatorname{Gr}_{G}^{\operatorname{red}}\), as our considerations are purely topological.
Footnote 2: We abuse notation here and throughout the paper by writing \(G(\mathcal{O})\) in place of the arc-group scheme \(L^{+}G\)[15, Definition 1], whose group of \(\mathbb{C}\)-points is \(G(\mathcal{O})\).
Footnote 3: See §1.6.4 for our notation and conventions concerning categories of sheaves.
Footnote 4: Of course, one of the most profound aspects of the approach of [28] is that the geometric Satake equivalence holds with essentially arbitrary (in particular, integral) coefficients. We will not need this generality here; our reliance on the equivariant localization theorem in the body of the paper precludes our arguments from extending to the integral setting.
The total cohomology functor
\[H^{*}(\operatorname{Gr}_{G},-):\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr }_{G})\to\operatorname{Vect}_{\mathbb{C}}\]
admits a natural monoidal structure. Ginzburg's approach to this fact in [18] is to observe that the equivariant cohomology functor \(H^{*}_{G(\mathcal{O})}(\operatorname{Gr}_{G},-)\) has an obvious monoidal structure (see, for example, [35, Proposition 5.2.3]), which induces a monoidal structure on \(H^{*}\) through the canonical isomorphism \(H^{*}(\operatorname{Gr}_{G},-)\simeq H^{*}_{G(\mathcal{O})}(\operatorname{Gr }_{G},-)\otimes_{H^{*}_{G(\mathcal{O})}(\operatorname{pt},\mathbb{C})}H^{*}( \operatorname{pt},\mathbb{C})\) (which arises
from the equivariant formality of all objects of \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\); compare to Proposition 2.1.1 below). See [28, §3] for the approach of Mirkovic-Vilonen, which works for general coefficients. One moreover shows that \(H^{*}(\mathrm{Gr}_{G},-)\) is exact and conservative [28, Corollary 3.7].
Through a global reinterpretation of the convolution product [28, §5-6], Mirkovic-Vilonen (following the ideas of Beilinson-Drinfeld [6]) equip the monoidal abelian category \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) with a symmetric braiding, the _commutativity constraint_. They show [28, Proposition 6.3] that with respect to this symmetric monoidal structure on \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\), the cohomology functor \(H^{*}(\mathrm{Gr}_{G},-)\) is symmetric monoidal.
**Theorem 1.1.1** (Ginzburg [18], Mirkovic-Vilonen [28]).: _There exists a canonically defined connected reductive group \(\check{G}\) over \(\mathbb{C}\) equipped with an equivalence of symmetric monoidal abelian categories_
\[\mathbb{S}_{G}:\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\simeq\mathrm{Rep }(\check{G})\]
_and a natural isomorphism of symmetric monoidal functors_
\[\mathrm{For}\circ\mathbb{S}_{G}\simeq H^{*}(\mathrm{Gr}_{G},-):\mathcal{P}_{G (\mathcal{O})}(\mathrm{Gr}_{G})\to\mathrm{Vect}_{\mathbb{C}},\]
_where \(\mathrm{For}:\mathrm{Rep}(\check{G})\to\mathrm{Vect}_{\mathbb{C}}\) is the forgetful functor from the category of finite dimensional \(\check{G}\)-modules to the category of vector spaces. Moreover, \(\check{G}\) is equipped with a canonically defined Borel subgroup \(\check{B}\subseteq\check{G}\) and maximal torus \(\check{T}\subseteq\check{B}\). The root datum \((X^{*}(\check{T}),X_{*}(\check{T}),\Phi,\check{\Phi})\) of the triple \(\check{T}\subseteq\check{B}\subseteq\check{G}\) is dual to the abstract root datum5 of \(G\) (in the sense of Langlands)._
Footnote 5: See the discussion [13, §1.1] of Deligne-Lusztig for how one defines the root datum of a connected reductive group \(G\) _without_ choosing \(T\subseteq B\subseteq G\).
The discussion above can be summarized by saying that the symmetric monoidal abelian category \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) together with the fiber functor \(H^{*}(\mathrm{Gr}_{G},-)\) defines a _neutralized Tannakian category6_. The Tannakian reconstruction theorem [5, Theorem 2.7] constructs the affine group scheme \(\check{G}\) as the automorphism group scheme of the fiber functor \(H^{*}(\mathrm{Gr}_{G},-)\). That \(\check{G}\) is connected reductive is deduced from a Tannakian characterization [5, Proposition 2.11] of this property. The most important ingredient is the semisimplicity of the abelian category \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) [5, Theorem 4.2], which follows from the parity vanishing property [5, Lemma 4.5] of the stalks of the intersection complexes of the \(G(\mathcal{O})\)-orbit closures on \(\mathrm{Gr}_{G}\). This parity vanishing property is a geometric reinterpretation of deep computations of Lusztig [25, §11] in the affine Hecke algebra and can be deduced from the BBD Decomposition Theorem.
Footnote 6: Actually, there are a few technical conditions to check, such as rigidity; these are easily verified for \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\).
Observe that the fiber functor \(H^{*}(\mathrm{Gr}_{G},-)\) is \(\mathbb{Z}\)-graded by the cohomological index. By Tannakian formalism, this \(\mathbb{Z}\)-grading on \(H^{*}(\mathrm{Gr}_{G},-)\) defines a homomorphism \(2\check{\rho}_{G}:\mathbb{G}_{m}\to\check{G}\). It is well known that the maximal torus \(\check{T}\subseteq\check{G}\) (the Borel subgroup \(\check{B}\subseteq\check{G}\)) arises as the centralizer (respectively, the attracting scheme) of the cocharacter \(2\check{\rho}_{G}\) in \(\check{G}\).
### Levi subgroups
We can now approach the problem considered in this paper. We fix a parabolic subgroup \(P\subseteq G\) with unipotent radical \(V\subseteq P\) and Levi quotient \(L=P/V\). Note that \(L\) is a connected reductive group in its own right, so we also have the geometric Satake equivalence
\[\mathbb{S}_{L}:\mathcal{P}_{L(\mathcal{O})}(\operatorname{Gr}_{L})\simeq \operatorname{Rep}(\check{L}).\]
The categories \(\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) and \(\mathcal{P}_{L(\mathcal{O})}(\operatorname{Gr}_{L})\) are directly related by a functor, which we shall call the _parabolic restriction_, introduced by Beilinson-Drinfeld in [6, SS5.3.28] (generalizing the _weight functors_ of Mirkovic-Vilonen [28, SS3] in the case that \(L=T\) is a torus). We will have much more to say about it in SS2.4, and also refer the reader to [5, SS15.1] for more details. For now, we note that it is an exact tensor functor
\[\operatorname{Res}_{L}^{G,\natural}:\mathcal{P}_{G(\mathcal{O})}( \operatorname{Gr}_{G})\to\mathcal{P}_{L(\mathcal{O})}(\operatorname{Gr}_{L}).\]
Namely, we have a diagram of ind-schemes

\[\operatorname{Gr}_{L}\xleftarrow{\;q\;}\operatorname{Gr}_{P}\xrightarrow{\;r\;}\operatorname{Gr}_{G},\]

where \(q\) is induced by the projection \(P\twoheadrightarrow L\) and \(r\) by the inclusion \(P\hookrightarrow G\).
The functor \(q_{*}r^{!}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\) is not \(t\)-exact. However, let \(\mathcal{F}\in\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). For each connected component \(\operatorname{Gr}_{L}^{\chi}\subseteq\operatorname{Gr}_{L}\) of \(\operatorname{Gr}_{L}\), the complex \(q_{*}r^{!}\mathcal{F}|_{\operatorname{Gr}_{L}^{\chi}}\) is concentrated in a unique perverse degree. Thus, one defines the exact functor \(\operatorname{Res}_{L}^{G,\natural}\) by shifting the functor \(q_{*}r^{!}\) by an appropriate integer on each connected component \(\operatorname{Gr}_{L}^{\chi}\) (see Remark 2.4.3).
Tannakian formalism now defines a closed embedding \(\check{L}\hookrightarrow\check{G}\) and a canonical identification of the tensor functor \(\mathbb{S}_{L}\circ\operatorname{Res}_{L}^{G,\natural}\circ\mathbb{S}_{G}^{-1}\) with the restriction functor \(\operatorname{Rep}(\check{G})\to\operatorname{Rep}(\check{L})\)[5, Proposition 15.3]. Moreover, the embedding \(\check{L}\hookrightarrow\check{G}\) realizes \(\check{L}\) as a Levi subgroup of \(\check{G}\) containing the maximal torus \(\check{T}\). The dual parabolic subgroup \(\check{P}\subseteq\check{G}\) can now be constructed as the attracting scheme in \(\check{G}\) for the coweight \(2\check{\rho}_{G}-2\check{\rho}_{L}\) of \(\check{T}\).
### Equivariant corestriction
We will now fix a Levi factor \(L\subseteq P\). We obtain a closed immersion
\[i:\operatorname{Gr}_{L}\hookrightarrow\operatorname{Gr}_{G}\]
identifying \(\operatorname{Gr}_{L}\) with a closed subscheme of \(\operatorname{Gr}_{G}\). In fact, \(\operatorname{Gr}_{L}\subseteq\operatorname{Gr}_{G}\) is exactly the fixed point subscheme \(\operatorname{Gr}_{G}^{Z(L)^{\circ}}\subseteq\operatorname{Gr}_{G}\) for the action of the neutral component \(Z(L)^{\circ}\subseteq Z(L)\) of the center of \(L\). We are interested in the (co)restriction functor
\[i^{!}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}( \operatorname{Gr}_{L})\]
given by the composition of the forgetful functor \(\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}\) (forgetting \(G(\mathcal{O})\)-equivariance down to \(L(\mathcal{O})\)-equivariance) followed by the functor \(i^{!}:D_{L(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\) of \(L(\mathcal{O})\)-equivariant \(!\)-restriction.7 Unlike \(\operatorname{Res}_{L}^{G,\natural}\), the functor \(i^{!}\) is not \(t\)-exact, and exactness cannot be salvaged by a mere grading shift. Nonetheless, it will be convenient8 to work with the following "regraded" or "sheared" version of \(i^{!}\):
Footnote 7: We consider \(!\)-restriction over \(*\)-restriction only because the spectral “answer” to our question will be cleaner; of course, the two functors differ only by Verdier duality.
\[i^{!,\natural}:=\bigoplus_{\chi\in\pi_{0}(\operatorname{Gr}_{L})}i^{!}|_{ \operatorname{Gr}_{L}^{\chi}}[-\langle 2\check{\rho}_{G}-2\check{\rho}_{L},\chi \rangle].\]
Here, \(\operatorname{Gr}_{L}^{\chi}\subseteq\operatorname{Gr}_{L}\) is the connected component of \(\operatorname{Gr}_{L}\) labelled by the element \(\chi\in\pi_{0}(\operatorname{Gr}_{L})\simeq\Lambda/Q_{L}\), where \(\Lambda\) is the weight lattice of \(\check{T}\) and \(Q_{L}\subseteq\Lambda\) is the root lattice of \(\check{L}\).
Unlike \(\operatorname{Res}_{L}^{G,\natural}\), \(i^{!,\natural}\) does not admit a monoidal structure. However, in §§2.2, we will equip \(i^{!,\natural}\) with a _lax_ monoidal structure. This is enough structure to ensure that \(i^{!,\natural}\) carries ring objects of \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) to ring objects of \(D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\). In fact, we will carry out this construction for any connected reductive subgroup \(L\subseteq G\). We will give two constructions of this lax monoidal structure; the second (given in §§2.3) uses Gaitsgory's description [16, Proposition 6] of the convolution product \(-\star-\) in terms of a global nearby cycles construction. Moreover, in §§2.4 (see Construction 2.4.5 and Remark 2.6.10), we will show that the natural adjunction morphism
\[\Xi^{\natural}:i^{!,\natural}\to\operatorname{Res}_{L}^{G,\natural} \tag{1.3.1}\]
is compatible with the lax monoidal structures on these functors.
### The case of a torus and the work of Ginzburg-Riche
We now specialize to the case in which \(L=T\) is a maximal torus of \(G\). The affine Grassmannian \(\operatorname{Gr}_{T}\) is the coweight lattice of \(T\), which we identify with the weight lattice \(\Lambda\) of \(\check{T}\). The closed embedding \(i:\Lambda=\operatorname{Gr}_{T}\hookrightarrow\operatorname{Gr}_{G}\) is the familiar embedding of the coweight lattice into \(\operatorname{Gr}_{G}\) taking the coweight \(\lambda\in\Lambda\) to the point \(t^{\lambda}\in\operatorname{Gr}_{G}\) (the image of the uniformizer \(t\in\mathbb{C}((t))^{*}\) under the map \(\lambda(\mathbb{C}((t))):\mathbb{C}((t))^{*}\to G(\mathbb{C}((t)))\twoheadrightarrow \operatorname{Gr}_{G}\)). For each \(\mu\in\Lambda\), let \(i_{\mu}:\{t^{\mu}\}\hookrightarrow\operatorname{Gr}_{G}\) denote the inclusion of the corresponding point of \(\operatorname{Gr}_{T}\). Let \(\lambda\in\Lambda^{+}\) denote a dominant coweight, and let \(\operatorname{IC}_{\lambda}\in\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_ {G})\) denote the intersection complex of \(\operatorname{Gr}_{G}^{\leq\lambda}=\overline{\operatorname{Gr}_{G}^{\lambda}}\). Note that \(L(\lambda):=\mathbb{S}(\operatorname{IC}_{\lambda})\in\operatorname{Rep}( \check{G})\) is the simple \(\check{G}\)-module of highest weight \(\lambda\in\Lambda^{+}\). Since the costalk \(i^{!}_{\mu}\operatorname{IC}_{\lambda}\) is only non-trivial when \(\mu\leq\lambda\), we may assume that this is the case.
Lusztig's description of the (non-equivariant) (co)stalks \(H^{*}(i^{!}_{\mu}\operatorname{IC}_{\lambda})\) predates the geometric Satake equivalence itself.
**Theorem 1.4.1** (Lusztig [25, §11]).: _Let \(\widetilde{W}_{\operatorname{aff}}=\Lambda\rtimes W\) denote the extended affine Weyl group and \(\ell:\widetilde{W}_{\operatorname{aff}}\to\mathbb{Z}\) its length function; let \(p_{\mu},p_{\lambda}\in\widetilde{W}_{\operatorname{aff}}\) denote the coweights \(\mu,\lambda\in\Lambda\)
_viewed as elements of \(\widetilde{W}_{\rm aff}\). Let \(n_{\mu}\in\widetilde{W}_{\rm aff}\) (resp. \(n_{\lambda}\in\widetilde{W}_{\rm aff}\)) denote the element of maximal length in the \(W\)-double coset \(Wp_{\mu}W\subseteq\widetilde{W}_{\rm aff}\) (resp. \(Wp_{\lambda}W\subseteq\widetilde{W}_{\rm aff}\)). Then, we have the vanishing \(H^{i}(i^{!}_{\mu}{\rm IC}_{\lambda})\simeq 0\) for \(i\) odd, as well as the equality_
\[\sum_{i\geq 0}\dim H^{2i}(i^{!}_{\mu}{\rm IC}_{\lambda})q^{i}=P_{n_{\mu},n_{ \lambda}}(q)=d_{\mu}(L(\lambda);q)\]
_in the ring \(\mathbb{Z}[q]\). Here, for any elements \(y,w\in\widetilde{W}_{\rm aff}\) of the extended affine Weyl group, \(P_{y,w}\in\mathbb{Z}[q]\) denotes the corresponding Kazhdan-Lusztig polynomial (see [25, §4]). Moreover, \(d_{\mu}(L(\lambda);q)\) denotes Lusztig's \(q\)-analog of the weight multiplicity (see [25, §6])._
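For instance (a standard example, included here only for orientation): take \(G=\mathrm{SL}_{2}\), so \(\check{G}=\mathrm{PGL}_{2}\), let \(\lambda=\alpha\) be the simple coroot and \(\mu=0\). Then \(\operatorname{Gr}_{G}^{\leq\alpha}\) is the quadric cone \(\{xy=z^{2}\}\subseteq\mathbb{A}^{3}\), which is rationally smooth, and one finds

\[\sum_{i\geq 0}\dim H^{2i}(i_{0}^{!}\mathrm{IC}_{\alpha})\,q^{i}=d_{0}(L(\alpha);q)=q,\]

matching the one-dimensional zero weight space of the adjoint representation \(L(\alpha)\) of \(\mathrm{PGL}_{2}\).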
The parity vanishing in Lusztig's theorem implies that \(H^{*}_{T}(i^{!}_{\mu}{\rm IC}_{\lambda})\) is a free module over the ring \(R_{T}:=H^{*}_{T}({\rm pt},\mathbb{C})\). Hence, it implies the (non-canonical) isomorphism of \(R_{T}\)-modules \(H^{*}_{T}(i^{!}_{\mu}{\rm IC}_{\lambda})\simeq H^{*}(i^{!}_{\mu}{\rm IC}_{ \lambda})\otimes R_{T}\) and gives the graded dimension of \(H^{*}(i^{!}_{\mu}{\rm IC}_{\lambda})\).
However, a more intrinsic description of the \(R_{T}\)-module \(H^{*}_{T}(i^{!}_{\mu}{\rm IC}_{\lambda})\) has been obtained by Ginzburg and Riche [20]. It is their work which we aim to extend to arbitrary Levi subgroups of \(G\). We point out that Ginzburg and Riche go further and describe the equivariant costalks \(H^{*}_{T\rtimes\mathbb{G}_{m}}(i^{!}_{\mu}{\rm IC}_{\lambda})\), where \(\mathbb{G}_{m}\) denotes the loop rotation torus. In this paper, we will not consider the loop rotation equivariance (because it is somewhat orthogonal to the applications that we have in mind), although we anticipate that our arguments will extend to this setting.
We can evaluate the natural transformation (1.3.1) on the object \({\rm IC}_{\lambda}\in D_{G(\mathcal{O})}({\rm Gr}_{G})\), restrict to the component \(t^{\mu}\in{\rm Gr}_{T}\) of \({\rm Gr}_{T}\), and take cohomology to obtain an \(R_{T}\)-module map
\[H^{*}_{T}(i^{!}_{\mu}{\rm IC}_{\lambda})\to H^{*}_{T}(\{t^{\mu}\},{\rm Res}_{ T}^{G,\natural}({\rm IC}_{\lambda}))=H^{*}(\{t^{\mu}\},{\rm Res}_{T}^{G,\natural}({ \rm IC}_{\lambda}))\otimes R_{T}. \tag{1.4.2}\]
To formulate their description of \(H^{*}_{T}(i^{!}_{\mu}{\rm IC}_{\lambda})\), we bring in the Lie algebra \(\check{\mathfrak{g}}={\rm Lie}(\check{G})\), the Cartan subalgebra \(\check{\mathfrak{t}}={\rm Lie}(\check{T})\), the maximal nilpotent subalgebra \(\check{\mathfrak{u}}={\rm Lie}(\check{U})\), and the opposite maximal nilpotent subalgebra \(\check{\mathfrak{u}}^{-}={\rm Lie}(\check{U}^{-})\). We have the triangular decomposition
\[\check{\mathfrak{g}}=\check{\mathfrak{u}}\oplus\check{\mathfrak{t}}\oplus\check{\mathfrak{u}}^{-},\]
which induces a projection \(\check{\mathfrak{g}}/\check{\mathfrak{u}}\twoheadrightarrow\check{\mathfrak{t}}\). Let \(\mathbb{C}(-\mu)\) denote the one dimensional \(\check{B}\)-module corresponding to the weight \(-\mu\in\Lambda\) of \(\check{T}\). We obtain a canonical map
\[\left(L(\lambda)\otimes{\rm Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}})\otimes\mathbb{C}(-\mu)\right)^{\check{B}} \hookrightarrow\left(L(\lambda)\otimes{\rm Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}})\otimes\mathbb{C}(-\mu)\right)^{\check{T}}\] \[\twoheadrightarrow\left(L(\lambda)\otimes{\rm Sym}(\check{\mathfrak{t}})\otimes\mathbb{C}(-\mu)\right)^{\check{T}}\] \[=\ L(\lambda)_{\mu}\otimes{\rm Sym}(\check{\mathfrak{t}}). \tag{1.4.3}\]
Here, \(L(\lambda)_{\mu}\subseteq L(\lambda)\) denotes the \(\mu\) weight space. Recall, moreover, that there is a canonical isomorphism \(R_{T}\simeq{\rm Sym}(\mathfrak{t}^{*})\simeq{\rm Sym}(\check{\mathfrak{t}})\) which places \(\check{\mathfrak{t}}\) in graded degree 2 (see, for example, [2, Theorem 6.6.8]).
**Theorem 1.4.4** (Ginzburg-Riche, [20, Theorem 2.3.1]).: _There exists a unique graded \(R_{T}\simeq\operatorname{Sym}(\check{\mathfrak{t}})\)-module isomorphism_
\[H_{T}^{*}(i_{\mu}^{!,\natural}\mathrm{IC}_{\lambda})\simeq\left(L(\lambda)\otimes\operatorname{Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}})\otimes\mathbb{C}(-\mu)\right)^{\check{B}} \tag{1.4.5}\]
_such that the diagram_

\[\begin{array}{ccc}H^{*}_{T}(i^{!,\natural}_{\mu}\mathrm{IC}_{\lambda})&\xrightarrow{(1.4.2)}&H^{*}(\{t^{\mu}\},\mathrm{Res}^{G,\natural}_{T}(\mathrm{IC}_{\lambda}))\otimes R_{T}\\ {\scriptstyle(1.4.5)}\downarrow\wr&&\downarrow\wr\\ \left(L(\lambda)\otimes\operatorname{Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}})\otimes\mathbb{C}(-\mu)\right)^{\check{B}}&\xrightarrow{(1.4.3)}&L(\lambda)_{\mu}\otimes\operatorname{Sym}(\check{\mathfrak{t}})\end{array}\]

_commutes, where the right vertical isomorphism comes from the geometric Satake identification \(H^{*}(\{t^{\mu}\},\mathrm{Res}^{G,\natural}_{T}(\mathrm{IC}_{\lambda}))\simeq L(\lambda)_{\mu}\). Here, \(\check{\mathfrak{g}}/\check{\mathfrak{u}}\) is placed in graded degree \(2\) (so that (1.4.3) is a graded map)._
We can formulate their theorem as follows. Let \(\mathcal{F}_{\mathrm{reg}}\in\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) denote the _regular sheaf_9, the \(\check{G}\)-equivariant ring object of \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) corresponding to the left regular representation \(\mathcal{O}(\check{G})\in\mathrm{Rep}(\check{G})\) under the geometric Satake equivalence. It admits the Peter-Weyl decomposition
Footnote 9: Strictly speaking, \(\mathcal{F}_{\mathrm{reg}}\) does not belong to \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\), but rather to a suitable ind-completion (just as the regular representation \(\mathcal{O}(\check{G})\) does not belong to \(\mathrm{Rep}(\check{G})\)). See §1.6.4. We ignore this point for simplicity.
\[\mathcal{F}_{\mathrm{reg}}=\bigoplus_{\lambda\in\Lambda^{+}}\mathrm{IC}_{ \lambda}\boxtimes L(\lambda)^{*}.\]
We identify \(\operatorname{Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}})\) with the algebra of functions \(\mathcal{O}\left((\check{\mathfrak{g}}/\check{\mathfrak{u}})^{*}\right)\). By tensoring (1.4.5) with \(L(\lambda)^{*}\) and passing to the direct sum over \(\lambda\in\Lambda^{+}\) and \(\mu\in\Lambda\), we obtain a canonical isomorphism of \(\mathbb{Z}\)-graded \(\Lambda\)-graded \(\tilde{G}\)-equivariant \(\operatorname{Sym}(\check{\mathfrak{t}})\simeq\mathcal{O}(\check{\mathfrak{ t}}^{*})\)-algebras
\[H_{T}^{*}(i^{!,\natural}\mathcal{F}_{\mathrm{reg}}) \simeq\left(\left(\bigoplus_{\lambda\in\Lambda^{+}}L(\lambda)\boxtimes L(\lambda)^{*}\right)\otimes\mathcal{O}\left((\check{\mathfrak{g}}/\check{\mathfrak{u}})^{*}\right)\right)^{\check{U}}\] \[\simeq\left(\mathcal{O}(\check{G})\otimes\mathcal{O}((\check{\mathfrak{g}}/\check{\mathfrak{u}})^{*})\right)^{\check{U}}\] \[\simeq\mathcal{O}\left(\check{G}\times(\check{\mathfrak{g}}/\check{\mathfrak{u}})^{*}\right)^{\check{U}}\] \[\simeq\mathcal{O}(T^{*}(\check{G}/\check{U})).\]
We can interpret the \(\Lambda\)-grading as arising from the \(\check{T}\)-action on the variety \(T^{*}(\check{G}/\check{U})\) (through the identification \(\check{T}\simeq\check{B}/\check{U}\) and the natural \(\check{B}\)-action by right translation). The \(\mathcal{O}(\check{\mathfrak{t}}^{*})\)-algebra structure arises from the \(\check{T}\)-equivariant _moment map_
\[\mu:T^{*}(\check{G}/\check{U})\rightarrow\check{\mathfrak{t}}^{*}.\]
The \(\check{G}\)-module structure arises from the natural action of \(\check{G}\) on the variety \(T^{*}(\check{G}/\check{U})\) by left translation. Finally, the \(\mathbb{Z}\)-grading on \(\mathcal{O}(T^{*}(\check{G}/\check{U}))\) is induced by the \(\mathbb{G}_{m}\)-action on the fibers of the vector bundle \(T^{*}(\check{G}/\check{U})\) through which \(t\in\mathbb{G}_{m}\) acts by \(t^{-2}\).
The isomorphism \(H^{*}_{T}(i^{!,\natural}{\mathcal{F}}_{\rm reg})\simeq{\mathcal{O}}(T^{*}(\check{ G}/\check{U}))\) conversely yields Theorem 1.4.4 by passage to the \(\check{G}\)-isotypic and \(\check{T}\)-isotypic components.
### Our results
The geometric Satake equivalence induces a canonical pinning of the dual group \(\check{G}\). We have already explained that \(\check{G}\) is equipped with a canonical Borel subgroup \(\check{B}\subseteq\check{G}\) (concretely, the subgroup of automorphisms of the fiber functor \(H^{*}\) preserving the Mirkovic-Vilonen filtration) and a canonical maximal torus \(\check{T}\subseteq\check{B}\); the point is that one can moreover find distinguished non-zero root vectors \(X_{\alpha}\in\check{\mathfrak{g}}_{\alpha}\). We refer the reader to [20, §6.5] for a discussion of the precise pinning that we will use. Let \(\Delta\) denote the set of simple roots of \(\check{G}\) and \(I\subseteq\Delta\) the subset of simple roots of \(\check{L}\).
We can use this pinning to define a distinguished additive character \(\psi_{I}\) of \(\check{U}\) as the composition

\[\psi_{I}:\check{U}\twoheadrightarrow\check{U}/[\check{U},\check{U}]\simeq\prod_{\alpha\in\Delta}\check{\mathfrak{g}}_{\alpha}\simeq\prod_{\alpha\in\Delta}\mathbb{G}_{a}\xrightarrow{\;\mathrm{pr}_{I}\;}\prod_{\alpha\in I}\mathbb{G}_{a}\longrightarrow\mathbb{G}_{a}.\]
Here, \({\rm pr}_{I}\) is given by projection onto the factors indexed by \(I\), and the last map is summation. We also denote by \(\psi_{I}\) the linear form \((d\psi_{I})_{1}\in\check{\mathfrak{u}}^{*}\), where \(1\in\check{U}\) is the identity. We write \(\check{\mathfrak{u}}^{\perp}\subseteq\check{\mathfrak{g}}^{*}\) for the annihilator of \(\check{\mathfrak{u}}\subseteq\check{\mathfrak{g}}\). We may regard \(\psi_{I}\) as an element of \(\check{\mathfrak{g}}^{*}\) by extending it trivially over \(\check{\mathfrak{b}}^{-}\). Let \(\psi:=\psi_{\Delta}\) (the _non-degenerate_ additive character).
The algebraic variety \(T^{*}\check{G}\) is equipped with a pair of commuting actions of \(\check{G}\) by left and right translation. We have the \(\check{G}\)-equivariant (for the _left_ translation \(\check{G}\)-action) moment map (for the _right_ translation \(\check{G}\)-action)
\[\mu:T^{*}\check{G}\simeq\check{G}\times\check{\mathfrak{g}}^{*}\xrightarrow{ \mathrm{pr}_{2}}\check{\mathfrak{g}}^{*}\]
given by projection to the second factor. The moment map for the induced \(\check{U}\)-action is the composition \(T^{*}\check{G}\xrightarrow{\mathrm{pr}_{2}}\check{\mathfrak{g}}^{*} \twoheadrightarrow\check{\mathfrak{u}}^{*}\). Since \(\psi_{I}\in\check{\mathfrak{u}}^{*}\) is \(\check{U}\)-invariant, we can form the Hamiltonian reduction of the Hamiltonian \(\check{U}\)-variety \(T^{*}\check{G}\to\check{\mathfrak{u}}^{*}\) at the level \(\psi_{I}\) to obtain the _partial Kostant-Whittaker reduction_
\[T^{*}(\check{G}/(\check{U},\psi_{I})):=T^{*}\check{G}/\!\!/(\check{U},\psi_{I} ):=(T^{*}\check{G}\times_{\check{\mathfrak{u}}^{*}}\{\psi_{I}\})/\check{U}.\]
Our entire §3 is dedicated to a discussion of this construction and its basic properties.
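To fix ideas, consider the smallest non-degenerate case (our illustration, using the trace form to identify \(\check{\mathfrak{g}}\simeq\check{\mathfrak{g}}^{*}\)): for \(\check{G}=\mathrm{SL}_{2}\) and \(I=\Delta\), the fiber of the moment map over \(\psi=\psi_{\Delta}\) is

\[\check{G}\times(\check{\mathfrak{u}}^{\perp}+\psi)\simeq\check{G}\times\left\{\begin{pmatrix}a&b\\ 1&-a\end{pmatrix}:a,b\in\mathbb{C}\right\},\]

the \(\check{U}\)-action \(a\mapsto a+t\) is free, and the Kostant slice \(a=0\) provides a section, so that

\[T^{*}(\check{G}/(\check{U},\psi))\simeq\check{G}\times\left\{\begin{pmatrix}0&b\\ 1&0\end{pmatrix}:b\in\mathbb{C}\right\}.\]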
The variety \(T^{*}(\check{G}/(\check{U},\psi_{I}))\) carries a natural left \(\check{G}\)-action. Moreover, we have a canonical \(\check{G}\)-equivariant projection
\[\pi_{I}:T^{*}(\check{G}/(\check{U},\psi_{I}))\hookrightarrow(T^{*}\check{G})/\check{U}\twoheadrightarrow\check{\mathfrak{g}}^{*}/\check{U}\twoheadrightarrow\check{\mathfrak{t}}^{*}/\check{U}\to\check{\mathfrak{c}}_{I}\]

to the GIT quotient \(\check{\mathfrak{c}}_{I}:=\check{\mathfrak{l}}^{*}/\check{L}\). The homomorphism \(\pi_{I}^{*}\) equips \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) with the structure of a \(\check{G}\)-equivariant \(\mathcal{O}(\check{\mathfrak{c}}_{I})\)-algebra.
The action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{l}}^{*}\) given by \(t\cdot\xi=t^{-2}\xi\) commutes with the coadjoint action of \(\check{L}\), hence descends to a \(\mathbb{G}_{m}\)-action on \(\check{\mathfrak{c}}_{I}\) (which places the generators of the polynomial algebra \(\mathcal{O}(\check{\mathfrak{c}}_{I})\) in degrees given by _twice_ the characteristic exponents of \(\check{L}\)). We let \(\mathbb{G}_{m}\) act on \(T^{*}\check{G}\simeq\check{G}\times\check{\mathfrak{g}}^{*}\) by the cocharacter \(2\check{\rho}_{I}\) on the \(\check{G}\) factor and by \(t\mapsto t^{-2}\) on \(\check{\mathfrak{g}}^{*}\) (we will discuss gradings more carefully in Construction 4.4.2). These actions of \(\mathbb{G}_{m}\) make \(\pi_{I}\) a \(\mathbb{G}_{m}\)-equivariant map. Hence, \(\pi_{I}^{*}\) equips \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) with the structure of a _graded_ \(\check{G}\)-equivariant \(\mathcal{O}(\check{\mathfrak{c}}_{I})\)-algebra.
Let \(\mathfrak{J}_{I}\to\check{\mathfrak{c}}_{I}\) denote Ngo's regular centralizer group scheme [30, Lemme 2.1.1] (see also Riche's treatment [32]) associated to the group \(\check{L}_{I}\). It is a commutative affine group scheme over \(\check{\mathfrak{c}}_{I}\). Moreover, when we discuss the regular centralizer in more detail in §§3.2, we shall see that it acts canonically on the variety \(T^{*}(\check{G}/(\check{U},\psi_{I}))\) (or _any_ partial Kostant-Whittaker reduction, for that matter). When the derived subgroup of \(G\) is almost simple, Yun-Zhu [34] provide a canonical isomorphism of affine \(\check{\mathfrak{c}}_{I}\)-group schemes
\[\operatorname{Spec}H^{L(\mathcal{O})}_{*}(\operatorname{Gr}_{L},\mathbb{C}) \simeq\mathfrak{J}_{I}. \tag{1.5.1}\]
We will discuss their results in detail in both §§2.5 and §§4.3.
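For concreteness (our illustration, for \(\check{L}=\mathrm{SL}_{2}\), where \(\check{\mathfrak{c}}\simeq\mathbb{A}^{1}\) as in the example above): over the Kostant slice of companion matrices \(S_{b}=\begin{pmatrix}0&b\\ 1&0\end{pmatrix}\), the regular centralizer is

\[\mathfrak{J}\simeq\left\{(b,\,xI+yS_{b}):x^{2}-by^{2}=1\right\}\subseteq\mathbb{A}^{1}\times\mathrm{SL}_{2},\]

a commutative group scheme whose fibers are one-dimensional tori over \(b\neq 0\), degenerating to \(\mathbb{G}_{a}\times\mathbb{Z}/2\) over \(b=0\).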
On the other hand, we will recall (again following Yun-Zhu [34]) in §§2.5 how the group scheme \(\operatorname{Spec}H^{L(\mathcal{O})}_{*}(\operatorname{Gr}_{L},\mathbb{C})\) acts canonically on \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural}\mathcal{F}_{\rm reg})\).
Finally, recall that there is a canonical algebra isomorphism \(R_{L}:=H^{*}_{L}(\operatorname{pt},\mathbb{C})\simeq\mathcal{O}(\check{ \mathfrak{c}}_{I})\). This isomorphism identifies our grading on \(\mathcal{O}(\check{\mathfrak{c}}_{I})\) with the cohomological grading on \(R_{L}\).
With this notational setup in place, we can state our main result in its basic form.
**Theorem 1.5.2**.: _There is a canonical \(\check{G}\)-equivariant isomorphism of graded \(R_{L}\)-algebras_
\[\Upsilon_{I}:H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural} \mathcal{F}_{\rm reg})\simeq\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I}))).\]
_The left hand side is regarded as a ring via the lax monoidal structure on \(i^{!,\natural}\) of §§2.2. Assume that the derived subgroup of \(G\) is almost simple (so that the Yun-Zhu isomorphism (1.5.1) is available). Then, \(\Upsilon_{I}\) is moreover \(\mathfrak{J}_{I}\)-equivariant._
In fact, our result is stronger, and describes the canonical \(\check{G}\)-equivariant graded \(R_{L}\)-algebra homomorphism of (1.3.1)
\[\xi^{\natural}:=\Xi^{\natural}(\mathcal{F}_{\rm reg}):H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural}\mathcal{F}_{\rm reg})\to H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},\operatorname{Res}^{G,\natural}_{L}(\mathcal{F}_{\rm reg}))\]
in terms of the Langlands dual group. However, to avoid defining yet more notation in this introduction, we will simply note that the version of Theorem 1.5.2 stated and proved in §§4.4 as Theorem 4.4.5 asserts and demonstrates these additional compatibilities.
**Remark 1.5.3**.: We want to draw attention to what we find to be one of the more interesting aspects of the proof of Theorem 1.5.2. Namely, our choice to treat the regular object \(\mathcal{F}_{\rm reg}\) rather than the complexes \(\operatorname{IC}_{\lambda}\) individually is _essential_ to our argument. In [20, Proposition 3.2.3], Ginzburg-Riche establish that for any dominant \(\lambda\in\Lambda^{+}\), the \(\mathcal{O}(\check{\mathfrak{t}}^{*})\)-module

\[H^{0}(\widetilde{\check{\mathfrak{g}}},\mathcal{O}_{\widetilde{\check{\mathfrak{g}}}}(\lambda))=\operatorname{Ind}^{\check{G}}_{\check{B}}(\operatorname{Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}})\otimes\mathbb{C}(-\lambda))^{\check{B}}\]
is _free_. Here, \(\widetilde{\check{\mathfrak{g}}}\to\check{\mathfrak{g}}\) is the Grothendieck-Springer alteration and \(\mathcal{O}_{\widetilde{\check{\mathfrak{g}}}}(\lambda)\) is the pullback of the Borel-Weil line bundle \(\mathcal{O}_{\check{G}/\check{B}}(\lambda)\) on the flag variety \(\check{G}/\check{B}\) along the natural projection \(q:\widetilde{\check{\mathfrak{g}}}\to\check{G}/\check{B}\). Their proof appeals to cohomological computations of Broer [23, Proposition 2.5]. By working instead with the entire coordinate ring \(\mathcal{O}(T^{*}(\check{G}/\check{U}))\), (a suitable adaptation of) our application of Hartogs' principle (Lemma 3.5.1) can be used in place of this freeness. We then obtain the freeness of \(H^{0}(\widetilde{\check{\mathfrak{g}}},\mathcal{O}_{\widetilde{\check{\mathfrak{g}}}}(\lambda))\) as a _corollary_ to the Ginzburg-Riche theorem.
We can pass to \(\check{G}\)-isotypic components in Theorem 1.5.2 and undo the grading twist in the definition of \(i^{!,\natural}\) to obtain the following more concrete result.
**Corollary 1.5.4**.: _Let \(\lambda\in\Lambda^{+}\) denote a dominant weight of \(\check{T}\) and \(L(\lambda)=\mathbb{S}_{G}(\mathrm{IC}_{\lambda})\) the corresponding simple \(\check{G}\)-module. Then, we have a canonical graded \(R_{L}\simeq\mathcal{O}(\check{\mathfrak{r}}_{I})\)-module isomorphism_
\[\Upsilon_{I,\lambda}:H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathrm{IC}_ {\lambda})\simeq(L(\lambda)\otimes\mathcal{O}(\check{\mathfrak{u}}^{\perp}+ \psi_{I}))^{\check{U}}.\]
_The grading of \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathrm{IC}_{\lambda})\) by the component group \(\pi_{0}(\mathrm{Gr}_{L})\simeq\Lambda/Q_{L}\simeq X^{*}(Z(\check{L}))\) corresponds to the grading of \((L(\lambda)\otimes\mathcal{O}(\check{\mathfrak{u}}^{\perp}+\psi_{I}))^{\check{U}}\) by the eigenvalues of the center \(Z(\check{L})\). Moreover, the cohomological grading on \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathrm{IC}_{\lambda})\) corresponds under \(\Upsilon_{I,\lambda}\) to the _Brylinski-Kostant_ grading on \((L(\lambda)\otimes\mathcal{O}(\check{\mathfrak{u}}^{\perp}+\psi_{I}))^{\check{U}}\)._
_Assume moreover that \(G\) has almost simple derived group. Then, \(\Upsilon_{I,\lambda}\) is an isomorphism of \(\mathfrak{J}_{I}\)-modules._
To reiterate Remark 1.5.3, we don't know how to prove Corollary 1.5.4 without first proving Theorem 1.5.2, since our approach to the latter leverages the algebraic geometry of the moment map \(T^{*}(\check{G}/\check{V})\to\check{\mathfrak{l}}^{*}\).
### Notation and conventions
We will now review some of our notation pertaining to root data and dual groups.
#### 1.6.1. Group theoretic notation
We fix a connected reductive group \(G\) over \(\mathbb{C}\). At certain points in §4, we will take the additional hypothesis that the derived subgroup \(G^{\mathrm{der}}\subseteq G\) is almost simple; we will make this assumption explicit in theorem statements whenever it is needed. We will fix a Borel subgroup \(B\subseteq G\) and a maximal torus \(T\subseteq B\) with weight lattice \(\check{\Lambda}=X^{*}(T)\) and coweight lattice \(\Lambda=X_{*}(T)\). Let \(U\subseteq B\) denote the unipotent radical. We also have the opposite Borel subgroup \(B^{-}\subseteq G\) and its unipotent radical \(U^{-}\subseteq B^{-}\).
Let \(\check{\Phi}\subseteq X^{*}(T)\) denote the set of roots, \(\check{\Phi}^{+}\subseteq\check{\Phi}\) the subset of positive roots (relative to \(B\)), and \(\check{\Delta}\subseteq\check{\Phi}^{+}\) the subset of simple roots. Let \(\Phi\subseteq\Lambda=X_{*}(T)\) denote the set of coroots, \(\Phi^{+}\subseteq\Phi\) the positive coroots, and \(\Delta\subseteq\Phi^{+}\) the simple coroots. We typically denote coroots \(\alpha\in\Phi\) without a check; the corresponding root is denoted \(\check{\alpha}\in\check{\Phi}\). Let \(\Lambda^{+}\subseteq\Lambda\) denote the subset of dominant coweights.
Given a subset \(I\subseteq\check{\Delta}\), we let \(P_{I}\subseteq G\) denote the corresponding standard parabolic subgroup. That is, \(P_{I}\) is the unique parabolic subgroup of \(G\) containing \(B\) such that the negative simple root spaces \(\mathfrak{g}_{-\check{\alpha}}\subseteq\mathfrak{g}\) contained in \(\mathfrak{p}_{I}:=\mathrm{Lie}(P_{I})\) are exactly those labelled by the simple roots \(\check{\alpha}\in I\). We let \(V_{I}\subseteq P_{I}\) denote the unipotent radical of \(P_{I}\), \(P_{I}^{-}\subseteq G\) the opposite parabolic subgroup, and \(V_{I}^{-}\subseteq P_{I}^{-}\) its unipotent radical. Let \(L_{I}=P_{I}\cap P_{I}^{-}\) denote the unique Levi subgroup of \(P_{I}\) containing \(T\). Let \(B_{I}=B/V_{I}\subseteq L_{I}\) and \(U_{I}=U/V_{I}\subseteq L_{I}\); \(B_{I}\) is a Borel subgroup of \(L_{I}\) and \(U_{I}\subseteq B_{I}\) is the unipotent radical of \(B_{I}\).
Let \(\check{\Phi}_{I}\subseteq\check{\Phi}\) denote the set of roots of \(L_{I}\) and \(\check{\Phi}_{I}^{+}\subseteq\check{\Phi}_{I}\) the subset of positive roots (relative to \(B_{I}\)). Let \(2\check{\rho}_{I}=\sum_{\check{\alpha}\in\check{\Phi}_{I}^{+}}\check{\alpha}\in X^{*}(T)\) and \(2\rho_{I}=\sum_{\alpha\in\Phi_{I}^{+}}\alpha\) denote the sum of the positive roots of \(L_{I}\) and positive coroots of \(L_{I}\), respectively. Note that \(L_{I}=Z_{G}(2\rho_{I})\) is the centralizer of the homomorphism \(2\rho_{I}:\mathbb{G}_{m}\to G\) in \(G\). When \(I=\Delta\), we simply write \(2\rho\) and \(2\check{\rho}\) in place of \(2\rho_{\Delta}\) and \(2\check{\rho}_{\Delta}\), respectively.
Let \(\check{G}\) denote the Langlands dual group. Recall that the geometric Satake equivalence induces a _canonical_ pinning of \(\check{G}\) (independently of our choice of \(T\subseteq B\subseteq G\), which is made only for notational convenience). We adopt the conventions of [20, Section 6.5] and refer the reader there for a discussion of the pinning that we use (although note that their notation differs from ours in that what we call \(G\), they call \(\check{G}\)). In particular, let \(\check{T}\subseteq\check{B}\subseteq\check{G}\) denote the maximal torus and Borel subgroup, \(\check{B}^{-}\subseteq\check{G}\) the opposite Borel subgroup, \(\check{U}\subseteq\check{B}\) and \(\check{U}^{-}\subseteq\check{B}^{-}\) the unipotent radical and opposite unipotent radical, respectively. We have canonical identifications \(X^{*}(T)=X_{*}(\check{T})\) and \(X_{*}(T)=X^{*}(\check{T})\) under which \(\check{\Phi}\subseteq X^{*}(T)\) and \(\Phi\subseteq X_{*}(T)\) correspond to the coroots and roots of \(\check{G}\), respectively. We abuse notation and let \(I\subseteq\Delta\) denote the set of simple coroots of \(G\) (roots of \(\check{G}\)) corresponding to a given subset \(I\subseteq\check{\Delta}\) of simple roots of \(G\) (coroots of \(\check{G}\)). Given a subset \(I\subseteq\check{\Delta}\), we let \(\check{P}_{I}\subseteq\check{G}\), \(\check{P}_{I}^{-}\subseteq\check{G}\), \(\check{V}_{I}\subseteq\check{P}_{I}\), \(\check{V}_{I}^{-}\subseteq\check{P}_{I}^{-}\), \(\check{L}_{I}=\check{P}_{I}\cap\check{P}_{I}^{-}\) denote the corresponding parabolic subgroup, opposite parabolic subgroup, unipotent radical, opposite unipotent radical, and Levi subgroup, respectively.
If the subset \(I\subseteq\Delta\) is understood, we will sometimes drop the subscript \(I\) if it is unlikely to cause confusion; for example, we may write \(\check{L}\) in place of \(\check{L}_{I}\).
Given a subset \(I\subseteq\Delta\), we let \(i_{I}:\mathrm{Gr}_{I}:=\mathrm{Gr}_{L_{I}}\hookrightarrow\mathrm{Gr}_{G}\) denote the induced closed immersion of affine Grassmannians. When \(I=\emptyset\), we often write \(\mathrm{Gr}_{T}\) in place of \(\mathrm{Gr}_{\emptyset}\) and \(i_{T}\) in place of \(i_{\emptyset}\).
#### 1.6.2. Groups attached to the formal disc
As in the introduction, we write \(G(\mathcal{O})\) for the arc group associated to \(G\) in place of the more technically correct notation \(L^{+}G\) (where \(L^{+}G(R):=G(R[[t]])\)), except when we are intentionally being more precise. Similarly, we write \(G(\mathcal{K})\) for the loop group \(LG\) (where \(LG(R)=G(R((t)))\)).
#### 1.6.3. Group actions
All group actions are _left_ actions. Given an action of an algebraic group \(G\) on an affine variety \(X\), the induced action of \(G\) on the coordinate ring \(\mathcal{O}(X)\) is characterized by the formula \((g\cdot f)(x)=f(g^{-1}x)\) (where \(g\in G\), \(f\in\mathcal{O}(X)\) and \(x\in X\)). These conventions are relevant for determining the gradings produced by \(\mathbb{G}_{m}\)-actions. For example, suppose that \(\mathbb{G}_{m}\) acts on the vector space \(\check{\mathfrak{t}}^{*}\) by the formula \(t\cdot\xi=t^{-2}\xi\) (for \(t\in\mathbb{G}_{m}\), \(\xi\in\check{\mathfrak{t}}^{*}\)). Then, \(t\in\mathbb{G}_{m}\) acts on a linear form \(x\in\check{\mathfrak{t}}\) by \(t\cdot x=t^{2}x\), since \((t\cdot x)(\xi)=x(t^{-1}\cdot\xi)=x(t^{2}\xi)=t^{2}x(\xi)\) for any \(\xi\in\check{\mathfrak{t}}^{*}\). Therefore, this \(\mathbb{G}_{m}\)-action on \(\check{\mathfrak{t}}\) equips the polynomial algebra \(\mathrm{Sym}(\check{\mathfrak{t}})\simeq\mathcal{O}(\check{\mathfrak{t}}^{*})\) with the grading in which the generators have degree \(+2\), i.e. the cohomological grading on \(H_{T}^{*}(\mathrm{pt},\mathbb{C})\).
#### 1.6.4. Sheaf-theoretic notation
Let \(H\) denote a complex algebraic group acting on a complex algebraic variety \(X\). We write \(D_{H}(X):=D_{H}(X,\mathbb{C})\) for the triangulated \(H\)-equivariant constructible derived category of sheaves of complex vector spaces on the underlying topological space \(X^{\operatorname{an}}(\mathbb{C})\), as defined in [2, Definition 6.2.11]. We omit the superscript "\(b\)" on \(D_{H}(X)\) as we will not have occasion to consider the non-constructible derived category. Let \(\mathcal{P}_{H}(X)\subseteq D_{H}(X)\) denote the heart of the perverse \(t\)-structure, the abelian category of \(H\)-equivariant perverse sheaves on \(X\). Of course, we will also need to consider the situation in which \(X\) is an ind-scheme over \(\mathbb{C}\) and \(H\) is a pro-algebraic \(\mathbb{C}\)-group. In this situation, we follow the conventions of [29, §2.2] (see also [2, §9.1]) in defining \(D_{H}(X)\). In particular, our convention is that every object \(F\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) of the spherical Hecke category is supported on a closed stratum \(\operatorname{Gr}_{G}^{\leq\lambda}\) (for some \(\lambda\in\Lambda^{+}\)).
We have found it convenient to organize the constructions of this paper around the "regular object" \(\mathcal{F}_{\operatorname{reg}}\) (see Remark 2.2.1), the "perverse sheaf" on \(\operatorname{Gr}_{G}\) corresponding under the geometric Satake equivalence \(\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\simeq\operatorname{Rep}(\check{G})\) to the (left) regular representation \(\mathcal{O}(\check{G})\) of \(\check{G}\). Of course, under the conventions above, there is no such object in the category \(\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) (just as the regular representation \(\mathcal{O}(\check{G})\) does not belong to the category \(\operatorname{Rep}(\check{G})\) of _finite dimensional_ \(\check{G}\)-modules). It is equally obvious that this observation does not pose a genuine issue. The least technical solution is to simply define \(\mathcal{F}_{\operatorname{reg}}\) as a \(\Lambda^{+}\)-graded object of the abelian category \(\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) (in the sense of [29, §2.1.3]) through the formula
\[\mathcal{F}_{\operatorname{reg}}:=\bigoplus_{\lambda\in\Lambda^{+}} \operatorname{IC}_{\lambda}\boxtimes L(\lambda)^{*}.\]
Here, \(L(\lambda)\in\operatorname{Rep}(\check{G})\) is the simple \(\check{G}\)-module of highest weight \(\lambda\in\Lambda^{+}\) and \(\operatorname{IC}_{\lambda}\in\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr }_{G})\) denotes the corresponding irreducible perverse sheaf supported on \(\operatorname{Gr}_{G}^{\leq\lambda}:=\overline{\operatorname{Gr}_{G}^{ \lambda}}\), where \(\operatorname{Gr}_{G}^{\lambda}=G(\mathcal{O})t^{\lambda}\subseteq \operatorname{Gr}_{G}\) is the corresponding spherical orbit. The corestriction \(i_{I}^{!}\mathcal{F}_{\operatorname{reg}}\) should then be understood as the \(\Lambda^{+}\)-graded object
\[i_{I}^{!}\mathcal{F}_{\operatorname{reg}}:=\bigoplus_{\lambda\in\Lambda^{+}}i_ {I}^{!}\operatorname{IC}_{\lambda}\boxtimes L(\lambda)^{*}\]
of \(D_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I})\). However, we have chosen to completely suppress this ultimately notational issue in the body of the paper, and will instead refer to \(\mathcal{F}_{\operatorname{reg}}\) as an object of \(\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\); the concerned reader should have no issue in making this abuse of notation precise. Similarly, we will refer to the regular representation \(\mathcal{O}(\check{G})\) as an object of \(\operatorname{Rep}(\check{G})\).
### Acknowledgements
We thank David Nadler, Tsao-Hsien Chen, John O'Brien, Jeremy Taylor, Yixuan Li, Peter Haine, Guglielmo Nocera, and Connor Halleck-Dube for many useful comments and discussions. We are especially grateful for the extensive mathematical and academic support that we have received from David Nadler over the past four years.
Of course, this paper was directly motivated by the works of Ginzburg-Riche [20] and Ginzburg-Kazhdan [19] and borrows heavily from their ideas. Moreover, the foundational works of Achar [2] and Achar-Riche [3] were invaluable in preparing this work.
The author was supported by an NSF Graduate Research Fellowship during the writing of this paper.
## 2. Automorphic Side
### Equivariant localization
We begin on the "geometric side" of Theorem 1.5.2. Let \(I\subseteq\mathring{\Delta}\) denote a set of simple roots of \(G\). Let \(\lambda\in\Lambda^{+}\) denote a dominant coweight and let \(\mathrm{IC}_{\lambda}\in\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) denote the corresponding irreducible perverse sheaf supported on \(\mathrm{Gr}_{G}^{\leq\lambda}:=\overline{\mathrm{Gr}_{G}^{\lambda}}\), where \(\mathrm{Gr}_{G}^{\lambda}=G(\mathcal{O})t^{\lambda}\subseteq\mathrm{Gr}_{G}\) is the corresponding spherical orbit. Let \(\mathrm{Gr}_{I}=\mathrm{Gr}_{L_{I}}\) denote the affine Grassmannian of the Levi subgroup \(L_{I}\subseteq G\) and \(i_{I}:\mathrm{Gr}_{I}\hookrightarrow\mathrm{Gr}_{G}\) the inclusion. We will study the cohomology \(H^{*}_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I},i_{I}^{!}\mathrm{IC}_{\lambda})\) through the technique of equivariant localization. We abuse notation and write \(i_{I}^{!}\mathrm{IC}_{\lambda}\in D_{T}(\mathrm{Gr}_{L})\) for the image of \(i_{I}^{!}\mathrm{IC}_{\lambda}\in D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) under the functor forgetting the \(L(\mathcal{O})\)-equivariance down to \(T\)-equivariance. Recall from [21] (especially the discussion surrounding [21, Theorem 1.6.2]) that a complex \(F\in D_{T}(\mathrm{Gr}_{L})\) is _equivariantly formal_ if the spectral sequence
\[E_{2}^{p,q}=H_{T}^{p}(\mathrm{pt},H^{q}(\mathrm{Gr}_{I},F))\implies H_{T}^{p+q }(\mathrm{Gr}_{I},F)\]
degenerates at \(E_{2}\).
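In particular, if \(F\) is equivariantly formal, then the degeneration yields a (non-canonical) isomorphism of \(H_{T}^{*}(\mathrm{pt},\mathbb{C})\)-modules
\[H_{T}^{*}(\mathrm{Gr}_{I},F)\simeq H_{T}^{*}(\mathrm{pt},\mathbb{C})\otimes_{\mathbb{C}}H^{*}(\mathrm{Gr}_{I},F),\]
so that \(H_{T}^{*}(\mathrm{Gr}_{I},F)\) is a free \(H_{T}^{*}(\mathrm{pt},\mathbb{C})\)-module; this is the form in which equivariant formality is used below, in the proof of Proposition 2.1.6.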
**Proposition 2.1.1**.: _The complex \(i_{I}^{!}\mathrm{IC}_{\lambda}\in D_{T}(\mathrm{Gr}_{L})\) is equivariantly formal._
Proof.: For each point \(x\in\mathrm{Gr}_{I}\), we have that \(H^{j}(i_{x}^{!}i_{I}^{!}\mathrm{IC}_{\lambda})\simeq H^{j}(i_{x}^{!}\mathrm{IC}_{\lambda})\) vanishes for \(j\not\equiv\langle 2\check{\rho},\lambda\rangle\pmod{2}\), by the well-known parity vanishing property of \(\mathrm{IC}_{\lambda}\) [5, Lemma 4.5]. Since \(i_{I}^{!}\mathrm{IC}_{\lambda}\) is constructible with respect to an affine paving of \(\mathrm{Gr}_{I}\) (the Schubert stratification by the orbits of an Iwahori subgroup \(\mathcal{I}\subseteq L(\mathcal{O})\)), a standard argument with the Cousin spectral sequence implies that \(H^{j}(\mathrm{Gr}_{I},i_{I}^{!}\mathrm{IC}_{\lambda})\) vanishes for \(j\not\equiv\langle 2\check{\rho},\lambda\rangle\pmod{2}\).
It is now clear from parity considerations that the spectral sequence
\[E_{2}^{p,q}=H_{T}^{p}(\mathrm{pt},H^{q}(\mathrm{Gr}_{I},i_{I}^{!}\mathrm{IC}_{ \lambda}))\implies H_{T}^{p+q}(\mathrm{Gr}_{I},i_{I}^{!}\mathrm{IC}_{\lambda})\]
degenerates at \(E_{2}\).
**Remark 2.1.2**.: Proposition 2.1.1 is a special case of the following very simple but quite general observation. Let \(X\) denote a \(T\)-variety equipped with a \(T\)-stable affine paving. Let \(F\in D_{T}(X)\) denote a \(T\)-equivariant complex whose underlying non-equivariant complex \(\mathrm{For}^{T}(F)\in D(X)\) is \(!\)_-parity_ (in the language of [22], and with respect to the dimension parity \(\diamond\)). Then, \(F\) is equivariantly formal. Moreover, the property of being \(!\)-parity is preserved by \(!\)-restriction to any \(T\)-stable closed subvariety \(i:Y\hookrightarrow X\). Thus, if \(Y\) also carries a \(T\)-stable affine paving, then \(i^{!}F\) is equivariantly formal.
**Remark 2.1.3**.: Let \(\mathrm{IC}_{\lambda}\) be equipped with its natural structure of \(T\)-equivariant mixed Hodge module (the theory of equivariant mixed Hodge modules is not very well documented, but see [1] for a treatment) on \(\mathrm{Gr}_{G}\), for which it is pure of weight \(0\). Note that \(i^{!}_{I}\mathrm{IC}_{\lambda}\) is not necessarily pure. Indeed, \(i^{!}_{I}\mathrm{IC}_{\lambda}\) is usually not a semisimple complex (which implies that it is not pure, by the BBD decomposition theorem). Therefore, the equivariant formality of Proposition 2.1.1 really requires the costalk parity considerations used in the proof and does not follow from the proof of [21, Theorem 14.1(7)]. On the other hand, the same argument works without change in settings which lack a theory of weights but in which strong parity vanishing results hold; for example, on the quaternionic affine Grassmannian of [12].
Recall (see [28, Section 3]) that \(\mathrm{Gr}_{G}\) admits the Iwasawa decomposition
\[\mathrm{Gr}_{G}=\coprod_{\nu\in\Lambda}S_{\nu},\]
where \(S_{\nu}=U(\mathcal{K})\cdot t^{\nu}\) is the orbit of the group ind-scheme \(U(\mathcal{K})\) through the point \(t^{\nu}\in\mathrm{Gr}_{G}\). Following [15], we write \(L^{-}B\) for the ind-affine group ind-scheme representing the functor \(R\mapsto L^{-}B(R)=B(R[t^{-1}])\) and \(L^{--}B\subseteq L^{-}B\) for the kernel of the reduction morphism \(L^{-}B\twoheadrightarrow B\) given by \(t^{-1}\mapsto 0\). Then, by the discussion in [3] surrounding Equation 1.2.6, there is a \(T\)-equivariant isomorphism
\[L^{--}B\simeq S_{\nu} \tag{2.1.4}\]
given on \(\mathbb{C}\)-points by \(g\mapsto t^{\nu}gG(\mathcal{O})\in\mathrm{Gr}_{G}\). Here, \(T\) acts on \(L^{--}B\) by conjugation. In the proof of Proposition 2.1.5, we write \(V=V_{I}\) for the unipotent radical of \(P=P_{I}\) to ease notation.
**Proposition 2.1.5**.: _For \(x\in\mathrm{Gr}_{G}\), let \(T_{x}\subseteq T\) denote the stabilizer of \(x\) in \(T\). Assume that \(\alpha|_{T_{x}}:T_{x}\to\mathbb{G}_{m}\) is a dominant morphism for every \(\alpha\in\Phi_{G}^{+}\setminus\Phi_{L}^{+}\). Then, \(x\in\mathrm{Gr}_{L}\)._
Proof.: The point \(x\) belongs to the semi-infinite orbit \(S_{\nu}\) for a unique \(\nu\in\Lambda\). We will use the \(T\)-equivariant isomorphism \(L^{--}B\simeq S_{\nu}\) of Equation 2.1.4. Let \(B^{\prime}=B\cap L\). We also have a \(T\)-equivariant isomorphism of schemes \(V\times B^{\prime}\simeq B\) (\((v,b^{\prime})\mapsto vb^{\prime}\)), which induces a \(T\)-equivariant isomorphism \(L^{--}V\times L^{--}B^{\prime}\simeq L^{--}B\). Since \(B^{\prime}\) is a Borel subgroup of \(L\), we also obtain a \(T\)-equivariant isomorphism \(L^{--}B^{\prime}\simeq S_{\nu}^{\prime}\), where \(S_{\nu}^{\prime}\) is the semi-infinite orbit in \(\mathrm{Gr}_{L}\) through \(t^{\nu}\). Hence, there is a \(T\)-equivariant isomorphism \(S_{\nu}\simeq L^{--}V\times S_{\nu}^{\prime}\). Let \(\pi_{1}:S_{\nu}\to L^{--}V\) denote the first projection. To show that \(x\in\mathrm{Gr}_{L}\), it suffices to show that \(\pi_{1}(x)\) is the identity element \(e\in L^{--}V\). Since \(\pi_{1}\) is \(T\)-equivariant, the subgroup \(T_{x}\subseteq T\) also fixes \(\pi_{1}(x)\). Furthermore, we have a \(T\)-equivariant isomorphism of ind-affine ind-schemes
\[L^{--}V\simeq\prod_{\alpha\in\Phi_{G}^{+}\setminus\Phi_{L}^{+}}L^{--}U_{\alpha},\]
where \(U_{\alpha}\subseteq U\) is the root subgroup corresponding to the root \(\alpha\). Let \(\pi_{\alpha}\) denote the projection onto the factor indexed by \(\alpha\). We have an isomorphism \(L^{--}U_{\alpha}\simeq L^{--}\mathbb{A}^{1}\) under which the
adjoint action of \(T\) on \(L^{--}U_{\alpha}\) is induced by the homomorphism \(\alpha:T\to\mathbb{G}_{m}\) and the natural scaling action of \(\mathbb{G}_{m}\) on \(L^{--}\mathbb{A}^{1}\). Since \(\alpha|_{T_{x}}\) dominates \(\mathbb{G}_{m}\), we deduce that \((L^{--}\mathbb{A}^{1})^{T_{x}}=(L^{--}\mathbb{A}^{1})^{\mathbb{G}_{m}}=\{0\}\). Therefore, \(\pi_{\alpha}(\pi_{1}(x))=\pi_{\alpha}(e)\in L^{--}U_{\alpha}\) for each \(\alpha\in\Phi_{G}^{+}\setminus\Phi_{L}^{+}\), which implies that \(\pi_{1}(x)=e\), as claimed.
We will now apply the localization theorem in \(T\)-equivariant cohomology. We again refer the reader to [21] for a general treatment as well as [35, Theorem A.1.13] for the specific statement that we use. Recall that we have a canonical graded ring isomorphism
\[H_{T}^{*}(\operatorname{pt},\mathbb{C})\simeq\operatorname{Sym}(\mathfrak{t} ^{*})\]
obtained (for example) by identifying \(H_{T}^{*}(\operatorname{pt},\mathbb{C})\) with the cohomology \(H^{*}(BT,\mathbb{C})\) of the (topological) classifying space of \(T\). Here, the elements of \(\mathfrak{t}^{*}\) are placed in graded degree \(2\).
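For instance, if \(T\simeq\mathbb{G}_{m}^{n}\), then a choice of basis of the character lattice identifies
\[H_{T}^{*}(\operatorname{pt},\mathbb{C})\simeq\mathbb{C}[x_{1},\dots,x_{n}],\qquad\deg x_{i}=2.\]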
Let \(i_{T}:\operatorname{Gr}_{T}\hookrightarrow\operatorname{Gr}_{G}\) denote the inclusion of the Grassmannian of \(T\). We use the same symbol to denote the inclusion \(\operatorname{Gr}_{T}\hookrightarrow\operatorname{Gr}_{I}\).
**Proposition 2.1.6**.: _(a) Let_
\[f_{I}=\prod_{\alpha\in\Phi_{G}^{+}\setminus\Phi_{L}^{+}}\alpha\in\operatorname{Sym}\mathfrak{t}^{*}=H_{T}^{*}(\operatorname{pt}).\]
_The natural map_
\[H_{T}^{*}(\operatorname{Gr}_{I},i_{I}^{!}\mathrm{IC}_{\lambda})\to H_{T}^{*}( \operatorname{Gr}_{G},\mathrm{IC}_{\lambda}) \tag{2.1.7}\]
_is an injective map of \(H_{T}^{*}(\operatorname{pt},\mathbb{C})\)-modules, and becomes an isomorphism after inverting \(f_{I}\)._
_(b) Let_
\[g_{I}=\prod_{\alpha\in\Phi_{L}^{+}}\alpha\in\operatorname{Sym}\mathfrak{t}^{*}=H_{T}^{*}(\operatorname{pt}).\]
_The natural map_
\[H_{T}^{*}(\operatorname{Gr}_{T},i_{T}^{!}\mathrm{IC}_{\lambda})\simeq H_{T}^{ *}(\operatorname{Gr}_{T},i_{T}^{!}i_{I}^{!}\mathrm{IC}_{\lambda})\to H_{T}^{*} (\operatorname{Gr}_{I},i_{I}^{!}\mathrm{IC}_{\lambda})\]
_is an injective map of \(H_{T}^{*}(\operatorname{pt},\mathbb{C})\)-modules, and becomes an isomorphism after inverting \(g_{I}\)._
Proof.: (a) Let \(Z=\operatorname{Gr}_{G}^{\leq\lambda}\), \(Z^{\prime}=Z\cap\operatorname{Gr}_{L}\), and \(U=Z\setminus Z^{\prime}\). By Proposition 2.1.5, \(f_{I}\) vanishes on \(\operatorname{Lie}(T_{x})\subseteq\mathfrak{t}\) for any point \(x\in U\subseteq\operatorname{Gr}_{G}\setminus\operatorname{Gr}_{L}\) (at least one of the roots \(\alpha\in\Phi_{G}^{+}\setminus\Phi_{L}^{+}\) is trivial on \(T_{x}\) since \(x\not\in\operatorname{Gr}_{I}\)). Hence, by the equivariant localization theorem (applied to the projective \(T\)-scheme \(Z\), the \(T\)-stable closed subscheme \(Z^{\prime}\), and the complex \(\mathrm{IC}_{\lambda}\in D_{T}(Z)\)), say in the form of [35, Theorem A.1.13], the morphism 2.1.7 becomes an isomorphism after inverting \(f_{I}\). By Proposition 2.1.1, \(H_{T}^{*}(\operatorname{Gr}_{I},i_{I}^{!}\mathrm{IC}_{\lambda})\) is a free \(H_{T}^{*}(\operatorname{pt},\mathbb{C})\)-module, so the injectivity of the map 2.1.7 follows. The proof of (b) is similar, with the pair \((G,L)\) replaced by \((L,T)\).
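To illustrate the localizing elements, suppose for example that \(G=\operatorname{SL}_{3}\) with simple roots \(\mathring{\Delta}=\{\alpha_{1},\alpha_{2}\}\) and \(I=\{\alpha_{1}\}\), so that \(\Phi_{G}^{+}=\{\alpha_{1},\alpha_{2},\alpha_{1}+\alpha_{2}\}\) and \(\Phi_{L}^{+}=\{\alpha_{1}\}\). Then
\[f_{I}=\alpha_{2}\cdot(\alpha_{1}+\alpha_{2}),\qquad g_{I}=\alpha_{1}.\]
In the extreme case \(I=\emptyset\), so that \(L_{I}=T\), the element \(g_{I}\) is the empty product \(1\), consistent with the fact that the map in (b) is then an isomorphism before any localization.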
**Remark 2.1.8**.: Suppose that the Levi subgroup \(L=L_{I}\) is replaced by an arbitrary connected reductive subgroup \(K\subseteq G\). Let \(i:\operatorname{Gr}_{K}\hookrightarrow\operatorname{Gr}_{G}\) denote the induced closed immersion.
If \(K\) contains a maximal torus \(T\) of \(G\), then the arguments of this section carry over to show that the natural map
\[H^{*}_{T}(\mathrm{Gr}_{K},i^{!}\mathrm{IC}_{\lambda})\to H^{*}_{T}(\mathrm{Gr}_{G },\mathrm{IC}_{\lambda})\]
is injective and becomes an isomorphism after inverting a suitable element of \(H^{*}_{T}(\mathrm{pt},\mathbb{C})\). However, without an analysis of the \(T\)-stabilizers of points \(x\in\mathrm{Gr}_{G}\setminus\mathrm{Gr}_{K}\) analogous to Proposition 2.1.5, it is not clear _which_ element of \(H^{*}_{T}(\mathrm{pt},\mathbb{C})\) must be inverted.
**Remark 2.1.9**.: Proposition 2.1.6 can be understood in geometric terms as follows. The morphisms
\[H^{*}_{T}(\mathrm{Gr}_{T},i^{!}_{T}\mathrm{IC}_{\lambda})\to H^{*}_{T}( \mathrm{Gr}_{I},i^{!}_{I}\mathrm{IC}_{\lambda})\to H^{*}_{T}(\mathrm{Gr}_{G}, \mathrm{IC}_{\lambda})\]
of free \(H^{*}_{T}(\mathrm{pt},\mathbb{C})\simeq\mathrm{Sym}(\mathfrak{t}^{*})\)-modules can be viewed as morphisms of (trivial) vector bundles over the affine space \(\mathrm{Spec}\,\mathrm{Sym}(\mathfrak{t}^{*})=\mathfrak{t}\). Then, the second morphism restricts to an isomorphism _away from_ the walls of the root system of \(\check{G}\) which are not walls of the root system of \(\check{L}_{I}\). On the other hand, the first morphism restricts to an isomorphism away from the walls of the root system of \(\check{L}_{I}\).
### Monoidal structure on corestriction
We will write \(L=L_{I}\) and \(i=i_{I}\) for now. We emphasize that the isomorphism of Theorem 1.5.2 is an isomorphism of _algebras_ over the ring \(R_{L}:=H^{*}_{L}(\mathrm{pt},\mathbb{C})\). However, our proof of Theorem 1.5.2 constructs this isomorphism of \(R_{L}\)-_modules_ without any appeal to the existence of an algebra structure on the left hand side. The existence of a ring structure on \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathcal{F}_{\mathrm{reg}})\) appeared to us as an unexpected consequence of Theorem 1.5.2, since the spectral side \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) is naturally a ring under the pointwise multiplication of regular functions.
The purpose of this section is therefore to equip \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathcal{F}_{\mathrm{reg}})\) with an \(R_{L}\)-algebra structure defined entirely on the geometric side. In the proof of Theorem 1.5.2, we will show that our identification \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathcal{F}_{\mathrm{reg}})\simeq \mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) is in fact an isomorphism of \(R_{L}\)-_algebras_.
We note that the existence of an algebra structure on \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}\mathcal{F}_{\mathrm{reg}})\) is asserted in [11, Section 5(xi)]. However, we are unaware of a complete construction, so we will provide two in this section (and we will use both). The reader may wish to skim §2.6 to see how the results of this section fit together before reading further.
**Remark 2.2.1**.: Consider the case \(L=G\). The \(R_{G}:=H^{*}_{G}(\mathrm{pt})\)-module \(H^{*}_{G(\mathcal{O})}(\mathcal{F}_{\mathrm{reg}})\) carries an evident \(R_{G}\)-algebra structure which was extensively exploited in [4]. It also plays a prominent role in the construction of Coulomb branches in [11]. Under the geometric Satake equivalence \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\simeq\mathrm{Rep}(\check{G})\), \(\mathcal{F}_{\mathrm{reg}}\) corresponds to the left regular representation \(\mathcal{O}(\check{G})\). The pointwise multiplication of regular functions equips \(\mathcal{O}(\check{G})\) with the structure of a _ring object_ in the monoidal category \(\mathrm{Rep}(\check{G})\). Passing back along geometric Satake to perverse sheaves, we find that \(\mathcal{F}_{\mathrm{reg}}\) carries a natural ring structure in the monoidal category \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\).
Recall that the fiber functor \(H^{*}:\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to\mathrm{Mod}(\mathbb{C})\) carries a natural monoidal structure [28, Lemma 6.1]. This construction of Mirkovic-Vilonen can be adapted (as in [34, Section 2.3] and [34, Remark 2.5]) to produce a monoidal structure on the equivariant cohomology functor \(H^{*}_{G(\mathcal{O})}:\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to\mathrm{ Mod}(R_{G})\). Since a monoidal functor takes ring objects to ring objects, we find a ring structure on \(H^{*}_{G(\mathcal{O})}(\mathrm{Gr}_{G},\mathcal{F}_{\mathrm{reg}})\in\mathrm{ Mod}(R_{G})\).
**Remark 2.2.2**.: We will generalize Remark 2.2.1 by equipping the object \(i^{!}\mathcal{F}_{\mathrm{reg}}\in D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) with a ring structure. However, we emphasize that the functor \(i^{!}:D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L(\mathcal{O})}(\mathrm{Gr}_{ L})\) is _not_ monoidal. For example, consider the case \(L=T\). The functor
\[i^{!}_{0}:D_{T(\mathcal{O})}(\mathrm{Gr}_{T})\to\mathrm{Mod}(R_{T})\]
of costalk at the basepoint \(t^{0}\in\mathrm{Gr}_{T}\) is in fact monoidal. However, the composition \(i^{!}_{0}\circ i^{!}\simeq\mathrm{Hom}_{T(\mathcal{O})}(\mathrm{IC}_{0},-)\) does not admit a monoidal structure: its restriction to \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\simeq\mathrm{Rep}(\check{G})\) is naturally isomorphic to the functor \(V\mapsto(V\otimes\mathrm{Sym}(\check{\mathfrak{g}}/\check{\mathfrak{u}}))^{\check{B}}\) (by the main result of [20], recalled above as Theorem 1.4.4). In particular, \(i^{!}\) does not admit a monoidal structure.
Therefore, we will only aim to establish the weaker (but sufficient) result that \(i^{!}\) admits a natural _lax_ monoidal structure. This will suffice: lax monoidal functors carry algebras to algebras, so \(i^{!}\mathcal{F}_{\mathrm{reg}}\) will acquire a ring structure in \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\).
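To spell out the mechanism: if \(F\) is a lax monoidal functor with structure maps \(F(A)\star F(B)\to F(A\star B)\) and \((R,m)\) is an algebra in the source category, then \(F(R)\) is an algebra with multiplication given by the composite
\[F(R)\star F(R)\longrightarrow F(R\star R)\xrightarrow{F(m)}F(R);\]
the associativity of this multiplication is precisely the compatibility of the lax structure with the associativity constraints, which for \(i^{!}\) is the (dual of the) content of Proposition 2.2.17 below.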
**Remark 2.2.3**.: It is surely possible to give a more elegant construction of the lax monoidal structure on \(i^{!}\) (even at the "cochain level") by appealing to a suitable theory of sheaves and correspondences, such as that of [26, Section A.5]. However, as our Construction 2.2.14 makes essential use of non-invertible natural transformations (specifically, base change morphisms arising from non-Cartesian commutative squares), the kind of formalism described in [26] is not sufficient. It would be necessary to use a theory like that of [17]. However, as we do not wish to make our relatively pedestrian computations dependent on the (only partially documented) machinery of \((\infty,2)\)-category theory, we will proceed directly.
**Remark 2.2.4**.: In the remainder of this section, we will allow the Levi subgroup \(L\subseteq G\) to be replaced by an arbitrary connected reductive subgroup of \(G\). However, our only use case in this paper is when \(L\) is a Levi subgroup.
**Remark 2.2.5**.: Verdier duality defines an equivalence of categories
\[\mathbb{D}_{G}:D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\simeq D_{G(\mathcal{O})}( \mathrm{Gr}_{G})^{\mathrm{op}}.\]
For objects \(A,B\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\), there is moreover a natural isomorphism
\[\mathbb{D}_{G}(A\star B)\simeq\mathbb{D}_{G}(A)\star\mathbb{D}_{G}(B).\]
See [2, Lemma 7.2.8] for the analogous fact about the convolution of \(B\)-equivariant sheaves on the flag variety \(G/B\); the proof in our setting is identical. We also have a natural isomorphism \(\mathbb{D}_{L}\circ i^{!}\simeq i^{*}\circ\mathbb{D}_{G}\). Therefore, the problem of equipping \(i^{!}\) with a lax monoidal structure is formally equivalent to that of equipping \(i^{*}\) with a _colax_ monoidal structure (and that is what we will do).
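Concretely, given the colax structure maps \(\theta_{A,B}:i^{*}(A\star B)\to i^{*}A\star i^{*}B\) constructed below, the corresponding lax structure maps on \(i^{!}\) are the composites
\[i^{!}A\star i^{!}B\simeq\mathbb{D}_{L}\big{(}i^{*}\mathbb{D}_{G}A\star i^{*}\mathbb{D}_{G}B\big{)}\xrightarrow{\mathbb{D}_{L}(\theta_{\mathbb{D}_{G}A,\mathbb{D}_{G}B})}\mathbb{D}_{L}i^{*}\mathbb{D}_{G}(A\star B)\simeq i^{!}(A\star B),\]
where the outer identifications use the isomorphisms \(\mathbb{D}(-\star-)\simeq\mathbb{D}(-)\star\mathbb{D}(-)\), \(\mathbb{D}_{L}\circ i^{!}\simeq i^{*}\circ\mathbb{D}_{G}\), and the biduality \(\mathbb{D}\circ\mathbb{D}\simeq\operatorname{id}\).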
**Remark 2.2.6**.: In order to fix notation, we recall the definition of the convolution product on \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) (from [28, Section 4]), together with its associativity constraint (described briefly in [28, Proposition 4.6]). Let \(\operatorname{Gr}_{G}^{(2)}=G(\mathcal{K})\times^{G(\mathcal{O})}\operatorname{ Gr}_{G}=:\operatorname{Gr}_{G}\tilde{\times}\operatorname{Gr}_{G}\) denote the convolution Grassmannian. Let \(m:\operatorname{Gr}_{G}^{(2)}\to\operatorname{Gr}_{G}\) denote the multiplication map. We denote by \(-\widetilde{\boxtimes}-:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\times D_{G (\mathcal{O})}(\operatorname{Gr}_{G})\to D_{G(\mathcal{O})}(\operatorname{Gr }_{G}^{(2)})\) the twisted external product.
Similarly, we let
\[\operatorname{Gr}_{G}^{(3)}=G(\mathcal{K})\times^{G(\mathcal{O})}G(\mathcal{ K})\times^{G(\mathcal{O})}\operatorname{Gr}_{G}=:\operatorname{Gr}_{G}\tilde{\times} \operatorname{Gr}_{G}\tilde{\times}\operatorname{Gr}_{G}\]
denote the threefold convolution Grassmannian. Let \(m^{23}:\operatorname{Gr}_{G}^{(3)}\to\operatorname{Gr}_{G}^{(2)}\) denote the action map (given by \(m^{23}(g_{1},g_{2},g_{3}G(\mathcal{O}))=(g_{1},g_{2}g_{3}G(\mathcal{O}))\)), and let \(m^{12}:\operatorname{Gr}_{G}^{(3)}\to\operatorname{Gr}_{G}^{(2)}\) denote the multiplication of the first two factors (given by \(m^{12}(g_{1},g_{2},g_{3}G(\mathcal{O}))=(g_{1}g_{2},g_{3}G(\mathcal{O}))\)). We also have the twisted external product functors:
\[-\widetilde{\boxtimes}- :D_{G(\mathcal{O})}(\operatorname{Gr}_{G}^{(2)})\times D_{G( \mathcal{O})}(\operatorname{Gr}_{G})\to D_{G(\mathcal{O})}(\operatorname{Gr }_{G}^{(3)})\] \[-\widetilde{\boxtimes}- :D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\times D_{G(\mathcal{O })}(\operatorname{Gr}_{G}^{(2)})\to D_{G(\mathcal{O})}(\operatorname{Gr}_{G} ^{(3)})\]
Finally, there is a triple multiplication map
\[m^{123}:\operatorname{Gr}_{G}^{(3)}\to\operatorname{Gr}_{G}\]
given by \(m^{123}(g_{1},g_{2},g_{3}G(\mathcal{O}))=g_{1}g_{2}g_{3}G(\mathcal{O})\), and a triple external product
\[-\widetilde{\boxtimes}-\widetilde{\boxtimes}-:D_{G(\mathcal{O})}(\operatorname {Gr}_{G})\times D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\times D_{G( \mathcal{O})}(\operatorname{Gr}_{G})\to D_{G(\mathcal{O})}(\operatorname{Gr }_{G}^{(3)}).\]
The convolution product of \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) is defined by
\[A\star B:=m_{*}(A\widetilde{\boxtimes}B).\]
The triple convolution product of \(A,B,C\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) is similarly defined by
\[A\star B\star C=m_{*}^{123}(A\widetilde{\boxtimes}B\widetilde{\boxtimes}C).\]
The associativity constraint \(\alpha_{A,B,C}\) is a trifunctorial isomorphism
\[\alpha_{A,B,C}:(A\star B)\star C\simeq A\star(B\star C) \tag{2.2.7}\]
defined as a composition \(\alpha_{A,B,C}=\gamma_{A,B,C}^{-1}\circ\beta_{A,B,C}\), where \(\beta_{A,B,C}\) is a natural isomorphism
\[\beta_{A,B,C}:(A\star B)\star C\simeq A\star B\star C \tag{2.2.8}\]
and \(\gamma_{A,B,C}\) is a natural isomorphism
\[\gamma_{A,B,C}:A\star(B\star C)\simeq A\star B\star C.\]
We spell out the definition of \(\beta_{A,B,C}\) (that of \(\gamma_{A,B,C}\) is analogous). We have \(m^{123}=m\circ m^{12}\). Hence, we have a compositional isomorphism \(m_{*}^{123}\simeq m_{*}\circ m_{*}^{12}\). It defines an isomorphism
\[\operatorname{comp}_{A,B,C}^{12}:m_{*}m_{*}^{12}(A\widetilde{\boxtimes}B \widetilde{\boxtimes}C)\simeq m_{*}^{123}(A\widetilde{\boxtimes}B\widetilde{ \boxtimes}C)=A\star B\star C.\]
The next ingredient is the associativity constraint on the twisted external product
\[\delta_{A,B,C}:(A\,\widetilde{\boxtimes}\,B)\,\widetilde{\boxtimes}\,C\simeq A \,\widetilde{\boxtimes}\,B\,\widetilde{\boxtimes}\,C.\]
Applying \(m_{*}^{12}\) yields an isomorphism
\[m_{*}^{12}(\delta_{A,B,C}):m_{*}^{12}((A\,\widetilde{\boxtimes}\,B)\, \widetilde{\boxtimes}\,C)\simeq m_{*}^{12}(A\,\widetilde{\boxtimes}\,B\, \widetilde{\boxtimes}\,C).\]
For any objects \(E\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G}^{(2)})\), \(F\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\), we have an isomorphism
\[\epsilon_{E,F}:m_{*}E\,\widetilde{\boxtimes}\,F\simeq m_{*}^{12}(E\, \widetilde{\boxtimes}\,F). \tag{2.2.9}\]
By definition, \(\epsilon_{E,F}\) is obtained by adjunction from the composition of the evident natural maps
\[m^{12,*}(m_{*}E\,\widetilde{\boxtimes}\,F)\xrightarrow{\xi_{E,F}^{12}}m^{*}m_ {*}E\,\widetilde{\boxtimes}\,F\xrightarrow{\eta_{E}\,\widetilde{\boxtimes}\, 1}E\,\widetilde{\boxtimes}\,F.\]
We may now take \(E=A\,\widetilde{\boxtimes}\,B\) and \(F=C\) to obtain the isomorphism
\[\epsilon_{A,B,C}:=\epsilon_{A\,\widetilde{\boxtimes}\,B,C}:m_{*}(A\, \widetilde{\boxtimes}\,B)\,\widetilde{\boxtimes}\,C\simeq m_{*}^{12}((A\, \widetilde{\boxtimes}\,B)\,\widetilde{\boxtimes}\,C).\]
Let
\[\sigma_{A,B,C}=m_{*}(m_{*}^{12}(\delta_{A,B,C})\circ\epsilon_{A,B,C}). \tag{2.2.10}\]
The isomorphism \(\beta_{A,B,C}\) is now defined to be the composition
\[\beta_{A,B,C}=\operatorname{comp}_{A,B,C}^{12}\circ\sigma_{A,B,C}.\]
When dealing with both groups \(G\) and \(L\), we will generally use the same notation for the analogous maps and isomorphisms.
**Remark 2.2.11**.: In addition to the morphism \(i:\operatorname{Gr}_{L}\to\operatorname{Gr}_{G}\), we obtain morphisms \(i_{2}:\operatorname{Gr}_{L}^{(2)}\to\operatorname{Gr}_{G}^{(2)}\) and \(i_{3}:\operatorname{Gr}_{L}^{(3)}\to\operatorname{Gr}_{G}^{(3)}\). We have the identities \(m\circ i_{2}=i\circ m\), \(m^{12}\circ i_{3}=i_{2}\circ m^{12}\), \(m^{23}\circ i_{3}=i_{2}\circ m^{23}\), and \(m^{123}\circ i_{3}=i\circ m^{123}\). We also have evident natural isomorphisms
\[\omega_{A,B}:i_{2}^{*}(A\,\widetilde{\boxtimes}\,B)\simeq i^{*}A\,\widetilde{\boxtimes}\,i^{*}B \tag{2.2.12}\]
\[\omega_{A,B,C}:i_{3}^{*}(A\,\widetilde{\boxtimes}\,B\,\widetilde{\boxtimes}\,C)\simeq i^{*}A\,\widetilde{\boxtimes}\,i^{*}B\,\widetilde{\boxtimes}\,i^{*}C. \tag{2.2.13}\]
**Construction 2.2.14**.: We can now define the (non-unital part of the) colax monoidal structure on \(i^{*}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}( \operatorname{Gr}_{L})\). For objects \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\), we must define a bifunctorial map
\[\theta_{A,B}:i^{*}(A\star B)\to i^{*}A\star i^{*}B.\]
By definition, \(A\star B=m_{*}(A\,\widetilde{\boxtimes}\,B)\). Since \(m\circ i_{2}=i\circ m\), we deduce the existence of a base change morphism \(\rho:i^{*}m_{*}\to m_{*}i_{2}^{*}\). Hence, we have a natural map
\[\rho_{A,B}:=\rho_{A\,\widetilde{\boxtimes}\,B}:i^{*}m_{*}(A\,\widetilde{\boxtimes}\,B)\to m_{*}i_{2}^{*}(A\,\widetilde{\boxtimes}\,B). \tag{2.2.15}\]
We also have the natural isomorphism \(\omega_{A,B}:i_{2}^{*}(A\,\widetilde{\boxtimes}\,B)\simeq i^{*}A\,\widetilde{ \boxtimes}\,i^{*}B\). Thus, we can define
\[\theta_{A,B}=m_{*}(\omega_{A,B})\circ\rho_{A,B}. \tag{2.2.16}\]
The proof of the following proposition is the price that we pay for Remark 2.2.3, i.e. our choice to construct \(\theta_{A,B}\) directly without the use of higher-categorical machinery that would presumably encode the commutativity of the diagrams below implicitly. The reader is encouraged to skip it on a first reading.
**Proposition 2.2.17**.: _The natural transformation \(\theta:i^{*}(-\star-)\to i^{*}(-)\star i^{*}(-)\) of functors \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\times D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) of Construction 2.2.14 is compatible with the associativity constraints of Remark 2.2.6 underlying the monoidal categories \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) and \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\). That is, for objects \(A,B,C\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\), the following diagram in \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) commutes:_
\[\begin{CD}
i^{*}((A\star B)\star C) @>{\theta_{A\star B,C}}>> i^{*}(A\star B)\star i^{*}C @>{\theta_{A,B}\star 1}>> (i^{*}A\star i^{*}B)\star i^{*}C\\
@V{i^{*}(\alpha_{A,B,C})}VV @. @VV{\alpha_{i^{*}A,i^{*}B,i^{*}C}}V\\
i^{*}(A\star(B\star C)) @>{\theta_{A,B\star C}}>> i^{*}A\star i^{*}(B\star C) @>{1\star\theta_{B,C}}>> i^{*}A\star(i^{*}B\star i^{*}C)
\end{CD}\]
Proof.: We expand the diagram horizontally, using the definition (2.2.7) of the associativity isomorphism \(\alpha\):
It suffices to show that both of the small rectangles in this diagram commute. We will show that the one on the left commutes; the proof that the one on the right commutes is completely analogous. We now expand the diagram on the left, using the definitions (2.2.16)
and (2.2.8) of \(\theta\) and \(\beta\), respectively:
Here, we have introduced the base change maps \(\rho^{12,3}:i_{2}^{*}m_{*}^{12}\to m_{*}^{12}i_{3}^{*}\) and \(\rho^{123}:i^{*}m_{*}^{123}\to m_{*}^{123}i_{3}^{*}\) deduced from the equalities \(i_{2}m^{12}=m^{12}i_{3}\) and \(im^{123}=m^{123}i_{3}\), respectively. The upper left rectangle commutes by the naturality of the base change transformation \(\rho:i^{*}m_{*}\to m_{*}i_{2}^{*}\) of (2.2.15) applied to the isomorphism \(\sigma_{A,B,C}\). The upper right rectangle commutes by the compatibility of compositional isomorphisms with base change (see Proposition 2.11.7 of [2] and note that the Cartesian hypothesis is superfluous). The lower right rectangle commutes by the naturality of the compositional isomorphism \(\operatorname{comp}^{12}:m_{*}m_{*}^{12}\simeq m_{*}^{123}\) applied to the isomorphism \(\omega_{A,B,C}\).
It therefore suffices to show that the lower left rectangle commutes. It is obtained by applying \(m_{*}\) to the following diagram:
To show that the above diagram commutes, we expand it horizontally using the definition (2.2.10) of \(\sigma\) to obtain:
The upper right rectangle commutes by the naturality of the base change map \(\rho^{12,3}:i_{2}^{*}m_{*}^{12}\to m_{*}^{12}i_{3}^{*}\) applied to the isomorphism \(\delta_{A,B,C}\). The lower left rectangle commutes by the naturality of the isomorphism \(\epsilon_{-,i^{*}C}:m_{*}(-)\,\widetilde{\boxtimes}\,i^{*}C\simeq m_{*}^{12}(- \widetilde{\boxtimes}\,i^{*}C)\) of (2.2.9) applied to the isomorphism \(\omega_{A,B}:i_{2}^{*}(A\,\widetilde{\boxtimes}\,B)\simeq i_{2}^{*}A\, \widetilde{\boxtimes}\,i_{2}^{*}B\) of (2.2.12). The lower right rectangle commutes because it is obtained by applying the functor \(m_{*}^{12}\) to the commutative diagram:
It remains to show that the upper left rectangle (\(\star\)) commutes, which follows from the more general assertion that for \(E\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G}^{(2)})\), \(F\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\), the following diagram commutes:
By adjunction (and the definition (2.2.9) of \(\epsilon\), which is used below to expand the bottom row horizontally), it suffices to show that the outer rectangle in the following diagram commutes (note that the middle vertical arrow comp refers to the compositional isomorphism \(m^{12,*}i_{2}^{*}\simeq i_{3}^{*}m^{12,*}\) evaluated on \(i_{2}^{*}m_{*}^{12}(E\widetilde{\boxtimes}\,F)\)):
The upper right rectangle commutes by the naturality of the isomorphism \(\operatorname{comp}:m^{12,*}i_{2}^{*}\simeq i_{3}^{*}m^{12,*}\). The rectangle below it commutes by (\(i_{3}^{*}\) applied to) the definition (2.2.9) of \(\epsilon_{E,F}\).
Hence, it suffices to show that the outer rectangle in the following diagram commutes:
The lower left rectangle commutes by the naturality of the isomorphism \(\xi^{12}:m^{12,*}(-\widetilde{\boxtimes}-)\simeq m^{*}(-)\,\widetilde{ \boxtimes}\,-\) applied to the map \(\rho_{E}^{12}\,\widetilde{\boxtimes}\,1\). The lower right rectangle is obtained by applying the functor \(-\,\widetilde{\boxtimes}\,i^{*}F\) to the diagram
This square commutes by the very definition of the base change map \(\rho_{E}^{12}:i_{2}^{*}m_{*}E\to m_{*}i_{2}^{*}E\). We must verify the commutativity of the upper rectangle (\(\dagger\)). The claim is that for \(X,Y\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\), the following diagram commutes.
Checking the commutativity of this diagram is left to the reader (reduce to verifying the commutativity of the corresponding diagram with untwisted external products, which follows from a straightforward compatibility between the isomorphism \(f^{*}(-\otimes-)\simeq f^{*}(-)\otimes f^{*}(-)\) and the compositional isomorphisms \((gf)^{*}\simeq f^{*}g^{*}\)).
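For the reader's convenience, the untwisted compatibility invoked at the end of the preceding proof is the commutativity of the square
\[\begin{CD}
(gf)^{*}(X\otimes Y) @>{\sim}>> (gf)^{*}X\otimes(gf)^{*}Y\\
@V{\mathrm{comp}}VV @VV{\mathrm{comp}\otimes\mathrm{comp}}V\\
f^{*}g^{*}(X\otimes Y) @>{\sim}>> f^{*}g^{*}X\otimes f^{*}g^{*}Y
\end{CD}\]
for composable morphisms \(f\) and \(g\), which is a standard property of the monoidal structure on \(*\)-pullback.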
Next, we turn to the following problem. Suppose that \(M\subseteq L\) is a connected reductive subgroup of \(L\) (the reader should keep in mind the case in which \(M\subseteq L\) is an inclusion of Levi subgroups of \(G\); for instance, when \(M=T\) is a maximal torus). Let \(j:\operatorname{Gr}_{M}\hookrightarrow\operatorname{Gr}_{L}\) denote the induced map on affine Grassmannians. We also have the composition \(k=i\circ j:\operatorname{Gr}_{M}\hookrightarrow\operatorname{Gr}_{G}\). In Construction 2.2.14, we equipped the functors \(i^{*}\), \(j^{*}\), and \(k^{*}\) with (non-unital, for now) colax monoidal structures. On the other hand, we have a canonical isomorphism \(k^{*}\simeq j^{*}\circ i^{*}\). We would like to show that this natural isomorphism is an isomorphism of (non-unital) colax monoidal functors.
**Proposition 2.2.18**.: _Let \(j:\operatorname{Gr}_{M}\hookrightarrow\operatorname{Gr}_{L}\), \(i:\operatorname{Gr}_{L}\hookrightarrow\operatorname{Gr}_{G}\), and \(k=i\circ j:\operatorname{Gr}_{M}\hookrightarrow\operatorname{Gr}_{G}\) be as above. Equip the functors \(i^{*}\), \(j^{*}\), and \(k^{*}\) with the (non-unital) colax monoidal structures of Construction 2.2.14. Then, the compositional isomorphism \(k^{*}\simeq j^{*}\circ i^{*}\) is an isomorphism of (non-unital) colax monoidal functors. More precisely, let \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). Then, the following diagram in \(D_{M(\mathcal{O})}(\operatorname{Gr}_{M})\) commutes:_
\[\begin{CD}
k^{*}(A\star B) @>{\mathrm{comp}}>> j^{*}i^{*}(A\star B) @>{j^{*}(\theta_{A,B})}>> j^{*}(i^{*}A\star i^{*}B)\\
@V{\theta_{A,B}}VV @. @VV{\theta_{i^{*}A,i^{*}B}}V\\
k^{*}A\star k^{*}B @= k^{*}A\star k^{*}B @>{\mathrm{comp}\star\mathrm{comp}}>> j^{*}i^{*}A\star j^{*}i^{*}B
\end{CD} \tag{2.2.19}\]
Proof.: We will notationally suppress all composition isomorphisms in the following argument (there are several that appear, but there is never ambiguity about which is meant). We can more explicitly express the above diagram (2.2.19) as follows.
(2.2.20)
The following diagram commutes by the compatibility of composition with base change (Proposition 2.11.7 of [2]).
(2.2.21)
The following diagram commutes because it is obtained by applying the functor \(m_{*}\) to the diagram which witnesses the compatibility of \(\omega_{A,B}\) with composition (see the last step in the
proof of Proposition 2.2.17).
\[\begin{CD}
m_{*}j_{2}^{*}i_{2}^{*}(A\,\widetilde{\boxtimes}\,B) @>{m_{*}\omega_{A,B}}>> m_{*}j_{2}^{*}(i^{*}A\,\widetilde{\boxtimes}\,i^{*}B) @>{m_{*}\omega_{i^{*}A,i^{*}B}}>> m_{*}(j^{*}i^{*}A\,\widetilde{\boxtimes}\,j^{*}i^{*}B)\\
@V{\mathrm{comp}}VV @. @VV{\mathrm{comp}}V\\
m_{*}k_{2}^{*}(A\,\widetilde{\boxtimes}\,B) @>{m_{*}\omega_{A,B}}>> m_{*}(k^{*}A\,\widetilde{\boxtimes}\,k^{*}B) @= m_{*}(k^{*}A\,\widetilde{\boxtimes}\,k^{*}B)
\end{CD} \tag{2.2.22}\]
Pasting diagrams (2.2.21) and (2.2.22) together horizontally yields a commutative rectangle in which the left vertical, right vertical, and bottom horizontal arrows are equal to those of the original diagram (2.2.20). It therefore suffices to show that the top horizontal arrows coincide. Since the first and last morphisms in these four-fold compositions are equal, the claim amounts to verifying the commutativity of the following diagram.
This diagram commutes by the naturality of the base change map \(j^{*}m_{*}\to m_{*}j_{2}^{*}\) applied to the isomorphism \(i_{2}^{*}(A\,\widetilde{\boxtimes}\,B)\simeq i^{*}A\,\widetilde{\boxtimes}\, i^{*}B\).
**Remark 2.2.23**.: Recall (for example, from [24, Tag 00CC]) that to give a monoidal structure on a category \(\mathcal{C}\), it suffices to give an associative product \(\star\) on \(\mathcal{C}\) together with a _unit object_ of \(\mathcal{C}\) (with respect to \(\star\)), that is, an object \(1_{\mathcal{C}}\in\mathcal{C}\) equipped with an isomorphism
\[1_{\mathcal{C}}\star 1_{\mathcal{C}}\simeq 1_{\mathcal{C}}.\]
We use this simplification below when discussing the unit constraint on the derived Satake category.
**Remark 2.2.24**.: We recall the definition of the unit constraint on \((D_{G(\mathcal{O})}(\operatorname{Gr}_{G}),\star)\) (see [2, Lemma 9.2.2]). Let \(\operatorname{IC}_{0}=\underline{\mathbb{C}}_{\operatorname{Gr}_{G}^{0}}\) denote the \(G(\mathcal{O})\)-equivariant skyscraper at the basepoint \(t^{0}\in\operatorname{Gr}_{G}\) and let \(j:\operatorname{pt}\hookrightarrow\operatorname{Gr}_{G}\) denote the closed inclusion of \(t^{0}\). Let \(j_{2}:\operatorname{pt}\hookrightarrow\operatorname{Gr}_{G}\,\tilde{\times}\operatorname{Gr}_{G}\) denote the inclusion of the point \(\operatorname{Gr}_{G}^{0}\,\tilde{\times}\operatorname{Gr}_{G}^{0}\). Firstly, there is an isomorphism
\[\xi:\operatorname{IC}_{0}\,\widetilde{\boxtimes}\operatorname{IC}_{0}=j_{*} \underline{\mathbb{C}}_{\operatorname{pt}}\,\widetilde{\boxtimes}\,j_{*} \underline{\mathbb{C}}_{\operatorname{pt}}\simeq(j_{2})_{*}(\underline{ \mathbb{C}}_{\operatorname{pt}}\boxtimes\underline{\mathbb{C}}_{\operatorname {pt}})\simeq(j_{2})_{*}\underline{\mathbb{C}}_{\operatorname{pt}}.\]
Since \(m\circ j_{2}=j\), we have an isomorphism \((mj_{2})_{*}\simeq m_{*}(j_{2})_{*}\). Applying \(m_{*}\) to \(\xi\) therefore yields an isomorphism
\[\eta:\operatorname{IC}_{0}\star\operatorname{IC}_{0}\simeq m_{*}(j_{2})_{*} \underline{\mathbb{C}}_{\operatorname{pt}}\simeq j_{*}\underline{\mathbb{C}}_ {\operatorname{pt}}=\operatorname{IC}_{0}.\]
The isomorphism \(\eta\) is the unit constraint.
**Construction 2.2.25**.: Let \(\mathrm{IC}^{\prime}_{0}\in D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) denote the monoidal unit and \(j^{\prime}:\mathrm{pt}\hookrightarrow\mathrm{Gr}_{L}\) the inclusion of the basepoint. We have an isomorphism \(\chi:i^{*}\mathrm{IC}_{0}\xrightarrow{\sim}\mathrm{IC}^{\prime}_{0}\), which can be precisely defined as the composition
\[\chi:i^{*}\mathrm{IC}_{0}=i^{*}j_{*}\underline{\mathbb{C}}_{\mathrm{pt}}\simeq i ^{*}i_{*}j_{*}^{\prime}\underline{\mathbb{C}}_{\mathrm{pt}}\xrightarrow{\sim} j_{*}^{\prime}\underline{\mathbb{C}}_{\mathrm{pt}}=\mathrm{IC}^{\prime}_{0}.\]
**Proposition 2.2.26**.: _The natural transformation \(\theta:i^{*}(-\star-)\to i^{*}(-)\star i^{*}(-)\) of functors \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\times D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) defined in Construction 2.2.14 is compatible with the unit constraints underlying the monoidal categories \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) and \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\). More precisely, the following diagram in \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) commutes, where \(\eta^{\prime}\) denotes the unit constraint of \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\):_
\[\begin{CD}
i^{*}(\mathrm{IC}_{0}\star\mathrm{IC}_{0}) @>{\theta_{\mathrm{IC}_{0},\mathrm{IC}_{0}}}>> i^{*}\mathrm{IC}_{0}\star i^{*}\mathrm{IC}_{0} @>{\chi\star\chi}>> \mathrm{IC}^{\prime}_{0}\star\mathrm{IC}^{\prime}_{0}\\
@V{i^{*}(\eta)}VV @. @VV{\eta^{\prime}}V\\
i^{*}\mathrm{IC}_{0} @= i^{*}\mathrm{IC}_{0} @>{\chi}>> \mathrm{IC}^{\prime}_{0}
\end{CD} \tag{2.2.27}\]
Proof.: It is at least obvious that all of the terms in (2.2.27) are isomorphic and supported at \(t^{0}\in\mathrm{Gr}_{L}\). Hence, it suffices to prove the commutativity of the diagram after applying the functor \(j^{*}\), where \(j:\mathrm{pt}\to\mathrm{Gr}_{L}\) denotes the inclusion of the basepoint. Let \(k=ji:\mathrm{pt}\to\mathrm{Gr}_{G}\) denote the composition. The diagram \(j^{*}(2.2.27)\) can be embedded into the following diagram:
Here, the isomorphism \(\chi^{\prime}:j^{*}\mathrm{IC}^{\prime}_{0}\to\underline{\mathbb{C}}_{\mathrm{pt}}\) (respectively, \(\chi^{\prime\prime}:k^{*}\mathrm{IC}_{0}\to\underline{\mathbb{C}}_{\mathrm{pt}}\)) is obtained from Construction 2.2.25 after replacing the pair \(L\subseteq G\) by the pair \(\mathrm{pt}\subseteq L\) (respectively, \(\mathrm{pt}\subseteq G\)). Moreover, the unlabelled map \(j^{*}\mathrm{IC}^{\prime}_{0}\star j^{*}\mathrm{IC}^{\prime}_{0}\to\underline{\mathbb{C}}_{\mathrm{pt}}\) is defined as the composition \(j^{*}\mathrm{IC}^{\prime}_{0}\star j^{*}\mathrm{IC}^{\prime}_{0}\to\underline{\mathbb{C}}_{\mathrm{pt}}\star\underline{\mathbb{C}}_{\mathrm{pt}}\simeq\underline{\mathbb{C}}_{\mathrm{pt}}\) (and is included only for formatting purposes).
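Here, the "compatibility of \(\chi\) with composition" invoked below is the commutativity of the square
\[\begin{CD}
j^{*}i^{*}\mathrm{IC}_{0} @>{j^{*}(\chi)}>> j^{*}\mathrm{IC}^{\prime}_{0}\\
@V{\mathrm{comp}}VV @VV{\chi^{\prime}}V\\
k^{*}\mathrm{IC}_{0} @>{\chi^{\prime\prime}}>> \underline{\mathbb{C}}_{\mathrm{pt}},
\end{CD}\]
which follows by unwinding Construction 2.2.25 and the compatibility of the counit isomorphisms with composition.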
The commutativity of the top rectangle follows from the naturality of the isomorphism \(\mathrm{comp}:j^{*}i^{*}\simeq k^{*}\) applied to the morphism \(\eta\). The commutativity of the upper left small rectangle follows from the naturality of the morphism \(\theta:j^{*}(-\star-)\to j^{*}(-)\star j^{*}(-)\) applied
to \(\chi\star\chi\). The commutativity of the upper right small rectangle will follow from the case \(L=\operatorname{pt}\). The commutativity of the lower right small rectangle is trivial. The lower left small rectangle commutes by the compatibility of \(\chi\) with composition. The commutativity of the large outer rectangle will follow from the case \(L=\operatorname{pt}\). These observations imply the commutativity of the second rectangle from the top (that is, the diagram \(j^{*}(2.2.27)\)).
We may therefore assume that \(L=\operatorname{pt}\) is the trivial subgroup. In this case, the claim is obvious.
**Corollary 2.2.28**.:
1. _The constructions of_ 2.2.14 _and_ 2.2.25 _equip the functor_ \(i^{*}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}( \operatorname{Gr}_{L})\) _with a colax monoidal structure._
2. _The constructions of_ 2.2.5_,_ 2.2.14_, and_ 2.2.25 _equip the functor_ \(i^{!}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}( \operatorname{Gr}_{L})\) _with a lax monoidal structure._
### Fusion
In this section, we will give an alternative description of our lax monoidal structure on \(i^{!}\). We recall some definitions and constructions from [16, Section 2.1.2], to which we refer the reader for more details. Let \(\operatorname{Aut}\) denote the automorphism group scheme of the \(\mathbb{C}\)-algebra \(\mathcal{O}\). It is a pro-algebraic group and acts naturally on the affine Grassmannian \(\operatorname{Gr}_{G}\) by loop rotation. Let \(X=\mathbb{A}^{1}\) and let \(\tilde{X}\to X\) denote the canonical \(\operatorname{Aut}\)-torsor over \(X\) (whose fiber over \(x\in X\) is the \(\operatorname{Aut}\)-torsor parameterizing uniformizers of the completed local ring \(\widehat{\mathcal{O}}_{X,x}\)). Recall that the _Beilinson-Drinfeld Grassmannian_ is the ind-scheme over \(X\) given by the twisted product
\[\operatorname{Gr}_{G,X}=\tilde{X}\times^{\operatorname{Aut}}\operatorname{Gr} _{G}\to X.\]
For any affine \(X\)-scheme \(x:S\to X\), the set \(\operatorname{Gr}_{G,X}(S)\) can be identified canonically with the set of isomorphism classes of pairs \((\mathcal{P},\sigma)\), where \(\mathcal{P}\) is a \(G\)-torsor over \(X_{S}=X\times S\) and \(\sigma\) is a trivialization of \(\mathcal{P}\) over the subscheme \(X_{S}\setminus\Gamma_{x}\), the complement of the graph \(\Gamma_{x}:S\to X_{S}\) (Lemma 3 of [16, Section 2.1.2]). The group scheme \(\operatorname{Aut}\) acts through group automorphisms on the arc group \(G(\mathcal{O})\). Therefore, we can form the group scheme \(\mathcal{G}_{X}\) over \(X\) given by
\[\mathcal{G}_{X}:=\tilde{X}\times^{\operatorname{Aut}}G(\mathcal{O})\to X.\]
Since the action \(G(\mathcal{O})\times\operatorname{Gr}_{G}\to\operatorname{Gr}_{G}\) is \(\operatorname{Aut}\)-equivariant, we obtain an action of the group \(X\)-scheme \(\mathcal{G}_{X}\) on the \(X\)-ind-scheme \(\operatorname{Gr}_{G,X}\). The twisted external product defines a functor
\[p_{2}^{\dagger}:=\underline{\mathbb{C}}_{\tilde{X}}[1]\,\widetilde{\boxtimes}-:D_{G(\mathcal{O})\rtimes\operatorname{Aut}}(\operatorname{Gr}_{G})\to D_{\mathcal{G}_{X}}(\operatorname{Gr}_{G,X}).\]
See [3, Section 2.4.1] for a brief discussion of the definition of the above \(\mathcal{G}_{X}\)-equivariant derived category, and [3, Chapter 10] for a thorough treatment. On the other hand, we have Gaitsgory's degeneration \(\pi:\operatorname{Gr}_{G,X}^{\prime}\to X\) from [16, Section 3.1.1]. It is the ind-scheme over \(X\) whose points over an affine \(X\)-scheme \(x:S\to X\) are given by isomorphism classes of pairs \((\mathcal{P},\sigma)\), where \(\mathcal{P}\) is a \(G\)-torsor over \(X_{S}\) and \(\sigma\) is a trivialization of \(\mathcal{P}\) over the subscheme \(X_{S}\setminus(\Gamma_{x}\cup\Gamma_{0})\), where \(\Gamma_{0}\) denotes the graph of the map \(S\to\operatorname{Spec}\mathbb{C}\xrightarrow{0}\mathbb{A}^{1}=X\). Note
that \(\mathcal{G}_{X}\) acts on \(\mathrm{Gr}^{\prime}_{G,X}\) through modification of the trivialization \(\sigma\). There are canonical isomorphisms
\[\mathrm{Gr}^{\prime}_{G,0}:=\mathrm{Gr}^{\prime}_{G,X}|_{\{0\}} \simeq\mathrm{Gr}_{G}\] \[\mathrm{Gr}^{\prime}_{G,X-0}:=\mathrm{Gr}^{\prime}_{G,X}|_{X \setminus\{0\}}\simeq\mathrm{Gr}_{G}\times\mathrm{Gr}_{G,X}|_{X\setminus\{0\}} =\mathrm{Gr}_{G}\times\mathrm{Gr}_{G,X-0}.\]
Hence, we obtain a functor
\[-\boxtimes p_{2}^{\dagger}(-)|_{\mathrm{Gr}_{G,X-0}}:D_{G(\mathcal{O})}( \mathrm{Gr}_{G})\times D_{G(\mathcal{O})\rtimes\mathrm{Aut}}(\mathrm{Gr}_{G}) \to D_{\mathcal{G}_{X-0}}(\mathrm{Gr}^{\prime}_{G,X-0}).\]
Here, we write \(\mathcal{G}_{X-0}\) for the restriction of \(\mathcal{G}_{X}\) to the open subscheme \(X\setminus\{0\}\). Finally, let \(\Psi:D_{\mathcal{G}_{X-0}}(\mathrm{Gr}^{\prime}_{G,X-0})\to D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) denote the functor of nearby cycles in the family \(\pi\) (we are implicitly using the canonical identification of group schemes \(\mathcal{G}_{X}|_{\{0\}}\simeq G(\mathcal{O})\) provided by the uniformizer \(t\in\widehat{\mathcal{O}}_{X,0}=\mathbb{C}[[t]]\) here). Following Gaitsgory, we define
\[C(A,B):=\Psi(A\boxtimes p_{2}^{\dagger}B|_{\mathrm{Gr}_{G,X-0}}). \tag{2.3.1}\]
Because the forgetful functor \(D_{G(\mathcal{O})\rtimes\mathrm{Aut}}(\mathrm{Gr}_{G})\to D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) is an equivalence of categories (it is fully faithful because \(\mathrm{Aut}\) is pro-unipotent, and essentially surjective because its image contains \(\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) by [29, Proposition 3.2.2], which generates \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) as a triangulated category), we may regard \(C(-,-)\) as a bifunctor
\[C(-,-):D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\times D_{G(\mathcal{O})}(\mathrm{Gr }_{G})\to D_{G(\mathcal{O})}(\mathrm{Gr}_{G}).\]
As usual, we will use the same notation for the corresponding objects defined using the subgroup \(L\). We have closed immersions
\[i_{X}:\mathrm{Gr}_{L,X}=\tilde{X}\times^{\mathrm{Aut}}\mathrm{Gr }_{L}\xrightarrow{\mathrm{id}_{X}\,\widetilde{\boxtimes}\,i}\tilde{X}\times^{ \mathrm{Aut}}\mathrm{Gr}_{G}=\mathrm{Gr}_{G,X}\] \[i^{\prime}_{X}:\mathrm{Gr}^{\prime}_{L,X}\to\mathrm{Gr}^{\prime}_ {G,X}\]
globalizing the \(\mathrm{Aut}\)-equivariant closed immersion \(i:\mathrm{Gr}_{L}\hookrightarrow\mathrm{Gr}_{G}\). The morphism \(i^{\prime}_{X}\) takes a point \((\mathcal{P},\sigma)\) of \(\mathrm{Gr}^{\prime}_{L,X}\) over an affine \(X\)-scheme \(x:S\to X\) to the point \((\mathcal{P}^{\prime},\sigma^{\prime})\in\mathrm{Gr}^{\prime}_{G,X}(S)\) given by the induced \(G\)-torsor \(\mathcal{P}^{\prime}=\mathcal{P}\times^{L}G\) and the trivialization \(\sigma^{\prime}\) of \(\mathcal{P}^{\prime}\) over \(X_{S}\setminus(\Gamma_{x}\cup\Gamma_{0})\) induced by \(\sigma\). Let \(i_{X-0}\), \(i^{\prime}_{X-0}\) denote their restrictions to \(X\setminus\{0\}\).
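We note that, under the identifications \(\mathrm{Gr}^{\prime}_{G,0}\simeq\mathrm{Gr}_{G}\) and \(\mathrm{Gr}^{\prime}_{G,X-0}\simeq\mathrm{Gr}_{G}\times\mathrm{Gr}_{G,X-0}\) above (and their analogues for \(L\)), the fiber of \(i^{\prime}_{X}\) over \(0\in X\) identifies with \(i:\mathrm{Gr}_{L}\hookrightarrow\mathrm{Gr}_{G}\), while \(i^{\prime}_{X-0}\) identifies with \(i\times i_{X-0}\); the isomorphism \(\Omega_{A,B}\) of Construction 2.3.2 below implicitly uses the latter identification.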
**Construction 2.3.2**.: Let \(A,B\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\). We will define a natural map
\[\Theta_{A,B}:i^{*}C(A,B)\to C(i^{*}A,i^{*}B)\]
in \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\). We have a natural isomorphism (witnessing the monoidality of \(*\)-pullback with respect to the tensor product)
\[\Omega_{A,B}:(i^{\prime}_{X-0})^{*}(A\boxtimes p_{2}^{\dagger}B|_{\mathrm{Gr} _{G,X-0}})\xrightarrow{\Omega_{1}}i^{*}A\boxtimes i^{*}p_{2}^{\dagger}B|_{ \mathrm{Gr}_{G,X-0}}\xrightarrow{1\boxtimes\Omega_{2}}i^{*}A\boxtimes p_{2} ^{\dagger}(i^{*}B)|_{\mathrm{Gr}_{G,X-0}}. \tag{2.3.3}\]
We have also used the compositional isomorphism \(i^{*}_{2}p_{2}^{*}\simeq p_{2}^{*}i^{*}\) to commute \(i^{*}\) with \(p_{2}^{\dagger}\). There is also a natural transformation [2, Lemma 4.4.8]
\[\zeta:i^{*}\circ\Psi\to\Psi\circ(i^{\prime}_{X-0})^{*}. \tag{2.3.4}\]
Applying \(\Psi\) to the isomorphism \(\Omega_{A,B}\) and then precomposing with \(\zeta\) defines the natural transformation \(\Theta_{A,B}\). That is,
\[\Theta_{A,B}=\Psi(\Omega_{A,B})\circ\zeta_{A\boxtimes p_{2}^{\dagger}B|_{\operatorname {Gr}_{G},X-0}}.\]
The following fundamental result of Gaitsgory relates the bifunctor \(C(-,-)\) to the convolution product.
**Theorem 2.3.5** (Gaitsgory [16, Proposition 6]).: _Let \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). There is a natural isomorphism in \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\)_
\[\varpi_{A,B}:C(A,B)\simeq A\star B.\]
**Remark 2.3.6**.: Gaitsgory's result only requires \(G(\mathcal{O})\)-equivariance on one of the factors; the other need only belong to the constructible derived category \(D(\operatorname{Gr}_{G})\) (with the obvious caveat that \(\varpi_{A,B}\) is in that case only an isomorphism of non-equivariant complexes). However, we will only need the version stated above.
**Construction 2.3.7**.: We need to unpack the construction of the isomorphism \(\varpi_{A,B}\) from Gaitsgory's proof of [16, Proposition 6]. Let \(\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\to X\) denote the ind-scheme over \(X\) whose set of points \(\widetilde{\operatorname{Gr}}^{\prime}_{G,X}(S)\) over an affine \(X\)-scheme \(x:S\to X\) is the set of isomorphism classes of tuples \((\mathcal{P}_{0},\mathcal{P}_{1},\eta_{0},\eta_{1})\), where \(\mathcal{P}_{0}\) and \(\mathcal{P}_{1}\) are \(G\)-torsors over \(X_{S}\), \(\eta_{0}\) is an isomorphism of \(G\)-torsors \(\eta_{0}:\mathcal{P}_{0}|_{X_{S}\setminus\Gamma_{x}}\simeq\mathcal{P}_{1}|_{X _{S}\setminus\Gamma_{x}}\), and \(\eta_{1}\) is a trivialization of the \(G\)-torsor \(\mathcal{P}_{1}\) over \(X_{S}\setminus\Gamma_{0}\). We have a map \(p:\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\to\operatorname{Gr}_{G,X}\) of \(X\)-ind-schemes taking an \(S\)-point \((\mathcal{P}_{0},\mathcal{P}_{1},\eta_{0},\eta_{1})\in\widetilde{ \operatorname{Gr}}^{\prime}_{G,X}(S)\) to the \(S\)-point \((\mathcal{P}_{1},\eta_{1})\in\operatorname{Gr}_{G,X}(S)\) and a map \(\mu:\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\to\operatorname{Gr}^{ \prime}_{G,X}\) taking an \(S\)-point \((\mathcal{P}_{0},\mathcal{P}_{1},\eta_{0},\eta_{1})\in\widetilde{ \operatorname{Gr}}^{\prime}_{G,X}(S)\) to the \(S\)-point \((\mathcal{P}_{0},\eta_{0}\circ\eta_{1}|_{X_{S}\setminus(\Gamma_{x}\cup\Gamma_ {0})})\in\operatorname{Gr}^{\prime}_{G,X}(S)\).
On the other hand, consider the \(G(\mathcal{O})\)-torsor \(\widetilde{\operatorname{Gr}}_{G,X}\to\operatorname{Gr}_{G,X}\), defined by the fact that for any affine \(X\)-scheme \(x:S\to X\), \(\widetilde{\operatorname{Gr}}_{G,X}(S)\) is the set of isomorphism classes of tuples \((\mathcal{P},\eta,\gamma)\), where \((\mathcal{P},\eta)\) is an \(S\)-point of \(\operatorname{Gr}_{G,X}\) and \(\gamma\) is a trivialization of \(\mathcal{P}\) over the formal neighborhood \(\widehat{\Gamma}_{0}\) of \(\Gamma_{0}\subseteq X_{S}\). Then, there is an isomorphism
\[\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\simeq\operatorname{Gr}_{G} \times^{G(\mathcal{O})}\widetilde{\operatorname{Gr}}_{G,X}=:\operatorname{Gr }_{G}\tilde{\times}\operatorname{Gr}_{G,X}\]
of ind-schemes over \(X\), under which \(p\) corresponds to the natural projection to the second factor. This description allows us to consider the twisted external product
\[-\widetilde{\boxtimes}-:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\times D_{ \mathcal{G}_{X}}(\operatorname{Gr}_{G,X})\to D_{\mathcal{G}_{X}}(\widetilde{ \operatorname{Gr}}^{\prime}_{G,X}).\]
Let \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). Form the complex \(A\,\widetilde{\boxtimes}\,p_{2}^{\dagger}B\in D_{\mathcal{G}_{X}}(\widetilde{ \operatorname{Gr}}^{\prime}_{G,X})\). Then, there is a natural isomorphism [16, Lemma 5(b)]
\[\mu_{*}(A\,\widetilde{\boxtimes}\,p_{2}^{\dagger}B)|_{\operatorname{Gr}_{G,X-0} }\simeq A\boxtimes(p_{2}^{\dagger}B|_{\operatorname{Gr}_{G,X-0}}) \tag{2.3.8}\]
induced by the projection formula. Let \(\mu_{0}\) denote the fiber of \(\mu\) over \(0\in X\). There is a natural transformation [2, Lemma 4.8.8]
\[\Psi\mu_{*}\to\mu_{0*}\Psi, \tag{2.3.9}\]
which is an isomorphism by the ind-properness of \(\mu\) (note that the nearby cycles functor \(\Psi\) appearing in the domain of this natural transformation is that corresponding to the family \(\operatorname{Gr}^{\prime}_{G,X}\to X\) whereas that in the codomain is nearby cycles in the family \(\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\to X\)). Combining it with (2.3.8) yields an isomorphism
\[\mu_{0*}\Psi(A\,\widetilde{\boxtimes}\,p_{2}^{\dagger}B)\simeq C(A,B).\]
Since \(\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\to X\) can be trivialized by the choice of a global coordinate \(t\) on \(\mathbb{A}^{1}\), we also have a natural isomorphism
\[\Psi(A\,\widetilde{\boxtimes}\,p_{2}^{\dagger}B)\simeq(A\,\widetilde{ \boxtimes}\,p_{2}^{\dagger}B)|_{\operatorname{Gr}^{(2)}_{G}}\simeq A\, \widetilde{\boxtimes}\,B, \tag{2.3.10}\]
where \(\operatorname{Gr}^{(2)}_{G}\) is identified with the fiber of \(\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\to X\) over \(0\in X\). Since \(\mu_{0}=m\) is the convolution map, we obtain the desired isomorphism
\[\varpi_{A,B}:A\star B=m_{*}(A\,\widetilde{\boxtimes}\,B)\simeq\mu_{0*}\Psi(A \,\widetilde{\boxtimes}\,p_{2}^{\dagger}B)\simeq C(A,B).\]
**Construction 2.3.11**.: We can define a morphism
\[\tilde{i}^{\prime}_{X}:\widetilde{\operatorname{Gr}}^{\prime}_{L,X}\to \widetilde{\operatorname{Gr}}^{\prime}_{G,X}\]
by sending an \(S\)-point (for \(x:S\to X\) an affine \(X\)-scheme) of \(\widetilde{\operatorname{Gr}}^{\prime}_{L,X}\) given by a tuple \((\mathcal{P}_{0},\mathcal{P}_{1},\eta_{0},\eta_{1})\) to the \(S\)-point of \(\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\) given by \((\mathcal{P}^{\prime}_{0},\mathcal{P}^{\prime}_{1},\eta^{\prime}_{0},\eta^{\prime}_{1})\), where \(\mathcal{P}^{\prime}_{i}=\mathcal{P}_{i}\times^{L}G\) is the \(G\)-torsor associated to the \(L\)-torsor \(\mathcal{P}_{i}\), \(\eta^{\prime}_{0}\) is the isomorphism \(\mathcal{P}^{\prime}_{0}|_{X_{S}\setminus\Gamma_{x}}\simeq\mathcal{P}^{\prime}_{1}|_{X_{S}\setminus\Gamma_{x}}\) induced from \(\eta_{0}\), and \(\eta^{\prime}_{1}\) is the trivialization of \(\mathcal{P}^{\prime}_{1}\) away from \(\Gamma_{0}\) induced from the trivialization \(\eta_{1}\) of \(\mathcal{P}_{1}\) away from \(\Gamma_{0}\). Given \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\), we have a natural isomorphism
\[\widetilde{\Omega}_{A,B}:(\tilde{i}^{\prime}_{X})^{*}(A\,\widetilde{ \boxtimes}\,p_{2}^{\dagger}B)\simeq i^{*}A\,\widetilde{\boxtimes}\,p_{2}^{ \dagger}i^{*}B. \tag{2.3.12}\]
Note that the fiber of \(\tilde{i}^{\prime}_{X}\) over \(0\in X\) identifies with the map \(i_{2}:\operatorname{Gr}^{(2)}_{L}\to\operatorname{Gr}^{(2)}_{G}\). Hence, the construction of (2.3.4) defines a natural transformation
\[\zeta_{2}:i_{2}^{*}\Psi\to\Psi(\tilde{i}^{\prime}_{X})^{*}.\]
We can now formulate the global interpretation of our lax monoidal structure on \(i^{!}\) (equivalently, of the colax monoidal structure on \(i^{*}\)).
**Proposition 2.3.13**.: _Let \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). Then, the following diagram in \(D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\) commutes:_
\[\begin{CD}
i^{*}C(A,B) @>{\Theta_{A,B}}>> C(i^{*}A,i^{*}B)\\
@V{i^{*}(\varpi_{A,B})}VV @VV{\varpi_{i^{*}A,i^{*}B}}V\\
i^{*}(A\star B) @>{\theta_{A,B}}>> i^{*}A\star i^{*}B
\end{CD}\]
Proof.: We expand the diagram using the definition (2.3.1) of \(C(A,B)\) as well as the definitions 2.3.7, 2.3.2, and 2.2.16 of \(\varpi_{A,B}\), \(\Theta_{A,B}\), and \(\theta_{A,B}\), respectively. We will suppress many subscripts on our natural transformations (indicating the objects that they are evaluated on) to preserve readability.
The upper left rectangle commutes by the naturality of the map \(\zeta:i^{*}\Psi\to\Psi(i^{\prime}_{X-0})^{*}\) of (2.3.4) applied to the morphism (2.3.8). The middle right rectangle commutes by the naturality of the map \(\Psi\mu_{*}\to m_{*}\Psi\) of (2.3.9) applied to the isomorphism \(\Omega_{A,B}\) of (2.3.3). The lower left rectangle commutes by the naturality of the base change map \(\rho:i^{*}m_{*}\to m_{*}i_{2}^{*}\) of (2.2.15) applied to the isomorphism \(\Psi(A\,\widetilde{\boxtimes}\,p_{2}^{\dagger}B)\simeq A\,\widetilde{\boxtimes}\,B\) of (2.3.10). The lower right rectangle is obtained by applying the functor \(m_{*}\) to the diagram
The right rectangle commutes by the naturality of the isomorphism \(\Psi\simeq(-)\big{|}_{\operatorname{Gr}_{G}^{(2)}}\) of (2.3.10) applied to the isomorphism \(\widetilde{\Omega}_{A,B}\) of (2.3.12). Note that the isomorphism \(\Psi\simeq(-)\big{|}_{\operatorname{Gr}_{G}^{(2)}}\) of (2.3.10) is defined by applying the construction of (2.3.4) to the closed inclusion \(\operatorname{Gr}_{G}^{(2)}\hookrightarrow\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\) of the zero fiber. Hence, the commutativity of the left rectangle follows from the straightforward compatibility of the natural transformation \(g^{*}\Psi\to\Psi g^{*}\) of [2, Lemma 4.8.8] with the compositional isomorphisms for \(*\)-pullback.
We return to the original diagram and consider its upper right rectangle, which is obtained by applying \(\Psi\) to the diagram
\[\begin{CD}(i^{\prime}_{X-0})^{*}(A\boxtimes p_{2}^{\dagger}B|_{ \operatorname{Gr}_{G,X-0}})@>{\Omega_{A,B}}>{}>i^{*}A\boxtimes p_{2}^{\dagger} i^{*}B\\ (i^{\prime}_{X-0})^{*}\mu_{*}(A\widetilde{\boxtimes}p_{2}^{\dagger}B)@>{\rho^ {\prime}}>{}>\mu_{*}(\tilde{i}^{\prime}_{X})^{*}(A\widetilde{\boxtimes}p_{2}^ {\dagger}B)@>{\mu_{*}\widetilde{\Omega}}>{}>\mu_{*}(i^{*}A\boxtimes p_{2}^{ \dagger}i^{*}B).\end{CD}\]
The proof that this diagram commutes is sufficiently similar to the arguments in the proof of Proposition 2.2.17 that we omit it and leave it to the reader (use the adjunction \(\mu^{*}\dashv\mu_{*}\) and then unwind the definitions of (2.3.8) and the base change map \(\rho^{\prime}\) to reduce the commutativity of this diagram to an assertion about the compatibility of \(\Omega\) and \(\widetilde{\Omega}\) with the compositional isomorphisms for \(*\)-pullback).
The commutativity of the middle left rectangle of the original diagram follows from the more general assertion that for any object \(E\in D_{\mathcal{G}_{X}}(\widetilde{\operatorname{Gr}}^{\prime}_{G,X})\), the following diagram commutes in \(D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\):
\[\begin{CD}
i^{*}\Psi\mu_{*}E @>{\zeta_{\mu_{*}E}}>> \Psi(i^{\prime}_{X})^{*}\mu_{*}E @>{\Psi(\rho^{\prime})}>> \Psi\mu_{*}(\tilde{i}^{\prime}_{X})^{*}E\\
@V{i^{*}(2.3.9)}VV @. @VV{(2.3.9)}V\\
i^{*}m_{*}\Psi(E) @>{\rho}>> m_{*}i_{2}^{*}\Psi(E) @>{m_{*}\zeta_{2}}>> m_{*}\Psi(\tilde{i}^{\prime}_{X})^{*}E
\end{CD}\]
The commutativity of this diagram is equivalent, under the adjunction \(m^{*}\dashv m_{*}\), to the commutativity of the outer rectangle in the following diagram:
The upper right rectangle commutes by the naturality of the map \(\gamma:m^{*}\Psi\to\Psi\mu^{*}\) of [2, Lemma 4.8.8]. The rectangle below it commutes by the definition of the base change map \(\rho^{\prime}:(i^{\prime}_{X})^{*}\mu_{*}\to\mu_{*}(\tilde{i}^{\prime}_{X})^{*}\). The rectangle below that commutes by the naturality of the map \(\zeta_{2}:i^{*}_{2}\Psi\to\Psi(\tilde{i}^{\prime}_{X})^{*}\) applied to the counit map \(\mu^{*}\mu_{*}E\to E\). The commutativity of the bottom left triangle follows from the definition of the base change map \(\rho:i^{*}m_{*}\to m_{*}i^{*}_{2}\). Thus, we are reduced to showing that the following diagram commutes.
The left quadrilateral commutes by the naturality of the compositional isomorphism \(\operatorname{comp}:m^{*}i^{*}\simeq i^{*}_{2}m^{*}\) applied to the map \(\Psi\mu_{*}\to m_{*}\Psi\) of (2.3.9). The lower right quadrilateral is
obtained by applying \(i_{2}^{*}\) to the diagram
\[\begin{CD}
m^{*}\Psi\mu_{*}E @>{\gamma_{\mu_{*}E}}>> \Psi\mu^{*}\mu_{*}E\\
@V{m^{*}(2.3.9)}VV @VV{\Psi(\mathrm{counit})}V\\
m^{*}m_{*}\Psi(E) @>{\mathrm{counit}}>> \Psi(E).
\end{CD}\]
Both composite maps \(m^{*}\Psi\mu_{*}E\to\Psi(E)\) are left adjunct to the morphism \(\Psi\mu_{*}E\to m_{*}\Psi(E)\) of (2.3.9), hence are equal. Finally, the upper figure (\(\dagger\)) commutes by the more general assertion that for any \(F\in D_{\mathcal{G}_{X}}(\operatorname{Gr}_{G,X}^{\prime})\), the following diagram commutes:
\[\begin{CD}
m^{*}i^{*}\Psi(F) @>{m^{*}\zeta_{F}}>> m^{*}\Psi(i_{X}^{\prime})^{*}(F) @>{\gamma_{(i_{X}^{\prime})^{*}F}}>> \Psi\mu^{*}(i_{X}^{\prime})^{*}(F)\\
@V{\mathrm{comp}}VV @. @VV{\Psi(\mathrm{comp})}V\\
i_{2}^{*}m^{*}\Psi(F) @>{i_{2}^{*}\gamma_{F}}>> i_{2}^{*}\Psi\mu^{*}(F) @>{\zeta_{2}}>> \Psi(\tilde{i}_{X}^{\prime})^{*}\mu^{*}(F)
\end{CD}\]
This diagram expresses a straightforward compatibility between the natural transformation of [2, Lemma 4.8.8] with the compositional isomorphisms for \(*\)-pullback, whose proof we leave to the reader (since it is easy but requires unravelling the definition of the nearby cycles functor \(\Psi\), which we will avoid doing here).
### Parabolic restriction
We now return to the situation in which \(L\subseteq G\) is a Levi subgroup of \(G\). In [6, Section 5.3.28], Beilinson and Drinfeld define a natural operation taking complexes on \(\operatorname{Gr}_{G}\) to complexes on \(\operatorname{Gr}_{L}\) which we will refer to as _parabolic restriction_ (other possible names are the _constant term functor_ or the _Jacquet functor_). We also refer the reader to [5, Section 15] for a complete treatment. It is a functor
\[\operatorname{Res}_{L}^{G}:D(\operatorname{Gr}_{G})\to D(\operatorname{Gr}_{L})\]
defined through the following procedure. We identify \(L\) with the quotient \(P/V\) of \(P\) by its unipotent radical. We obtain a correspondence of ind-schemes (the induction diagram):
\[\operatorname{Gr}_{L}\xleftarrow{\;q\;}\operatorname{Gr}_{P}\xrightarrow{\;r\;}\operatorname{Gr}_{G}\]
The functor of (unnormalized) parabolic restriction is defined as the hyperbolic restriction
\[\operatorname{Res}_{L}^{G}=q_{*}\circ r^{!}:D(\operatorname{Gr}_{G})\to D( \operatorname{Gr}_{L}).\]
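For orientation, we record the extreme case \(L=T\), \(P=B\) (a standard example, included here purely as an illustration). The reduced ind-scheme underlying \(\operatorname{Gr}_{T}\) is the discrete set \(X_{*}(T)\), and the connected component \(\operatorname{Gr}_{B}^{\lambda}:=q^{-1}(\lambda)\) maps under \(r\) onto the semi-infinite orbit \(S_{\lambda}\subseteq\operatorname{Gr}_{G}\), a locally closed sub-ind-scheme. Hence, writing \(i_{S_{\lambda}}:S_{\lambda}\hookrightarrow\operatorname{Gr}_{G}\) for the inclusion,
\[\operatorname{Res}^{G}_{T}(A)_{\lambda}\simeq R\Gamma\big(S_{\lambda},\,i_{S_{\lambda}}^{!}A\big)=R\Gamma_{S_{\lambda}}(\operatorname{Gr}_{G},A)\qquad\text{for }A\in D(\operatorname{Gr}_{G}),\]
so that \(\operatorname{Res}^{G}_{T}\) packages the weight functors of Mirković–Vilonen [28].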
We will also need an equivariant version of this functor. We define it as the composition
\[\operatorname{Res}_{L}^{G}=q_{*}\circ r^{!}\circ\operatorname{For}_{L(\mathcal{ O})}^{G(\mathcal{O})}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L( \mathcal{O})}(\operatorname{Gr}_{L}), \tag{2.4.1}\]
where \(r^{!}\) denotes the \(L(\mathcal{O})\)-equivariant \(!\)-pullback functor and \(q_{*}\) denotes the \(L(\mathcal{O})\)-equivariant \(*\)-pushforward functor.
**Remark 2.4.2**.: In keeping with the notational convention of not distinguishing between the equivariant and non-equivariant versions of a sheaf functor, we continue to use the notation \(\operatorname{Res}^{G}_{L}\) for the equivariant (unnormalized) parabolic restriction.
**Remark 2.4.3**.: Let \(\operatorname{Gr}^{\chi}_{L}\subseteq\operatorname{Gr}_{L}\) denote a connected component of \(\operatorname{Gr}_{L}\) indexed by a character \(\chi\in\Lambda\) of \(Z(\check{L})\). According to [6, Proposition 5.3.29(1)] (see also [5, Lemma 15.1]), if \(F\in\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) is perverse, then the complex \(\operatorname{Res}^{G}_{L}(F)|_{\operatorname{Gr}^{\chi}_{L}}\) is concentrated in perverse degree \(\langle 2\rho_{G}-2\rho_{L},\chi\rangle\). It is therefore useful to introduce the _normalized_ parabolic restriction functor \(\operatorname{Res}^{G,\natural}_{L}\) defined by
\[\operatorname{Res}^{G,\natural}_{L}:=\bigoplus_{\chi\in\pi_{0}(\operatorname{ Gr}_{L})}\operatorname{Res}^{G}_{L}|_{\operatorname{Gr}^{\chi}_{L}}[- \langle 2\rho_{G}-2\rho_{L},\chi\rangle].\]
The functor \(\operatorname{Res}^{G,\natural}_{L}\) is now \(t\)-exact. Of course, these considerations apply verbatim to the equivariant version (2.4.1) of \(\operatorname{Res}^{G}_{L}\), and we let \(\operatorname{Res}^{G,\natural}_{L}\) also denote the normalized equivariant parabolic restriction functor.
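To make the normalization concrete, here is a standard minuscule computation (an illustration only): take \(G=\operatorname{GL}_{2}\), \(L=T\), and let \(F\) be the IC-sheaf of the minuscule orbit \(\operatorname{Gr}^{(1,0)}\simeq\mathbb{P}^{1}\), which corresponds under geometric Satake to the standard representation of \(\check{G}=\operatorname{GL}_{2}\). Only the components \(\chi=(1,0)\) and \(\chi=(0,1)\) of \(\operatorname{Gr}_{T}\) contribute, and since \(2\rho_{G}-2\rho_{T}=e_{1}-e_{2}\) we have
\[\langle 2\rho_{G}-2\rho_{T},(1,0)\rangle=1,\qquad\langle 2\rho_{G}-2\rho_{T},(0,1)\rangle=-1.\]
Thus \(\operatorname{Res}^{G}_{T}(F)\) sits in perverse degrees \(1\) and \(-1\) on the two components, while \(\operatorname{Res}^{G,\natural}_{T}(F)\) is a direct sum of two rank-one skyscrapers in perverse degree \(0\), matching the two weights of the standard representation.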
**Remark 2.4.4**.: Throughout this work, we have used the notation \(i^{!}\) to refer to the functor
\[i^{!}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}( \operatorname{Gr}_{G})\]
given by composing the forgetful functor \(\operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O})}:D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}(\operatorname{Gr}_{G})\) with the \(L(\mathcal{O})\)-equivariant \(!\)-pullback \(i^{!}:D_{L(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\). However, at some points in this section, it will be clearer to use the notation \(i^{!}\) only for the latter functor \(i^{!}:D_{L(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\) of \(L(\mathcal{O})\)-equivariant \(!\)-pullback and to make the forgetful functor \(\operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O})}\) explicit. In any case, context should make clear which sense we assign to \(i^{!}\).
**Construction 2.4.5**.: We can compare \(\operatorname{Res}^{G}_{L}\) to \(i^{!}\circ\operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O})}\) as follows. Let \(j:\operatorname{Gr}_{L}\hookrightarrow\operatorname{Gr}_{P}\) denote the closed inclusion. We have \(rj=i\). Hence, we have a natural compositional isomorphism
\[\operatorname{comp}:i^{!}\circ\operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O })}\simeq j^{!}\circ r^{!}\circ\operatorname{For}^{G(\mathcal{O})}_{L( \mathcal{O})}.\]
Since \(j\) is a closed immersion, we have a morphism \(\operatorname{counit}:j_{*}j^{!}\to\operatorname{id}\). Since \(qj=\operatorname{id}_{\operatorname{Gr}_{L}}\), we have an equality \((qj)_{*}=\operatorname{id}\). Thus, we obtain a natural transformation
\[\Delta:j^{!}=(qj)_{*}j^{!}\xrightarrow{\operatorname{comp}(j^{!})}q_{*}j_{*} j^{!}\xrightarrow{q_{*}(\operatorname{counit})}q_{*},\]
where \(\operatorname{comp}:(qj)_{*}\simeq q_{*}j_{*}\) is the compositional isomorphism. Therefore, we have constructed a natural transformation
\[\Xi:i^{!}\circ\operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O})} \xrightarrow{\operatorname{comp}}j^{!}\circ r^{!}\circ\operatorname{For}^{G( \mathcal{O})}_{L(\mathcal{O})}\xrightarrow{\Delta\left(r^{!}\circ\operatorname {For}^{G(\mathcal{O})}_{L(\mathcal{O})}\right)}q_{*}\circ r^{!}\circ \operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O})}=\operatorname{Res}^{G}_{L}. \tag{2.4.6}\]
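As an easy sanity check on this construction (a routine verification, not needed later), evaluate \(\Xi\) on the skyscraper \(\delta_{e}\) at the base point \(e\in\operatorname{Gr}_{L}\subseteq\operatorname{Gr}_{G}\). At the level of points \(r^{-1}(e)=\{e\}\), and one checks that \(r^{!}\delta_{e}\simeq\delta_{e}\); since this complex is already supported on the closed sub-ind-scheme \(\operatorname{Gr}_{L}\subseteq\operatorname{Gr}_{P}\), the counit \(j_{*}j^{!}r^{!}\delta_{e}\to r^{!}\delta_{e}\) is an isomorphism, and therefore
\[\Xi_{\delta_{e}}:i^{!}\delta_{e}\simeq\delta_{e}\xrightarrow{\ \sim\ }q_{*}r^{!}\delta_{e}=\operatorname{Res}^{G}_{L}(\delta_{e})\]
is an isomorphism. For general objects \(\Xi\) is far from invertible; its essential feature is the lax monoidal compatibility established in Proposition 2.4.20 below.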
**Construction 2.4.7**.: The natural transformation \(\Xi:i^{!}\circ\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}\to \operatorname{Res}_{L}^{G}\) is the primary means by which we will compare the direct \(!\)-restriction with the parabolic restriction. However, it will be convenient (for the proof of Proposition 2.4.20) to formulate a "dual" transformation
\[\Xi^{\vee}:q_{!}\circ r^{*}\circ\operatorname{For}_{L(\mathcal{O})}^{G( \mathcal{O})}\to i^{*}\circ\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O} )}.\]
We now mimic Construction 2.4.5. Namely, we start from the compositional isomorphism
\[\operatorname{comp}:j^{*}\circ r^{*}\circ\operatorname{For}_{L(\mathcal{O})}^ {G(\mathcal{O})}\simeq i^{*}\circ\operatorname{For}_{L(\mathcal{O})}^{G( \mathcal{O})}.\]
Since \(j\) is a closed immersion, we have that \(j_{!}=j_{*}\) and thus have a unit transformation \(\operatorname{unit}:\operatorname{id}\to j_{!}j^{*}\). We also have an equality \(\operatorname{id}=(qj)_{!}\). Therefore, we have a natural transformation
\[\Delta^{\vee}:q_{!}\xrightarrow{q_{!}\operatorname{unit}}q_{!}j_{!}j^{*} \xrightarrow{\operatorname{comp}(j^{*})}(qj)_{!}j^{*}=j^{*}, \tag{2.4.8}\]
where \(\operatorname{comp}:q_{!}j_{!}\simeq(qj)_{!}\) is the compositional isomorphism. Finally, we obtain the natural transformation
\[\Xi^{\vee}:q_{!}\circ r^{*}\circ\operatorname{For}_{L(\mathcal{O})}^{G( \mathcal{O})}\xrightarrow{\Delta^{\vee}\left(r^{*}\circ\operatorname{For}_{L( \mathcal{O})}^{G(\mathcal{O})}\right)}j^{*}\circ r^{*}\circ\operatorname{For}_ {L(\mathcal{O})}^{G(\mathcal{O})}\xrightarrow{\operatorname{comp}}i^{*}\circ \operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}. \tag{2.4.9}\]
The following lemma is an immediate consequence of our definition of \(\Xi^{\vee}\) (note the contravariance of \(\mathbb{D}\)).
**Lemma 2.4.10**.: _Let_
\[\Xi:i^{!}\circ\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}\to q_{*} \circ r^{!}\circ\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}\]
_and_
\[\Xi^{\vee}:q_{!}\circ r^{*}\circ\operatorname{For}_{L(\mathcal{O})}^{G( \mathcal{O})}\to i^{*}\circ\operatorname{For}_{L(\mathcal{O})}^{G( \mathcal{O})}.\]
_be defined as in Construction 2.4.5 and Construction 2.4.7, respectively. Then, we have a commutative diagram of natural transformations_
_Here, the vertical arrows are compositions of the standard isomorphisms \(i^{!}\circ\mathbb{D}\simeq\mathbb{D}\circ i^{*}\), \(q_{*}\circ\mathbb{D}\simeq\mathbb{D}\circ q_{!}\), \(r^{!}\circ\mathbb{D}\simeq\mathbb{D}\circ r^{*}\) (see [2, Corollary 3.9.10]), and the isomorphism \(\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}\circ\mathbb{D}\simeq\mathbb{D}\circ\operatorname{For}_{L(\mathcal{O})}^{G(\mathcal{O})}\) witnessing the compatibility of the forgetful functor with Verdier duality (see [8, Theorem 3.5.2(3)])._
**Construction 2.4.11**.: Beilinson and Drinfeld equip (the non-equivariant version of) \(\operatorname{Res}^{G}_{L}\) with a monoidal structure in [6, Section 5.3.30] through the use of the global fusion product. See [5, Proposition 15.2] for a detailed construction. We will give a slightly different (but conceptually identical) presentation of this monoidal structure (on the equivariant parabolic restriction functor \(\operatorname{Res}^{G}_{L}\)), using Gaitsgory's description (Theorem 2.3.5) of the convolution product on \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) through the nearby cycles functor instead of the fusion product of [28, Section 5]. First of all, we have an analog of the induction diagram for the twofold convolution Grassmannians:
\[\operatorname{Gr}^{(2)}_{L}\xleftarrow{\;q_{2}\;}\operatorname{Gr}^{(2)}_{P}\xrightarrow{\;r_{2}\;}\operatorname{Gr}^{(2)}_{G}\]
Here, \(q_{2}\) and \(r_{2}\) are constructed from \(q\) and \(r\), respectively, just as \(i_{2}:\operatorname{Gr}^{(2)}_{L}\to\operatorname{Gr}^{(2)}_{G}\) was constructed from the map \(i:\operatorname{Gr}_{L}\to\operatorname{Gr}_{G}\). That is, we have maps \(P(\mathcal{K})\hookrightarrow G(\mathcal{K})\) (resp. \(P(\mathcal{K})\twoheadrightarrow L(\mathcal{K})\)) and \(P(\mathcal{O})\hookrightarrow G(\mathcal{O})\) (resp. \(P(\mathcal{O})\twoheadrightarrow L(\mathcal{O})\)) which induce the morphism \(r:\operatorname{Gr}_{P}\to\operatorname{Gr}_{G}\) (resp. \(q:\operatorname{Gr}_{P}\to\operatorname{Gr}_{L}\)) on the quotient ind-schemes. We therefore obtain a map \(P(\mathcal{K})\times\operatorname{Gr}_{P}\to G(\mathcal{K})\times \operatorname{Gr}_{G}\) (resp. \(P(\mathcal{K})\times\operatorname{Gr}_{P}\to L(\mathcal{K})\times \operatorname{Gr}_{L}\)) which intertwines the diagonal action of \(P(\mathcal{O})\) on \(P(\mathcal{K})\times\operatorname{Gr}_{P}\) with the diagonal action of \(G(\mathcal{O})\) (resp. \(L(\mathcal{O})\)) on \(G(\mathcal{K})\times\operatorname{Gr}_{G}\) (resp. \(L(\mathcal{K})\times\operatorname{Gr}_{L}\)). Therefore, the map \(P(\mathcal{K})\times\operatorname{Gr}_{P}\to G(\mathcal{K})\times \operatorname{Gr}_{G}\) (resp. \(P(\mathcal{K})\times\operatorname{Gr}_{P}\to L(\mathcal{K})\times \operatorname{Gr}_{L}\)) descends to a morphism of ind-schemes
\[r_{2}:\operatorname{Gr}^{(2)}_{P}=\operatorname{Gr}_{P}\tilde{\times} \operatorname{Gr}_{P}\to\operatorname{Gr}_{G}\tilde{\times}\operatorname{Gr }_{G}=\operatorname{Gr}^{(2)}_{G}\]
(resp.
\[q_{2}:\operatorname{Gr}^{(2)}_{P}=\operatorname{Gr}_{P}\tilde{\times}\operatorname{Gr}_{P}\to\operatorname{Gr}_{L}\tilde{\times}\operatorname{Gr}_{L}=\operatorname{Gr}^{(2)}_{L}\big).\]
The ind-schemes and morphisms of §2.3 have direct analogs for the algebraic group \(P\) (of course, \(P\) is not reductive, but that is not essential for the properties of \(\operatorname{Gr}_{P}\) that we are discussing here). Thus, we introduce the \(X\)-ind-schemes \(\operatorname{Gr}_{P,X}\), \(\operatorname{Gr}^{\prime}_{P,X}\), \(\widetilde{\operatorname{Gr}}_{P,X}\) and \(\widetilde{\operatorname{Gr}}^{\prime}_{P,X}\) and the \(X\)-group scheme \(\mathcal{P}_{X}\). For each of these ind-schemes, we have an analog of the induction diagram in the category of ind-schemes over \(X\). In particular, we can define morphisms \(r_{X}:\operatorname{Gr}_{P,X}\to\operatorname{Gr}_{G,X}\) and \(q_{X}:\operatorname{Gr}_{P,X}\to\operatorname{Gr}_{L,X}\), as well as \(r^{\prime}_{X}:\operatorname{Gr}^{\prime}_{P,X}\to\operatorname{Gr}^{\prime}_{G,X}\), \(\widetilde{r}_{X}:\widetilde{\operatorname{Gr}}_{P,X}\to\widetilde{\operatorname{Gr}}_{G,X}\), \(\widetilde{r}^{\prime}_{X}:\widetilde{\operatorname{Gr}}^{\prime}_{P,X}\to\widetilde{\operatorname{Gr}}^{\prime}_{G,X}\) and \(q^{\prime}_{X}:\operatorname{Gr}^{\prime}_{P,X}\to\operatorname{Gr}^{\prime}_{L,X}\), \(\tilde{q}_{X}:\widetilde{\operatorname{Gr}}_{P,X}\to\widetilde{\operatorname{Gr}}_{L,X}\), \(\tilde{q}^{\prime}_{X}:\widetilde{\operatorname{Gr}}^{\prime}_{P,X}\to\widetilde{\operatorname{Gr}}^{\prime}_{L,X}\). We leave it to the reader to spell out the definitions of these morphisms by giving functorial maps on \(S\)-points (for any affine \(X\)-scheme \(x:S\to X\)).
We can also define the appropriate analogs of the parabolic restriction functor \(\operatorname{Res}^{G}_{L}=q_{*}\circ r^{!}\circ\operatorname{For}^{G(\mathcal{O})}_{L(\mathcal{O})}\) for each version of the Grassmannian. For example, the functor
\[\operatorname{Res}^{G}_{L,X}:D_{\mathcal{G}_{X}}(\operatorname{Gr}_{G,X})\to D _{\mathcal{L}_{X}}(\operatorname{Gr}_{L,X})\]
is defined by
\[\operatorname{Res}^{G}_{L,X}=(q_{X})_{*}\circ r^{!}_{X}\circ\operatorname{For}^{ \mathcal{G}_{X}}_{\mathcal{L}_{X}}.\]
Similarly, we have the functor \({}^{\prime}\operatorname{Res}^{G}_{L,X}:D_{\mathcal{G}_{X}}(\operatorname{Gr }^{\prime}_{G,X})\to D_{\mathcal{L}_{X}}(\operatorname{Gr}^{\prime}_{L,X})\). Following Construction 2.4.5, we can define natural transformations
\[\Xi_{X}:i^{!}_{X}\circ\operatorname{For}^{\mathcal{G}_{X}}_{\mathcal{L}_{X}} \to\operatorname{Res}^{G}_{L,X}\]
\[{}^{\prime}\Xi_{X}:(i^{\prime}_{X})^{!}\circ\operatorname{For}^{\mathcal{G}_{ X}}_{\mathcal{L}_{X}}\to{}^{\prime}\operatorname{Res}^{G}_{L,X}.\]
We can also follow Construction 2.4.7 to define the natural transformations \(\Xi_{X}^{\vee}\) and \({}^{\prime}\Xi_{X}^{\vee}\). We leave it to the reader to spell out the appropriate analogs of Lemma 2.4.10.
We will continue the notational laziness initiated in Remark 2.2.7 by using the same names for analogous maps and functors defined for the groups \(L\), \(P\), and \(G\) (for example, \(m:\operatorname{Gr}^{(2)}_{P}\to\operatorname{Gr}_{P}\) denotes the local convolution morphism, while \(\mu:\widetilde{\operatorname{Gr}^{\prime}_{P,X}}\to\operatorname{Gr}^{\prime} _{P,X}\) denotes Gaitsgory's global convolution map from Construction 2.3.7). Moreover, we replace the subscript \(X\) by \(X-0\) on an \(X\)-ind-scheme or a morphism of \(X\)-ind-schemes to indicate the restriction of this object to the open subscheme \(X\setminus\{0\}\subseteq X\). For example, we have the morphism \(r_{X-0}:=r_{X}\times_{X}\operatorname{id}_{X\setminus\{0\}}:\operatorname{ Gr}_{P,X-0}:=\operatorname{Gr}_{P,X}\times_{X}(X\setminus\{0\})\to \operatorname{Gr}_{G,X}\times_{X}(X\setminus\{0\})=:\operatorname{Gr}_{G,X-0}\) on the restrictions of these Beilinson-Drinfeld Grassmannians to \(X\setminus\{0\}\). Sometimes we opt to save space and use the subscript \(X\) on a morphism when we really should use \(X-0\), but we will only do so if it is unlikely to cause confusion.
Let \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). We form the complex \(A\boxtimes p^{!}_{2}B|_{\operatorname{Gr}_{G,X-0}}\in D_{\mathcal{G}_{X-0}}( \operatorname{Gr}^{\prime}_{G,X-0})\). We have a natural isomorphism in \(D_{\mathcal{L}_{X-0}}(\operatorname{Gr}^{\prime}_{L,X-0})\)
\[\Sigma:(q^{\prime}_{X-0})_{*}(r^{\prime}_{X-0})^{!}(A\boxtimes p^{!}_{2}B|_{\operatorname{Gr}_{G,X-0}}) \simeq q_{*}r^{!}A\boxtimes(q_{X-0})_{*}r^{!}_{X-0}\left(p^{!}_{2}B|_{\operatorname{Gr}_{G,X-0}}\right) \tag{2.4.12}\] \[\simeq q_{*}r^{!}A\boxtimes p^{!}_{2}(q_{*}r^{!}B)|_{\operatorname{Gr}_{L,X-0}}. \tag{2.4.13}\]
The first line is the composition of the isomorphisms of [2, Proposition 3.10.1] (to pass the \(*\)-pushforward into the external product) and [2, Proposition 3.10.6(3)] (to pass the \(!\)-pullback into the external product). The second line uses the evident analogs of these natural isomorphisms for the twisted external product so as to push the \(!\)-pullback and \(*\)-pushforward past the functor \(p^{!}_{2}=\underline{\mathbb{C}}_{X}[1]\,\widetilde{\boxtimes}-\). Moreover, the second line uses the fact that \(q_{*}\) and \(r^{!}\) commute with restriction to the _open_ sub-ind-scheme \(\operatorname{Gr}_{G,X-0}\subseteq\operatorname{Gr}_{G,X}\) (by smooth base change).
Next, consider a complex \(F\in D_{\mathcal{G}_{X-0}}(\operatorname{Gr}^{\prime}_{G,X-0})\). We can apply [2, Lemma 4.8.8] to construct a Verdier dual pair (in the sense that the first map yields the second after applying \(\mathbb{D}\) and replacing \(F\) by \(\mathbb{D}F\)) of natural maps
\[\Psi((q^{\prime}_{X-0})_{*}(r^{\prime}_{X-0})^{!}F) \to q_{*}r^{!}\Psi(F) \tag{2.4.14}\] \[q_{!}r^{*}\Psi(F) \to\Psi((q^{\prime}_{X-0})_{!}(r^{\prime}_{X-0})^{*}F). \tag{2.4.15}\]
On the other hand, we may identify \(r:\mathrm{Gr}_{P}\to\mathrm{Gr}_{G}\) (resp. \(i:\mathrm{Gr}_{L}\to\mathrm{Gr}_{G}\), \(q:\mathrm{Gr}_{P}\to\mathrm{Gr}_{L}\)) with the attracting ind-scheme (resp. the fixed point ind-scheme, the canonical projection from the attracting ind-scheme to the fixed point ind-scheme) for the action of \(\mathbb{G}_{m}\) on \(\mathrm{Gr}_{G}\) through \(2\rho_{G}-2\rho_{L}\), see [3, Theorem 1.2.6] as well as Proposition 2.1.5 above. Therefore, Braden's hyperbolic localization theorem (see [10], [14], and [31]) provides canonical isomorphisms \(q_{!}r^{*}\simeq q_{*}r^{!}\) and \((q^{\prime}_{X-0})_{!}(r^{\prime}_{X-0})^{*}\simeq(q^{\prime}_{X-0})_{*}(r^{\prime}_{X-0})^{!}\). Thus, the functors \(q_{!}r^{*}\) and \(q_{*}r^{!}\) are examples of hyperbolic restriction functors, in the sense of [10]. It follows from Richarz's work [31] (especially [31, Theorem 3.3]) that the nearby cycles functor commutes with hyperbolic restriction (even in the étale setting, and even with \(\mathbb{G}_{m}\)-equivariance weakened to the property of being \(\mathbb{G}_{m}\)-monodromic). Hence, the natural transformations (2.4.14) and (2.4.15) are isomorphisms.
Now we can put the ingredients together. Gaitsgory's Theorem 2.3.5 provides the isomorphism \(\varpi_{A,B}:A\star B\simeq C(A,B)\), which yields an isomorphism \(q_{*}r^{!}\varpi_{A,B}:q_{*}r^{!}(A\star B)\simeq q_{*}r^{!}C(A,B)\). We can invoke (2.4.14) to obtain an isomorphism
\[q_{*}r^{!}C(A,B)=q_{*}r^{!}\Psi(A\boxtimes p_{2}^{!}B|_{\mathrm{ Gr}_{G,X-0}})\simeq\Psi(q^{\prime}_{X-0})_{*}(r^{\prime}_{X-0})^{!}(A \boxtimes p_{2}^{!}B|_{\mathrm{Gr}_{G,X-0}}). \tag{2.4.16}\]
Then, we can apply \(\Psi\) to (2.4.12), (2.4.13) to obtain an isomorphism
\[\Psi(\Sigma):\Psi(q^{\prime}_{X-0})_{*}(r^{\prime}_{X-0})^{!}(A\boxtimes p_{2}^{!}B|_{\mathrm{Gr}_{G,X-0}}) \simeq\Psi\left(q_{*}r^{!}A\boxtimes p_{2}^{!}(q_{*}r^{!}B)|_{\mathrm{Gr}_{L,X-0}}\right)\] \[\simeq\Psi\left(\mathrm{Res}_{L}^{G}(A)\boxtimes p_{2}^{!}\left(\mathrm{Res}_{L}^{G}(B)\right)\right). \tag{2.4.17}\]
Finally, we have the isomorphism
\[\varpi_{\mathrm{Res}_{L}^{G}(A),\mathrm{Res}_{L}^{G}(B)}^{-1}:\Psi\left( \mathrm{Res}_{L}^{G}(A)\boxtimes p_{2}^{!}\left(\mathrm{Res}_{L}^{G}(B)\right) \right)\simeq\mathrm{Res}_{L}^{G}(A)\star\mathrm{Res}_{L}^{G}(B). \tag{2.4.18}\]
Composing the isomorphisms (2.4.16), (2.4.17), and (2.4.18) defines the isomorphism
\[\varepsilon_{A,B}:\mathrm{Res}_{L}^{G}(A\star B)\simeq\mathrm{Res}_{L}^{G}(A) \star\mathrm{Res}_{L}^{G}(B). \tag{2.4.19}\]
We leave the task of verifying the compatibility of \(\varepsilon_{A,B}\) with the associativity and unitality constraints on \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\) and \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) (described in Remark 2.2.6 and Remark 2.2.25) to the reader.
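In the classical case \(L=T\), the isomorphism \(\varepsilon_{A,B}\) recovers, after the normalization of Remark 2.4.3, the familiar statement that the Mirković–Vilonen weight functors are compatible with convolution (a standard reformulation, cf. [28]): for perverse \(A,B\),
\[\operatorname{Res}^{G,\natural}_{T}(A\star B)_{\lambda}\simeq\bigoplus_{\mu+\nu=\lambda}\operatorname{Res}^{G,\natural}_{T}(A)_{\mu}\otimes\operatorname{Res}^{G,\natural}_{T}(B)_{\nu},\]
the weight decomposition of a tensor product of \(\check{G}\)-representations. Here one uses that convolution on \(\operatorname{Gr}_{T}\) is simply addition on \(\pi_{0}(\operatorname{Gr}_{T})=X_{*}(T)\) and that the pairing against \(2\rho_{G}-2\rho_{T}\) is additive, so the normalization is compatible with \(\star\).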
**Proposition 2.4.20**.: _Regard \(i^{!}:D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) as a lax monoidal functor through Construction 2.2.14 and Construction 2.2.5. Regard the parabolic restriction functor of (2.4.1)_
\[\mathrm{Res}_{L}^{G}:D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L(\mathcal{O})}( \mathrm{Gr}_{L})\]
_as a lax monoidal functor through Construction 2.4.11. Then, the natural transformation of (2.4.6)_
\[\Xi:i^{!}\to\mathrm{Res}_{L}^{G}\]
_of functors \(D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\) is a natural transformation of lax monoidal functors._
Proof.: Recall that the lax monoidal structure of Construction 2.2.5 on \(i^{!}\) is obtained from the colax monoidal structure of Construction 2.2.14 by Verdier duality. Hence, by Lemma 2.4.10, the proposition is equivalent to the dual assertion that the natural transformation \(\Xi^{\vee}\) of (2.4.9) is a morphism of colax monoidal functors. As in Construction 2.4.7, we will use the description of \(\operatorname{Res}^{G}_{L}\) in terms of left adjoints throughout the proof. Moreover, we will use the description of the colax monoidal structure on \(i^{*}\) given in Proposition 2.3.13 through Gaitsgory's interpretation (Theorem 2.3.5) of the convolution product.
Let \(A,B\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). It suffices to show that the following diagram in \(D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\) commutes.
The bottom rectangle is obtained by applying the nearby cycles functor \(\Psi\) to the outer rectangle in the following diagram.
The lower left rectangle commutes by the naturality of the map \(\Xi^{\vee}\boxtimes-\) applied to the isomorphism \(p_{2}^{\dagger}\left(\operatorname{Res}_{L,X-0}^{G}(B)\right)|_{\operatorname{Gr }_{L,X-0}}\simeq\operatorname{Res}_{L}^{G}\left(p_{2}^{\dagger}B|_{ \operatorname{Gr}_{G,X-0}}\right)\) of (2.4.13). The lower right rectangle commutes because it is obtained by applying the functor \(i^{*}A\boxtimes-\) to the following diagram.
We expand the diagram horizontally using the definitions of \(\Xi^{\vee}\) (2.4.9) and of \(\Xi^{\vee}_{X-0}\).
The top left square commutes by the naturality of the map \(\Delta^{\vee}_{X-0}:(q_{X-0})_{!}\to j^{*}_{X-0}\) applied to the morphism \(r^{*}_{X-0}p_{2}^{\dagger}B|_{\operatorname{Gr}_{G,X-0}}\to p_{2}^{\dagger}r^{*}B|_{\operatorname{Gr}_{P,X-0}}\) of (2.4.13). The right rectangle commutes by the evident compatibility of (2.4.13) with the compositional isomorphisms for \(*\)-pullback. To show that the bottom left square commutes, we may replace \(r^{*}B\) by an arbitrary complex \(M\in D_{P(\mathcal{O})}(\operatorname{Gr}_{P})\) and show that the outer rectangle in the following diagram commutes.
The top right rectangle commutes by the naturality of the compositional isomorphism \((q_{X-0})_{!}(j_{X-0})_{!}\simeq\mathrm{id}\) applied to the morphism \(j_{X-0}^{*}p_{2}^{\dagger}M|_{\mathrm{Gr}_{P,X-0}}\to p_{2}^{\dagger}j^{*}M|_{\mathrm{Gr}_{L,X-0}}\). The bottom right rectangle commutes by the compatibility of (2.4.13) with the compositional isomorphisms for \(!\)-pushforward. The left rectangle can be expanded to the following diagram.
The bottom rectangle commutes by the naturality of the morphism \((q_{X-0})_{!}p_{2}^{\dagger}M|_{\mathrm{Gr}_{P,X-0}}\to p_{2}^{\dagger}q_{!}M|_{\mathrm{Gr}_{L,X-0}}\) applied to the unit map \(\mathrm{unit}:M\to j_{!}j^{*}M\). The top rectangle is obtained by applying the functor \((q_{X-0})_{!}\) to the diagram
This diagram commutes by the very definition of the isomorphism (2.4.13).
We now turn to showing the commutativity of the top rectangle \((\star)\). In doing so, we may replace \(p_{2}^{\dagger}B|_{\mathrm{Gr}_{G,X-0}}\) by an arbitrary complex \(C\in D_{\mathcal{G}_{X-0}}(\mathrm{Gr}_{G,X-0})\) and show that the following diagram commutes in \(D_{\mathcal{L}_{X-0}}(\mathrm{Gr}_{L,X-0})\).
Checking the commutativity of this diagram is an easy (if tedious) exercise, which we omit. The interested reader should expand the diagram using the definitions of \(\Xi^{\vee}\) and \({}^{{}^{\prime}}\Xi^{\vee}_{X-0}\), and then deduce the commutativity of resulting diagram from an appropriate compatibility between the isomorphisms of [2, Proposition 2.5.45(a)] (for the \(*\)-pullback) and [2, Proposition 3.10.1] (for the \(!\)-pushforward) and the compositional isomorphisms for the \(*\)-pullback and \(!\)-pushforward, respectively.
We can finally return to the original diagram. The commutativity of the top rectangle (\(\dagger\)) follows from the more general assertion that for \(E\in D_{\mathcal{G}_{X}}(\mathrm{Gr}^{\prime}_{G,X})\), the following diagram commutes in \(D_{L(\mathcal{O})}(\mathrm{Gr}_{L})\).
All of the unlabelled vertical maps are constructed from [2, Lemma 4.8.8] (like \(\zeta\), which was defined in (2.3.4)). The commutativity of the top left square follows from the naturality of the morphism \(\Delta^{\vee}:q_{!}\xrightarrow{q_{!}\mathrm{unit}}q_{!}j_{!}j^{*}\simeq j^{*}\) of (2.4.8) applied to the map \(r^{*}\Psi(E)\to\Psi(r^{\prime}_{X})^{*}E\) of [2, Lemma 4.8.8]. The commutativity of the right rectangle follows from the compatibility of the natural transformation \(i^{*}\Psi\to\Psi(i^{\prime}_{X})^{*}\) of [2, Lemma 4.8.8] with the compositional isomorphisms for the \(*\)-pullback functors (as asserted in the final step of the proof of Proposition 2.3.13). To show the commutativity of the bottom left square, we may replace \((r^{\prime}_{X})^{*}E\) by an arbitrary complex \(F\in D_{\mathcal{P}_{X}}(\mathrm{Gr}^{\prime}_{P,X})\) and show that the following diagram commutes.
Again, the unlabelled vertical maps are all the appropriate maps from [2, Lemma 4.8.8]. The lower left rectangle commutes by the naturality of the map \(q_{!}\Psi\to\Psi(q^{\prime}_{X})_{!}\) applied to
the unit \(F\to(j^{\prime}_{X})_{!}(j^{\prime}_{X})^{*}F\). The upper right rectangle commutes by the naturality of the isomorphism \(\operatorname{comp}:q_{!}j_{!}\simeq\operatorname{id}\) applied to the map \(j^{*}\Psi(F)\to\Psi(j^{\prime}_{X})^{*}F\). The commutativity of the lower right rectangle follows from the compatibility of the natural transformation of [2, Lemma 4.8.8] with the compositional isomorphisms for the \(!\)-pushforward functors. The upper left rectangle is obtained by applying the functor \(q_{!}\) to the following diagram
This compatibility between the natural transformations of [2, Lemma 4.8.8] (for the \(*\)-pushforward and \(*\)-pullback) and the unit of the adjunction between \(*\)-pushforward and \(*\)-pullback (recall that \(j\), \(j_{X}\), and \(j^{\prime}_{X}\) are closed immersions) is an easy consequence of the definitions.
**Remark 2.4.21**.: The parabolic restriction functors enjoy the following transitivity property. Let \(Q\subseteq P\) denote an inclusion of standard parabolic subgroups of \(G\), and let \(M\subseteq L\) denote the induced inclusion of Levi subgroups. Note that \(Q\cap L\) is a parabolic subgroup of \(L\) with Levi subgroup \(M\). In particular, we may consider the parabolic restriction functor
\[\operatorname{Res}^{L}_{M}:D_{L(\mathcal{O})}(\operatorname{Gr}_{L})\to D_{M (\mathcal{O})}(\operatorname{Gr}_{M}).\]
Then, we claim that there is a natural isomorphism of lax monoidal functors \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\to D_{M(\mathcal{O})}( \operatorname{Gr}_{M})\)
\[\tau_{M\subseteq L\subseteq G}:\operatorname{Res}^{L}_{M}\circ\operatorname{ Res}^{G}_{L}\simeq\operatorname{Res}^{G}_{M}.\]
This compatibility is noted in [20, Eq. 6.3.2]. Observe that we have a commutative diagram of \(M(\mathcal{O})\)-equivariant ind-schemes
\[\begin{CD}
\operatorname{Gr}_{M} @<{t}<< \operatorname{Gr}_{Q\cap L} @<{q^{\prime}}<< \operatorname{Gr}_{Q}\\
@. @V{s}VV @VV{s^{\prime}}V\\
@. \operatorname{Gr}_{L} @<{q}<< \operatorname{Gr}_{P} @>{r}>> \operatorname{Gr}_{G}
\end{CD}\]
Here, \(q^{\prime\prime}:=t\circ q^{\prime}:\operatorname{Gr}_{Q}\to\operatorname{Gr}_{M}\) and \(r^{\prime\prime}:=r\circ s^{\prime}:\operatorname{Gr}_{Q}\to\operatorname{Gr}_{G}\) are the maps of the induction diagram for \(M\subseteq G\).
Note that the middle square is Cartesian. Therefore, we have the isomorphism
\[\tau_{M\subseteq L\subseteq G}:\mathrm{Res}^{L}_{M}\circ\mathrm{ Res}^{G}_{L} =t_{*}s^{!}\circ\mathrm{For}^{L(\mathcal{O})}_{M(\mathcal{O})}\circ q_{*}r^{!} \circ\mathrm{For}^{G(\mathcal{O})}_{L(\mathcal{O})}\] \[\simeq t_{*}s^{!}q_{*}r^{!}\circ\mathrm{For}^{L(\mathcal{O})}_{M( \mathcal{O})}\mathrm{For}^{G(\mathcal{O})}_{L(\mathcal{O})}\] \[\simeq t_{*}s^{!}q_{*}r^{!}\circ\mathrm{For}^{G(\mathcal{O})}_{M( \mathcal{O})}\] \[\simeq t_{*}(q^{\prime})_{*}(s^{\prime})^{!}r^{!}\circ\mathrm{ For}^{G(\mathcal{O})}_{M(\mathcal{O})}\] \[\simeq(q^{\prime\prime})_{*}(r^{\prime\prime})!\mathrm{For}^{G( \mathcal{O})}_{M(\mathcal{O})}\] \[=\mathrm{Res}^{G}_{M}.\]
The first (non-equality) isomorphism is given by the compatibility [8, Theorem 3.4.1(i)] of the forgetful functor \(\mathrm{For}^{L(\mathcal{O})}_{M(\mathcal{O})}\) with \(!\)-pullback and \(*\)-pushforward. The second isomorphism is given by the transitivity of the forgetful functors. The third isomorphism is the proper base change theorem. The fourth isomorphism is induced by the compositional isomorphisms \(t_{*}(q^{\prime})_{*}\simeq(q^{\prime\prime})_{*}\) and \((s^{\prime})^{!}r^{!}\simeq(r^{\prime\prime})^{!}\).
**Remark 2.4.22**.: We recall the definition of the standard lax monoidal structure on the equivariant cohomology functor
\[R\Gamma_{G(\mathcal{O})}(\mathrm{Gr}_{G},-):D_{G(\mathcal{O})}(\mathrm{Gr}_{G} )\to D_{G(\mathcal{O})}(\mathrm{pt}).\]
Let \(A,B\in D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\). Then, we have a natural map
\[R\Gamma_{G(\mathcal{O})}(\mathrm{Gr}_{G},A)\otimes R\Gamma_{G(\mathcal{O})}( \mathrm{Gr}_{G},B)\xrightarrow{-\widetilde{\boxtimes}-}R\Gamma_{G(\mathcal{O}) }(\mathrm{Gr}_{G}\,\tilde{\times}\,\mathrm{Gr}_{G},A\,\widetilde{\boxtimes}\,B) \tag{2.4.23}\]
as well as an isomorphism
\[R\Gamma_{G(\mathcal{O})}(\mathrm{Gr}_{G}\,\tilde{\times}\,\mathrm{Gr}_{G},A \,\widetilde{\boxtimes}\,B)\simeq R\Gamma_{G(\mathcal{O})}(\mathrm{Gr}_{G},m_ {*}(A\,\widetilde{\boxtimes}\,B))=R\Gamma_{G(\mathcal{O})}(\mathrm{Gr}_{G},A \star B). \tag{2.4.24}\]
The composition of the morphisms (2.4.23) and (2.4.24) equips \(R\Gamma_{G(\mathcal{O})}(\mathrm{Gr}_{G},-)\) with a lax monoidal structure.
In particular, the functor
\[H^{*}_{G(\mathcal{O})}(\mathrm{Gr}_{G},-):D_{G(\mathcal{O})}(\mathrm{Gr}_{G}) \rightarrow\mathrm{mod}(R_{G})\]
is lax monoidal. Here, for any graded \(\mathbb{C}\)-algebra \(A\), we follow [7] and write \(\mathrm{mod}(A)\) for the category of _graded_ (left) \(A\)-modules.
### Action of equivariant homology
Let \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) denote the \(L(\mathcal{O})\)-equivariant cohomology ring of \(\mathrm{Gr}_{G}\). We explain how the results of Yun and Zhu [34] on the \(T\)-equivariant cohomology \(H^{*}_{T(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) generalize to describe the \(L\)-equivariant cohomology \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) (we will have more to say in §4).
**Construction 2.5.1**.: The graded vector space \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) is naturally a graded module over the graded ring \(R_{L}:=H^{*}_{L(\mathcal{O})}(\mathrm{pt},\mathbb{C})\). Via the homeomorphism \(\Omega_{\mathrm{poly}}G_{c}\cong\mathrm{Gr}_{G}\) of \(\mathrm{Gr}_{G}\) with the based polynomial loop group of a maximal compact subgroup \(G_{c}\subseteq G\), the ring \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\simeq H^{*}_{L_{c}}(\Omega_ {\mathrm{poly}}G_{c},\mathbb{C})\) acquires the structure of a commutative and cocommutative graded Hopf \(R_{L}\)-algebra (independently of the choice of \(G_{c}\)). We refer the reader to the beautiful and classical treatment [27] of Milnor and Moore for a discussion of the Hopf algebra structure on the cohomology of a Lie group.
Following [34, Eq. 2.10], we define the \(L(\mathcal{O})\)-equivariant homology \(H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\) of \(\mathrm{Gr}_{G}\) to be the \(R_{L}\)-linear graded dual of \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\):
\[H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C}):=\mathrm{Hom}_{R_{L}}(H^{* }_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C}),R_{L})^{\mathrm{gr}}.\]
That is, the \(n\)th graded component \(H^{L(\mathcal{O})}_{n}(\mathrm{Gr}_{G},\mathbb{C})\) consists of all \(R_{L}\)-module homomorphisms \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\to R_{L}\) which take \(H^{i}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) to \(R^{i-n}_{L}\) (this sign convention keeps the homology in non-negative degrees).
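For example (a standard computation, recorded for orientation): take \(G=L=T\). Then \(\operatorname{Gr}_{T}\) is a discrete reduced ind-scheme with underlying set \(X_{*}(T)\), each point contributes a copy of \(R_{T}\), and one finds an isomorphism of Hopf \(R_{T}\)-algebras
\[H^{T(\mathcal{O})}_{*}(\operatorname{Gr}_{T},\mathbb{C})\simeq R_{T}[X_{*}(T)],\]
the group \(R_{T}\)-algebra of the coweight lattice, with Pontryagin product induced by \(t^{\lambda}\cdot t^{\mu}=t^{\lambda+\mu}\) and with comultiplication making each \(t^{\lambda}\) grouplike.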
**Remark 2.5.2**.: Note that \(\mathrm{Gr}_{G}\) is an equivariantly formal \(L(\mathcal{O})\)-space. That is, the spectral sequence
\[E_{2}^{p,q}=H^{p}_{L(\mathcal{O})}(\mathrm{pt},H^{q}(\mathrm{Gr}_{G},\mathbb{C }))\implies H^{p+q}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\]
degenerates at \(E_{2}\). Indeed, the claim follows from the fact that \(H^{*}(\mathrm{Gr}_{G},\mathbb{C})\) is concentrated in even degrees (since \(\mathrm{Gr}_{G}\) admits a paving by affine spaces, namely the orbits of the Iwahori subgroup \(I\subseteq G(\mathcal{O})\)). It follows that \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) is a free \(R_{L}\)-module. As the graded dual of a free and finitely generated Hopf \(R_{L}\)-algebra, \(H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\) inherits a natural structure of Hopf \(R_{L}\)-algebra.
**Construction 2.5.3**.: Let \(A\in D_{L(\mathcal{O})}(\mathrm{Gr}_{G})\). We wish to equip \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},A)\) with the structure of a comodule over \(H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\). To do so, we fix a basis \(\{h^{i}\}\) of the free \(R_{L}\)-module \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) (none of the constructions below actually depend on this basis). Let \(\{h_{i}\}\) denote the dual basis of \(H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\). We must specify a graded \(R_{L}\)-linear map
\[\sigma:H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},A)\to H^{*}_{L(\mathcal{O})}( \mathrm{Gr}_{G},A)\otimes_{R_{L}}H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G}, \mathbb{C}).\]
The right hand side is a _module_ over \(H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\), so it is equivalent to define the unique \(H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\)-linear extension of \(\sigma\) instead:
\[\tilde{\sigma}:H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},A)\otimes_{R_{L}}H^{L( \mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\to H^{*}_{L(\mathcal{O})}( \mathrm{Gr}_{G},A)\otimes_{R_{L}}H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G}, \mathbb{C}).\]
The map \(\tilde{\sigma}\) is defined [34, Lemma 3.1] to be the automorphism given explicitly on an element \(v\otimes h\) (\(v\in H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},A)\), \(h\in H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\)) by the formula
\[\tilde{\sigma}(v\otimes h)=\sum_{i}(h^{i}\cup v)\otimes(h_{i}\wedge h).\]
In this formula, \(h^{i}\cup-\) denotes the action of the cohomology class \(h^{i}\in H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\) on the vector space \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},A)\) and \(\wedge\) denotes the (Pontryagin) product on the Hopf algebra \(H^{L(\mathcal{O})}_{*}(\operatorname{Gr}_{G},\mathbb{C})\). Note that this sum is actually finite because \(A\) is supported on \(\operatorname{Gr}_{G}^{\leq\lambda}\) for \(\lambda\) sufficiently large.
**Remark 2.5.4**.: The forgetful map \(H^{*}_{G(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\to H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\) is an \(R_{G}\)-algebra homomorphism, where \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\) is viewed as an \(R_{G}\)-module via restriction of scalars along the forgetful homomorphism \(R_{G}=H^{*}_{G(\mathcal{O})}(\operatorname{pt},\mathbb{C})\to H^{*}_{L(\mathcal{O})}(\operatorname{pt},\mathbb{C})=R_{L}\). Hence, it extends uniquely to an \(R_{L}\)-algebra homomorphism
\[\phi:H^{*}_{G(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\otimes_{R_{G}}R _{L}\to H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C}).\]
The morphism \(\phi\) is in fact an isomorphism. Indeed, note that \(R_{L}\) is a graded ring with augmentation module \(R_{L}/\mathfrak{m}_{L}\simeq H^{*}(\operatorname{pt},\mathbb{C})\simeq \mathbb{C}\) (where \(\mathfrak{m}_{L}\subseteq R_{L}\) denotes the irrelevant ideal). Tensoring \(\phi\) with the augmentation module \(R_{L}/\mathfrak{m}_{L}\) yields the identity map on \(H^{*}(\operatorname{Gr}_{G},\mathbb{C})\) (by Remark 2.5.2). Since \(\phi\) is a homomorphism of free graded \(R_{L}\)-modules concentrated in non-negative degrees, we deduce from the graded form of Nakayama's lemma that \(\phi\) is an isomorphism.
Recall that the comultiplications on \(H^{*}_{G(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\) and \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\) are induced by pullback along the multiplication map \(\operatorname{Gr}_{G}\times\operatorname{Gr}_{G}\simeq\Omega_{\operatorname{poly}}G_{c}\times\Omega_{\operatorname{poly}}G_{c}\to\Omega_{\operatorname{poly}}G_{c}\simeq\operatorname{Gr}_{G}\). The forgetful map \(H^{*}_{G(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\to H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\) is compatible with the pullback of cohomology classes (an obvious consequence of the definitions), so we deduce that \(\phi\) is an \(R_{L}\)-coalgebra homomorphism. Therefore, it is an isomorphism of Hopf \(R_{L}\)-algebras (note that a bialgebra homomorphism automatically respects antipodes).
Applying graded duality over the base ring \(R_{L}\) to \(\phi\) yields an isomorphism of Hopf \(R_{L}\)-algebras
\[\phi^{\vee}_{L\subseteq G}:=\phi^{\vee}:H^{L(\mathcal{O})}_{*}(\operatorname{ Gr}_{G},\mathbb{C})\simeq H^{G(\mathcal{O})}_{*}(\operatorname{Gr}_{G}, \mathbb{C})\otimes_{R_{G}}R_{L}.\]
If \(M\subseteq L\) is a further Levi subgroup of \(L\), then we have the following commutative diagram of Hopf \(R_{M}\)-algebras, which expresses the transitivity of this construction.
(2.5.5)
Here, the right vertical map is the standard isomorphism.
**Construction 2.5.6**.: Consider now the closed subspace \(i:\operatorname{Gr}_{L}\hookrightarrow\operatorname{Gr}_{G}\). We have an induced map in equivariant cohomology
\[i^{*}:H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\to H^{*}_{L( \mathcal{O})}(\operatorname{Gr}_{L},\mathbb{C}).\]
Note that there is a commutative diagram of \(L_{c}\)-spaces (where \(L_{c}\subseteq L\) is a maximal compact contained in \(G_{c}\)):
\[\begin{CD}
\operatorname{Gr}_{L} @>{i}>> \operatorname{Gr}_{G}\\
@V{\simeq}VV @VV{\simeq}V\\
\Omega_{\mathrm{poly}}L_{c} @>>> \Omega_{\mathrm{poly}}G_{c}
\end{CD}\]
Since the inclusion \(\mathrm{Gr}_{L}\simeq\Omega_{\mathrm{poly}}L_{c}\hookrightarrow\Omega_{\mathrm{poly}}G_{c}\simeq\mathrm{Gr}_{G}\) is a group homomorphism, it follows immediately from the definition of the comultiplications on \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\) and \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},\mathbb{C})\) that the map
\[i^{*}:H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{G},\mathbb{C})\to H^{*}_{L( \mathcal{O})}(\mathrm{Gr}_{L},\mathbb{C})\]
is a homomorphism of \(\mathbb{C}\)-coalgebras. Of course, \(i^{*}\) is a \(\mathbb{C}\)-algebra homomorphism (the products on these algebras are given by the cup product in equivariant cohomology), so \(i^{*}\) is a Hopf \(\mathbb{C}\)-algebra homomorphism. Passing to \(R_{L}\)-linear graded duals, we deduce that the pushforward map
\[i_{*}:=(i^{*})^{\vee}:H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{L},\mathbb{C})\to H ^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\]
is a homomorphism of graded Hopf \(R_{L}\)-algebras. We can now bring in Remark 2.5.4 and define a homomorphism of graded Hopf \(R_{L}\)-algebras
\[i_{*}:H^{L(\mathcal{O})}_{*}(\mathrm{Gr}_{L},\mathbb{C})\xrightarrow{i_{*}}H^ {L(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\xrightarrow{\phi^{\vee}_{L \subseteq G}}H^{G(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\otimes_{R_{G}} R_{L}. \tag{2.5.7}\]
Suppose now that \(M\subseteq L\) is a further Levi subgroup of \(L\) and that \(j:\mathrm{Gr}_{M}\hookrightarrow\mathrm{Gr}_{L}\) denotes the inclusion of Grassmannians. Let \(k=i\circ j:\mathrm{Gr}_{M}\hookrightarrow\mathrm{Gr}_{G}\) denote the composition. Then, it follows from (2.5.5) that the following diagram of Hopf \(R_{M}\)-algebras commutes.
(2.5.8)
Again, the right vertical map is the standard isomorphism.
**Remark 2.5.9**.: It is useful to reformulate these constructions in the language of group schemes. Since \(H^{G(\mathcal{O})}_{*}(\mathrm{Gr}_{G},\mathbb{C})\) is a commutative and cocommutative Hopf \(R_{G}\)-algebra, its spectrum
\[\mathfrak{A}_{G}:=\mathrm{Spec}\,H^{G(\mathcal{O})}_{*}(\mathrm{Gr}_{G}, \mathbb{C})\to\mathrm{Spec}\,H^{*}_{G(\mathcal{O})}(\mathrm{pt},\mathbb{C}) \simeq\mathfrak{c}_{G}\]
is a commutative affine \(\mathfrak{c}_{G}\)-group scheme, where \(\mathfrak{c}_{G}\simeq\mathfrak{t}//W\) is the Chevalley scheme. Then, (2.5.7) of Construction 2.5.6 defines a homomorphism of \(\mathfrak{c}_{L}\)-group schemes
\[\rho^{G}_{L}:=\mathrm{Spec}\,i_{*}:\mathfrak{A}_{G}\times_{\mathfrak{c}_{G}} \mathfrak{c}_{L}\to\mathfrak{A}_{L}.\]
The transitivity property (2.5.5) dualizes to the fact that the composition
\[\mathfrak{A}_{G}\times_{\mathfrak{c}_{G}}\mathfrak{c}_{M}\simeq(\mathfrak{A}_{G} \times_{\mathfrak{c}_{G}}\mathfrak{c}_{L})\times_{\mathfrak{c}_{L}}\mathfrak{ c}_{M}\xrightarrow{\rho_{L}^{G}\times\operatorname{id}}\mathfrak{A}_{L}\times_{ \mathfrak{c}_{L}}\mathfrak{c}_{M}\xrightarrow{\rho_{M}^{L}}\mathfrak{A}_{M}\]
is equal to \(\rho_{M}^{G}\).
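Continuing the torus computation above: for \(G=T\) we have \(\mathfrak{c}_{T}=\mathfrak{t}\) and
\[\mathfrak{A}_{T}=\operatorname{Spec}R_{T}[X_{*}(T)]\simeq\mathfrak{t}\times\operatorname{Spec}\mathbb{C}[X_{*}(T)]\simeq\mathfrak{t}\times\check{T},\]
the constant \(\mathfrak{t}\)-group scheme with fiber the dual torus \(\check{T}\) (since \(X_{*}(T)=X^{*}(\check{T})\), the group algebra \(\mathbb{C}[X_{*}(T)]\) is the coordinate ring of \(\check{T}\)). For general \(G\) the group scheme \(\mathfrak{A}_{G}\) is more subtle; it is related in [34] to regular centralizers in the dual group \(\check{G}\).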
### Summary
We will now bring together all of the ingredients from SS2. We start by reviewing the notation that we use when multiple Levi subgroups of \(G\) are in play.
**Notation 2.6.1**.: Recall that \(\Delta\) denotes the set of simple roots of \(G\). For each subset \(I\subseteq\Delta\), we define \(L_{I}\) to be the corresponding standard Levi subgroup. That is, \(L_{I}\) is the unique Levi subgroup of \(G\) containing the fixed maximal torus \(T\) such that the simple root spaces \(\mathfrak{g}_{\alpha}\subseteq\mathfrak{g}\) contained in \(\mathfrak{l}_{I}:=\operatorname{Lie}(L_{I})\subseteq\mathfrak{g}\) are exactly those labelled by the roots \(\alpha\in I\). For example, \(L_{\emptyset}=T\) and \(L_{\Delta}=G\). If \(I\subseteq J\) is an inclusion of subsets of \(\Delta\), then we have the containment \(L_{I}\subseteq L_{J}\). Moreover, we define \(P_{I}\subseteq G\) to be the corresponding standard parabolic subgroup. That is, \(P_{I}\) is the unique parabolic subgroup of \(G\) containing \(T\) such that the negative simple root spaces \(\mathfrak{g}_{-\alpha}\subseteq\mathfrak{g}\) contained in \(\mathfrak{p}_{I}:=\operatorname{Lie}(P_{I})\) are exactly those root spaces labelled by the elements \(-\alpha\in-I\). Let \(V_{I}\subseteq P_{I}\) denote the unipotent radical of \(P_{I}\). Then, \(L_{I}\subseteq P_{I}\) is a Levi factor of \(P_{I}\) and the quotient \(P_{I}/V_{I}\) identifies canonically with \(L_{I}\). Once again, an inclusion \(I\subseteq J\) yields inclusions \(P_{I}\subseteq P_{J}\) and \(V_{J}\subseteq V_{I}\). Let \(W_{I}\subseteq W\) denote the Weyl group of \(L_{I}\) (it is the subgroup of \(W\) generated by the simple reflections \(s_{\alpha}\in W\) for \(\alpha\in I\)). Let \(\Phi_{I}\subseteq\Phi\) denote the set of roots of \(L_{I}\).
Let \(\operatorname{Gr}_{I}:=\operatorname{Gr}_{L_{I}}\). Instead of \(\operatorname{Gr}_{\Delta}=\operatorname{Gr}_{G}\), we simply write \(\operatorname{Gr}\) in this subsection. If \(I\subseteq J\), we have the inclusion \(L_{I}\subseteq L_{J}\) and therefore have a closed immersion \(i_{I\subseteq J}:\operatorname{Gr}_{I}\hookrightarrow\operatorname{Gr}_{J}\) of affine Grassmannians. When \(J=\Delta\), we simply write \(i_{I}\) for \(i_{I\subseteq\Delta}\).
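To fix ideas (a running example, with the usual choices of \(T\) diagonal and \(B\) upper triangular): for \(G=\operatorname{GL}_{3}\) with simple roots \(\alpha_{1}=x_{1}-x_{2}\) and \(\alpha_{2}=x_{2}-x_{3}\), the subset \(I=\{\alpha_{1}\}\) yields the block subgroups
\[L_{I}=\begin{pmatrix}*&*&0\\ *&*&0\\ 0&0&*\end{pmatrix}\simeq\operatorname{GL}_{2}\times\operatorname{GL}_{1},\qquad P_{I}=\begin{pmatrix}*&*&*\\ *&*&*\\ 0&0&*\end{pmatrix},\]
with \(W_{I}=\{1,s_{\alpha_{1}}\}\), \(\Phi_{I}=\{\pm\alpha_{1}\}\), and \(\operatorname{Gr}_{I}\simeq\operatorname{Gr}_{\operatorname{GL}_{2}}\times\operatorname{Gr}_{\operatorname{GL}_{1}}\).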
**Notation 2.6.2**.: Let \(I\subseteq\Delta\). Let \(R_{I}:=R_{L_{I}}=H^{*}_{L_{I}}(\operatorname{pt},\mathbb{C})\). We write \(R\) instead of \(R_{\Delta}\). For any inclusion \(I\subseteq J\), we have a homomorphism \(R_{J}\to R_{I}\). Recall that there is a canonical isomorphism of graded \(\mathbb{C}\)-algebras
\[R_{\emptyset}=H^{*}_{T}(\operatorname{pt},\mathbb{C})\simeq\operatorname{Sym }\mathfrak{t}^{*}\]
where \(\mathfrak{t}=\operatorname{Lie}(T)\). Moreover, the Weyl group \(W\) acts naturally on \(R_{\emptyset}\) and the homomorphism \(R_{I}\to R_{\emptyset}\) identifies \(R_{I}\) with the \(W_{I}\subseteq W\)-invariants. Hence, there is a canonical isomorphism
\[R_{I}\simeq(\operatorname{Sym}\mathfrak{t}^{*})^{W_{I}}\,.\]
In particular, the \(W_{I}\)-invariant products
\[g_{I}:=\prod_{\alpha\not\in\Phi_{I}}\alpha\in(\operatorname{Sym}\mathfrak{t}^{*})^{W_{I}} \tag{2.6.3}\] \[f_{I}:=\prod_{\alpha\in\Phi_{I}}\alpha\in(\operatorname{Sym}\mathfrak{t}^{*})^{W_{I}} \tag{2.6.4}\]
define elements of \(R_{I}\).
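In the \(\operatorname{GL}_{3}\) example above with \(I=\{\alpha_{1}\}\): \(R_{\emptyset}=\mathbb{C}[x_{1},x_{2},x_{3}]\), \(R_{I}=\mathbb{C}[x_{1},x_{2},x_{3}]^{s_{\alpha_{1}}}=\mathbb{C}[x_{1}+x_{2},\,x_{1}x_{2},\,x_{3}]\), and
\[f_{I}=(x_{1}-x_{2})(x_{2}-x_{1})=-(x_{1}-x_{2})^{2},\qquad g_{I}=(x_{1}-x_{3})^{2}(x_{2}-x_{3})^{2}.\]
Note also the degenerate cases \(f_{\emptyset}=1\) and \(g_{\Delta}=1\) (empty products); in particular, localization at \(g_{\Delta}\) is no condition at all.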
**Remark 2.6.5**.: Let \(I\subseteq J\subseteq\Delta\). In §§2.4, we studied the (unnormalized, equivariant) parabolic restriction functor (2.4.1) of Beilinson-Drinfeld [6, §5.3.28]
\[\operatorname{Res}_{I\subseteq J}:=\operatorname{Res}_{L_{I}}^{L_{J}}:D_{L_{J}( \mathcal{O})}(\operatorname{Gr}_{J})\to D_{L_{I}(\mathcal{O})}( \operatorname{Gr}_{I}).\]
When \(J=\Delta\), we simply write \(\operatorname{Res}_{I}\) for \(\operatorname{Res}_{I\subseteq\Delta}\). One of the main constructions of §§2.4 is Construction 2.4.5, which defines a natural transformation (2.4.6)
\[\Xi_{I\subseteq J}:i^{!}_{I\subseteq J}\to\operatorname{Res}_{I\subseteq J}\]
of functors \(D_{L_{J}(\mathcal{O})}(\operatorname{Gr}_{J})\to D_{L_{I}(\mathcal{O})}( \operatorname{Gr}_{I})\). Let \(A\in D_{L_{J}(\mathcal{O})}(\operatorname{Gr}_{J})\) denote a \(L_{J}(\mathcal{O})\)-equivariant complex. Then, we can evaluate the natural transformation \(\Xi_{I\subseteq J}\) on \(A\) and pass to \(L_{I}(\mathcal{O})\)-equivariant cohomology to obtain an \(R_{I}\)-module homomorphism
\[\xi_{I\subseteq J}:=H^{*}_{L_{I}(\mathcal{O})}(\Xi_{I\subseteq J}):H^{*}_{L_{ I}(\mathcal{O})}(\operatorname{Gr}_{I},i^{!}_{I\subseteq J}A)\to H^{*}_{L_{ I}(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}_{I\subseteq J}(A)). \tag{2.6.6}\]
When \(J=\Delta\), we make the usual abbreviations \(\Xi_{I}:=\Xi_{I\subseteq\Delta}\) and \(\xi_{I}:=\xi_{I\subseteq\Delta}\). We can now bring in the results of §§2.1 to establish the following proposition.
**Proposition 2.6.7**.: _Let \(A\in D_{G(\mathcal{O})}(\operatorname{Gr})\) denote a \(G(\mathcal{O})\)-equivariant complex on \(\operatorname{Gr}\). Assume that the underlying complex \(\operatorname{For}^{G(\mathcal{O})}(A)\in D(\operatorname{Gr})\) is \(!\)-parity (in the sense of [22]; see also Remark 2.1.2 above). Consider the natural \(R_{I}\)-module homomorphism_
\[\xi_{I}:H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},i^{!}_{I}A)\to H^{*}_ {L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}_{I}(A))\]
_of (2.6.6). Then, the localization \((\xi_{I})_{g_{I}}\) at the element \(g_{I}\in R_{I}\) of (2.6.3) is an isomorphism. Moreover, \(\xi_{I}\) is injective. These assumptions apply if \(A\in\mathcal{P}_{G(\mathcal{O})}(\operatorname{Gr})\) is perverse._
Proof.: For each \(\chi\in\pi_{0}(\operatorname{Gr}_{I})\) corresponding to a connected component \(\operatorname{Gr}_{I}^{\chi}\subseteq\operatorname{Gr}_{I}\), let \(i^{\chi}_{I}:\operatorname{Gr}_{I}^{\chi}\hookrightarrow\operatorname{Gr}\) denote the induced closed immersion into \(\operatorname{Gr}\). Let \(\operatorname{Gr}_{P_{I}}^{\chi}=\operatorname{Gr}_{P_{I}}\times_{\operatorname{Gr}_{I}}\operatorname{Gr}_{I}^{\chi}\subseteq\operatorname{Gr}_{P_{I}}\). We have inclusions \(k^{\chi}_{I}:\operatorname{Gr}_{I}^{\chi}\hookrightarrow\operatorname{Gr}_{P_{I}}^{\chi}\) (a closed immersion) and \(r^{\chi}_{I}:\operatorname{Gr}_{P_{I}}^{\chi}\to\operatorname{Gr}\) such that \(i^{\chi}_{I}=r^{\chi}_{I}\circ k^{\chi}_{I}\).
Since \(\operatorname{Gr}_{P_{I}}\) (resp. \(\operatorname{Gr}_{I}\)) is the scheme-theoretic disjoint union over \(\chi\in\pi_{0}(\operatorname{Gr}_{I})\) of its sub-ind-schemes \(\operatorname{Gr}_{P_{I}}^{\chi}\) (resp. \(\operatorname{Gr}_{I}^{\chi}\)), it follows from the definition (2.4.1) that \(\xi_{I}\) is the direct sum over \(\chi\in\pi_{0}(\operatorname{Gr}_{I})\) of the natural \(R_{I}\)-module homomorphisms
\[\xi^{\chi}_{I}:H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I}^{ \chi},(i^{\chi}_{I})^{!}A)\simeq H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I}^{\chi},(k^{\chi}_{I}) ^{!}(r^{\chi}_{I})^{!}A)\] \[\simeq H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{P_{I}}^{\chi},(k^{ \chi}_{I})_{*}(k^{\chi}_{I})^{!}(r^{\chi}_{I})^{!}A)\] \[\to H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{P_{I}}^{\chi},(r^{ \chi}_{I})^{!}A).\]
It therefore suffices to show that each \(\xi^{\chi}_{I}\) is injective and localizes to an isomorphism away from \(g_{I}\in R_{I}\).
By Proposition 2.1.5, the element \(g_{I}\in\operatorname{Sym}\mathfrak{t}^{*}\) vanishes on the stabilizer \(\operatorname{Lie}(T_{x})\) for any \(x\in\operatorname{Gr}_{P_{I}}^{\chi}\setminus\operatorname{Gr}_{I}^{\chi}\) (where \(T_{x}\subseteq T\) is the stabilizer of the point \(x\in\operatorname{Gr}\) in \(T\)). It follows from the
equivariant localization theorem, say in the form of [35, Theorem A.1.13], that the natural \(R_{\emptyset}\)-module homomorphism
\[\hat{\xi}_{I}^{\chi}:H^{*}_{T({\mathcal{O}})}({\rm Gr}_{I}^{\chi},(i_{I}^{\chi} )!A)\simeq H^{*}_{T({\mathcal{O}})}({\rm Gr}_{P_{I}}^{\chi},(k_{I}^{\chi})_{*}(k _{I}^{\chi})!(r_{I}^{\chi})!A)\to H^{*}_{T({\mathcal{O}})}({\rm Gr}_{P_{I}}^{ \chi},(r_{I}^{\chi})!A)\]
becomes an isomorphism after localization at \(g_{I}\in R_{I}\). See the proof of Proposition 2.1.6 and the discussion preceding it for a slightly more detailed explanation.
Since \(A\) is \(!\)-parity, it follows from the proof of Proposition 2.1.1 (see Remark 2.1.2) that the complexes \((i_{I}^{\chi})^{!}A\) and \((r_{I}^{\chi})^{!}A\) are \(T\)-equivariantly formal. Hence, applying the functor of \(W_{I}\)-invariants to \(\hat{\xi}_{I}^{\chi}\) recovers the morphism \(\xi_{I}^{\chi}\). Since taking \(W_{I}\)-invariants commutes with localization at the \(W_{I}\)-invariant element \(g_{I}\in R_{I}\) (exercise), it follows that \(\xi_{I}^{\chi}\) becomes an isomorphism after inverting \(g_{I}\). Moreover, the equivariant formality of \((i_{I}^{\chi})^{!}A\) yields the freeness of \(H^{*}_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I}^{\chi},(i_{I}^{\chi})^{!}A)\) over the ring \(R_{I}\), from which we deduce the injectivity of \(\xi_{I}^{\chi}\) (as in the proof of Proposition 2.1.6).
If \(A\in{\mathcal{P}}_{G({\mathcal{O}})}({\rm Gr})\) is perverse, then it is a direct sum of irreducible perverse sheaves. Hence, it follows from the proof of Proposition 2.1.1 that \(A\) is \(!\)-parity, verifying the last assertion of the proposition.
**Remark 2.6.8**.: We can apply Proposition 2.6.7 to the case in which \(A=\mathcal{F}_{\mathrm{reg}}\) is the regular object. Recall from Remark 2.2.1 that \(\mathcal{F}_{\mathrm{reg}}\) is naturally a ring object in \(D_{G(\mathcal{O})}(\mathrm{Gr})\). In §§2.2, we equipped the functor \(i_{I}^{!}:D_{G(\mathcal{O})}(\mathrm{Gr})\to D_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I})\) with a lax monoidal structure. Hence, \(i_{I}^{!}\mathcal{F}_{\mathrm{reg}}\) is equipped with a ring structure in \(D_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I})\). On the other hand, in §§2.4, we recalled the construction of Beilinson-Drinfeld [6] of a lax monoidal structure on the parabolic restriction functor \(\mathrm{Res}_{I}:D_{G(\mathcal{O})}(\mathrm{Gr})\to D_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I})\). Hence, \(\mathrm{Res}_{I}(\mathcal{F}_{\mathrm{reg}})\) is naturally a ring object in \(D_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I})\). In Proposition 2.4.20 we showed that the natural map
\[\Xi_{I}:i_{I}^{!}\to{\rm Res}_{I}\]
is a natural transformation of lax monoidal functors. Hence, the morphism
\[\Xi_{I}({\mathcal{F}}_{\rm reg}):i_{I}^{!}{\mathcal{F}}_{\rm reg}\to{\rm Res}_ {I}({\mathcal{F}}_{\rm reg})\]
is a ring homomorphism. Since \(H^{*}_{L_{I}({\mathcal{O}})}({\rm Gr}_{I},-)\) is a lax monoidal functor (see Remark 2.4.22), we deduce that the natural map
\[\xi_{I}:H^{*}_{L_{I}({\mathcal{O}})}({\rm Gr}_{I},i_{I}^{!}{\mathcal{F}}_{\rm reg })\to H^{*}_{L_{I}({\mathcal{O}})}({\rm Gr}_{I},{\rm Res}_{I}({\mathcal{F}}_{ \rm reg}))\]
is a homomorphism of graded \(R_{I}\)-_algebras_. By Proposition 2.6.7 above, its localization at \(g_{I}\in R_{I}\) is a canonical isomorphism
\[(\xi_{I})_{g_{I}}:H^{*}_{L_{I}({\mathcal{O}})}({\rm Gr}_{I},i_{I}^{!}{\mathcal{ F}}_{\rm reg})_{g_{I}}\xrightarrow{\sim}H^{*}_{L_{I}({\mathcal{O}})}({\rm Gr}_{I},{ \rm Res}_{I}({\mathcal{F}}_{\rm reg}))_{g_{I}} \tag{2.6.9}\]
of graded \((R_{I})_{g_{I}}\)-algebras.
**Remark 2.6.10**.: Recall from Remark 2.4.3 that the parabolic restriction functor should be normalized so as to preserve perversity. Mimicking the definition given there of \(\operatorname{Res}^{\natural}_{I}:=\operatorname{Res}^{G,\natural}_{L_{I}}\), we introduce the normalized version of the corestriction functor
\[i^{!,\natural}_{I}:=\bigoplus_{\chi\in\pi_{0}(\operatorname{Gr}_{I})}i^{!}_{I} |_{\operatorname{Gr}^{\chi}_{I}}[-\langle 2\rho_{G}-2\rho_{L},\chi\rangle].\]
The morphism \(\Xi_{I}\) now gives rise to a morphism of \(\pi_{0}(\operatorname{Gr}_{I})=\Lambda/\Lambda_{I}\)-graded functors
\[\Xi^{\natural}_{I}:i^{!,\natural}_{I}\to\operatorname{Res}^{\natural}_{I}.\]
Hence, the isomorphism of (2.6.9) becomes an isomorphism
\[(\xi^{\natural}_{I})_{g_{I}}:H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I },i^{!,\natural}_{I}\mathcal{F}_{\operatorname{reg}})_{g_{I}}\xrightarrow{ \sim}H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}^{ \natural}_{I}(\mathcal{F}_{\operatorname{reg}}))_{g_{I}} \tag{2.6.11}\]
of \(\mathbb{Z}\)-graded \((R_{I})_{g_{I}}\)-algebras.
**Remark 2.6.12**.: Recall the \(\mathfrak{c}_{I}:=\mathfrak{c}_{L_{I}}\)-group scheme \(\mathfrak{A}_{I}:=\mathfrak{A}_{L_{I}}:=\operatorname{Spec}H^{L_{I}(\mathcal{O})}_{*}(\operatorname{Gr}_{I},\mathbb{C})\) from Remark 2.5.9. In Construction 2.5.3, it was shown that the cohomology \(H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},A)\) of any complex \(A\in D_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I})\) is naturally an \(\mathfrak{A}_{I}\)-module (i.e. an \(H^{L_{I}(\mathcal{O})}_{*}(\operatorname{Gr}_{I},\mathbb{C})\)-comodule). It is evident that if \(A\to B\) is a morphism in \(D_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I})\), then the induced map on equivariant cohomology \(H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},A)\to H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},B)\) is an \(\mathfrak{A}_{I}\)-module homomorphism. Applying this observation to the morphisms \(\Xi_{I}(\mathcal{F}_{\operatorname{reg}})\) and \(\Xi^{\natural}_{I}(\mathcal{F}_{\operatorname{reg}})\), we deduce that the isomorphisms \((\xi_{I})_{g_{I}}\) of (2.6.9) and \((\xi^{\natural}_{I})_{g_{I}}\) of (2.6.11) are isomorphisms of graded \(\mathfrak{A}_{I}\times_{\mathfrak{c}_{I}}\mathfrak{c}^{-}_{I-\operatorname{gen}}\)-algebras.
**Remark 2.6.13**.: On the other hand, by Proposition 2.1.6, the natural map of graded \(R_{\emptyset}\)-modules
\[H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!}_{\emptyset}\mathcal{F}_{ \operatorname{reg}})\to H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{I},i^{!}_{ I}\mathcal{F}_{\operatorname{reg}})\]
becomes an isomorphism after localization at \(f_{I}\in R_{\emptyset}\). By the equivariant formality of \(i^{!}_{I}\mathcal{F}_{\operatorname{reg}}\), we have an isomorphism of \(R_{I}\)-modules
\[H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},i^{!}_{I}\mathcal{F}_{ \operatorname{reg}})\otimes_{R_{I}}R_{\emptyset}\simeq H^{*}_{T(\mathcal{O})} (\operatorname{Gr}_{I},i^{!}_{I}\mathcal{F}_{\operatorname{reg}}).\]
Localization at \(f_{I}\in R_{\emptyset}\) therefore yields an isomorphism of \(R_{\emptyset}\)-modules
\[H^{*}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},i^{!}_{I}\mathcal{F}_{ \operatorname{reg}})_{f_{I}}\otimes_{(R_{I})_{f_{I}}}(R_{\emptyset})_{f_{I}} \simeq H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!}_{\emptyset} \mathcal{F}_{\operatorname{reg}})_{f_{I}}.\]
## 3. Spectral Side
We will now study the "spectral side" of Theorem 1.5.2. Namely, we will study the relationship between the Hamiltonian \(\check{G}\)-varieties \(T^{*}(\check{G}/(\check{U},\psi_{I}))\) for varying additive characters \(\psi_{I}\in\check{\mathfrak{u}}^{*}\).
### Partial Kostant-Whittaker reduction
We will start by defining the main construction appearing on the spectral side of our work, the _partial Kostant-Whittaker reduction_.
**Notation 3.1.1**.: Fix a subset \(I\subseteq\Delta\) of simple roots. We define the additive character \(\psi_{I}\) of \(\check{U}\) to be the composition
\[\psi_{I}:\check{U}\twoheadrightarrow\check{U}/[\check{U},\check{U}]\simeq\prod_{\alpha\in\Delta}\mathbb{G}_{a}\xrightarrow{\operatorname{pr}_{I}}\prod_{\alpha\in I}\mathbb{G}_{a}\xrightarrow{+}\mathbb{G}_{a}.\]
Here, the middle identification uses the pinning of \(\check{G}\), \(\operatorname{pr}_{I}\) is given by projection onto the factors indexed by \(I\), and the last map is summation. We also denote by \(\psi_{I}\) the linear form \((d\psi_{I})_{1}\in\check{\mathfrak{u}}^{*}\), where \(1\in\check{U}\) is the identity. We write \(\check{\mathfrak{u}}^{\perp}\subseteq\check{\mathfrak{g}}^{*}\) for the annihilator of \(\check{\mathfrak{u}}\subseteq\check{\mathfrak{g}}\), i.e. the space of linear forms vanishing on \(\check{\mathfrak{u}}\).
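For instance, if \(\check{G}=\mathrm{SL}_{n}\) and \(\check{U}\) is the subgroup of upper unitriangular matrices, with the simple root \(\alpha_{i}\) corresponding to the \((i,i+1)\)-entry, then in these (standard) coordinates one has
\[\psi_{I}(u)=\sum_{\alpha_{i}\in I}u_{i,i+1},\qquad u\in\check{U}.\]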
Throughout this section, \(\check{M}\) denotes a smooth and quasi-affine Hamiltonian \(\check{G}\)-scheme, equipped with a moment map \(\mu:\check{M}\to\tilde{\mathfrak{g}}^{*}\).
**Remark 3.1.2**.: To avoid questions of derived algebraic geometry, we will assume that \(\check{G}\) acts freely on \(\check{M}\), or equivalently that \(\check{M}\) is smooth over \(\tilde{\mathfrak{g}}^{*}\). In particular, we assume that \(\check{M}\) is flat over \(\tilde{\mathfrak{g}}^{*}\).
**Construction 3.1.3**.: On one hand, we may form the Hamiltonian reduction of \(\check{M}\) with respect to the \(\check{U}\)-action to obtain a Hamiltonian \(\check{T}\)-scheme \(\check{M}/\!\!/\check{U}:=(\check{M}\times_{\tilde{\mathfrak{g}}^{*}}\tilde{ \mathfrak{u}}^{\perp})/\check{U}\). Via the reduced moment map
\[\mu_{\operatorname{red}}:\check{M}/\!\!/\check{U}\to\check{\mathfrak{t}}^{*},\]
we regard \(\check{M}/\!\!/\check{U}\) as a \(\check{\mathfrak{t}}^{*}\)-scheme. It carries an action of the constant \(\check{\mathfrak{t}}^{*}\)-group scheme \(\check{T}_{\check{\mathfrak{t}}^{*}}\).
**Construction 3.1.4**.: On the other hand, we may form the _partial Kostant-Whittaker reduction_
\[\check{M}/\!\!/(\check{U},\psi_{I}):=(\check{M}\times_{\tilde{\mathfrak{g}}^{* }}(\check{\mathfrak{u}}^{\perp}+\psi_{I}))/\check{U}.\]
Here, we slightly abuse notation and write \(\check{\mathfrak{u}}^{\perp}+\psi_{I}\) for the space of linear forms on \(\tilde{\mathfrak{g}}\) which restrict to \(\psi_{I}\) on the subspace \(\check{\mathfrak{u}}\). We have a natural projection
\[\chi_{\check{M}}:\check{M}/\!\!/(\check{U},\psi_{I})\to(\check{\mathfrak{u}}^ {\perp}+\psi_{I})/\check{U}\hookrightarrow\check{\mathfrak{g}}^{*}/\check{U} \to\check{\mathfrak{c}}. \tag{3.1.5}\]
Here, we write \(\check{\mathfrak{c}}\) for the (coadjoint) Chevalley space of \(\check{G}\); that is, the spectrum of \(\check{G}\)-invariant polynomials on \(\tilde{\mathfrak{g}}^{*}\). It is the GIT quotient of \(\tilde{\mathfrak{g}}^{*}\) under the coadjoint action of \(\check{G}\) and the coarse moduli scheme underlying the Artin stack \(\tilde{\mathfrak{g}}^{*}/\check{G}\). We let \(\chi:\tilde{\mathfrak{g}}^{*}\to\check{\mathfrak{c}}\) denote the natural projection. It is \(\check{G}\)-invariant, hence induces a morphism \(\tilde{\mathfrak{g}}^{*}/\check{U}\to\check{\mathfrak{c}}\). This morphism is the last map in the composition (3.1.5).
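For instance, for \(\check{G}=\mathrm{SL}_{n}\) (identifying \(\check{\mathfrak{g}}^{*}\simeq\check{\mathfrak{g}}\) via the trace form), the invariant polynomials are generated by the coefficients of the characteristic polynomial, so that
\[\check{\mathfrak{c}}\simeq\mathbb{A}^{n-1},\qquad\chi(\xi)=(c_{2}(\xi),\ldots,c_{n}(\xi)),\quad\text{where }\det(t\cdot\mathrm{id}-\xi)=t^{n}+c_{2}(\xi)t^{n-2}+\cdots+c_{n}(\xi).\]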
**Construction 3.1.6**.: Since the character \(\psi_{I}\) is trivial on the unipotent radical \(\check{V}_{I}\) of \(\check{P}_{I}\), it descends to an additive character \(\overline{\psi}_{I}\) of \(\check{U}/\check{V}_{I}=\check{U}_{I}\), which we identify with a maximal unipotent subgroup of \(\check{L}_{I}\simeq\check{P}_{I}/\check{V}_{I}\). Since \(\psi_{I}\) is non-trivial on each simple root space of \(\check{L}_{I}\), the character \(\overline{\psi}_{I}\) is non-degenerate. Let \(\check{\mathfrak{v}}_{I}=\operatorname{Lie}(\check{V}_{I})\) and write \(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\) for the space of linear forms on \(\check{\mathfrak{l}}_{I}\) restricting to \(\overline{\psi}_{I}\) on \(\check{\mathfrak{u}}_{I}\). We have an evident equality of closed subschemes of \(\check{\mathfrak{g}}^{*}\)
\[\check{\mathfrak{u}}^{\perp}+\psi_{I}=\check{\mathfrak{v}}_{I}^{\perp}\times_{\check{\mathfrak{l}}_{I}^{*}}(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}).\]
Here, the projection \(\check{\mathfrak{v}}_{I}^{\perp}\to\check{\mathfrak{l}}_{I}^{*}\) is given by restriction to \(\check{\mathfrak{p}}_{I}\) followed by descent along the projection \(\check{\mathfrak{p}}_{I}\twoheadrightarrow\check{\mathfrak{l}}_{I}\). We may form the fiber product with \(\check{M}\) over \(\check{\mathfrak{g}}^{*}\) to obtain an isomorphism
\[\tilde{\eta}_{I}:\check{M}\times_{\check{\mathfrak{g}}^{*}}\check{\mathfrak{v}}_{I}^{\perp}\times_{\check{\mathfrak{l}}_{I}^{*}}(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I})\simeq\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+\psi_{I}).\]
Passing to quotients by the induced \(\check{U}\)-actions yields an isomorphism
\[\eta_{I}:(\check{M}/\!\!/\check{V}_{I})/\!\!/(\check{U}_{I},\overline{\psi}_{I}) =((\check{M}\times_{\check{\mathfrak{g}}^{*}}\check{\mathfrak{v}}_{I}^{\perp})/\check{V}_{I}\times_{\check{\mathfrak{l}}_{I}^{*}}(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}))/\check{U}_{I}\] \[\simeq(\check{M}\times_{\check{\mathfrak{g}}^{*}}\check{\mathfrak{v}}_{I}^{\perp}\times_{\check{\mathfrak{l}}_{I}^{*}}(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}))/\check{U}\] \[\simeq(\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+\psi_{I}))/\check{U}\] \[=\check{M}/\!\!/(\check{U},\psi_{I}). \tag{3.1.7}\]
Let \(\check{\mathfrak{c}}_{I}=\operatorname{Spec}\mathcal{O}(\check{\mathfrak{l}}_{I}^{*})^{\check{L}_{I}}\) denote the Chevalley space of \(\check{L}_{I}\). As a Hamiltonian reduction of a Hamiltonian \(\check{L}_{I}\)-space, the iterated reduction \((\check{M}/\!\!/\check{V}_{I})/\!\!/(\check{U}_{I},\overline{\psi}_{I})\) is equipped with a natural map
\[\chi_{\check{M}/\!\!/\check{V}_{I}}:(\check{M}/\!\!/\check{V}_{I})/\!\!/( \check{U}_{I},\overline{\psi}_{I})\to\check{\mathfrak{c}}_{I}. \tag{3.1.8}\]
**Remark 3.1.9**.: The decomposition \(\check{\mathfrak{g}}=\check{\mathfrak{v}}_{I}\oplus\check{\mathfrak{l}}_{I}\oplus\check{\mathfrak{v}}_{I}^{-}\) yields a projection \(\check{\mathfrak{g}}\twoheadrightarrow\check{\mathfrak{l}}_{I}\). Dualizing, we obtain a closed immersion \(\check{\mathfrak{l}}_{I}^{*}\hookrightarrow\check{\mathfrak{g}}^{*}\) intertwining the coadjoint actions of \(\check{L}_{I}\) and \(\check{G}\). We obtain an induced map on invariant-theoretic quotients
\[\pi_{I}:\check{\mathfrak{c}}_{I}\to\check{\mathfrak{c}}.\]
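Under the Chevalley restriction isomorphisms \(\check{\mathfrak{c}}_{I}\simeq\check{\mathfrak{t}}^{*}/W_{I}\) and \(\check{\mathfrak{c}}\simeq\check{\mathfrak{t}}^{*}/W\), the map \(\pi_{I}\) may be identified with the projection induced by the inclusion \(W_{I}\subseteq W\):
\[\pi_{I}:\check{\mathfrak{t}}^{*}/W_{I}\to\check{\mathfrak{t}}^{*}/W.\]
In particular, \(\pi_{\emptyset}:\check{\mathfrak{t}}^{*}\to\check{\mathfrak{c}}\) is the \(W\)-quotient map, and \(\pi_{\Delta}\) is the identity of \(\check{\mathfrak{c}}\).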
The following result is useful for reducing the study of a partial Kostant-Whittaker reduction to the non-degenerate case.
**Lemma 3.1.10**.: _Equip \(\check{M}/\!\!/\check{V}_{I}\) with its natural structure of Hamiltonian \(\check{L}_{I}\)-scheme. We have a canonical isomorphism of schemes_
\[\eta_{I}:(\check{M}/\!\!/\check{V}_{I})/\!\!/(\check{U}_{I},\overline{\psi}_{ I})\simeq\check{M}/\!\!/(\check{U},\psi_{I})\]
_fitting into a commutative diagram_
(3.1.11)
Proof.: The isomorphism \(\eta_{I}\) is that of (3.1.7). It remains to check that the diagram (3.1.11) commutes. It suffices to show that the following diagram commutes.
Here, \(\chi_{I}:\tilde{\mathfrak{l}}_{I}^{*}\to\tilde{\mathfrak{c}}_{I}\) is the characteristic polynomial map on \(\tilde{\mathfrak{l}}_{I}^{*}\). The commutativity of the left square follows immediately from the definition of \(\tilde{\eta}_{I}\). Under the identification \(\tilde{\mathfrak{g}}\simeq\tilde{\mathfrak{g}}^{*}\), the commutativity of the right rectangle is equivalent to the commutativity of the following diagram.
The commutativity of the upper half of the diagram is obvious. The bottom half embeds into the diagram
The two maps \(\tilde{\mathfrak{p}}_{I}\to\tilde{\mathfrak{c}}\) that we claim are equal are \(\check{P}_{I}\)-invariant. Since every element of \(\tilde{\mathfrak{p}}_{I}\) is conjugate to an element of \(\tilde{\mathfrak{b}}\), it suffices to show that the compositions \(\tilde{\mathfrak{b}}\hookrightarrow\tilde{\mathfrak{p}}_{I}\to\tilde{ \mathfrak{c}}\) coincide. Given the obvious commutativity of the upper left square, it suffices to show that the outer rectangle and upper right rectangle commute. The upper right rectangle becomes the outer rectangle after replacing \(\tilde{\mathfrak{g}}\) by \(\tilde{\mathfrak{l}}_{I}\), so it suffices to show the commutativity of the outer rectangle.
Thus, we are reduced to the observation that the diagram
commutes, which is standard.
### Action of the regular centralizer
We begin by reviewing Ngô's construction of the regular centralizer group scheme as explained in [32]. Let \(\check{\mathfrak{g}}^{*}_{\mathrm{reg}}\subseteq\check{\mathfrak{g}}^{*}\) denote the open subscheme of regular elements; namely, those elements \(\xi\in\check{\mathfrak{g}}^{*}\) with coadjoint centralizer \(Z_{\check{G}}(\xi)\) of minimal dimension. Over \(\check{\mathfrak{g}}^{*}\), we have the universal (coadjoint) centralizer group scheme \(\mathfrak{I}_{\mathrm{univ}}\), defined as the fiber product
\[\mathfrak{I}_{\mathrm{univ}}:=(\check{G}\times\check{\mathfrak{g}}^{*})\times_{\check{\mathfrak{g}}^{*}\times\check{\mathfrak{g}}^{*}}\check{\mathfrak{g}}^{*},\]
where the first map is \((g,\xi)\mapsto(\mathrm{Ad}^{*}_{g}\xi,\xi)\) and the second is the diagonal; its fiber over \(\xi\in\check{\mathfrak{g}}^{*}\) is the centralizer \(Z_{\check{G}}(\xi)\).
We form the restriction \(\mathfrak{I}_{\mathrm{reg}}=\mathfrak{I}_{\mathrm{univ}}\times_{\tilde{ \mathfrak{g}}^{*}}\tilde{\mathfrak{g}}^{*}_{\mathrm{reg}}\), a commutative affine group scheme over \(\tilde{\mathfrak{g}}^{*}_{\mathrm{reg}}\). In [30, Lemme 2.1.1], smooth descent (of affine morphisms) is used to construct a smooth affine group scheme \(\mathfrak{J}\) over \(\tilde{\mathfrak{c}}\) equipped with a canonical isomorphism of group schemes over \(\tilde{\mathfrak{g}}^{*}_{\mathrm{reg}}\)
\[\mathfrak{I}_{\mathrm{reg}}\simeq\mathfrak{J}\times_{\mathfrak{c},\chi} \tilde{\mathfrak{g}}^{*}_{\mathrm{reg}}.\]
Furthermore, this isomorphism extends uniquely (by an application of Hartogs' principle) to a homomorphism of group schemes
\[\rho:\mathfrak{J}\times_{\mathfrak{c},\chi}\tilde{\mathfrak{g}}^{*}\to \mathfrak{I}_{\mathrm{univ}}. \tag{3.2.1}\]
We will need a well-known alternative construction of \(\mathfrak{J}\).
**Remark 3.2.2**.: We recall the Kostant slice \(\kappa:\check{\mathfrak{c}}\to\check{\mathfrak{g}}^{*}\) to the regular nilpotent orbit. Consider the regular nilpotent element\({}^{10}\)
Footnote 10: It is a fundamental observation of Ginzburg, going back to [18], that the nilpotent element \(e\) arises through Tannakian formalism from the Chern class \(c_{1}(\mathcal{L}_{\mathrm{det}})\in H^{2}(\mathrm{Gr}_{G},\mathbb{C})\) of the determinant line bundle.
\[e=\sum_{\alpha\in\Delta}X_{\alpha},\]
where \(X_{\alpha}\in\tilde{\mathfrak{g}}_{\alpha}\) is the non-zero element provided by the pinning of \(\check{G}\). The map \(\mathrm{ad}^{*}_{e}:\tilde{\mathfrak{b}}^{\perp}\to\tilde{\mathfrak{u}}^{\perp}\) is injective. Let \(\tilde{\mathfrak{s}}\subseteq\tilde{\mathfrak{u}}^{\perp}\) denote a \(\mathbb{G}_{m}\)-stable complement to \(\mathrm{ad}^{*}_{e}(\tilde{\mathfrak{b}}^{\perp})\), where \(\mathbb{G}_{m}\) acts through the homomorphism \(2\check{\rho}:\mathbb{G}_{m}\to\check{T}\). Then, Kostant's theorem [32, Theorem 3.2.2] implies that the subspace \(\tilde{\mathfrak{s}}+\psi\) maps isomorphically onto \(\tilde{\mathfrak{c}}=\tilde{\mathfrak{g}}^{*}/\!/\check{G}\). The inverse map followed by the
inclusion \(\check{\mathfrak{s}}+\psi\hookrightarrow\check{\mathfrak{g}}^{*}\) defines the morphism \(\kappa:\check{\mathfrak{c}}\to\check{\mathfrak{g}}^{*}\). Clearly, we have an isomorphism of \(\check{\mathfrak{c}}\)-group schemes
\[\mathfrak{J}\simeq\mathfrak{I}_{\text{univ}}\times_{\check{\mathfrak{g}}^{*}, \kappa}\check{\mathfrak{c}}. \tag{3.2.3}\]
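For instance, for \(\check{G}=\mathrm{SL}_{2}\) (identifying \(\check{\mathfrak{g}}^{*}\simeq\check{\mathfrak{g}}\) via the trace form) one may take the Kostant slice to be
\[\kappa:\check{\mathfrak{c}}\simeq\mathbb{A}^{1}\to\check{\mathfrak{g}}^{*},\qquad\kappa(c)=\begin{pmatrix}0&1\\ c&0\end{pmatrix}.\]
Since \(\kappa(c)\) is regular, its centralizer in \(\check{G}\) consists of the matrices \(a\,\mathrm{id}+b\,\kappa(c)\) with \(\det=a^{2}-cb^{2}=1\), so that (3.2.3) becomes
\[\mathfrak{J}\simeq\{(a,b,c)\,:\,a^{2}-cb^{2}=1\},\]
a commutative group scheme over \(\mathbb{A}^{1}\) whose fiber is a one-dimensional torus for \(c\neq 0\) and \(\mu_{2}\times\mathbb{G}_{a}\) for \(c=0\).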
Similarly, we can always describe a Kostant-Whittaker reduction \(\check{M}/\!\!/(\check{U},\psi)\) by restricting \(\check{M}\) along the Kostant slice:
**Lemma 3.2.4**.: _Let \(\psi=\psi_{\Delta}\) denote a non-degenerate additive character of \(\check{U}\). There is a canonical \(\check{\mathfrak{c}}\)-isomorphism_
\[\delta_{\psi}^{\check{M}}:\check{M}/\!\!/(\check{U},\psi)\simeq\check{M}\times _{\check{\mathfrak{g}}^{*},\kappa}\check{\mathfrak{c}}.\]
Proof.: By definition, \(\check{M}/\!\!/(\check{U},\psi)=(\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+\psi))/\check{U}\). Let
\[q:\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+\psi)\to\check{M}/\!\!/(\check{U},\psi)\]
denote the quotient map. It is a \(\check{U}\)-torsor. So, we have a commutative diagram
By Kostant's theorem [32, Theorem 3.2.2], the map \(\chi:\check{\mathfrak{u}}^{\perp}+\psi\to\check{\mathfrak{c}}\) is a \(\check{U}\)-torsor. Hence, the commutative square on the right is a morphism of \(\check{U}\)-torsors, hence is Cartesian. The commutative square on the left is evidently Cartesian. Therefore, the composite square is Cartesian. It follows that the composition of the top row \((\delta_{\psi}^{\check{M}})^{-1}:\check{M}\times_{\check{\mathfrak{g}}^{*}, \kappa}\check{\mathfrak{c}}\to\check{M}/\!\!/(\check{U},\psi)\) is an isomorphism, as needed.
**Example 3.2.5**.: Consider the "universal" example of \(\check{M}=T^{*}\check{G}\). Then, Lemma 3.2.4 is simply the observation that we have a canonical \(\check{\mathfrak{c}}\)-isomorphism
\[\delta_{\psi}:=\delta_{\psi}^{T^{*}\check{G}}:T^{*}(\check{G}/(\check{U},\psi ))\simeq T^{*}\check{G}\times_{\check{\mathfrak{g}}^{*},\kappa}\check{ \mathfrak{c}}\simeq\check{G}\times\check{\mathfrak{c}}. \tag{3.2.6}\]
**Remark 3.2.7**.: In our intended application, there will be another algebraic group acting compatibly on \(\check{M}\). Therefore, we introduce into our setup an algebraic group \(\check{H}\) with an action on \(\check{M}\) that commutes with the given action of \(\check{G}\). As we construct various isomorphisms in this section, we will stop to observe that they are in fact \(\check{H}\)-equivariant.
Because the \(\check{H}\)-action on \(\check{M}\) commutes with the \(\check{G}\)-action, it follows easily that the moment map \(\mu\) is \(\check{H}\)-invariant. Therefore, \(\check{H}\) acts naturally on \(\check{M}\times_{\check{\mathfrak{g}}^{*},\kappa}\check{\mathfrak{c}}\). Similarly, \(\check{H}\) acts naturally on \(\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+\psi)\). This action commutes with that of \(\check{U}\), so we obtain an action of \(\check{H}\) on the quotient \(\check{M}/\!\!/(\check{U},\psi)\).
**Lemma 3.2.8**.: _Suppose that \(\check{M}\) is equipped with an action of an algebraic group \(\check{H}\) commuting with the action of \(\check{G}\). Then, the isomorphism_
\[\delta^{\check{M}}_{\psi}:\check{M}\,/\!\!/(\check{U},\psi)\simeq\check{M}\times_ {\check{\mathfrak{g}}^{*},\kappa}\check{\mathfrak{c}}\]
_of Lemma 3.2.4 is \(\check{H}\)-equivariant._
Proof.: It suffices to check that the maps
\[\operatorname{id}_{\check{M}}\times\kappa :\check{M}\times_{\check{\mathfrak{g}}^{*},\kappa}\check{ \mathfrak{c}}\to\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{ u}}^{\perp}+\psi)\] \[q :\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}} ^{\perp}+\psi)\to\check{M}\,/\!\!/(\check{U},\psi)\]
are \(\check{H}\)-equivariant. We defined the \(\check{H}\)-action on \(\check{M}\,/\!\!/(\check{U},\psi)\) as the unique one making the quotient map \(\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+\psi) \to\check{M}\,/\!\!/(\check{U},\psi)\)\(\check{H}\)-invariant, so its \(\check{H}\)-equivariance is trivial. The map \(\operatorname{id}_{\check{M}}\times\kappa\) is obtained by pulling back the (trivially \(\check{H}\)-equivariant) map \(\kappa:\check{\mathfrak{c}}\to(\check{\mathfrak{u}}^{\perp}+\psi)\) along the \(\check{H}\)-equivariant projection \(\operatorname{pr}_{2}:\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{ \mathfrak{u}}^{\perp}+\psi)\to(\check{\mathfrak{u}}^{\perp}+\psi)\), so the claim follows.
**Construction 3.2.9**.: Via the isomorphism of Lemma 3.2.4, we may equip the Kostant-Whittaker reduction \(\check{M}/\!\!/(\check{U},\psi)\) with an action of the group scheme \(\mathfrak{I}_{\operatorname{univ}}\times_{\check{\mathfrak{g}}^{*},\kappa}\check{\mathfrak{c}}\). Thus, the identification (3.2.3) equips the \(\check{\mathfrak{c}}\)-scheme \(\check{M}/\!\!/(\check{U},\psi)\) with an action of the regular centralizer \(\mathfrak{J}\).
Now, let \(\mathfrak{J}_{I}\to\check{\mathfrak{c}}_{I}\) denote the regular centralizer associated to the reductive group \(\check{L}_{I}\). By Lemma 3.1.10, we can realize \(\check{M}/\!\!/(\check{U},\psi_{I})\) as the non-degenerate Kostant-Whittaker reduction of the Hamiltonian \(\check{L}_{I}\)-space \(\check{M}/\!\!/\check{V}_{I}\). Thus, we obtain as above an action of the regular centralizer \(\mathfrak{J}_{I}\) on the \(\check{\mathfrak{c}}_{I}\)-scheme \(\check{M}/\!\!/(\check{U},\psi_{I})\).
**Example 3.2.10**.: If \(I=\emptyset\), then \(\psi_{I}\) is the trivial character. Hence, \(\check{M}/\!\!/(\check{U},\psi_{I})=\check{M}/\!\!/\check{U}\). Furthermore, \(\check{\mathfrak{c}}_{I}=\check{\mathfrak{t}}^{*}\) and \(\mathfrak{J}_{I}=\check{T}\times\check{\mathfrak{t}}^{*}\). Thus, the \(\mathfrak{J}_{I}\)-action on \(\check{M}/\!\!/(\check{U},\psi_{I})\) is simply the natural action of the constant group scheme \(\check{T}\times\check{\mathfrak{t}}^{*}\) on \(\check{M}/\!\!/\check{U}\), which is regarded as a \(\check{\mathfrak{t}}^{*}\)-scheme through the reduced moment map \(\mu_{\mathrm{red}}\) of Construction 3.1.3.
### Generic comparison of Hamiltonian and Kostant-Whittaker reductions
We will now compare the quotients \(\check{M}/\!\!/(\check{U},\psi)\) and \(\check{M}/\!\!/\check{U}\) (we write \(\psi=\psi_{\Delta}\) for the non-degenerate additive character on \(\check{U}\)). Let \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\subseteq\check{\mathfrak{t}}^{*}\) denote the open subscheme defined by the non-vanishing \(\check{\alpha}\neq 0\) of each root \(\check{\alpha}\in\check{\Phi}\) of \(\check{G}\). It is \(W\)-stable, hence descends to an open subscheme \(\check{\mathfrak{c}}_{\mathrm{gen}}\subseteq\check{\mathfrak{c}}\). Note that the projection \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\to\check{\mathfrak{c}}_{\mathrm{gen}}\) is a \(W\)-torsor.
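For instance, for \(\check{G}=\mathrm{SL}_{2}\) we have \(\check{\mathfrak{t}}^{*}\simeq\mathbb{A}^{1}\) with \(W=\mathbb{Z}/2\) acting by \(z\mapsto-z\), so \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}=\mathbb{A}^{1}\setminus\{0\}\) and the torsor in question is the double cover
\[\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\to\check{\mathfrak{c}}_{\mathrm{gen}},\qquad z\mapsto z^{2},\]
which is étale precisely because the origin has been removed.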
We will now establish a version of Lemma 3.2.4 for the trivial character. The penalty for taking the Hamiltonian reduction at level \(0\in\check{\mathfrak{u}}^{*}\) instead of at the generic level \(\psi\in\check{\mathfrak{u}}^{*}\) is that we will only be able to construct an analog of Lemma 3.2.4 after restricting to the open subscheme \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\subseteq\check{\mathfrak{t}}^{*}\). We adapted the following argument directly from Kostant's proof of [32, Theorem 3.2.2].
**Lemma 3.3.1**.: _There is a canonical \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\)-isomorphism_
\[\delta^{\check{M}}_{\mathrm{gen}}:\check{M}\,/\!\!/\check{U}\times_{\check{ \mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\simeq\check{M}\times_ {\check{\mathfrak{g}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}.\]
Proof.: Note that \(\check{\mathfrak{t}}^{*}_{\rm gen}\subseteq\check{\mathfrak{g}}^{*}_{\rm reg}\). Indeed, under the \(\check{G}\)-equivariant isomorphism \(\check{\mathfrak{g}}\simeq\check{\mathfrak{g}}^{*}\), the elements of \(\check{\mathfrak{t}}^{*}_{\rm gen}\) map to regular semisimple elements of \(\check{\mathfrak{g}}\). Let \(\check{\mathfrak{u}}^{\perp}_{\rm gen}=\check{\mathfrak{u}}^{\perp}\times_{ \check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\rm gen}\) and let
\[q:\check{\mathfrak{u}}^{\perp}_{\rm gen}\to\check{\mathfrak{t}}^{*}_{\rm gen}\]
denote the projection to the second factor. Let \(p:\check{M}\times_{\check{\mathfrak{g}}^{*}}\check{\mathfrak{u}}^{\perp}\to \check{M}/\!\!/\check{U}\) denote the quotient map. It is a \(\check{U}\)-torsor (recall that \(\check{G}\) acts freely on \(\check{M}\) by hypothesis). Hence, we obtain a \(\check{U}\)-torsor
\[p_{\rm gen}:\check{M}\times_{\check{\mathfrak{g}}^{*}}\check{\mathfrak{u}}^{ \perp}_{\rm gen}\simeq(\check{M}\times_{\check{\mathfrak{g}}^{*}}\check{ \mathfrak{u}}^{\perp})\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{* }_{\rm gen}\xrightarrow{p}\check{M}/\!\!/\check{U}\times_{\check{\mathfrak{t}}^ {*}}\check{\mathfrak{t}}^{*}_{\rm gen}.\]
Moreover, we have a commutative diagram
The map \(\check{\mathfrak{t}}^{*}_{\rm gen}\hookrightarrow\check{\mathfrak{u}}^{\perp}_ {\rm gen}\) is given by pulling back linear forms from \(\check{\mathfrak{t}}\) to \(\check{\mathfrak{b}}\) and then extending them trivially over \(\check{\mathfrak{u}}^{-}\). We claim that \(q\) is a \(\check{U}\)-torsor. It suffices to show that the action map \(a:\check{U}\times\check{\mathfrak{t}}^{*}_{\rm gen}\to\check{\mathfrak{u}}^{\perp} _{\rm gen}\) is an isomorphism. Under the \(\check{G}\)-equivariant isomorphism \(\check{\mathfrak{g}}\simeq\check{\mathfrak{g}}^{*}\), the claim translates to the assertion that \(\check{U}\times\check{\mathfrak{t}}_{\rm gen}\to\check{\mathfrak{b}}_{\rm gen}\) is an isomorphism, where \(\check{\mathfrak{b}}_{\rm gen}=\check{\mathfrak{b}}\times_{\check{\mathfrak{t }}}\check{\mathfrak{t}}_{\rm gen}\) and \(\check{\mathfrak{t}}_{\rm gen}\subseteq\check{\mathfrak{t}}\) is the open subscheme defined by \(\alpha\neq 0\) for each root \(\alpha\in\check{\Phi}\) of \(\check{G}\).
Let \(u\in\check{U}\) and \(x\in\check{\mathfrak{t}}_{\rm gen}\). Suppose that \({\rm ad}_{u}x\in\check{\mathfrak{t}}_{\rm gen}\). Then, \(u\) conjugates the centralizer \(C_{\check{G}}(x)\) to \(C_{\check{G}}({\rm ad}_{u}x)\). Since \(x\) and \({\rm ad}_{u}x\) are regular semisimple and belong to the Cartan subalgebra \(\check{\mathfrak{t}}\), it follows that \(C_{\check{G}}(x)=C_{\check{G}}({\rm ad}_{u}x)=\check{T}\). Thus, \(u\) normalizes \(\check{T}\), hence \(u\in N(\check{T})\cap\check{U}=\{1\}\). So, \(u=1\). Now, if \({\rm ad}_{u}x={\rm ad}_{u^{\prime}}x^{\prime}\) for \(u,u^{\prime}\in\check{U}\) and \(x,x^{\prime}\in\check{\mathfrak{t}}_{\rm gen}\), then \({\rm ad}_{u^{\prime-1}u}x=x^{\prime}\in\check{\mathfrak{t}}_{\rm gen}\), so the previous argument applied to \(u^{\prime-1}u\) gives \(u=u^{\prime}\), and hence \(x=x^{\prime}\). Therefore, \(\check{U}\times\check{\mathfrak{t}}_{\rm gen}\to\check{\mathfrak{b}}_{\rm gen}\) is injective (on closed points).
Let \(x\in\check{\mathfrak{b}}_{\rm gen}\). We have a Jordan decomposition \(x=x_{ss}+x_{n}\), where \(x_{n}\in\check{\mathfrak{u}}\). The image of \(x\) in \(\check{\mathfrak{t}}=\check{\mathfrak{b}}/\check{\mathfrak{u}}\) is equal to that of \(x_{ss}\), so we conclude that \(x_{ss}\) is regular semisimple. On the other hand, \([x_{ss},x_{n}]=0\). Therefore, \(x_{n}\) belongs to the centralizer \(\check{\mathfrak{z}}_{\check{\mathfrak{g}}}(x_{ss})\). Since \(x_{ss}\) is regular semisimple, it follows that \(\check{\mathfrak{z}}_{\check{\mathfrak{g}}}(x_{ss})\) is a Cartan subalgebra of \(\check{\mathfrak{g}}\) contained in \(\check{\mathfrak{b}}\); since a Cartan subalgebra contains no non-zero nilpotent elements, \(x_{n}=0\). Furthermore, \(\check{\mathfrak{z}}_{\check{\mathfrak{g}}}(x_{ss})\) maps isomorphically onto \(\check{\mathfrak{t}}\), so by Levi's theorem \(\check{\mathfrak{z}}_{\check{\mathfrak{g}}}(x_{ss})\) is \(\check{U}\)-conjugate to \(\check{\mathfrak{t}}\). In particular, \(x=x_{ss}\in\check{\mathfrak{z}}_{\check{\mathfrak{g}}}(x_{ss})\) is \(\check{U}\)-conjugate to an element of \(\check{\mathfrak{t}}_{\rm gen}\). Therefore, \(a:\check{U}\times\check{\mathfrak{t}}_{\rm gen}\to\check{\mathfrak{b}}_{\rm gen}\) is surjective (on closed points). Since \(a\) is a bijective morphism of smooth varieties (over \(\mathbb{C}\)), it is an isomorphism by Zariski's Main Theorem.
Thus, the square on the right is a morphism of \(\check{U}\)-torsors, therefore is Cartesian. The square on the left is clearly Cartesian. Hence, the composite square is Cartesian. Therefore, the composition of the top row is an isomorphism \((\delta^{\check{M}}_{\rm gen})^{-1}:\check{M}\times_{\check{\mathfrak{g}}^{*}} \check{\mathfrak{t}}^{*}_{\rm gen}\to\check{M}/\!\!/\check{U}\times_{\check{ \mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\rm gen}\), as needed.
**Example 3.3.2**.: Consider the example of \(\check{M}=T^{*}\check{G}\). Since \(\check{M}/\!\!/\check{U}=T^{*}(\check{G}/\check{U})\), Lemma 3.3.1 provides a canonical \(\check{G}\)-equivariant \(\check{\mathfrak{t}}^{*}_{\rm gen}\)-isomorphism
\[\delta_{\rm gen}:=\delta^{T^{*}\check{G}}_{\rm gen}:T^{*}(\check{G}/\check{U}) \times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\rm gen}\simeq T^ {*}\check{G}\times_{\check{\mathfrak{g}}^{*}}\check{\mathfrak{t}}^{*}_{\rm gen }\simeq(\check{G}\times\check{\mathfrak{g}}^{*})\times_{\check{\mathfrak{g}}^ {*}}\check{\mathfrak{t}}^{*}_{\rm gen}=\check{G}\times\check{\mathfrak{t}}^{*} _{\rm gen}. \tag{3.3.3}\]
Concretely, this means (in particular) that we have given a section to the moment map \(\mu_{\rm red}:T^{*}(\check{G}/\check{U})\to\check{\mathfrak{t}}^{*}\) over the open subscheme \(\check{\mathfrak{t}}^{*}_{\rm gen}\subseteq\check{\mathfrak{t}}^{*}\) which picks out an element with trivial \(\check{G}\)-stabilizer.
**Remark 3.3.4**.: We can combine Lemma 3.2.4 and Lemma 3.3.1 (in the guise of Example 3.2.5 and Example 3.3.2) to obtain a \(\check{G}\)-equivariant (by Lemma 3.2.8) \(\check{\mathfrak{t}}^{*}_{\rm gen}\)-isomorphism
\[\tilde{\varepsilon}:T^{*}(\check{G}/\check{U})\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\rm gen} \stackrel{{\delta_{\rm gen}}}{{\simeq}}\check{G}\times\check{\mathfrak{t}}^{*}_{\rm gen}\] \[=(\check{G}\times\check{\mathfrak{c}})\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}_{\rm gen}\] \[\stackrel{{\delta_{\psi}^{-1}}}{{\simeq}}T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}_{\rm gen}. \tag{3.3.5}\]
In fact, we can be slightly more precise. Using the inclusion \(\check{\mathfrak{t}}^{*}\hookrightarrow\check{\mathfrak{u}}^{\perp}\) defined as in the proof of Lemma 3.3.1, we obtain a morphism
\[T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}\stackrel{{\delta_{\psi}}}{{\simeq}}(\check{G}\times\check{\mathfrak{c}})\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}\simeq\check{G}\times\check{\mathfrak{t}}^{*}\hookrightarrow\check{G}\times\check{\mathfrak{u}}^{\perp}\twoheadrightarrow(\check{G}\times\check{\mathfrak{u}}^{\perp})/\check{U}\simeq T^{*}(\check{G}/\check{U})\]
whose pullback to \(\check{\mathfrak{t}}^{*}_{\rm gen}\) yields the isomorphism (3.3.5). This comparison between \(T^{*}(\check{G}/\check{U})\) and \(T^{*}(\check{G}/(\check{U},\psi))\) over \(\check{\mathfrak{t}}^{*}_{\rm gen}\) will facilitate our comparison between the Hamiltonian and Kostant-Whittaker reductions of any Hamiltonian \(\check{G}\)-variety below (see Proposition 3.3.9).
**Lemma 3.3.6**.: _There is a canonical \(\check{T}\)-equivariant \(\check{\mathfrak{t}}^{*}\)-isomorphism_
\[\check{M}/\!\!/\check{U}\simeq\check{G}\backslash(\check{M}\times_{\check{ \mathfrak{g}}^{*}}T^{*}(\check{G}/\check{U})).\]
_Here, the map \(T^{*}(\check{G}/\check{U})\to\check{\mathfrak{g}}^{*}\) is the moment map for the left \(\check{G}\)-action, and \(T^{*}(\check{G}/\check{U})\) is viewed as a \(\check{\mathfrak{t}}^{*}\)-scheme via the moment map \(T^{*}(\check{G}/\check{U})\to\check{\mathfrak{t}}^{*}\) for the right \(\check{T}\)-action. Moreover, \(\check{G}\) acts diagonally on the product \(\check{M}\times_{\check{\mathfrak{g}}^{*}}T^{*}(\check{G}/\check{U})\)._
Proof.: Consider the isomorphism
\[\iota:\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{G}\times\check{ \mathfrak{u}}^{\perp})\simeq\check{M}\times_{\check{\mathfrak{g}}^{*}}( \check{G}\times\check{\mathfrak{u}}^{\perp})\]
given by \(\iota(m,g,\chi)=(g^{-1}m,g,\chi)\). On the left, the structure map \(\check{G}\times\check{\mathfrak{u}}^{\perp}\to\check{\mathfrak{g}}^{*}\) is the action map, whereas on the right we take the structure map \(\check{G}\times\check{\mathfrak{u}}^{\perp}\to\check{\mathfrak{g}}^{*}\) to be the composition of the projection to the second factor followed by the inclusion \(\check{\mathfrak{u}}^{\perp}\hookrightarrow\check{\mathfrak{g}}^{*}\).
Let \(\check{G}\) act on the left hand side diagonally and on the right hand side by \(h\cdot(m,g,\chi)=(m,hg,\chi)\). Then, \(\iota\) is \(\check{G}\)-equivariant. Let \(\check{U}\) act on the left hand side by \(u\cdot(m,g,\chi)=(m,gu^{-1},u\cdot\chi)\) (where \(u\cdot\chi\) is the coadjoint action) and on the right hand side by \(u\cdot(m,g,\chi)=(um,gu^{-1},u\cdot\chi)\). Then, \(\iota\) is \(\check{U}\)-equivariant. Therefore, we obtain an isomorphism \(\bar{\iota}\) on \(\check{G}\times\check{U}\)-quotients:
\[\overline{\iota}:\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}( \check{G}\times\check{\mathfrak{u}}^{\perp}))/\check{U}\simeq\check{G}\backslash (\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{G}\times\check{\mathfrak{u}}^{ \perp}))/\check{U}.\]
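Explicitly, the equivariances of \(\iota\) asserted above amount to the computations
\[\iota(hm,hg,\chi)=((hg)^{-1}hm,hg,\chi)=(g^{-1}m,hg,\chi)=h\cdot\iota(m,g,\chi)\]
and
\[\iota(m,gu^{-1},u\cdot\chi)=(ug^{-1}m,gu^{-1},u\cdot\chi)=u\cdot\iota(m,g,\chi),\]
which we record for the reader's convenience.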
On the left hand side, we quotient by \(\check{U}\) and then \(\check{G}\) to obtain an isomorphism
\[\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{G}\times \check{\mathfrak{u}}^{\perp}))/\check{U}\simeq\check{G}\backslash(\check{M} \times_{\check{\mathfrak{g}}^{*}}T^{*}(\check{G}/\check{U})).\]
On the right hand side, we quotient by \(\check{G}\) and then \(\check{U}\) to obtain an isomorphism
\[\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{G} \times\check{\mathfrak{u}}^{\perp}))/\check{U}\simeq(\check{M}\times_{\check {\mathfrak{g}}^{*}}\check{\mathfrak{u}}^{\perp})/\check{U}=\check{M}/\!/ \check{U}.\]
Putting it all together constructs the desired isomorphism.
The proof of the following lemma is identical to that of Lemma 3.3.6.
**Lemma 3.3.7**.: _There is a canonical \(\mathfrak{J}\)-equivariant isomorphism of \(\check{\mathfrak{c}}\)-schemes_
\[\check{M}/\!\!/(\check{U},\psi)\simeq\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}T^{*}(\check{G}/(\check{U},\psi))).\]
_Here, the map \(T^{*}(\check{G}/(\check{U},\psi))\to\check{\mathfrak{g}}^{*}\) is the moment map for the left \(\check{G}\)-action, and the cotangent bundle \(T^{*}(\check{G}/(\check{U},\psi))\) is viewed as a \(\check{\mathfrak{c}}\)-scheme via the natural projection_
\[T^{*}(\check{G}/(\check{U},\psi))=(\check{G}\times(\check{\mathfrak{u}}^{\perp}+\psi))/\check{U}\to(\check{\mathfrak{u}}^{\perp}+\psi)/\check{U}\simeq\check{\mathfrak{c}}.\]
_Moreover, \(\check{G}\) acts diagonally on the product \(\check{M}\times_{\check{\mathfrak{g}}^{*}}T^{*}(\check{G}/(\check{U},\psi))\)._
**Proposition 3.3.9**.: _There is a canonical \(\check{T}_{\check{\mathfrak{t}}^{*}}\)-equivariant morphism of \(\check{\mathfrak{t}}^{*}\)-schemes_
\[\eta_{\mathrm{gen}}:\check{M}/\!\!/(\check{U},\psi)\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}\to\check{M}/\!\!/\check{U}\]
_whose pullback to \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\) yields an isomorphism_
\[\eta_{\mathrm{gen}}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}:\big{(}\check{M}/\!\!/(\check{U},\psi)\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}\big{)}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\simeq\check{M}/\!\!/\check{U}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}.\]
Proof.: By Lemma 3.3.7, we have an isomorphism of \(\check{\mathfrak{t}}^{*}\)-schemes
\[\check{M}/\!\!/(\check{U},\psi)\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}\simeq\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*})).\]
Similarly, we have a \(\check{T}_{\check{\mathfrak{t}}^{*}_{\mathrm{gen}}}\)-equivariant isomorphism of \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\)-schemes
\[\check{M}/\!\!/\check{U}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}} \simeq\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}T^{*}(\check{G}/\check{U}))\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\simeq\check{G}\backslash(\check{M}\times_{\check{\mathfrak{g}}^{*}}(T^{*}(\check{G}/\check{U})\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}})).\]
Therefore, it suffices to exhibit a \(\check{G}\)-equivariant and \(\check{T}_{\check{\mathfrak{t}}^{*}}\)-equivariant morphism of \(\check{\mathfrak{g}}^{*}\times\check{\mathfrak{t}}^{*}\)-schemes
\[T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*}\to T^{*}(\check{G}/\check{U})\]
whose pullback to \(\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\) is an isomorphism
\[(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{t}}^{*})\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}\simeq T^{*}(\check{G}/\check{U})\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{\mathrm{gen}}.\]
We did this in Remark 3.3.4.
For \(I\subseteq\Delta\), we define the open subscheme \(\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\subseteq\check{\mathfrak{t}}^{*}\) of "\(I\)-generic" elements by the non-vanishing of the roots \(\check{\alpha}\in\check{\Phi}_{I}\). Since \(\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\) is \(W_{I}\)-stable, it descends to an open subscheme \(\check{\mathfrak{c}}_{I-\mathrm{gen}}\subseteq\check{\mathfrak{c}}_{I}\). Note that the map \(\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\to\check{\mathfrak{c}}_{I}\) is a \(W_{I}\)-torsor over \(\check{\mathfrak{c}}_{I-\mathrm{gen}}\). Recall that \(\mathfrak{J}_{I}\) denotes the regular centralizer over \(\check{\mathfrak{c}}_{I}\) associated to \(\check{L}_{I}\).
We have the following generalization of Proposition 3.3.9.
**Corollary 3.3.10**.: _Let \(I\subseteq\Delta\). There is a canonical \(\check{T}_{\check{\mathfrak{t}}^{*}}\)-equivariant morphism of \(\check{\mathfrak{t}}^{*}\)-schemes_
\[\eta_{I-\mathrm{gen}}:\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{t}}^{*}\to\check{M}/\!\!/\check{U}\]
_whose pullback to \(\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\) yields an isomorphism_
\[\eta_{I-\mathrm{gen}}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}:\big{(}\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{t}}^{*}\big{)}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\simeq\check{M}/\!\!/\check{U}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}.\]
Proof.: In Lemma 3.1.10, we produced an isomorphism
\[\check{M}/\!/(\check{U},\psi_{I})\simeq(\check{M}/\!/\!/\check{V}_{I})/\!/( \check{U}_{I},\overline{\psi}_{I}).\]
Applying Proposition 3.3.9 to the Hamiltonian \(\check{L}_{I}\)-space \(\check{M}/\!\!/\check{V}_{I}\), we obtain a \(\check{T}_{\check{\mathfrak{t}}^{*}}\)-equivariant morphism
\[\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{t}}^{*}\simeq(\check{M}/\!\!/\check{V}_{I})/\!\!/(\check{U}_{I},\overline{\psi}_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{t}}^{*}\to(\check{M}/\!\!/\check{V}_{I})/\!\!/\check{U}_{I} \tag{3.3.11}\]
which induces an isomorphism
\[\big{(}\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{t}}^{*}\big{)}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\simeq(\check{M}/\!\!/\check{V}_{I})/\!\!/\check{U}_{I}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}, \tag{3.3.12}\]
since the generic locus of \(\check{\mathfrak{t}}^{*}\) for the group \(\check{L}_{I}\) is exactly \(\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\). By Lemma 3.1.10, we obtain an isomorphism \((\check{M}/\!\!/\check{V}_{I})/\!\!/\check{U}_{I}\simeq\check{M}/\!\!/\check{U}\). Inserting this isomorphism into (3.3.11) and (3.3.12), we obtain the desired morphism
\[\eta_{I-\mathrm{gen}}:\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{t}}^{*}\to\check{M}/\!\!/\check{U},\]
together with the required isomorphism after pullback to \(\check{\mathfrak{t}}^{*}_{I-\mathrm{gen}}\).
### Anti-generic comparison of Hamiltonian and Kostant-Whittaker reductions
We have studied the partial Kostant-Whittaker reduction \(\check{M}/\!\!/(\check{U},\psi_{I})\) over the open locus \(\check{\mathfrak{c}}_{I-\mathrm{gen}}\). We will next restrict to the open subscheme \(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\subseteq\check{\mathfrak{c}}_{I}\) defined by the non-vanishing \(\check{\alpha}\neq 0\) for each root \(\check{\alpha}\in\check{\Phi}\setminus\check{\Phi}_{I}\). More precisely, we define \(\check{\mathfrak{t}}^{*-}_{I-\mathrm{gen}}\) to be the open subscheme of \(\check{\mathfrak{t}}^{*}\) defined by the non-vanishing \(\check{\alpha}\neq 0\) for each \(\check{\alpha}\in\check{\Phi}\setminus\check{\Phi}_{I}\). Since \(\check{\Phi}\setminus\check{\Phi}_{I}\) is \(W_{I}\)-stable, it follows that \(\check{\mathfrak{t}}^{*-}_{I-\mathrm{gen}}\) is \(W_{I}\)-stable, hence descends to an open subscheme \(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\subseteq\check{\mathfrak{c}}_{I}\). We refer to \(\check{\mathfrak{t}}^{*-}_{I-\mathrm{gen}}\) and \(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) as the _\(I\)-anti-generic_ loci.
**Remark 3.4.1**.: Note that \(\check{\mathfrak{t}}^{*-}_{I-\mathrm{gen}}\neq\check{\mathfrak{t}}^{*}_{I^{\circ}-\mathrm{gen}}\), where \(I^{\circ}=\Delta\setminus I\): that is, we are going to avoid all of the "walls" in the root system of \(\check{G}\) associated to roots of \(\check{G}\) that are _not_ roots of \(\check{L}_{I}\), even if they are not sums of simple roots in \(I^{\circ}\). For example, consider the case \(\check{G}=\mathrm{SL}_{3}\) with its standard Levi subgroup \(\check{L}_{I}=\mathrm{S}(\mathrm{GL}_{2}\times\mathbb{G}_{m})\). Let \(\lambda_{1}-\lambda_{2}\), \(\lambda_{2}-\lambda_{3}\), and \(\lambda_{1}-\lambda_{3}\) denote the three positive roots. With this notation, \(I=\{\lambda_{1}-\lambda_{2}\}\). Therefore, \(\check{\mathfrak{t}}^{*-}_{I-\mathrm{gen}}\subseteq\mathbb{A}^{2}\) is the open locus complementary to the lines \(\lambda_{2}=\lambda_{3}\) and \(\lambda_{1}=\lambda_{3}\). On the other hand, \(\check{\mathfrak{t}}^{*}_{I^{\circ}-\mathrm{gen}}\) is the complement of the single line \(\lambda_{2}=\lambda_{3}\).
**Remark 3.4.2**.: Consider the affine subspace \(\check{\mathfrak{u}}^{\perp}+\psi_{I}\subseteq\check{\mathfrak{g}}^{*}\). It consists of all linear forms on \(\check{\mathfrak{g}}\) which restrict to \(\psi_{I}\) on \(\check{\mathfrak{u}}\). Since \(\psi_{I}\) vanishes on \(\check{\mathfrak{v}}_{I}\), the same is true of any element of \(\check{\mathfrak{u}}^{\perp}+\psi_{I}\). Therefore, regarding \((\check{\mathfrak{p}}_{I}/\check{\mathfrak{v}}_{I})^{*}\) as the space of linear forms on the parabolic subalgebra \(\check{\mathfrak{p}}_{I}\) which vanish on \(\check{\mathfrak{v}}_{I}\), we have a restriction map
\[r_{I}:\check{\mathfrak{u}}^{\perp}+\psi_{I}\to(\check{\mathfrak{p}}_{I}/\check{\mathfrak{v}}_{I})^{*}\simeq\check{\mathfrak{l}}_{I}^{*}.\]
Its image is exactly \(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\).
**Lemma 3.4.3**.: _Let \(I\subseteq\Delta\). Regard \(\check{\mathfrak{u}}^{\perp}+\psi_{I}\) as a \(\check{\mathfrak{c}}_{I}\)-scheme via the composition of the restriction map \(r_{I}:\check{\mathfrak{u}}^{\perp}+\psi_{I}\to\check{\mathfrak{l}}_{I}^{*}\) (see Remark 3.4.2) with the characteristic polynomial map \(\chi_{I}:\check{\mathfrak{l}}_{I}^{*}\to\check{\mathfrak{c}}_{I}\). Then, the natural projection_
\[\pi_{2}:(\check{\mathfrak{u}}^{\perp}+\psi_{I})\times_{\check{\mathfrak{c}}_{ I}}\check{\mathfrak{c}}_{I-\mathrm{gen}}^{-}\to\check{\mathfrak{c}}_{I- \mathrm{gen}}^{-} \tag{3.4.4}\]
_is a \(\check{U}\)-torsor._
Proof.: We can factor the morphism \(r_{I}:\check{\mathfrak{u}}^{\perp}+\psi_{I}\to\check{\mathfrak{l}}_{I}^{*}\) as a surjection \(\alpha:\check{\mathfrak{u}}^{\perp}+\psi_{I}\twoheadrightarrow\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\) followed by the natural inclusion \(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\hookrightarrow\check{\mathfrak{l}}_{I}^{*}\). Therefore, we have the following commutative diagram.
We regard \(\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\) as a \(\check{\mathfrak{c}}_{I}\)-scheme via \(\chi_{I}:\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\to\check{ \mathfrak{c}}_{I}\), so that, by the commutative diagram above, \(\alpha\) is a morphism of \(\check{\mathfrak{c}}_{I}\)-schemes. We now pull \(\alpha\) back to the open subscheme
\(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\subseteq\check{\mathfrak{c}}_{I}\) to obtain a factorization
\[(\check{\mathfrak{u}}^{\perp}+\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\xrightarrow{\ \alpha^{\prime}\ }(\check{\mathfrak{u}}^{\perp}_{I}+\overline{\psi}_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\xrightarrow{\ \pi^{\prime}_{2}\ }\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\]
of the projection \(\pi_{2}\) of (3.4.4).
Note that \(\alpha^{\prime}\) is \(\check{V}_{I}\)-invariant, where \(\check{V}_{I}\) acts through its coadjoint action on the first factor. It therefore suffices to show that \((i)\) \(\pi^{\prime}_{2}\) is a \(\check{U}_{I}=\check{U}/\check{V}_{I}\)-torsor and that \((ii)\) \(\alpha^{\prime}\) is a \(\check{V}_{I}\)-torsor.
\((i)\) The morphism \(\pi^{\prime}_{2}\) is a base change of the characteristic polynomial map \(\chi_{I}:\check{\mathfrak{u}}_{I}^{\perp}+\overline{\psi}_{I}\to\check{\mathfrak{c}}_{I}\), which is a \(\check{U}_{I}\)-torsor by Kostant's theorem [32, Theorem 3.2.2] applied to \(\check{\mathfrak{l}}_{I}\).
\((ii)\) Recall that \(\tilde{\mathfrak{v}}^{\perp}_{I}\) denotes the space of linear forms on \(\tilde{\mathfrak{g}}\) vanishing on \(\tilde{\mathfrak{v}}_{I}=\mathrm{Lie}(\tilde{V}_{I})\). We have a Cartesian diagram
\[\begin{CD}(\tilde{\mathfrak{u}}^{\perp}+\psi_{I})\times_{\tilde{\mathfrak{c} }_{I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}}@>{\alpha^{\prime}}>{}>(\tilde{ \mathfrak{u}}^{\perp}_{I}+\overline{\psi}_{I})\times_{\tilde{\mathfrak{c}}_{I }}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\\ @V{}V{}V@V{}V{}V\\ \tilde{\mathfrak{v}}^{\perp}_{I}\times_{\tilde{\mathfrak{c}}_{I}}\tilde{ \mathfrak{c}}^{-}_{I-\mathrm{gen}}@>{}>{r_{I}}>\tilde{\mathfrak{l}}^{*}_{I} \times_{\tilde{\mathfrak{c}}_{I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}}. \end{CD}\]
Therefore, it suffices to show that the bottom arrow is a \(\tilde{V}_{I}\)-torsor. The \(\tilde{G}\)-invariant isomorphisms \(\tilde{\mathfrak{g}}^{*}\simeq\tilde{\mathfrak{g}}\) and \(\tilde{\mathfrak{l}}^{*}_{I}\simeq\tilde{\mathfrak{l}}_{I}\) induce a commutative diagram
\[\begin{CD}\tilde{\mathfrak{v}}^{\perp}_{I}\times_{\tilde{\mathfrak{c}}_{I}} \tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}}@>{r_{I}}>{}>\tilde{\mathfrak{l}}^{*}_ {I}\times_{\tilde{\mathfrak{c}}_{I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}} \\ @V{}V{}V\sim @V{}V{}V\\ \tilde{\mathfrak{p}}_{I}\times_{\tilde{\mathfrak{c}}_{I}}\tilde{\mathfrak{c}}^{ -}_{I-\mathrm{gen}}@>{}>{p_{I}}>\tilde{\mathfrak{l}}_{I}\times_{\tilde{ \mathfrak{c}}_{I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}},\end{CD}\]
where \(p_{I}\) is induced by the projection \(\tilde{\mathfrak{p}}_{I}\twoheadrightarrow\tilde{\mathfrak{l}}_{I}\). To show that \(p_{I}\) is a \(\tilde{V}_{I}\)-torsor, we will show that the action map
\[a:(\tilde{V}_{I}\times\tilde{\mathfrak{l}}_{I})\times_{\tilde{\mathfrak{c}}_{ I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\to\tilde{\mathfrak{p}}_{I} \times_{\tilde{\mathfrak{c}}_{I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\]
is an isomorphism. We claim that \(a\) is étale. Since \(a\) is a morphism of smooth varieties, it suffices to check that for any \((v,x)\in(\check{V}_{I}\times\check{\mathfrak{l}}_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\),
\[da_{(v,x)}:T_{(v,x)}((\tilde{V}_{I}\times\tilde{\mathfrak{l}}_{I})\times_{ \tilde{\mathfrak{c}}_{I}}\tilde{\mathfrak{c}}^{-}_{I-\mathrm{gen}})\to T _{v\cdot x}(\tilde{\mathfrak{p}}_{I}\times_{\tilde{\mathfrak{c}}_{I}}\tilde{ \mathfrak{c}}^{-}_{I-\mathrm{gen}})\]
is an isomorphism. Since \(a\) is \(\tilde{V}_{I}\)-equivariant, it suffices to check the claim when \(v=1\). At such a point, the differential
\[da_{(1,x)}:\tilde{\mathfrak{v}}_{I}\oplus\tilde{\mathfrak{l}}_{I}\to\tilde{ \mathfrak{p}}_{I}\]
is given by
\[da_{(1,x)}(\xi,\zeta)=[\xi,x]+\zeta.\]
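Indeed (unwinding the definition of \(a\)): writing \(a(v,x)=\operatorname{Ad}_{v}(x)\) and expanding along the curve \(t\mapsto(\exp(t\xi),x+t\zeta)\), we find
\[\operatorname{Ad}_{\exp(t\xi)}(x+t\zeta)=x+t([\xi,x]+\zeta)+O(t^{2}),\]
which recovers the stated formula for \(da_{(1,x)}\).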
Since the domain and codomain of \(da_{(1,x)}\) have the same dimension, it suffices to show surjectivity. The image of \(da_{(1,x)}\) is \(\operatorname{ad}_{x}(\tilde{\mathfrak{v}}_{I})+\tilde{\mathfrak{l}}_{I}\subseteq \tilde{\mathfrak{p}}_{I}\). Since \(\tilde{\mathfrak{p}}_{I}=\tilde{\mathfrak{v}}_{I}+\tilde{\mathfrak{l}}_{I}\), it suffices to show that
\[\operatorname{ad}_{x}:\tilde{\mathfrak{v}}_{I}\to\tilde{\mathfrak{v}}_{I}\]
is surjective. It therefore suffices to demonstrate the injectivity of \(\operatorname{ad}_{x}\). Let \(\xi\in\check{\mathfrak{v}}_{I}\) and suppose that \([\xi,x]=0\). Let \(x=x_{n}+x_{ss}\) denote the Jordan decomposition of \(x\). We have \([\xi,x_{ss}]=[\xi,x_{n}]=0\). Since \(x\in\check{\mathfrak{l}}_{I}\), it follows that \(x_{ss},x_{n}\in\check{\mathfrak{l}}_{I}\). Note that \(x_{ss}\) has the same image in \(\check{\mathfrak{c}}_{I}\) as \(x\), so it follows that \(x_{ss}\in\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) as well. Since \(x_{ss}\) is semisimple, \(\operatorname{ad}_{x_{ss}}\) is a semisimple endomorphism of \(\check{\mathfrak{v}}_{I}\). Furthermore, the fact that \(x_{ss}\in\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) implies that \(x_{ss}\) acts on \(\check{\mathfrak{v}}_{I}\) with non-zero eigenvalues (the weights of \(\check{\mathfrak{v}}_{I}\) are contained in \(\check{\Phi}\setminus\check{\Phi}_{I}\)). Hence, the centralizer of \(x_{ss}\) in \(\check{\mathfrak{v}}_{I}\) is trivial (it is the zero eigenspace of \(\operatorname{ad}_{x_{ss}}\)), and it follows that \(\xi=0\).
Next, we assert that the action map \(a\) is surjective. We must show that any element of \(\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) is conjugate under \(\check{V}_{I}\) to an element of \(\check{\mathfrak{l}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\). Since \(\check{\mathfrak{l}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) is \(\check{L}_{I}\)-stable and \(\check{P}_{I}=\check{V}_{I}\rtimes\check{L}_{I}\), it suffices to show that any element of \(\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) is conjugate under \(\check{P}_{I}\) to an element of \(\check{\mathfrak{l}}_{I}\). Let \(x\in\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) and let \(x=x_{n}+x_{ss}\) denote the Jordan decomposition. Note that \(x_{ss}\in\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\). After conjugating by \(\check{P}_{I}\), we can assume that \(x_{ss}\in\check{\mathfrak{t}}\), and in particular \(x_{ss}\in\check{\mathfrak{l}}_{I}\). The zero eigenspace of \(\operatorname{ad}_{x_{ss}}\) on \(\check{\mathfrak{g}}\) is \(\check{\mathfrak{t}}\oplus\bigoplus_{\alpha(x_{ss})=0}\check{\mathfrak{g}}_{\alpha}\). Since \(\alpha(x_{ss})\neq 0\) for every \(\alpha\not\in\check{\Phi}_{I}\), we deduce that the centralizer of \(x_{ss}\) in \(\check{\mathfrak{g}}\) is contained in \(\check{\mathfrak{t}}\oplus\bigoplus_{\alpha\in\check{\Phi}_{I}}\check{\mathfrak{g}}_{\alpha}=\check{\mathfrak{l}}_{I}\). Since \([x_{n},x_{ss}]=0\), we conclude that \(x_{n}\in\check{\mathfrak{l}}_{I}\). Therefore, \(x=x_{ss}+x_{n}\in\check{\mathfrak{l}}_{I}\) and we are done.
Therefore, \(a\) is an étale cover. To show that \(a\) is an isomorphism, it suffices to show that it has connected (closed) fibers. Let \(x\in\check{\mathfrak{p}}_{I}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\). The fiber \(a^{-1}(x)\) is equipped with a natural projection \(q_{x}:a^{-1}(x)\to\check{\mathfrak{l}}_{I}\). Since \(\check{\mathfrak{l}}_{I}\) is connected, it suffices to show that the fiber of \(q_{x}\) over any element \(y\in\check{\mathfrak{l}}_{I}\) is connected. Let \(v\in\check{V}_{I}\) denote an element satisfying \(a(v,y)=x\) (which exists by the surjectivity of \(a\)). We have \(q_{x}^{-1}(y)\simeq v\,\mathrm{Stab}_{\check{V}_{I}}(y)\). Since \(\mathrm{Stab}_{\check{V}_{I}}(y)\) is a closed subgroup of \(\check{V}_{I}\), it is unipotent, therefore connected, as needed.
**Construction 3.4.5**.: In light of Lemma 3.4.3, it is natural to introduce a "partial Kostant section"
\[\sigma_{I}:\check{\mathfrak{c}}_{I}\to\check{\mathfrak{g}}^{*}\]
which trivializes the torsor \(\pi_{2}\) of (3.4.4). More precisely, we want \(\sigma_{I}\) to factor through \(\check{\mathfrak{u}}^{\perp}+\psi_{I}\) and for its pullback
\[\sigma_{I}\times_{\check{\mathfrak{c}}_{I}}\tilde{\mathfrak{c}}_{I- \operatorname{gen}}^{-}:\tilde{\mathfrak{c}}_{I-\operatorname{gen}}^{-} \to(\check{\mathfrak{u}}^{\perp}+\psi_{I})\times_{\check{\mathfrak{c}}_{I}} \tilde{\mathfrak{c}}_{I-\operatorname{gen}}^{-}\]
to be a section to \(\pi_{2}\). Let \(\kappa_{I}:\check{\mathfrak{c}}_{I}\to\tilde{\mathfrak{l}}_{I}^{*}\) denote the Kostant section associated to \(\tilde{\mathfrak{l}}_{I}\). We define \(\sigma_{I}\) to be the composition of \(\kappa_{I}\) with the inclusion \(\tilde{\mathfrak{l}}_{I}^{*}\hookrightarrow\check{\mathfrak{g}}^{*}\) given by pulling back a linear form on \(\tilde{\mathfrak{l}}_{I}\) to \(\check{\mathfrak{p}}_{I}\) and extending it by zero on \(\check{\mathfrak{v}}_{I}^{-}\). Note that when \(I=\emptyset\), \(\sigma_{I}:\check{\mathfrak{t}}^{*}\hookrightarrow\check{\mathfrak{g}}^{*}\) is just the
inclusion of \(\check{\mathfrak{t}}^{*}\) into \(\check{\mathfrak{g}}^{*}\) that has already appeared, and when \(I=\Delta\), \(\sigma_{I}=\kappa_{I}=\kappa\) is just the usual Kostant section \(\kappa:\check{\mathfrak{c}}\to\check{\mathfrak{g}}^{*}\).
**Lemma 3.4.6**.: _There is a canonical \(\mathfrak{J}\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\)-equivariant isomorphism of \(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\)-schemes_
\[\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\simeq\check{M}\times_{\check{\mathfrak{g}}^{*},\sigma_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}.\]
Proof.: Let \(p:\check{M}\times_{\check{\mathfrak{g}}^{*}}(\check{\mathfrak{u}}^{\perp}+ \psi_{I})\to\check{M}/\!\!/(\check{U},\psi_{I})\) denote the quotient map. It is a \(\check{U}\)-torsor. Now, consider the commutative diagram
The square on the left is Cartesian by construction. By Lemma 3.4.3, the square on the right is a morphism of \(\check{U}\)-torsors, hence is Cartesian. Therefore, the composite square is Cartesian. It follows that the composition of the top row is an isomorphism
\[\check{M}\times_{\check{\mathfrak{g}}^{*},\sigma_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\xrightarrow{\sim}\check{M}/\!\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}.\]
**Example 3.4.7**.: We can apply Lemma 3.4.6 to the example of \(\check{M}=T^{*}\check{G}\) to obtain a natural isomorphism
\[T^{*}(\check{G}/(\check{U},\psi_{I}))\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\simeq\check{G}\times\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}. \tag{3.4.8}\]
On the other hand, Example 3.3.2 provides an isomorphism
\[T^{*}(\check{G}/(\check{U},\psi))\simeq\check{G}\times\check{\mathfrak{c}}. \tag{3.4.9}\]
Combining (3.4.8) with (3.4.9), we deduce an isomorphism
\[T^{*}(\check{G}/(\check{U},\psi_{I}))\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}} \simeq\check{G}\times\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\] \[\simeq(\check{G}\times\check{\mathfrak{c}})\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\] \[\simeq T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}. \tag{3.4.10}\]
In fact, we can be slightly more precise. We have a canonical map of \(\check{\mathfrak{c}}_{I}\)-varieties
\[T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{ \mathfrak{c}}_{I}\simeq\check{G}\times\check{\mathfrak{c}}_{I}\stackrel{{ \mathrm{id}\times\sigma_{I}}}{{\hookrightarrow}}\check{G}\times(\check{ \mathfrak{u}}^{\perp}+\psi_{I})\twoheadrightarrow(\check{G}\times(\check{ \mathfrak{u}}^{\perp}+\psi_{I}))/\check{U}\simeq T^{*}(\check{G}/(\check{U}, \psi_{I}))\]
which pulls back over the anti-generic locus \(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) to the isomorphism (3.4.10).
In order to generalize Example 3.4.7 to arbitrary \(\check{M}\), we note the following easy analog of Lemma 3.3.6 and Lemma 3.3.7.
**Lemma 3.4.11**.: _There is a canonical \(\mathfrak{J}_{I}\)-equivariant isomorphism of \(\check{\mathfrak{c}}_{I}\)-schemes_
\[\check{M}\!/\!/(\check{U},\psi_{I})\simeq\check{G}\backslash(\check{M}\times_{ \check{\mathfrak{g}}^{*}}T^{*}(\check{G}/(\check{U},\psi_{I}))).\]
_Here, the map \(T^{*}(\check{G}/(\check{U},\psi_{I}))\to\check{\mathfrak{g}}^{*}\) is the moment map for the left \(\check{G}\)-action, and the cotangent bundle \(T^{*}(\check{G}/(\check{U},\psi_{I}))\) is viewed as a \(\check{\mathfrak{c}}_{I}\)-scheme via the natural projection_
\[T^{*}(\check{G}/(\check{U},\psi_{I}))=(\check{G}\times(\check{\mathfrak{u}}^{\perp}+\psi_{I}))/\check{U}\to(\check{\mathfrak{u}}^{\perp}+\psi_{I})/\check{U}\to\check{\mathfrak{c}}_{I}.\]
_Moreover, \(\check{G}\) acts diagonally on the product \(\check{M}\times_{\check{\mathfrak{g}}^{*}}T^{*}(\check{G}/(\check{U},\psi_{I}))\)._
Now we can formulate our general description of a partial Kostant-Whittaker reduction over the anti-generic locus of \(\check{\mathfrak{c}}_{I}\).
**Proposition 3.4.12**.: _There is a canonical \(\mathfrak{J}_{I}\)-equivariant morphism of \(\check{\mathfrak{c}}_{I}\)-schemes_
\[\eta^{-}_{I-\mathrm{gen}}:\check{M}\!/\!/(\check{U},\psi)\times_{\check{ \mathfrak{c}}}\check{\mathfrak{c}}_{I}\to\check{M}\!/\!/(\check{U},\psi_{I})\]
_which pulls back to an isomorphism of \(\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\)-schemes_
\[\eta^{-}_{I-\mathrm{gen}}\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}:\left(\check{M}\!/\!/(\check{U},\psi)\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I}\right)\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\simeq\check{M}\!/\!/(\check{U},\psi_{I})\times_{\check{\mathfrak{c}}_{I}}\check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}.\]
Proof.: The construction of \(\eta^{-}_{I-\mathrm{gen}}\) is easily adapted from that of \(\eta_{I-\mathrm{gen}}\) in the proof of Proposition 3.3.9, using Lemma 3.4.6 in place of Lemma 3.3.4 and Lemma 3.4.11 in place of Lemma 3.3.6.
We omit the proof of the following straightforward lemma.
**Lemma 3.4.13**.: _Suppose that \(\check{M}\) is equipped with an action of an algebraic group \(\check{H}\) commuting with the action of \(\check{G}\). Then, the morphism \(\eta^{-}_{I-\mathrm{gen}}\) of Proposition 3.4.12 is \(\check{H}\)-equivariant._
### Affine closure
We will now apply Proposition 3.3.9 and Proposition 3.4.12 to obtain a description of the ring \(\mathcal{O}(\check{M}\!/\!/(\check{U},\psi_{I}))\) in terms of the Hamiltonian reduction \(\mathcal{O}(\check{M}\!/\!/\check{U})\) and the non-degenerate Kostant-Whittaker reduction \(\mathcal{O}(\check{M}\!/\!/(\check{U},\psi))\). Consider the union \(\check{\mathfrak{c}}^{\circ}_{I}:=\check{\mathfrak{c}}_{I-\mathrm{gen}}\cup \check{\mathfrak{c}}^{-}_{I-\mathrm{gen}}\) of the \(I\)-generic and \(I\)-anti-generic loci in \(\check{\mathfrak{c}}_{I}\). Let
\[\pi_{I}:\check{M}\!/\!/(\check{U},\psi_{I})\to\check{\mathfrak{c}}_{I}\]
denote the natural projection induced by the \(\check{V}_{I}\)-reduced moment map \(\check{M}\!/\!/\check{V}_{I}\to\check{\mathfrak{l}}^{*}_{I}\!/\!/\check{L}_{I}=\check{\mathfrak{c}}_{I}\).
**Lemma 3.5.1**.: _The restriction map_
\[\mathcal{O}(\check{M}\!/\!/(\check{U},\psi_{I}))\to\mathcal{O}(\pi_{I}^{-1}( \check{\mathfrak{c}}^{\circ}_{I})) \tag{3.5.2}\]
_restricting regular functions on \(\check{M}\!/\!/(\check{U},\psi_{I})\) to the open subscheme \(\pi_{I}^{-1}(\check{\mathfrak{c}}^{\circ}_{I})\subseteq\check{M}\!/\!/(\check {U},\psi_{I})\) is an isomorphism._
Proof.: We have assumed that \(\check{M}\) is a free \(\check{G}\)-variety; thus, the reduction \(\check{M}/\!\!/\check{V}_{I}\) is a free \(\check{L}_{I}\)-variety. Therefore, the moment map \(\mu_{I}:\check{M}/\!\!/\check{V}_{I}\to\check{\mathfrak{l}}_{I}^{*}\) is smooth. By Lemma 3.2.4, the projection \(\pi_{I}\) is a base change of \(\mu_{I}\), hence is smooth. In particular, \(\pi_{I}\) is flat.
The closed subscheme \(\check{\mathfrak{c}}_{I}\setminus\check{\mathfrak{c}}_{I}^{\circ}\) has codimension 2 in \(\check{\mathfrak{c}}_{I}\). Indeed,
\[\check{\mathfrak{c}}_{I}\setminus\check{\mathfrak{c}}_{I}^{\circ}=\bigcup_{ \begin{subarray}{c}\alpha\in I\\ \beta\not\in I\end{subarray}}\{\alpha=0\}\cap\{\beta=0\}.\]
Since \(\alpha\neq\beta\) in any of the terms of this union, we conclude that the linear subspace \(\{\alpha=0\}\cap\{\beta=0\}=\{\alpha,\beta=0\}\) has codimension 2 in \(\check{\mathfrak{c}}_{I}\).
It then follows from the flatness of \(\pi_{I}\) that the preimage \(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I}\setminus\check{\mathfrak{c}}_{I}^{\circ})\) of the closed subscheme \(\check{\mathfrak{c}}_{I}\setminus\check{\mathfrak{c}}_{I}^{\circ}\) has codimension \(\geq 2\) in \(\check{M}/\!\!/(\check{U},\psi_{I})\). Moreover, \(\check{M}/\!\!/(\check{U},\psi_{I})\) is smooth over the affine space \(\check{\mathfrak{c}}_{I}\) and is therefore regular (in particular, normal). By Hartogs' principle (say, in the form of [33, Tag 0BCS]), we conclude that the restriction map (3.5.2) is an isomorphism.
**Construction 3.5.3**.: Since \(\check{\mathfrak{c}}_{I}^{\circ}=\check{\mathfrak{c}}_{I-\mathrm{gen}}\cup \check{\mathfrak{c}}_{I-\mathrm{gen}}^{-}\), Lemma 3.5.1 yields an isomorphism
\[\mathcal{O}(\check{M}/\!\!/(\check{U},\psi_{I}))\xrightarrow{\sim}\mathcal{O} (\pi_{I}^{-1}(\check{\mathfrak{c}}_{I-\mathrm{gen}}))\times_{\mathcal{O}(\pi _{I}^{-1}(\check{\mathfrak{c}}_{I-\mathrm{gen}}^{-}\cap\check{\mathfrak{c}}_ {I-\mathrm{gen}}))}\mathcal{O}(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I-\mathrm{ gen}}^{-})).\]
Recall that \(f_{I}\in\mathcal{O}(\check{\mathfrak{t}}^{*})\) (respectively, \(g_{I}\in\mathcal{O}(\check{\mathfrak{t}}^{*})\)) denotes the product of the roots in \(\check{\Phi}_{I}\) (respectively, in \(\check{\Phi}\setminus\check{\Phi}_{I}\)). Since \(f_{I}\) and \(g_{I}\) are \(W_{I}\)-invariant, they descend to functions on \(\check{\mathfrak{c}}_{I}\). Moreover, the non-vanishing locus \(D(f_{I})\) (resp. \(D(g_{I})\)) in \(\check{\mathfrak{c}}_{I}\) is exactly \(\check{\mathfrak{c}}_{I-\mathrm{gen}}\) (resp. \(\check{\mathfrak{c}}_{I-\mathrm{gen}}^{-}\)). Since \(\check{\mathfrak{c}}_{I}\) is affine, restriction therefore induces isomorphisms
\[\mathcal{O}(\check{M}/\!\!/(\check{U},\psi_{I}))_{f_{I}} \xrightarrow{\sim}\mathcal{O}(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I- \mathrm{gen}}))\] \[\mathcal{O}(\check{M}/\!\!/(\check{U},\psi_{I}))_{g_{I}} \xrightarrow{\sim}\mathcal{O}(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I- \mathrm{gen}}^{-}))\] \[\mathcal{O}(\check{M}/\!\!/(\check{U},\psi_{I}))_{f_{I}g_{I}} \xrightarrow{\sim}\mathcal{O}(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I-\mathrm{ gen}}\cap\check{\mathfrak{c}}_{I-\mathrm{gen}}^{-})).\]
Now we can bring in Corollary 3.3.10 and Proposition 3.4.12 to obtain isomorphisms
\[\eta_{I-\mathrm{gen}}^{*}:\mathcal{O}(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I-\mathrm{gen}}))\simeq\mathcal{O}(\check{M}/\!\!/\check{U}\times_{\check{\mathfrak{t}}^{*}}\check{\mathfrak{t}}_{I-\mathrm{gen}}^{*})\simeq\mathcal{O}(\check{M}/\!\!/\check{U})_{f_{I}}\]
\[(\eta_{I-\mathrm{gen}}^{-})^{*}:\mathcal{O}(\pi_{I}^{-1}(\check{\mathfrak{c}}_{I-\mathrm{gen}}^{-}))\simeq\mathcal{O}(\check{M}/\!\!/(\check{U},\psi)\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I-\mathrm{gen}}^{-})\simeq\mathcal{O}(\check{M}/\!\!/(\check{U},\psi))_{g_{I}}\]
In summary, we have a canonical isomorphism
\[\eta_{I}:\mathcal{O}(\check{M}/\!\!/(\check{U},\psi_{I}))\simeq\mathcal{O}( \check{M}/\!\!/\check{U})_{f_{I}}\times_{\mathcal{O}(\check{M}/\!\!/(\check{U}, \psi))_{f_{I}g_{I}}}\mathcal{O}(\check{M}/\!\!/(\check{U},\psi))_{g_{I}}.\]
Here, the map \(\mathcal{O}(\check{M}/\!\!/\check{U})_{f_{I}}\to\mathcal{O}(\check{M}/\!\!/( \check{U},\psi))_{f_{I}g_{I}}\) is given by the natural inclusion
\[\mathcal{O}(\check{M}/\!\!/\check{U})_{f_{I}}\hookrightarrow\mathcal{O}(\check{ M}/\!\!/\check{U})_{f_{I}g_{I}}\]
followed by the identification
\[\mathcal{O}(\check{M}/\!\!/\check{U})_{f_{I}g_{I}}\simeq\mathcal{O}(\check{M}/ \!\!/(\check{U},\psi))_{f_{I}g_{I}}\]
of Proposition 3.3.9 (the "absolute" version of Corollary 3.3.10 in which _all_ roots are inverted).
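As a sanity check (not needed in the sequel, and assuming the standard conventions \(\psi_{\Delta}=\psi\) and \(\psi_{\emptyset}=0\) for the partial characters), the isomorphism \(\eta_{I}\) degenerates as expected in the two extreme cases. For \(I=\Delta\), the element \(g_{\Delta}\) is the empty product \(1\), the map \(\mathcal{O}(\check{M}/\!\!/\check{U})_{f_{\Delta}}\to\mathcal{O}(\check{M}/\!\!/(\check{U},\psi))_{f_{\Delta}}\) is the isomorphism of Proposition 3.3.9, and the fiber product collapses to \(\mathcal{O}(\check{M}/\!\!/(\check{U},\psi))\). For \(I=\emptyset\), we have \(f_{\emptyset}=1\) and the fiber product collapses to \(\mathcal{O}(\check{M}/\!\!/\check{U})\), in agreement with \(\check{M}/\!\!/(\check{U},\psi_{\emptyset})=\check{M}/\!\!/\check{U}\).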
## 4. Comparison
### Parabolic restriction
We will now complete our discussion of the parabolic restriction functors \(\mathrm{Res}^{\natural}_{I}:D_{G(\mathcal{O})}(\mathrm{Gr}_{G})\to D_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I})\) that was initiated in §§2.4. As we recalled in Remark 2.4.3, the functor \(\mathrm{Res}^{\natural}_{I}\) is \(t\)-exact and therefore induces an exact functor
\[\mathrm{Res}^{\natural}_{I}:\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G}) \to\mathcal{P}_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{I})\]
on abelian categories of perverse sheaves. Moreover, we recalled in Construction 2.4.11 the construction of a monoidal structure on \(\mathrm{Res}^{\natural}_{I}\).
On the other hand, the inclusion \(\check{L}_{I}\subseteq\check{G}\) of the canonical dual Levi subgroup induces an exact restriction functor
\[\mathrm{Res}^{\check{G}}_{\check{L}_{I}}:\mathrm{Rep}(\check{G})\to\mathrm{Rep }(\check{L}_{I}).\]
Beilinson and Drinfeld [6, §5.3.29] have established the following compatibility between \(\mathrm{Res}^{\natural}_{I}\) and \(\mathrm{Res}^{\check{G}}_{\check{L}_{I}}\) under the geometric Satake equivalence.
**Theorem 4.1.1** (Beilinson-Drinfeld).: _Let_
\[\mathbb{S}_{G}\,:\mathcal{P}_{G(\mathcal{O})}(\mathrm{Gr}_{G}) \to\mathrm{Rep}(\check{G})\] \[\mathbb{S}_{L_{I}}:\mathcal{P}_{L_{I}(\mathcal{O})}(\mathrm{Gr}_{ I})\to\mathrm{Rep}(\check{L}_{I})\]
_denote the respective geometric Satake transforms of Mirkovic-Vilonen [28]. Then, the following diagram of abelian categories and exact tensor functors commutes up to a (canonical) natural isomorphism of tensor functors:_
We refer the reader to [5, Proposition 15.3] for a proof.
**Remark 4.1.2**.: It is our point of view that Theorem 4.1.1 is the (Tannakian) _definition_ of the Langlands dual Levi subgroup \(\check{L}_{I}\subseteq\check{G}\).
### The Theorem of Ginzburg-Riche
It is now time to bring in our main tool, the results of Ginzburg and Riche in [20]. Recall from (2.6.9) that we have produced an injective graded \(R_{\emptyset}\)-module homomorphism
\[\xi^{\natural}_{\emptyset}:H^{*}_{T(\mathcal{O})}(\mathrm{Gr}_{T},i^{!,\natural }_{\emptyset}\mathcal{F}_{\mathrm{reg}})\to H^{*}_{T(\mathcal{O})}(\mathrm{Gr }_{T},\mathrm{Res}^{\natural}_{\emptyset}(\mathcal{F}_{\mathrm{reg}})).\]
It is a \(\check{G}\)-equivariant (obvious), \(\check{T}\)-equivariant (Remark 2.6.12) ring homomorphism (Remark 2.6.8). Moreover, we can invoke Theorem 4.1.1 (in the case \(L=T\), which is really
a direct consequence of the Mirkovic-Vilonen proof [28] of geometric Satake) in combination with [34, Lemma 2.2] to obtain a canonical \(\check{G}\)-equivariant \(\check{T}\)-equivariant \(R_{\emptyset}\)-algebra isomorphism
\[H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},\operatorname{Res}^{\natural}_{\emptyset}(\mathcal{F}_{\operatorname{reg}}))\simeq H^{*}_{T(\mathcal{O})}\left(\operatorname{Gr}_{T},\mathbb{S}^{-1}_{\check{T}}\left(\operatorname{Res}^{\check{G}}_{\check{T}}\left(\mathcal{O}(\check{G})\right)\right)\right)\simeq\mathcal{O}(\check{G})\otimes R_{\emptyset}. \tag{4.2.1}\]
Via this natural identification, we may regard \(\xi^{\natural}_{\emptyset}\) as a homomorphism
\[\xi^{\natural}_{\emptyset}:H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!,\natural}_{\emptyset}\mathcal{F}_{\operatorname{reg}})\to\mathcal{O}(\check{G})\otimes R_{\emptyset}.\]
On the spectral side, we consider the algebra of functions \(\mathcal{O}(T^{*}(\check{G}/\check{U}))\) on the cotangent bundle of the basic affine space. It is a \(\check{G}\)-module via the natural left action of \(\check{G}\) on \(T^{*}(\check{G}/\check{U})\) and a \(\check{T}\)-module via the natural right action of \(\check{T}\simeq\check{B}/\check{U}\) on \(T^{*}(\check{G}/\check{U})\). We have a \(\check{G}\times\check{T}\)-equivariant morphism
\[\check{G}\times\check{\mathfrak{t}}^{*}\hookrightarrow\check{G}\times(\check{ \mathfrak{g}}/\check{\mathfrak{u}})^{*}\twoheadrightarrow\left(\check{G} \times(\check{\mathfrak{g}}/\check{\mathfrak{u}})^{*}\right)/\check{U}=T^{*}( \check{G}/\check{U})\]
Here, the embedding \(\check{\mathfrak{t}}^{*}\hookrightarrow\left(\check{\mathfrak{g}}/\check{ \mathfrak{u}}\right)^{*}=\check{\mathfrak{u}}^{\perp}\subseteq\check{ \mathfrak{g}}^{*}\) takes a linear form \(\omega\in\check{\mathfrak{t}}^{*}\) on \(\check{\mathfrak{t}}\) to the unique linear form \(\tilde{\omega}\in\check{\mathfrak{g}}^{*}\) on \(\check{\mathfrak{g}}\) extending \(\omega\) (along the embedding \(\check{\mathfrak{t}}\hookrightarrow\check{\mathfrak{g}}\)) and vanishing on both \(\check{\mathfrak{u}}\) and \(\check{\mathfrak{u}}^{-}\). Note that this embedding appeared in Construction 3.4.5 as the example of our (coadjoint) partial Kostant section \(\sigma_{I}:\check{\mathfrak{c}}_{I}\hookrightarrow\check{\mathfrak{g}}^{*}\) in the case \(I=\emptyset\). We obtain an induced \(\check{G}\times\check{T}\)-equivariant algebra homomorphism on the coordinate rings
\[\mathcal{O}(T^{*}(\check{G}/\check{U}))\to\mathcal{O}(\check{G})\otimes \mathcal{O}(\check{\mathfrak{t}}^{*})=\mathcal{O}(\check{G})\otimes R_{ \emptyset}. \tag{4.2.2}\]
With this notation in place, we can recall the "semi-classical limit" of the main theorem of Ginzburg and Riche as stated in [20, Theorem 2.3.1] in a form convenient for our purposes. The notation \(\Upsilon_{\emptyset}\) that we use in the statement is intended to emphasize that their isomorphism will be recovered in the case \(I=\emptyset\) of our Theorem 1.5.2.
**Theorem 4.2.3** (Ginzburg, Riche [20, Theorem 2.3.1]).: _There exists a unique \(\check{G}\times\check{T}\)-equivariant isomorphism of graded \(R_{\emptyset}\)-algebras_
\[\Upsilon_{\emptyset}:H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!,\natural}_{\emptyset}\mathcal{F}_{\operatorname{reg}})\simeq\mathcal{O}(T^{*}(\check{G}/\check{U}))\]
_such that the following diagram of \(R_{\emptyset}\)-algebras commutes:_
(4.2.4)
**Remark 4.2.5**.: The fact that \(\Upsilon_{\emptyset}\) is an algebra homomorphism is an immediate consequence of the commutativity of (4.2.4). Indeed, the other three maps \(\xi^{\natural}_{\emptyset}\), (4.2.1), and (4.2.2) appearing in the diagram are algebra homomorphisms, and the horizontal maps \(\xi^{\natural}_{\emptyset}\) and
(4.2.2) are injective (if it is not clear that (4.2.2) is injective, simply note that (4.2.1) is an isomorphism and that \(\xi_{\emptyset}^{\natural}\) is injective by Proposition 2.6.7).
### The Theorem of Yun-Zhu
We will now conclude our discussion of the results of Yun and Zhu from [34] initiated in §§2.5. Recall that in Remark 2.5.9 we defined
\[\mathfrak{A}_{I}:=\mathfrak{A}_{L_{I}}:=\operatorname{Spec}H_{*}^{L_{I}( \mathcal{O})}(\operatorname{Gr}_{I},\mathbb{C})\to\operatorname{Spec}R_{I}= \mathfrak{c}_{I}\]
to be the \(\mathfrak{c}_{I}\)-group scheme obtained from the natural Hopf \(R_{I}\)-algebra structure of Construction 2.5.1 on the \(R_{I}\)-module \(H_{*}^{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I},\mathbb{C})\). In what follows, we will use the canonical identification
\[\mathfrak{c}_{I}=\mathfrak{t}/\!/W_{I}\simeq\check{\mathfrak{t}}^{*}/\!/W_{I}=\check{\mathfrak{c}}_{I}\]
without further comment. For any \(\mathcal{F}\in\mathcal{P}_{L_{I}(\mathcal{O})}(\operatorname{Gr}_{I})\), we recalled in Construction 2.5.3 the natural \(\mathfrak{A}_{I}\)-module structure on the \(R_{I}\)-module \(H_{L_{I}(\mathcal{O})}^{*}(\operatorname{Gr}_{I},\mathcal{F})\). We can take \(\mathcal{F}\) to be the regular sheaf on \(\operatorname{Gr}_{I}\), that is, the Hopf algebra object \(\mathcal{F}_{I,\operatorname{reg}}\in\mathcal{P}_{L_{I}(\mathcal{O})}( \operatorname{Gr}_{I})\) corresponding to the regular representation \(\mathcal{O}(\check{L}_{I})\in\operatorname{Rep}(\check{L}_{I})\) under the geometric Satake equivalence. Moreover, [34, Lemma 2.4] yields a natural isomorphism
\[H_{L_{I}(\mathcal{O})}^{*}(\operatorname{Gr}_{I},\mathcal{F}_{I,\operatorname {reg}})\simeq H_{T(\mathcal{O})}^{*}(\operatorname{Gr}_{I},\mathcal{F}_{I, \operatorname{reg}})^{W_{I}}\simeq(\mathcal{O}(\check{L}_{I})\otimes R_{ \emptyset})^{W_{I}}\simeq\mathcal{O}(\check{L}_{I})\otimes R_{I}\]
of Hopf \(R_{I}\)-algebras. Hence, we obtain a natural action of \(\mathfrak{A}_{I}\) on \(\mathcal{O}(\check{L}_{I})\otimes R_{I}\) through Hopf \(R_{I}=\mathcal{O}(\check{\mathfrak{c}}_{I})\)-algebra homomorphisms. In other words, we obtain an action of \(\mathfrak{A}_{I}\) on the \(\check{\mathfrak{c}}_{I}\)-scheme \(\check{L}_{I}\times\check{\mathfrak{c}}_{I}\) through group homomorphisms. Acting on the identity section of \(\check{L}_{I}\times\check{\mathfrak{c}}_{I}\) defines a homomorphism of \(\check{\mathfrak{c}}_{I}\)-group schemes
\[\mathfrak{A}_{I}\to\check{L}_{I}\times\check{\mathfrak{c}}_{I}.\]
On the other hand, we have the \(\check{\mathfrak{c}}_{I}\)-group scheme of regular centralizers \(\mathfrak{J}_{I}\to\check{\mathfrak{c}}_{I}\), recalled in §§3.2. By its very construction, it is equipped with a canonical group scheme embedding \(\mathfrak{J}_{I}\hookrightarrow\check{L}_{I}\times\check{\mathfrak{c}}_{I}\).
**Theorem 4.3.1** (Yun, Zhu [34]).: _Assume that \(G\) has almost simple derived subgroup. Then, the map \(\mathfrak{A}_{I}\to\check{L}_{I}\times\check{\mathfrak{c}}_{I}\) induces a canonical isomorphism of \(\check{\mathfrak{c}}_{I}\)-group schemes \(\mathfrak{A}_{I}\simeq\mathfrak{J}_{I}\)._
**Remark 4.3.2**.: Note that the hypothesis that \(G\) has almost simple derived subgroup passes to all Levi subgroups of \(G\), which justifies our application of [34] to the connected reductive group \(L_{I}\).
**Remark 4.3.3**.: Assuming that \(G\) has almost simple derived subgroup, we can combine the natural isomorphism of Remark 2.5.4 with Theorem 4.3.1 (in the case \(L=G\)) to obtain an isomorphism of \(\check{\mathfrak{c}}_{I}\)-group schemes
\[\operatorname{Spec}H_{*}^{L(\mathcal{O})}(\operatorname{Gr}_{G},\mathbb{C})\simeq\mathfrak{A}\times_{\xi}\check{\mathfrak{c}}_{I}\simeq\mathfrak{J}\times_{\xi}\check{\mathfrak{c}}_{I}.\]
We can now apply Remark 2.5.9 to construct a homomorphism of \(\check{\mathfrak{c}}_{I}\)-group schemes
\[\rho_{I}^{\operatorname{geom}}:\mathfrak{J}\times_{\xi}\check{\mathfrak{c}}_{I}\simeq\mathfrak{A}\times_{\xi}\check{\mathfrak{c}}_{I}\overset{\rho_{L_{I}}^{G}}{\longrightarrow}\mathfrak{A}_{I}\simeq\mathfrak{J}_{I}. \tag{4.3.4}\]
On the other hand, [32, Theorem 3.4.2] provides canonical identifications of schemes of (commutative) Lie algebras over the bases \(\check{\mathfrak{c}}\) and \(\check{\mathfrak{c}}_{I}\), respectively:
\[\mathbb{L}\mathrm{Lie}(\mathfrak{J}/\check{\mathfrak{c}})\simeq T^{*}(\check{\mathfrak{c}}),\qquad\mathbb{L}\mathrm{Lie}(\mathfrak{J}_{I}/\check{\mathfrak{c}}_{I})\simeq T^{*}(\check{\mathfrak{c}}_{I}). \tag{4.3.5}\]
Here, \(T^{*}(\check{\mathfrak{c}})\) and \(T^{*}(\check{\mathfrak{c}}_{I})\) denote the respective cotangent bundles. Meanwhile, we may differentiate the map \(\pi_{I}:\check{\mathfrak{c}}_{I}\to\check{\mathfrak{c}}\) to obtain a morphism of schemes of (commutative) Lie algebras over \(\check{\mathfrak{c}}_{I}\)
\[\pi_{I}^{*}:T^{*}(\check{\mathfrak{c}})\times_{\check{\mathfrak{c}}}\check{ \mathfrak{c}}_{I}\to T^{*}(\check{\mathfrak{c}}_{I}).\]
Since the following proposition will not be used in what follows, we omit its (straightforward) proof.
**Proposition 4.3.6**.: _The following diagram of \(\check{\mathfrak{c}}_{I}\)-schemes commutes._
### Proof of Theorem 1.5.2
We are now ready to tie in the results of §2 and §3 to give our proof of the main theorem, Theorem 1.5.2. We will state and prove a refined version of it, Theorem 4.4.5 below. The following elementary lemma will be used in the course of the proof.
**Lemma 4.4.1**.: _Let \(M\) be a flat \(R_{L}\)-module. Let \(f_{I},g_{I}\in R_{I}\) denote the elements of (2.6.3) and (2.6.4), respectively. Then, the restriction map_
\[M\to M_{f_{I}}\times_{M_{f_{I}g_{I}}}M_{g_{I}}\]
_is an isomorphism._
Proof.: By Lazard's theorem [33, Tag 058G], we may express \(M\) as a filtered colimit of free modules of finite rank. The exactness of localization then allows us to assume that \(M\) itself is free of finite rank. Obviously, it is then sufficient to treat the case in which \(M\) is free of rank \(1\). The claim is then that the restriction map
\[R_{L}\to(R_{L})_{f_{I}}\times_{(R_{L})_{f_{I}g_{I}}}(R_{L})_{g_{I}}\]
is a bijection. Since \(R_{L}\) is a regular domain (in particular an integrally closed domain), the claim follows from the observation (see the proof of Lemma 3.5.1) that
\[\operatorname{codim}_{\operatorname{Spec}R_{L}}(\operatorname{Spec}R_{L}/(f_{ I})\cap\operatorname{Spec}R_{L}/(g_{I}))\geq 2\]
in conjunction with Hartogs' principle [33, Tag 0BCS].
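For instance, in the simplest case \(R_{L}=\mathbb{C}[x,y]\), \(M=R_{L}\), \(f_{I}=x\), \(g_{I}=y\), the lemma recovers the familiar fact that a regular function on the punctured plane extends to the origin:
\[\mathbb{C}[x,y]\xrightarrow{\sim}\mathbb{C}[x,y]_{x}\times_{\mathbb{C}[x,y]_{xy}}\mathbb{C}[x,y]_{y},\]
since an element of the right-hand side is precisely a regular function on \(D(x)\cup D(y)=\mathbb{A}^{2}\setminus\{0\}\), whose complement has codimension \(2\).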
Before proceeding to the proof of Theorem 4.4.5, we make some preliminary definitions and observations.
**Construction 4.4.2**.: We define the \(\mathbb{Z}\)-grading on the algebra \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) which will correspond under the isomorphism of Theorem 1.5.2 to the cohomological grading on the geometric side. Note that we have an injection
\[\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\simeq(\mathcal{O}(\check{G} )\otimes\mathcal{O}(\check{\mathfrak{u}}^{\perp}+\psi_{I}))^{\check{U}} \subseteq\mathcal{O}(\check{G})\otimes\mathcal{O}(\check{\mathfrak{u}}^{ \perp}+\psi_{I}).\]
We take the \(\mathbb{Z}\)-grading on the \(\check{L}_{I}\)-module \(\mathcal{O}(\check{G})\) given by the eigenvalues of the distinguished torus \(2\check{\rho}_{I}:\mathbb{G}_{m}\to\check{L}_{I}\) (that which induces the cohomological grading under geometric Satake for the group \(\check{L}_{I}\)). Observe that if \(\mathbb{G}_{m}\) acts on \(\check{\mathfrak{g}}^{*}\) through \(2\check{\rho}_{I}\), then the element \(\psi_{I}\) acquires the weight \(+2\) (recall that there is a \(\check{G}\)-equivariant isomorphism \(\check{\mathfrak{g}}^{*}\simeq\check{\mathfrak{g}}\) taking \(\psi_{I}\) to the irregular nilpotent element \(e_{I}=\sum_{\alpha\in I}X_{\alpha}\), which has \(2\check{\rho}_{I}\)-weight \(+2\)). Instead, we consider the \(\mathbb{G}_{m}\)-action on \(\check{\mathfrak{g}}^{*}\) given by \(t\mapsto t^{-2}2\check{\rho}_{I}(t)\); by construction, this action leaves \(\psi_{I}\) fixed and hence preserves the affine slice \(\check{\mathfrak{u}}^{\perp}+\psi_{I}\). We grade \(\mathcal{O}(\check{\mathfrak{u}}^{\perp}+\psi_{I})\) by the eigenvalues of _this_ \(\mathbb{G}_{m}\)-action.
We need to grade the algebra \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I})\) in such a way that the canonical map
\[(\eta_{I-\mathrm{gen}}^{-})^{*}:\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\to\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I})\simeq\mathcal{O}(\check{G})\otimes\mathcal{O}(\check{\mathfrak{c}}_{I})\]
of Proposition 3.4.12 is graded. Of course, we once again grade \(\mathcal{O}(\check{G})\) by the coweight \(2\check{\rho}_{I}:\mathbb{G}_{m}\to\check{L}_{I}\). We must grade \(\mathcal{O}(\check{\mathfrak{c}}_{I})\) in such a way that the algebra map \(\sigma_{I}^{*}:\mathcal{O}(\check{\mathfrak{u}}^{\perp}+\psi_{I})\to\mathcal{ O}(\check{\mathfrak{c}}_{I})\) of Construction 3.4.5 is graded. Dually, we must define an action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{c}}_{I}\) such that the partial Kostant section \(\sigma_{I}:\check{\mathfrak{c}}_{I}\to\check{\mathfrak{u}}^{\perp}+\psi_{I}\) is \(\mathbb{G}_{m}\)-equivariant. Since \(\sigma_{I}\) is the composition of the embedding \(\check{\mathfrak{l}}_{I}^{*}\hookrightarrow\check{\mathfrak{g}}^{*}\) with the standard Kostant section \(\kappa_{I}:\check{\mathfrak{c}}_{I}\to\check{\mathfrak{l}}_{I}^{*}\), the problem is to give an action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{c}}_{I}\) for which the Kostant section \(\kappa_{I}:\check{\mathfrak{c}}_{I}\to\check{\mathfrak{l}}_{I}^{*}\) is \(\mathbb{G}_{m}\)-equivariant, where \(\mathbb{G}_{m}\) acts on \(\check{\mathfrak{l}}_{I}^{*}\) (as above) by \(t\mapsto t^{-2}2\check{\rho}_{I}(t)\). But this action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{l}}_{I}^{*}\) clearly preserves the image of the closed immersion \(\kappa_{I}\), so we obtain the desired (and in fact uniquely determined) action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{c}}_{I}\).
An alternative construction of this action can be obtained as follows. Let \(\mathbb{G}_{m}\) act on \(\check{\mathfrak{l}}_{I}^{*}\) by \(t\mapsto t^{-2}\). This action commutes with the coadjoint action of \(\check{L}_{I}\), hence induces an action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{l}}_{I}^{*}/\!\!/\check{L}_{I}\simeq\check{\mathfrak{c}}_{I}\); this is the same action. The latter description makes it clear that the natural map \(\check{\mathfrak{t}}^{*}\to\check{\mathfrak{c}}_{I}\) is \(\mathbb{G}_{m}\)-equivariant, where \(\mathbb{G}_{m}\) acts on \(\check{\mathfrak{t}}^{*}\) by \(t\mapsto t^{-2}\). Hence, we obtain an inclusion of graded algebras \(\mathcal{O}(\check{\mathfrak{c}}_{I})\hookrightarrow\mathcal{O}(\check{\mathfrak{t}}^{*})\simeq\mathrm{Sym}\,\check{\mathfrak{t}}\), where \(\check{\mathfrak{t}}\) is placed in degree \(+2\). This grading on \(\mathcal{O}(\check{\mathfrak{t}}^{*})\) agrees with the cohomological grading on \(R_{\emptyset}=H_{T}^{*}(\mathrm{pt},\mathbb{C})\) under the standard isomorphism \(R_{\emptyset}\simeq\mathrm{Sym}\,\check{\mathfrak{t}}\). It immediately follows that the standard isomorphism \(R_{I}\simeq\mathcal{O}(\check{\mathfrak{c}}_{I})\) identifies the cohomological grading on \(R_{I}\) with our grading on \(\mathcal{O}(\check{\mathfrak{c}}_{I})\).
This action of \(\mathbb{G}_{m}\) on \(\check{\mathfrak{c}}_{I}\) appears frequently in representation theory; for example, in Ngo's work [30].
**Construction 4.4.3**.: We need an \(L_{I}(\mathcal{O})\)-equivariant version of the isomorphism (4.2.1). More precisely, we define an isomorphism of \(R_{I}\)-algebras
\[\Upsilon:H_{L(\mathcal{O})}^{*}(\mathrm{Gr}_{I},\mathrm{Res}_{I}^{\natural}(\mathcal{F}_{\mathrm{reg}}))\simeq\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I}). \tag{4.4.4}\]
By [34, Lemma 2.2] in conjunction with Theorem 4.1.1, we have a natural isomorphism of graded \(R_{\emptyset}\)-algebras
\[H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}^{\natural}_{I}(\mathcal{F}_{\operatorname{reg}}))\stackrel{{(1)}}{{\simeq}}H^{*}(\operatorname{Gr}_{I},\operatorname{Res}^{\natural}_{I}(\mathcal{F}_{\operatorname{reg}}))\otimes R_{\emptyset}\stackrel{{(2)}}{{\simeq}}\mathcal{O}(\check{G})\otimes R_{\emptyset}\stackrel{{(3)}}{{\simeq}}\mathcal{O}(\check{G}\times\check{\mathfrak{c}}_{I})\otimes_{R_{I}}R_{\emptyset}.\]
We can pass to \(W_{I}\)-invariants (using the equivariant formality of \(\operatorname{Res}^{\natural}_{I}(\mathcal{F}_{\operatorname{reg}})\)) to obtain the desired isomorphism
\[\Upsilon:H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}^{ \natural}_{I}(\mathcal{F}_{\operatorname{reg}}))\simeq\mathcal{O}(\check{G} \times\check{\mathfrak{c}}_{I})\otimes_{R_{I}}R^{W_{I}}_{\emptyset}\simeq \mathcal{O}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}} \check{\mathfrak{c}}_{I}).\]
The last isomorphism is Lemma 3.2.4. By construction, \(\Upsilon\) is an isomorphism of \(R_{I}\)-algebras. It is \(\check{G}\)-equivariant: indeed, (1) is \(\check{G}\)-equivariant by the very definition of the \(\check{G}\)-action on \(H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}^{\natural}_{I} (\mathcal{F}_{\operatorname{reg}}))\), (2) and (3) are obviously \(\check{G}\)-equivariant, and the isomorphism of Lemma 3.2.4 is \(\check{G}\)-equivariant by Lemma 3.2.8. When \(G\) has almost simple derived subgroup, we can use the Yun-Zhu isomorphism \(\mathfrak{A}_{I}\simeq\mathfrak{J}_{I}\) of Theorem 4.3.1 and ask if \(\Upsilon\) is \(\mathfrak{J}_{I}\)-equivariant. That \(\Upsilon\) is indeed \(\mathfrak{J}_{I}\)-equivariant is a direct consequence of the Bezrukavnikov-Finkelberg proof of the derived Satake equivalence [9], which identifies the equivariant cohomology functor with the Kostant-Whittaker reduction of \(\operatorname{Sym}\check{\mathfrak{g}}[-2]\)-modules; we refer the reader to Riche's proof [32, Theorem 5.5.1] of the mixed modular derived Satake equivalence for the details.
Finally, we assert that \(\Upsilon\) is compatible with the \(\mathbb{Z}\)-gradings. The isomorphism (1) respects the cohomological grading by construction (see the proof of [34, Lemma 2.2]). The isomorphism (2) identifies the cohomological grading on \(H^{*}(\operatorname{Gr}_{I},\operatorname{Res}^{\natural}_{I}(\mathcal{F}_{\operatorname{reg}}))\) with the grading of the \(\check{L}_{I}\)-module \(\mathcal{O}(\check{G})\) by the eigenvalues of \(2\check{\rho}_{I}:\mathbb{G}_{m}\to\check{L}_{I}\) (this is a standard property of the geometric Satake equivalence; see for example [28, Theorem 3.6]). Therefore, \(\Upsilon\) identifies the cohomological grading on \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{I},\operatorname{Res}^{\natural}_{I}(\mathcal{F}_{\operatorname{reg}}))\) with the grading on \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I})\simeq\mathcal{O}(\check{G})\otimes\mathcal{O}(\check{\mathfrak{c}}_{I})\) induced by the grading on \(\mathcal{O}(\check{G})\) by the eigenvalues of \(2\check{\rho}_{I}\) and the cohomological grading on \(\mathcal{O}(\check{\mathfrak{c}}_{I})\simeq R_{I}\); that is exactly the grading defined in Construction 4.4.2.
**Theorem 4.4.5**.: _Let \(L=L_{I}\subseteq G\) denote a Levi subgroup of the connected reductive group \(G\) with simple roots \(I\subseteq\Delta\). Let \(i_{I}:\operatorname{Gr}_{I}:=\operatorname{Gr}_{L}\hookrightarrow\operatorname{ Gr}_{G}\) denote the induced closed immersion on affine Grassmannians. Let \(\mathcal{F}_{\operatorname{reg}}\in D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\) denote the regular sheaf of Remark 2.2.1 equipped with its natural structure of \(\check{G}\)-equivariant ring object of \(D_{G(\mathcal{O})}(\operatorname{Gr}_{G})\). Let \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{I},i^{!,\natural}_{I}\mathcal{F}_{ \operatorname{reg}})\) denote the \(L(\mathcal{O})\)-equivariant cohomology of its normalized (see Remark 2.6.10) corestriction, equipped with its natural structure of graded \(R_{I}:=H^{*}_{L(\mathcal{O})}(\operatorname{pt})\)-algebra (arising from the lax monoidal structures of Construction 2.2.5, Construction 2.2.14, and Remark 2.4.22). Moreover, in the case that \(G\) has almost simple derived subgroup, we view \(H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural}_{I}\mathcal{F}_{ \operatorname{reg}})\) as a module over the \(R_{L}\)-group scheme \(\mathfrak{J}_{I}\) via Remark 2.5.3 and the Yun-Zhu isomorphism \(\mathfrak{J}_{I}\simeq\mathfrak{A}_{I}=\operatorname{Spec}H^{L(\mathcal{O})}_{*} (\operatorname{Gr}_{L},\mathbb{C})\) of Theorem 4.3.1._
_On the other hand, consider the ring of regular functions \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\). Here, \(\psi_{I}\in\check{\mathfrak{u}}^{*}\) is the additive character of Notation 3.1.1 and \(T^{*}(\check{G}/(\check{U},\psi_{I})):=T^{*}\check{G}/\!\!/(\check{U},\psi_{I})\) is the Hamiltonian reduction of the Hamiltonian \(\check{U}\)-variety \(T^{*}\check{G}\) at the level \(\psi_{I}\in\check{\mathfrak{u}}^{*}\). View \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) as an \(R_{I}\)-algebra via the canonical map \(T^{*}(\check{G}/(\check{U},\psi_{I}))\to\check{\mathfrak{c}}_{I}\) of (3.1.8), where \(\check{\mathfrak{c}}_{I}=\operatorname{Spec}R_{I}\) is the Chevalley scheme of \(\check{L}_{I}\). Equip \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) with its natural structure of module over the \(R_{I}\)-group scheme \(\mathfrak{J}_{I}\) (see §§3.2). Moreover, regard \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) as a \(\check{G}\)-module via the natural left translation action of \(\check{G}\) on \(T^{*}(\check{G}/(\check{U},\psi_{I}))\). We equip \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))\) with the grading induced by the \(\mathbb{G}_{m}\)-action of Construction 4.4.2._
_Then, there exists a unique \(\check{G}\)-equivariant isomorphism of graded \(R_{I}\)-algebras_
\[\Upsilon_{I}:H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural}_{I} \mathcal{F}_{\operatorname{reg}})\simeq\mathcal{O}(T^{*}(\check{G}/(\check{U },\psi_{I})))\]
_such that the following diagram of \(R_{L}\)-algebras commutes:_
(4.4.6)
_In the case that \(G\) has almost simple derived subgroup, we also have that \(\Upsilon_{I}\) is \(\mathfrak{J}_{I}\)-equivariant._
Proof.: Recall the injective graded \(R_{I}\)-module homomorphism of (2.6.9)
\[\xi_{I}^{\natural}:H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural}_{I}\mathcal{F}_{\operatorname{reg}})\to H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},\operatorname{Res}_{I}^{\natural}(\mathcal{F}_{\operatorname{reg}})).\]
It is a \(\check{G}\)-equivariant (obvious), \(\mathfrak{A}_{I}\)-equivariant (Remark 2.6.12) ring homomorphism (Remark 2.6.8). Moreover, its localization \((\xi_{I}^{\natural})_{g_{I}}\) at the element \(g_{I}\in R_{I}\) of (2.6.4) is an isomorphism of \((R_{I})_{g_{I}}\)-modules (see (2.6.9) and Proposition 2.6.7). Note that the commutativity of (4.4.6) would imply, upon localization at \(g_{I}\in R_{I}\), that the isomorphism \((\Upsilon_{I})_{g_{I}}\) fits into the following commutative diagram of isomorphisms of \((R_{I})_{g_{I}}\)-modules:
\[\begin{CD}H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},i^{!,\natural}_{I}\mathcal{F}_{\operatorname{reg}})_{g_{I}}@>{(\xi_{I}^{\natural})_{g_{I}}}>{}>H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L},\operatorname{Res}_{I}^{\natural}(\mathcal{F}_{\operatorname{reg}}))_{g_{I}}\\ @V{}V{(\Upsilon_{I})_{g_{I}}}V@V{}V{\Upsilon_{g_{I}}}V\\ \mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))_{g_{I}}@>{(\eta_{I-\operatorname{gen}}^{-})^{*}}>{}>\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi))\times_{\check{\mathfrak{c}}}\check{\mathfrak{c}}_{I})_{g_{I}}.\end{CD} \tag{4.4.7}\]
The isomorphisms \((\xi_{I}^{\natural})_{g_{I}}\) (explained above), \(\Upsilon_{g_{I}}\) (see Construction 4.4.3), and \((\eta_{I-\operatorname{gen}}^{-})^{*}\) (see §§3.5, and Construction 4.4.2 for the grading compatibility) are \(\check{G}\)-equivariant \(\mathfrak{J}_{I}\)-equivariant (in the case that \(G\) has almost simple derived subgroup) graded \(R_{I}\)-algebra isomorphisms. Therefore, it will follow that \((\Upsilon_{I})_{g_{I}}\) is a \(\check{G}\)-equivariant \(\mathfrak{J}_{I}\)-equivariant graded \((R_{I})_{g_{I}}\)-algebra isomorphism. Since \(\Upsilon_{I}\) is an isomorphism of _free_ modules (by Proposition 2.1.1) over the domain \(R_{I}\), it follows that \(\Upsilon_{I}\) is in fact a \(\check{G}\)-equivariant \(\mathfrak{J}_{I}\)-equivariant (in the case that \(G\) has almost simple derived subgroup) isomorphism of graded \(R_{I}\)-algebras and that (4.4.6) commutes. Hence, it is sufficient to construct \(\Upsilon_{I}\) as a mere \(R_{I}\)-_module_ isomorphism, subject to the further constraint that the diagram (4.4.7) commutes.
If we succeed in defining \(\Upsilon_{I}\), then we will have a commutative diagram of \(R_{I}\)-modules:
(4.4.8)
The horizontal maps are the obvious restriction morphisms. In Construction 3.5.3 (using Lemma 3.5.1), we invoked Hartogs' principle to show that the bottom horizontal map \(\eta_{I}\) is an isomorphism. Moreover, by Lemma 4.4.1 above, the top horizontal map is also an isomorphism of \(R_{I}\)-modules. Therefore, to define \(\Upsilon_{I}\), it is in fact sufficient to directly define the isomorphisms
\[\Upsilon_{I-\mathrm{gen}} :=(\Upsilon_{I})_{f_{I}}:H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L}, i^{!,\natural}_{I}\mathcal{F}_{\mathrm{reg}})_{f_{I}}\simeq\mathcal{O}(T^{*}( \check{G}/(\check{U},\psi_{I})))_{f_{I}}\] \[\Upsilon^{-}_{I-\mathrm{gen}} :=(\Upsilon_{I})_{g_{I}}:H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L}, i^{!,\natural}_{I}\mathcal{F}_{\mathrm{reg}})_{g_{I}}\simeq\mathcal{O}(T^{*}( \check{G}/(\check{U},\psi_{I})))_{g_{I}}\]
subject to the constraint that the following diagram of \(R_{I}\)-modules commutes.
(4.4.9)
Here, the vertical equalities denote the obvious canonical isomorphisms. The condition that the diagram (4.4.7) commutes now becomes the _definition_ of the isomorphism \(\Upsilon^{-}_{I-\mathrm{gen}}\). With this choice of \(\Upsilon^{-}_{I-\mathrm{gen}}\), our goal is reduced to producing the isomorphism \(\Upsilon_{I-\mathrm{gen}}\) subject to the constraint that the following expanded version of the diagram (4.4.9) commutes (where the equalities are now suppressed for brevity):
(4.4.10)
Note, moreover, that \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!,\natural}_{I}\mathcal{F}_{\mathrm{ reg}})\) differs from \(H^{*}_{L(\mathcal{O})}(\mathrm{Gr}_{L},i^{!}_{I}\mathcal{F}_{\mathrm{reg}})\) only by a change in grading. Since we have already established that the compatibility of \(\Upsilon_{I}\) with the gradings is
an automatic consequence of the commutativity of (4.4.7), we will now ignore the distinction between these \(R_{I}\)-modules.
We have an isomorphism of free \((R_{\emptyset})_{f_{I}}\)-modules (see Remark 2.6.13)
\[H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{L},i^{!}_{I}\mathcal{F}_{\operatorname {reg}})_{f_{I}}\simeq H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!}_{ \emptyset}\mathcal{F}_{\operatorname{reg}})_{f_{I}} \tag{4.4.11}\]
where \(i_{\emptyset}:\operatorname{Gr}_{T}\hookrightarrow\operatorname{Gr}_{G}\) continues to denote the closed inclusion of \(\operatorname{Gr}_{T}\) into \(\operatorname{Gr}_{G}\) and \(R_{\emptyset}=H^{*}_{T}(\operatorname{pt},\mathbb{C})\). Next, we bring in the isomorphism of Theorem 4.2.3 of Ginzburg-Riche
\[(\Upsilon_{\emptyset})_{f_{I}}:H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i ^{!}_{\emptyset}\mathcal{F}_{\operatorname{reg}})_{f_{I}}\simeq\mathcal{O}(T ^{*}(\check{G}/\check{U}))_{f_{I}}. \tag{4.4.12}\]
Moreover, we have the isomorphism of Corollary 3.3.10
\[\eta_{I-\operatorname{gen}}:\mathcal{O}(T^{*}(\check{G}/\check{U}))_{f_{I}} \simeq\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi_{I})))_{f_{I}}\otimes_{(R_{ I})_{f_{I}}}(R_{\emptyset})_{f_{I}}. \tag{4.4.13}\]
We can chain together (4.4.11), (4.4.12), and (4.4.13) to obtain an isomorphism of \((R_{\emptyset})_{f_{I}}\)-modules
\[H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{L},i^{!}_{I}\mathcal{F}_{ \operatorname{reg}})_{f_{I}}\simeq\mathcal{O}(T^{*}(\check{G}/(\check{U}, \psi_{I})))_{f_{I}}\otimes_{(R_{I})_{f_{I}}}(R_{\emptyset})_{f_{I}}. \tag{4.4.14}\]
Recall that \(R_{I}=(R_{\emptyset})^{W_{I}}\), where \(W_{I}\) is the Weyl group of \(L\). By the equivariant formality of \(i^{!}_{I}\mathcal{F}_{\operatorname{reg}}\) (Proposition 2.1.1), taking \(W_{I}\)-invariants in (4.4.14) yields an isomorphism of \((R_{I})_{f_{I}}\)-modules
\[\Upsilon_{I-\operatorname{gen}}:H^{*}_{L(\mathcal{O})}(\operatorname{Gr}_{L },i^{!}_{I}\mathcal{F}_{\operatorname{reg}})_{f_{I}}\simeq\mathcal{O}(T^{*} (\check{G}/(\check{U},\psi_{I})))_{f_{I}}. \tag{4.4.15}\]
It remains to verify the commutativity of (4.4.10). We start by observing the commutativity of the following diagram
\[\begin{CD}H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!}_{\emptyset}\mathcal{F}_{\operatorname{reg}})_{f_{I}g_{I}}@>{(\Upsilon_{\emptyset})_{f_{I}g_{I}}}>{}>\mathcal{O}(T^{*}(\check{G}/\check{U}))_{f_{I}g_{I}}\\ @V{}V{}V@V{}V{}V\\ H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{G},\mathcal{F}_{\operatorname{reg}})_{f_{I}g_{I}}@>{(4.2.1)}>{}>\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi)))_{f_{I}g_{I}}\otimes_{R_{f_{I}g_{I}}}(R_{\emptyset})_{f_{I}g_{I}}.\end{CD} \tag{4.4.16}\]
Indeed, this diagram is nothing other than the commutative diagram (4.2.4) of Theorem 4.2.3 characterizing the Ginzburg-Riche isomorphism \(\Upsilon_{\emptyset}\), localized at \(f_{I}g_{I}\in R_{\emptyset}\) (note the evident identification of \(\mathcal{O}(T^{*}(\check{G}/(\check{U},\psi)))\) with \(\mathcal{O}(\check{G})\otimes R\) coming from Lemma 3.2.4). On the geometric side, we have the commutative diagram
\[\begin{CD}H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{T},i^{!}_{\emptyset} \mathcal{F}_{\operatorname{reg}})_{f_{I}g_{I}}@>{}>{}>H^{*}_{T(\mathcal{O})}( \operatorname{Gr}_{L},i^{!}_{I}\mathcal{F}_{\operatorname{reg}})_{f_{I}g_{I}} \\ @V{}V{}V@V{}V{}V\\ H^{*}_{T(\mathcal{O})}(\operatorname{Gr}_{G},\mathcal{F}_{\operatorname{reg}})_{ f_{I}g_{I}}.\end{CD} \tag{4.4.17}\]
obtained by localizing at \(f_{I}g_{I}\in R_{I}\) the evidently commutative diagram
On the spectral side, we have the commutative diagram
(4.4.18)
obtained from the diagram of varieties
by passing to rings of regular functions and localizing at \(f_{I}g_{I}\in R_{I}\). Combining the diagrams (4.4.16), (4.4.17), and (4.4.18) and then passing to \(W_{I}\)-invariants yields the diagram (4.4.10), which concludes the proof.
|
2309.16412 | Selective Nonparametric Regression via Testing | Prediction with the possibility of abstention (or selective prediction) is an
important problem for error-critical machine learning applications. While
well-studied in the classification setup, selective approaches to regression
are much less developed. In this work, we consider the nonparametric
heteroskedastic regression problem and develop an abstention procedure via
testing the hypothesis on the value of the conditional variance at a given
point. Unlike existing methods, the proposed one allows one to account not only for
the value of the variance itself but also for the uncertainty of the
corresponding variance predictor. We prove non-asymptotic bounds on the risk of
the resulting estimator and show the existence of several different convergence
regimes. Theoretical analysis is illustrated with a series of experiments on
simulated and real-world data. | Fedor Noskov, Alexander Fishkov, Maxim Panov | 2023-09-28T13:04:11Z | http://arxiv.org/abs/2309.16412v1 | # Selective Nonparametric Regression via Testing
###### Abstract
Prediction with the possibility of abstention (or selective prediction) is an important problem for error-critical machine learning applications. While well-studied in the classification setup, selective approaches to regression are much less developed. In this work, we consider the nonparametric heteroskedastic regression problem and develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point. Unlike existing methods, the proposed one allows one to account not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor. We prove non-asymptotic bounds on the risk of the resulting estimator and show the existence of several different convergence regimes. Theoretical analysis is illustrated with a series of experiments on simulated and real-world data.
**Keywords:** nonparametric regression, selective regression, prediction with abstention, hypothesis testing
## 1 Introduction
In many machine learning applications, there exists a possibility to reject the prediction of a model and entrust it to a human or to another model. Abstention is usually performed based on an estimate of the uncertainty in the predicted value. In classification problems, uncertainty might be measured via the probability of a wrong prediction, while for regression it corresponds to the expected error. In both cases, the estimation of these quantities is usually much harder than the solution of the initial prediction problem. In this work, we target the problem of regression with abstention (or selective regression) in the nonparametric setup.
**Related Works.** There is a large variety of literature on classification with a reject option. The problem was most likely first studied by Chow (Chow, 1957, 1970). Moreover, in (Chow, 1970) he introduced a risk function used in the majority of subsequent works, including the present one. Herbei and Wegkamp (2006) studied an optimal procedure for this risk and proved consistency for the proposed plugin rule. Later research focused either on empirical risk minimization over a class of hypotheses (Bartlett and Wegkamp, 2008; Cortes et al., 2016) or on other types of risk (Denis and Hebiri, 2020; El-Yaniv and Wiener, 2010; Lei, 2014). Benefits of abstention for online and active learning were studied in (Neu and Zhivotovskiy, 2020) and (Puchkin and Zhivotovskiy, 2021), respectively. Besides, the problem was studied in a number of more practical works; see, for example, (Grandvalet et al., 2009; Geifman and El-Yaniv, 2019; Nadeem et al., 2009). Finally, the conformal prediction approach (Vovk et al., 1999; Shafer and Vovk, 2008) has recently been applied to classification with a reject option (Linusson et al., 2018; Johansson et al., 2023).
Unfortunately, methods for selective regression are much less developed. Zaoui et al. (2020) suggested an approach to selective regression via a plugin rule. In (Shah et al., 2022) and (Salem et al., 2022), the authors proposed new approaches to neural network training that better capture uncertainty. In (Jiang et al., 2020), the authors suggested an uncertainty measure for regression based on blending, together with a method to select the samples with the least risk given some coverage.
**Setup.** In this work, we focus on selective algorithms for regression problems with heteroskedastic noise. We assume that the data \((X,Y)\) comes from a standard regression model \(Y=f(X)+\varepsilon\) with target function \(f\) and i.i.d. noise \(\varepsilon\). The covariate \(X\) is assumed to follow some distribution \(p(\cdot)\). The noise variance depends on the input point: \(\sigma^{2}(\mathbf{x})=\mathrm{Var}[Y\mid X=\mathbf{x}]\). The Chow model (Chow, 1970) assumes that the cost for abstention is given by a fixed value \(\lambda>0\), while for prediction the mean squared risk is paid. The abstention procedure for such a problem can be constructed based on the estimate of the variance \(\widehat{\sigma}^{2}(\mathbf{x})\). The abstention rule \(\widehat{\alpha}(\mathbf{x})\) proposed by Zaoui et al. (2020) is given by \(\widehat{\alpha}(\mathbf{x})=\mathrm{I}\left\{\widehat{\sigma}^{2}(\mathbf{x})\geqslant\lambda\right\}\). The resulting method was proved to be consistent, and the corresponding rate of convergence was derived under standard nonparametric assumptions on the functions \(f\) and \(\sigma\). However, the analysis was done only for the risk averaged over the covariate distribution \(p(\mathbf{x})\), while one may expect that the convergence properties at a given point \(\mathbf{x}\) may significantly depend on the difference between the true variance \(\sigma^{2}(\mathbf{x})\) and the cost of abstention \(\lambda\). Moreover, the performance of the estimator \(\widehat{\alpha}(\mathbf{x})=\mathrm{I}\left\{\widehat{\sigma}^{2}(\mathbf{x})\geqslant\lambda\right\}\) depends on how accurately \(\widehat{\sigma}^{2}(\mathbf{x})\) estimates the true variance \(\sigma^{2}(\mathbf{x})\). In particular, \(\widehat{\sigma}^{2}(\mathbf{x})\) might give unreliable predictions in the areas of the design space where there is little to no train data. Such situations arise when there is a covariate shift between train and test data. In this work, we aim to conduct an in-depth theoretical analysis of the pointwise risk for the considered problem and to propose an abstention procedure that is more robust to covariate shifts than the one based on the plugin rule.
The main **contributions** of our paper are the following.
* We show a natural way to construct the abstention rule for nonparametric heteroskedastic regression based on hypothesis testing of the variance value at a given point. We implement the method via Nadaraya-Watson kernel estimates of the regression and variance functions.
* We prove accurate finite-sample bounds for the risk of the resulting estimator. Our results show that the behavior of the risk significantly depends on the relative values of the variance \(\sigma^{2}(\mathbf{x})\) and the abstention cost \(\lambda\). The proposed method shows favorable performance over the plugin approach of Zaoui et al. (2020), see Table 1.
* We illustrate the theoretical findings by experiments with simulated and real-world data.
The paper is organized as follows. We introduce the setup of the study in Section 2. We propose a new abstention procedure based on hypothesis testing on the values of the conditional variance in Section 3. Theoretical properties of the developed method are studied in Section 4. Finally, Section 5 illustrates our experimental findings and Section 6 concludes the study.
## 2 Regression with Abstention
Let us start by formalizing the problem. We assume that we observe pairs \((X,Y)\) with covariate \(X\in\mathbb{R}^{d}\) and output \(Y\in\mathbb{R}\). The regression task is to estimate \(\mathbb{E}_{Y}[Y\mid X=\mathbf{x}]\) via some function \(\widehat{f}(\mathbf{x})\), where \(\mathbb{E}_{Y}[\cdot\mid X=\mathbf{x}]\) means the expectation over the distribution \(Y\mid X=\mathbf{x}\). For the case of regression with abstention, for each \(\mathbf{x}\) we decide to accept or to reject the prediction \(\widehat{f}(\mathbf{x})\). Thus, we introduce an indicator of abstention \(\widehat{\alpha}(\mathbf{x})\) which is equal to 1 if the prediction \(\widehat{f}(\mathbf{x})\) was _rejected_. The intuition suggests accepting the prediction if the expected squared error \(\mathbb{E}_{Y}[(\widehat{f}(X)-Y)^{2}\mid X=\mathbf{x}]\) is not too large, say less than some \(\lambda\).
That leads to a natural definition of risk which is a variant of the risk proposed in (Chow, 1970):
\[\mathcal{R}_{\lambda}(\mathbf{x})=\mathbb{E}_{Y}\big{[}(\widehat{f}(X)-Y)^{2 }\operatorname{I}\left\{\widehat{\alpha}(\mathbf{x})=0\right\}\mid X= \mathbf{x}\big{]}+\lambda\operatorname{I}\left\{\widehat{\alpha}(\mathbf{x})= 1\right\},\]
where \(\operatorname{I}\left\{\cdot\right\}\) is an indicator function. The introduced risk has a natural interpretation. If we abstain from prediction then we should pay the fixed cost \(\lambda\). Otherwise, we pay the expected squared error. Note that the provided risk is not the only option for the problem. For instance, people also considered coverage risk, see (Jiang et al., 2020).
Given a risk function, the following question arises: what are the estimators that minimize it at each point?
**Proposition 1**: _Define \(f(\mathbf{x})=\mathbb{E}[Y\mid X=\mathbf{x}]\) and \(\sigma^{2}(\mathbf{x})=\operatorname{Var}[Y\mid X=\mathbf{x}]\). Then, \(f\) is the optimal estimator of \(Y\mid X=\mathbf{x}\) and \(\alpha(\mathbf{x})=\operatorname{I}\left\{\sigma^{2}(\mathbf{x})\geqslant \lambda\right\}\) is the optimal abstention function._
We denote the risk related to the pair \(\{f(\mathbf{x}),\alpha(\mathbf{x})\}\) by \(\mathcal{R}^{*}_{\lambda}(\mathbf{x})\).
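As an illustration, the optimal pair from Proposition 1 can be evaluated directly whenever \(f\) and \(\sigma^{2}\) are known; in particular, the optimal pointwise risk simplifies to \(\mathcal{R}^{*}_{\lambda}(\mathbf{x})=\min\{\sigma^{2}(\mathbf{x}),\lambda\}\). A minimal NumPy sketch (all helper names are ours):

```
import numpy as np

def oracle_selective_prediction(x, f, sigma2, lam):
    """Bayes-optimal prediction with abstention (Proposition 1).

    f and sigma2 are the true regression and conditional-variance
    functions, assumed known here purely for illustration.
    """
    pred = f(x)                        # optimal prediction f(x)
    abstain = sigma2(x) >= lam         # alpha(x) = I{sigma^2(x) >= lambda}
    risk = np.minimum(sigma2(x), lam)  # pointwise risk R*_lambda(x)
    return pred, abstain, risk

# Toy usage on a heteroskedastic example.
f = lambda t: np.sin(t)
sigma2 = lambda t: 0.05 + t ** 2       # noise grows away from the origin
x = np.linspace(-2.0, 2.0, 5)
print(oracle_selective_prediction(x, f, sigma2, lam=1.0))
```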
## 3 Abstention via Testing of Variance Values
The setup considered in the previous section was explored in (Zaoui et al., 2020), where it was proposed to use a plugin approach, i.e., to use some estimators \(\widehat{f}\) and \(\widehat{\sigma}^{2}\) of the population counterparts \(f\) and \(\sigma^{2}\) directly in the rule given by Proposition 1. Their approach leads to consistent estimators in the large-sample regime. However, for finite samples, not only can \(\widehat{f}\) be imperfect, but the variance estimator \(\widehat{\sigma}^{2}(\mathbf{x})\) can also become unreliable if \(\mathbf{x}\) lies far away from the train set under the nonparametric setting. Basically, we might start rejecting or accepting predictions based on a variance estimate that is far off from the actual variance values.
In this work, we aim to address this issue by taking into account the uncertainty in the variance estimator \(\widehat{\sigma}^{2}(\mathbf{x})\) itself. We propose a natural way to do so via testing between the following hypotheses:
\[H_{0}\colon\sigma^{2}(\mathbf{x})\geqslant\lambda\text{ vs. }H_{1}\colon \sigma^{2}(\mathbf{x})<\lambda.\]
This formulation assumes that it is safer to reject a good prediction than to accept a bad one, which is the standard situation in many applications of selective machine learning.
Construction of the test requires some assumptions on the data, which are the same for the train and the test sets. Thus, we introduce the model under study.
**Model 1**: _Given a sample \(X\in\mathbb{R}^{d}\), the observed label \(Y\) is normally distributed with the mean \(f(X)\) and the variance \(\sigma^{2}(X)\) for some functions \(f\colon\mathbb{R}^{d}\to\mathbb{R}\), \(\sigma^{2}\colon\mathbb{R}^{d}\to\mathbb{R}_{+}\)._
The normality of the noise is not an obligatory requirement, but it allows computing some constants precisely. In our analysis, we mostly use concentration inequalities that can be naturally extended to the sub-Gaussian setting. We will work under general nonparametric assumptions on the functions \(f\) and \(\sigma^{2}\); see the details in Section 4.
### Construction of the Test
Nonparametric estimation offers a variety of tools for regression, such as kNN, splines or kernel methods (Tsybakov, 2009). In this work, we stick to kernel approaches and employ the celebrated Nadaraya-Watson (NW) method, which estimates a function at a point \(\mathbf{x}\) via a weighted mean of its neighbours. Below, we introduce the method formally.
Let \(\mu\) be the Lebesgue measure in \(\mathbb{R}^{d}\). For a kernel \(K\colon\mathbb{R}^{d}\to\mathbb{R}_{+}\) with \(\int_{\mathbb{R}^{d}}K(\mathbf{t})d\mu(\mathbf{t})=1\), the NW method computes the weights of the samples \(X_{1},\ldots,X_{n}\) at the point \(\mathbf{x}\) as
\[\omega_{i}(\mathbf{x})=\frac{K\left(\frac{\mathbf{x}-X_{i}}{h}\right)}{\sum_{ i=1}^{n}K\left(\frac{\mathbf{x}-X_{i}}{h}\right)}, \tag{1}\]
where \(h\) is a bandwidth. Typically, \(h\) tends to \(0\) as \(n\) tends to infinity. Then, the method computes the estimated mean
\[\widehat{f}_{n}(\mathbf{x})=\sum_{i=1}^{n}\omega_{i}(\mathbf{x})Y_{i}\]
of the conditional distribution \(Y\mid X=\mathbf{x}\). This approach can be extended to compute the estimator of the variance \(\mathrm{Var}[Y\mid X=\mathbf{x}]\):
\[\widehat{\sigma}_{n}^{2}(\mathbf{x})=\sum_{i=1}^{n}\omega_{i}(\mathbf{x})Y_{i }^{2}-\left(\sum_{i=1}^{n}\omega_{i}(\mathbf{x})Y_{i}\right)^{2}.\]
Generally, the estimates for the mean and the variance can use different kernels and bandwidths. However, we stick to a single choice in this work to make the results simpler and more illustrative.
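For concreteness, the following is a minimal NumPy sketch of the NW estimates above, assuming a Gaussian kernel; the helper name and the inclusion of the kernel density estimate \(\widehat{p}_{n}(\mathbf{x})\) thresholded in Algorithm 1 below are our own choices:

```
import numpy as np

def nw_estimates(x, X, Y, h):
    """Nadaraya-Watson estimates of the conditional mean and variance at a
    query point x, using a Gaussian kernel with bandwidth h.

    X: (n, d) array of covariates, Y: (n,) array of responses, x: (d,) point.
    """
    n, d = X.shape
    # Gaussian kernel evaluated at (x - X_i) / h; the normalizing constant
    # cancels in the weights, so it only matters for the density estimate.
    sq_dists = np.sum((x[None, :] - X) ** 2, axis=1) / h ** 2
    k = np.exp(-0.5 * sq_dists)
    w = k / k.sum()                      # weights omega_i(x) from (1)
    f_hat = w @ Y                        # estimated conditional mean
    var_hat = w @ (Y ** 2) - f_hat ** 2  # estimated conditional variance
    p_hat = k.sum() / (n * (2 * np.pi) ** (d / 2) * h ** d)  # density estimate
    return f_hat, var_hat, p_hat
```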
In the paper (Fan and Yao, 1998), it was shown that under some assumptions on \(h,n\) and \(K(\cdot)\), we have
\[\sqrt{nh^{d}}\left(\widehat{\sigma}_{n}^{2}(\mathbf{x})-\sigma^{2}(\mathbf{x} )\right)\underset{nh^{d}\to\infty}{\longrightarrow}\mathcal{N}\left(0,\sigma _{V}^{2}\right), \tag{2}\]
```
Input: samples \(\{(X_{i},Y_{i})\}_{i=1}^{n}\), bandwidth \(h\), parameters \(\lambda,\beta,a\)
Output: accept or reject the regression result
Calculate \(\widehat{p}_{n}(\mathbf{x}),\widehat{\sigma}_{n}^{2}(\mathbf{x})\)
if \(\widehat{p}_{n}(\mathbf{x})\geqslant\frac{4a}{nh^{d}}\) and criterion (3) holds then
    accept the result of the regression
else
    reject
end if
```
**Algorithm 1** Acceptance testing
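Criterion (3) is not reproduced above; as an illustration, the following sketch assumes a one-sided Gaussian test motivated by (2), in which \(H_{0}\colon\sigma^{2}(\mathbf{x})\geqslant\lambda\) is rejected when \(\widehat{\sigma}_{n}^{2}(\mathbf{x})\) falls below \(\lambda\) by more than \(z_{1-\beta}\) estimated standard errors. This threshold, and treating \(\sigma_{V}\) as known, are our own simplifying assumptions:

```
import numpy as np
from scipy.stats import norm

def accept_prediction(x, X, Y, h, lam, beta, a, sigma_V):
    """Sketch of Algorithm 1: accept the regression output at x only if the
    point is sufficiently explored and H0: sigma^2(x) >= lam is rejected.

    The test statistic is an assumed form derived from the asymptotic
    normality (2); sigma_V is treated as known for simplicity.
    """
    n, d = X.shape
    f_hat, var_hat, p_hat = nw_estimates(x, X, Y, h)  # sketch from Section 3.1
    if p_hat < 4 * a / (n * h ** d):
        return False  # unexplored point: abstain
    z = norm.ppf(1 - beta)
    # accept iff var_hat is significantly below lam at level beta
    return var_hat < lam - z * sigma_V / np.sqrt(n * h ** d)
```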
## 4 Theoretical guarantees
In this section, we provide theoretical guarantees for our algorithm. There are some natural assumptions that should hold to obtain our results.
**Assumption 1**: _The Hessian of the function \(f\) exists and is bounded by \(H_{f}\). Moreover, \(f\) is \(L_{f}\)-Lipschitz._
Assumption 1 helps to reduce the bias in the estimation of \(f\). Roughly speaking, if the kernel is symmetric then \(\mathbb{E}\widehat{f}_{n}(\mathbf{x})-f(\mathbf{x})\) has order at most \(h^{2}\) times the second derivative of \(f\). Otherwise, \(h\) times the Lipschitz constant may appear in the decomposition of the bias \(\mathbb{E}\widehat{f}_{n}(\mathbf{x})-f(\mathbf{x})\). We also impose the similar assumption for \(\sigma^{2}(\mathbf{x})\).
**Assumption 2**: _The Hessian of the function \(\sigma^{2}\) exists and is bounded by \(H_{\sigma}\). Moreover, \(\sigma^{2}\) is \(L_{\sigma^{2}}\)-Lipschitz._
As was previously mentioned, the bias term of order \(h\) vanishes if the kernel \(K\) is symmetric. Besides, to estimate \(f\) at a point \(\mathbf{x}\), the kernel should aggregate the neighborhood of \(\mathbf{x}\) well. Thus, its support should cover some ball in \(\mathbb{R}^{d}\). At the same time, the kernel should not rely on the responses provided by far-away points, so we require an exponential tail for the kernel. The most common assumption is that the support of the kernel is bounded, but this is not the case for the Gaussian kernel, which is widely used. Formally, the case of the Gaussian kernel implies that \(\widehat{p}_{n}(\mathbf{x})\) is non-zero over the whole space \(\mathbb{R}^{d}\), but we start considering a point \(\mathbf{x}\) as explored only if it has estimated density at least \(\Theta\big{(}(nh^{d})^{-1}\big{)}\). This allows us to derive the standard bias-variance decomposition and has a natural interpretation in terms of regression with abstention.
**Assumption 3**: _For the kernel \(K\colon\mathbb{R}^{d}\to\mathbb{R}_{+}\), there exist constants \(a\) and \(b\) such that_
\[K(\mathbf{t})\geqslant a\,\mathrm{I}\,\{\|\mathbf{t}\|\leqslant b\}\]
_holds for all \(\mathbf{t}\in\mathbb{R}^{d}\). The kernel is symmetric, i.e. \(K(\mathbf{t})=K(-\mathbf{t})\). Moreover, there are constants \(R_{K}\) and \(r_{K}\) such that for all \(\mathbf{t}\), it holds that_
\[K(\mathbf{t})\leqslant R_{K}e^{-r_{K}\|\mathbf{t}\|}.\]
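As a concrete example, the unnormalized Gaussian kernel \(K(\mathbf{t})=e^{-\|\mathbf{t}\|^{2}/2}\) satisfies Assumption 3 with \(a=e^{-1/2}\), \(b=1\), \(R_{K}=e^{1/2}\) and \(r_{K}=1\): the lower bound holds since \(K(\mathbf{t})\geqslant e^{-1/2}\) whenever \(\|\mathbf{t}\|\leqslant 1\), and the upper bound follows from \(\|\mathbf{t}\|\leqslant\|\mathbf{t}\|^{2}/2+1/2\), which gives \(e^{-\|\mathbf{t}\|^{2}/2}\leqslant e^{1/2}e^{-\|\mathbf{t}\|}\).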
Finally, we impose some conditions on the density \(p(\mathbf{x})\). In classical nonparametric studies, it is usually assumed that the support of \(p(\mathbf{x})\) has positive Lebesgue measure, so we do not consider nonparametric estimation on low-dimensional manifolds. We define
\[\mathcal{S}_{q}=\{\mathbf{x}\in\mathbb{R}^{d}\mid p(\mathbf{x})>q\}.\]
We denote the support \(\mathrm{cl}(\mathcal{S}_{0})\) by \(\mathcal{S}\) and the boundary of \(\mathcal{S}\) by \(\partial\mathcal{S}\). Inside the support \(\mathcal{S}\) we require \(p(\mathbf{x})\) to be Lipschitz, which also helps suppress summands of order \(h\) in the bias of our estimator. Thus the density may be discontinuous at the boundary, as for the uniform distribution, but this does not affect inference inside the support.
**Assumption 4**: _The density \(p\) of \(X\) is \(L_{p}\)-Lipschitz in \(\mathcal{S}_{0}\) and bounded by \(C_{p}\)._
To bound the excess risk at a point \(\mathbf{x}\), we need its neighborhood to be sufficiently explored: there should be enough probability mass in a ball of radius \(h\) around \(\mathbf{x}\). Thus, we require \(p(\mathbf{x})\) to be larger than \(Ch\) and the Euclidean distance to the boundary \(d(\mathbf{x},\partial\mathcal{S})\) to be at least \(C^{\prime}h\). If \(\partial\mathcal{S}=\varnothing\), then \(d(\mathbf{x},\partial\mathcal{S})\) is taken to be infinite.
Finally, note that \(\mathcal{R}_{\lambda}(\cdot)\) depends on the training set. We bound the mean of the excess risk over all training sets \(\mathcal{D}=\{(X_{i},Y_{i})\}_{i=1}^{n}\), where the \(X_{i}\) are i.i.d. samples from the density \(p(\cdot)\) and the \(Y_{i}\) are generated according to Model 1.
In the theorem below, we study upper bounds for the risk. The notation \(\lesssim\) means that the corresponding inequality holds with some multiplicative constant that is independent of \(n,h,\beta\) and \(p(\mathbf{x})\). The formulation with all constants stated explicitly is given in Supplementary Material; see Theorem 4.
**Theorem 2**: _Suppose that Assumptions 1-4 hold. Define \(\Delta(\mathbf{x})=|\sigma^{2}(\mathbf{x})-\lambda|\). Let \(\mathcal{E}_{\lambda}(\mathbf{x})\) be the excess risk of the estimator \(\widehat{f}_{n}(\mathbf{x})\) and the abstention rule \(\widehat{\alpha}_{n}(\mathbf{x})\) introduced in Algorithm 1. Let \(\mathbb{E}_{\mathcal{D}}\) be the expectation with respect to the training dataset \(\mathcal{D}=\{(X_{i},Y_{i})\}_{i=1}^{n}\), where \(X_{1},\ldots,X_{n}\) are i.i.d. samples from the density \(p(\cdot)\). Then_
* _if_ \(\sigma^{2}(\mathbf{x})\geqslant\lambda\) _and_ \(\Delta(\mathbf{x})\leqslant C_{1}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}h^{2}/p( \mathbf{x})-C_{3}z_{1-\beta}\{nh^{d}p(\mathbf{x})\}^{-1/2}\)_, we have_ \[\mathbb{E}_{\mathcal{D}}(\mathcal{E}_{\lambda}(\mathbf{x}))\lesssim\{nh^{d}p (\mathbf{x})\}^{-1}+h^{4}p^{-2}(\mathbf{x})+\Delta(\mathbf{x}),\]
* _if_ \(\sigma^{2}(\mathbf{x})\geqslant\lambda\) _and_ \(\Delta(\mathbf{x})\geqslant C_{1}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}h^{2}/p( \mathbf{x})-C_{3}z_{1-\beta}\{nh^{d}p(\mathbf{x})\}^{-1/2}\)_, we have_ \[\mathbb{E}_{\mathcal{D}}(\mathcal{E}_{\lambda}(\mathbf{x}))\lesssim\Delta( \mathbf{x})\exp\left(-\Omega(nh^{d+2}p(\mathbf{x}))\right)\]
* _if_ \(\sigma^{2}(\mathbf{x})\geqslant\lambda\) _and_ \(\Delta(\mathbf{x})\geqslant C_{1}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}h-C_{3}z_{1 -\beta}\{nh^{d}p(\mathbf{x})\}^{-1/2}\)_, we have_ \[\mathbb{E}_{\mathcal{D}}(\mathcal{E}_{\lambda}(\mathbf{x}))\lesssim\exp\{-nh^ {d}p(\mathbf{x})\},\]
* _if_ \(\sigma^{2}(\mathbf{x})<\lambda\) _and_ \(\Delta(\mathbf{x})\leqslant C_{1}^{\prime}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}^ {\prime}h^{2}/p(\mathbf{x})+C_{3}^{\prime}z_{1-\beta}\{nh^{d}p(\mathbf{x})\}^ {-1/2}\)_, we have_ \[\mathbb{E}_{\mathcal{D}}(\mathcal{E}_{\lambda}(\mathbf{x}))\lesssim\{nh^{d}p (\mathbf{x})\}^{-1}+h^{4}p^{-2}(\mathbf{x})+\Delta(\mathbf{x}),\]
* _if_ \(\sigma^{2}(\mathbf{x})<\lambda\) _and_ \(\Delta(\mathbf{x})\gg C_{1}^{\prime}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}^{ \prime}h^{2}/p(\mathbf{x})+C_{3}^{\prime}z_{1-\beta}\{nh^{d}p(\mathbf{x})\}^ {-1/2}\)_, we have_ \[\mathbb{E}_{\mathcal{D}}(\mathcal{E}_{\lambda}(\mathbf{x}))\lesssim\{nh^{d}p (\mathbf{x})\}^{-1}+h^{4}p^{-2}(\mathbf{x}).\]
Let us note that Theorem 2 applies not only to Algorithm 1 but also to the plugin estimator proposed by Zaoui et al. (2020). Indeed, by setting \(\beta=0.5\) one gets \(z_{1-\beta}=0\), and we obtain the plugin approach as a particular instance of our algorithm. While Theorem 2 provides only an upper bound on the risk, it captures the real behavior of Algorithm 1 quite well; see the experimental evaluation in Section 5. Below, we discuss the different estimation regimes implied by Theorem 2.
To begin, we consider the case \(\sigma^{2}(\mathbf{x})>\lambda\). In most applications, we assume that \(nh^{d}\to\infty\) as \(n\) tends to infinity. Typically, \(h\) is chosen to balance the bias-variance trade-off, so \(h=\Theta\big{(}n^{-1/(d+4)}\big{)}\). Assume additionally that \(\beta<0.5\) and
\[C_{1}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}h^{2}/p(\mathbf{x})-C_{3}z_{1-\beta}\{nh ^{d}p(\mathbf{x})\}^{-1/2}<0,\]
where constants \(C_{1},C_{2},C_{3}\) come from the first case of Theorem 2. This inequality can be satisfied if \(h=C_{\beta}n^{-1/(d+4)}p^{1/2}(\mathbf{x})\) for a small enough constant \(C_{\beta}\) that depends on \(\beta\). We
refer to this condition as _"undersmoothing"_, since it requires the bias to be significantly smaller than the variance. Moreover, a similar condition is required to ensure (2). Then, our approach provably becomes very efficient. Indeed, in that case the condition \(\Delta(\mathbf{x})\leqslant C_{1}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}h^{2}/p( \mathbf{x})-C_{3}z_{1-\beta}\{nh^{d}p(\mathbf{x})\}^{-1/2}\) simplifies to \(\Delta(\mathbf{x})<0\), so it never holds. Thus, for any \(\mathbf{x}\) such that \(\sigma^{2}(\mathbf{x})>\lambda\), the expected excess risk converges exponentially. If one chooses a larger \(h\), the advantages of our algorithm remain, since it starts to converge exponentially earlier than the plugin does.
For the plugin, our upper bound cannot achieve exponential convergence rates as long as \(\Delta(\mathbf{x})\leqslant C_{1}\{nh^{d}p(\mathbf{x})\}^{-1}+C_{2}h^{2}/p( \mathbf{x})\). This matches our observations on synthetic data; see Figure 1(_b_) and Figure 3(_c_).
To explain the behaviour of the estimators for \(\sigma^{2}(\mathbf{x})\leqslant\lambda\), we state the following proposition.
**Proposition 3**: _For any pair of estimators \((\widehat{f},\widehat{\alpha})\) the expected excess risk can be decomposed as follows:_
\[\mathbb{E}_{\mathcal{D}}\mathcal{R}_{\lambda}(\mathbf{x})-\mathcal{R}^{*}( \mathbf{x})=\mathbb{E}_{\mathcal{D}}\left[\left(\widehat{f}(\mathbf{x})-f( \mathbf{x})\right)^{2}\mathrm{I}\left\{\widehat{\alpha}(\mathbf{x})=0\right\} \right]+\Delta(\mathbf{x})\cdot\mathbb{P}\big{(}\widehat{\alpha}(\mathbf{x}) \neq\alpha(\mathbf{x})\big{)}.\]
In our case
\[\mathbb{P}\big{(}\widehat{\alpha}(\mathbf{x})\neq\alpha(\mathbf{ x})\big{)} \leqslant\mathbb{P}\left(\widehat{\sigma}_{n}^{2}(\mathbf{x}) \leqslant\lambda\left[1-\frac{Cz_{1-\beta}}{\sqrt{nh^{d}\widehat{p}_{n}( \mathbf{x})}}\right]\right)\] \[\leqslant\mathbb{P}\left(\widehat{\sigma}_{n}^{2}(\mathbf{x})- \sigma^{2}(\mathbf{x})\leqslant\Delta(\mathbf{x})-\frac{C\lambda z_{1-\beta}} {\sqrt{nh^{d}\widehat{p}_{n}(\mathbf{x})}}\right)=:\mathbf{P}(\mathbf{x})\]
The whole set \(\{\mathbf{x}\mid\sigma^{2}(\mathbf{x})\leqslant\lambda\}\) can be divided into two sets: roughly speaking, \(\mathcal{A}=\{\mathbf{x}\mid\Delta(\mathbf{x})\lesssim(nh^{d})^{-1/2}\}\) and \(\mathcal{B}=\{\mathbf{x}\mid\Delta(\mathbf{x})\gg(nh^{d})^{-1/2}\}\). When \(\mathbf{x}\in\mathcal{A}\), the leading term of the excess risk is \(\Delta(\mathbf{x})\cdot\mathbf{P}(\mathbf{x})\), which has order \((nh^{d})^{-1/2}\). The factor \(\mathbf{P}(\mathbf{x})\) does not go to zero since, informally, \(\sqrt{nh^{d}}\big{(}\widehat{\sigma}_{n}^{2}(\mathbf{x})-\sigma^{2}(\mathbf{x })\big{)}\approx\mathcal{N}\left(0,\sigma_{V}^{2}\right)\) due to (2), and so the difference between \(\lambda\) and \(\sigma^{2}(\mathbf{x})\) cannot be captured by \(\widehat{\sigma}_{n}^{2}(\mathbf{x})\). While this argument is not rigorous, anticoncentration bounds can be proved via the Carbery-Wright theorem. In Figure 1(_b_), one may observe the sets \(\mathcal{A}\) for different \(n\) as hills to the left of the point where \(\sigma^{2}(\mathbf{x})=\lambda\).
But if \(\mathbf{x}\in\mathcal{B}\), the bias and variance suppress the term \(\Delta(\mathbf{x})\cdot\mathbf{P}(\mathbf{x})\) and we obtain the usual convergence rates for nonparametric estimators. Since the set \(\mathcal{A}\) is large for small \(n\), there may be a warm-up phase during which we see slower convergence rates in the plots. So at each point the convergence may have two phases: one while \(\mathbf{x}\in\mathcal{A}\) and the other once \(\mathbf{x}\in\mathcal{B}\). This is how we explain the two phases in Figure 3(_b_) for \(\mathbf{x}\in\{-1.6,-0.5,0.3\}\).
We summarize the behaviour of our estimator and of the estimator proposed by Zaoui et al. (2020) in Table 1.
### Sketch of the Proof
We start with the following bound on the kernel values:
\[a\,\mathrm{I}\left\{X_{i}\in\mathcal{B}_{bh}(\mathbf{x})\right\}\leqslant K \left(\frac{X_{i}-\mathbf{x}}{h}\right),\]
where \(\mathcal{B}_{r}(\mathbf{x})\) is the ball with radius \(r\) and center \(\mathbf{x}\). These bounds allow us to treat the kernel values as if they were Bernoulli random variables with a certain mean. Thus, we show that with probability \(1-Ce^{-\Omega(nh^{d}p(\mathbf{x}))}\), we have
\[ab^{d}\omega_{d}p(\mathbf{x})\leqslant\widehat{p}_{n}(\mathbf{x})\leqslant 2p( \mathbf{x})+L_{p}h\int_{\mathbb{R}^{d}}\|\mathbf{t}\|K(\mathbf{t})d\mu(\mathbf{ t}),\]
see Propositions 5 and 19 in Supplementary Material. The bounds above are rough but they will be sufficient for our purposes.
For any \(L\)-Lipschitz function \(g\) we can also obtain the bound
\[\left|\sum_{i=1}^{n}g(X_{i})\omega(X_{i})-g(\mathbf{x})\right|\leqslant\sum_{ i=1}^{n}|g(X_{i})-g(\mathbf{x})|\omega(X_{i})\lesssim\frac{Lh}{p(\mathbf{x})nh^{d}},\]
since \(\omega(X_{i})\) is, roughly speaking, \(\frac{R_{K}e^{-r_{K}\|(X_{i}-\mathbf{x})/h\|}}{p(\mathbf{x})nh^{d}}\) up to a constant; see Corollary 6 in Supplementary Material. This approximation is based on the fact that \(K(\mathbf{t})\) is bounded above by a constant and that the denominator of the weight is, with high probability, \(\Omega\left(nh^{d}p(\mathbf{x})\right)\); see Proposition 5 in Supplementary Material. Finally, under some conditions on \(n\), \(h\) and \(p(\mathbf{x})\), we establish concentration bounds for any function \(g\) that is Lipschitz and has a bounded Hessian; see Corollary 9 in Supplementary Material.
If \(\sigma^{2}(\mathbf{x})\geqslant\lambda\) then \(\operatorname{I}\left\{\widehat{\alpha}(\mathbf{x})=0\right\}=\operatorname{I} \left\{\widehat{\alpha}(\mathbf{x})\neq\alpha(\mathbf{x})\right\}\). So, by the Cauchy-Schwarz inequality, we may bound
\[\mathbb{E}_{\mathcal{D}}\left[\big{(}\widehat{f}_{n}(\mathbf{x})-f(\mathbf{x})\big{)}^{2}\operatorname{I}\left\{\widehat{\alpha}_{n}(\mathbf{x})\neq\alpha(\mathbf{x})\right\}\right]\leqslant\left\{\mathbb{E}_{\mathcal{D}}\big{(}\widehat{f}_{n}(\mathbf{x})-f(\mathbf{x})\big{)}^{4}\right\}^{1/2}\mathbb{P}^{1/2}\big{(}\widehat{\alpha}_{n}(\mathbf{x})\neq\alpha(\mathbf{x})\big{)}.\]
We bound the fourth moment by integrating concentration inequalities, which results in the standard bias-variance trade-off; see Lemma 18 in Supplementary Material. Thus, the rate of the excess risk is determined by the factor \(\mathbb{P}^{1/2}\big{(}\widehat{\alpha}_{n}(\mathbf{x})\neq\alpha(\mathbf{x})\big{)}\). It can be reformulated as
\[\mathbb{P}^{1/2}\left(|\widehat{\sigma}_{n}^{2}(\mathbf{x})-\sigma^{2}( \mathbf{x})|\geqslant\Delta(\mathbf{x})+O\left\{(nh^{d}p(\mathbf{x}))^{-1/2} \right\}\right).\]
The random value \(\widehat{\sigma}_{n}^{2}(\mathbf{x})\) behaves like a sub-exponential random variable with mean \(\sigma^{2}(\mathbf{x})+o(1)\). Thus, under certain assumptions on \(\Delta(\mathbf{x})\), we get exponential convergence rates via a concentration argument; see Corollary 15 in Supplementary Material.
If \(\sigma^{2}(\mathbf{x})<\lambda\), the two terms of Proposition 3 exhibit different behaviour. The first can be bounded via the standard bias-variance trade-off. The second decreases exponentially if \(h\) is smaller than some constant and \(\Delta(\mathbf{x})\) is larger than some decreasing function of \(n\) and \(h\). The proof is similar to the case \(\sigma^{2}(\mathbf{x})>\lambda\).
| \(\sigma^{2}(\mathbf{x})\geqslant\lambda\) | \(\Delta(\mathbf{x})<C_{1}h^{2}/p(\mathbf{x})\) | \(C_{2}h>\Delta(\mathbf{x})>C_{3}h^{2}/p(\mathbf{x})\) | \(\Delta(\mathbf{x})\gg h\) |
| --- | --- | --- | --- |
| testing-based | \(O(h^{2})/p(\mathbf{x})\cdot\exp\{-\Omega\big{(}nh^{d+2}/p(\mathbf{x})\big{)}\}\) | | |
| plugin | \(O\left(h^{2}/p(\mathbf{x})\right)\) | | |

| \(\sigma^{2}(\mathbf{x})<\lambda\) | \(\Delta(\mathbf{x})\lesssim\left(nh^{d}p(\mathbf{x})\right)^{-1/2}\) | \(\Delta(\mathbf{x})\gg\left(nh^{d}p(\mathbf{x})\right)^{-1/2}\) |
| --- | --- | --- |
| testing-based | \(O\big{(}\{nh^{d}p(\mathbf{x})\}^{-1/2}\big{)}\) | \(O(\{nh^{d}p(\mathbf{x})\}^{-1})+O\big{(}h^{4}/p^{2}(\mathbf{x})\big{)}\) |
| plugin | | |

Table 1: The upper bounds derived in Theorem 2, the case of undersmoothing.
## 5 Experiments
### How to choose \(\lambda\) and \(\beta\)
In practice, two natural questions arise: how to choose \(\lambda\), and how to choose \(\beta\). One may define \(\lambda\) directly from the problem formulation as an inappropriate level of noise. The case of \(\beta\) is a bit more subtle. From Algorithm 1, we infer that any \(\mathbf{x}\) will be rejected if \(\widehat{p}_{n}(\mathbf{x})\leqslant\frac{2\|K\|_{2}^{2}z_{1-\beta}^{2}}{nh^{d}}\). Thus, any such \(\mathbf{x}\) is considered an outlier, and, hence, \(z_{1-\beta}\) is a tolerance level for outliers. Additionally, the choice of \(\beta\) determines the trade-off between type I and type II errors.
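For illustration, this rejection threshold can be computed as follows (a minimal sketch; the function name is ours, and the squared \(L_{2}\) norm of the kernel must be supplied for the chosen kernel, e.g. \((4\pi)^{-d/2}\) for the standard Gaussian density kernel):

```python
from scipy.stats import norm

def density_rejection_threshold(n, h, d, beta, K_l2_sq):
    """Density level 2 * ||K||_2^2 * z_{1-beta}^2 / (n * h^d) below which
    a point x is treated as an outlier and rejected."""
    z = norm.ppf(1 - beta)
    return 2 * K_l2_sq * z ** 2 / (n * h ** d)
```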
### Synthetic data
For the first part of the experiments, we use one-dimensional data with known simple functions as the true mean and variance at each point:
\[Y=f(X)+\sigma(X)\varepsilon,\;X\sim p(\cdot),\,\varepsilon\sim\mathcal{N}(0,1). \tag{4}\]
Specifically, we consider normal and uniform distributions of the independent variable, \(p(\cdot)\in\{\mathcal{N}(0,1),\mathcal{U}(-2,2)\}\), a fixed mean function \(f(x)=\frac{x^{2}}{4}\), and two choices of standard deviation: a sigmoid function and a Heaviside function. The parameter \(\lambda\) was fixed at \(0.36\) and \(\beta=0.05\) unless otherwise noted. The optimal bandwidth was selected using leave-one-out cross-validation, minimizing the mean squared prediction error of the NW estimator. In all our experiments, for each hyperparameter setting we generated \(100\) different random datasets from our data model and then averaged the results.
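A minimal data-generation sketch for this setting is shown below; the function and variable names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_dataset(n, rng, uniform_x=True):
    """One synthetic dataset from model (4) with f(x) = x^2 / 4
    and sigma(x) = sigmoid(x)."""
    x = rng.uniform(-2, 2, n) if uniform_x else rng.normal(0, 1, n)
    eps = rng.normal(0, 1, n)
    y = x ** 2 / 4 + sigmoid(x) * eps
    return x, y

rng = np.random.default_rng(0)
X, Y = sample_dataset(100, rng)
```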
#### 5.2.1 Convergence of estimates
We sampled \(100\) datasets of sizes \(n\in\{100,200,500,1000\}\) and, for each \(x\in[-2,2]\), estimated the fraction of predictions that are accepted by the proposed method. We present
Figure 1: Example with synthetic data: \(X\sim\mathcal{U}(-2,2)\), \(\sigma(x)=\mathtt{sigmoid}(x)\). We sample multiple datasets of each sample size \(n\). Confidence level \(\beta=0.05\) and abstention cost \(\lambda=0.36\).
the resulting chart in Figure 1(_a_) for \(X\sim\mathcal{U}(-2,2)\), \(\sigma(x)=\mathtt{sigmoid}(x)\); additional charts are in Supplementary Material, Section D.1.1. The results demonstrate that in the region with \(\sigma^{2}(x)>\lambda\) (to the right of the dashed line) the convergence is much faster than in the region with \(\sigma^{2}(x)<\lambda\).
Additionally, we estimate the expected excess risk, since we know the values of \(f(x)\) and \(\sigma(x)\) for any \(x\). For the first plot (see Figure 1(_b_)) we vary the sample size \(n\) from \(10\) to \(500\). We compare the proposed approach with \(\beta=0.05\) against the "plugin" baseline, corresponding to \(\beta=0.5\). For the testing-based method we see very quick convergence for \(\sigma^{2}(x)>\lambda\). For points with \(\sigma^{2}(x)<\lambda\) the convergence is slower, especially for smaller values of \(\Delta(x)\). Thus, the observed behaviour corresponds well to that predicted by the theory. For the plugin approach, the convergence is slower, especially at points with \(\sigma^{2}(x)>\lambda\).
#### 5.2.2 Dependence on \(\beta\)
In this experiment, we studied the behavior of our method when changing its only hyperparameter \(\beta\) in the range between \(0.01\) and \(0.5\). Since \(\beta=0.5\) corresponds to the "plugin" method described previously, we show it in red. We fixed the number of samples at \(n=100\), sampled \(100\) datasets, and calculated the expected excess risk for each \(x\in[-2,2]\). As \(\beta\) increases, the method becomes less conservative (it accepts more), see Figure 2(_a_). This increases the expected risk at points where the prediction should be rejected and decreases it at points where predictions should be accepted, see Figure 2(_b_). Thus, in practice the parameter \(\beta\) may be selected based on the trade-off between these two errors, depending on the particular features of the applied problem under consideration.
#### 5.2.3 Pointwise convergence
Finally, we sampled multiple datasets of increasing sizes \(n\) from \(10\) to \(20000\) and selected \(5\) diagnostic points: \(x\in\{-1.6,-0.5,0.3,0.8,1.6\}\), see Figure 3(_a_). When sample size is
Figure 2: Example with synthetic data: \(X\sim\mathcal{U}(-2,2)\), \(\sigma(x)=\mathtt{sigmoid}(x)\). Sample size \(n=100\), abstention cost \(\lambda=0.36\). We apply the proposed testing-based method for different values of \(\beta\). The plugin approach corresponds to \(\beta=0.5\).
less than 100, we generate 20000 datasets of each size, while for larger sample sizes we use only 100. In order to average more straightforwardly across datasets of the same size, we used the same bandwidth \(h\sim n^{-1/5}\), selected to exhibit the expected polynomial dependence on \(nh\) of the risk at points \(x\) with \(\sigma^{2}(x)<\lambda\). The resulting dependencies of the risk on \(nh\) are depicted in Figure 3(_b_). We observe all the main outcomes predicted by the theory:
* rapid convergence of the risk for the points with \(\sigma^{2}(x)>\lambda\) (points \(x=0.8\) and \(x=1.6\));
* polynomial convergence of the risk as function of \(nh\) for \(\sigma^{2}(x)<\lambda\) with moderately large values of \(\Delta(x)\), i.e., points \(x=-1.6\) and \(x=-0.5\);
* very slow convergence for the point with \(\sigma^{2}(x)<\lambda\) and small value of \(\Delta(x)\).
Additionally, in Figure 3(_c_) we experimentally confirm that the plugin method converges more slowly than the testing-based method for \(\sigma^{2}(x)>\lambda\).
### Airfoil Self-Noise Data Set
We have tested our method on the Airfoil dataset from the UCI collection (Dua and Graff, 2017). We do not perform any special preprocessing of the data or feature engineering, only standard scaling of the features. We prepare the train and test sets in two steps. First, we select a pivot feature and put 70% of the data with the lowest values of this feature into part A; the rest of the data becomes part B. In the second step, we select 20% of each part (sampled uniformly) and move it to the other part. The first part becomes the train set and the second part the test set. In this way we guarantee that the test set contains data with low values of \(\widehat{p}(\mathbf{x})\) as well as data distributed similarly to the train data.
In our experiments we select different features as pivots for the split and then vary \(\lambda\in[0,50]\), calculating the acceptance (retention) fraction and the mean squared error. We present
Figure 3: We sample multiple datasets of each size and for selected points \(x\in\{-1.6,-0.5,0.3,0.8,1.6\}\) calculate the expected excess risk. In this experiment \(X\sim\mathcal{U}(-2,2)\), \(\sigma(x)=\mathtt{sigmoid}(x)\), abstention cost \(\lambda=0.36\), \(\beta=0.05\).
results for splitting by the second feature, "Angle of attack". Other configurations can be found in the Supplementary Material, Section D.2. In Figure 4(_a_) we show how the acceptance probability varies as a function of \(\lambda\). Figure 4(_b_) illustrates the dependence of the mean squared error of estimation on the fraction of points accepted for prediction. The curves show the expected upward trend as more points are accepted. Using more conservative estimates, one obtains higher accuracy for a given acceptance rate. However, by construction, high acceptance rates are not achievable for the proposed method due to the limitations on the values of the estimated density \(\widehat{p}_{n}(\mathbf{x})\).
### CPU-small Data Set
Another dataset from the UCI collection that we used is CPU-small. This dataset has 8192 instances and 12 features. Data splitting is done in the same manner as for the Airfoil dataset. During preprocessing we standardize the training data to have zero mean and unit variance and then apply the same scaling to the test set. Splitting was done based on the first feature, "lread".
In this experiment we tried two scenarios: the first uses the data as is, and the second uses a higher-dimensional version (embeddings) of the data obtained with a neural network. We first trained a two-layer neural network with 50 neurons in each layer and ReLU activations, and then used the values from the last layer as input for our method. In Figures 5(_a_) and 5(_b_) we present the partial MSE scores obtained with rejection in these two setups.
Using embeddings provides a much lower MSE without rejection, as one would expect. It also shows that using our method we can significantly outperform the baseline plugin method. For this dataset our algorithm is less sensitive to the choice of \(\beta\) than for the previous one. We opted to vary \(z_{1-\beta}\) directly in the embedding case since, for a higher
Figure 4: Airfoil data, split 70/30 by the second feature. For \(\lambda\in[0,50]\) we calculate acceptance (retention) probability and MSE at accepted points. \(x\)-values of acceptance probabilities are inferred as fraction of accepted points for each \(\lambda\).
dimension of the data, the values of \(\widehat{p}\) span a larger range on the logarithmic scale. To show the dependence on \(\beta\) we would need to choose values very close to \(0.5\); choosing the same set of \(\beta\) values as in the raw-data case yields curves similar to \(z_{1-\beta}=10^{-10}\).
## 6 Conclusion
In this work, we propose a new method for selective prediction in heteroskedastic regression tasks under the Chow risk model. The method is based on the natural idea of testing the value of the conditional variance at a given point. Our theoretical analysis shows the existence of exponential and polynomial convergence regimes that depend on the relative values of the variance and the abstention cost. The proposed method compares favorably to the plugin baseline both in theory and in the conducted experimental evaluation.
The research was supported by the Russian Science Foundation grant 21-11-00373. The work of F. Noskov was prepared within the framework of the HSE University Basic Research Program.
|
2308.00159 | Multispectral Image Segmentation in Agriculture: A Comprehensive Study
on Fusion Approaches | Multispectral imagery is frequently incorporated into agricultural tasks,
providing valuable support for applications such as image segmentation, crop
monitoring, field robotics, and yield estimation. From an image segmentation
perspective, multispectral cameras can provide rich spectral information,
helping with noise reduction and feature extraction. As such, this paper
concentrates on the use of fusion approaches to enhance the segmentation
process in agricultural applications. More specifically, in this work, we
compare different fusion approaches by combining RGB and NDVI as inputs for
crop row detection, which can be useful in autonomous robots operating in the
field. The inputs are used individually as well as combined at different times
of the process (early and late fusion) to perform classical and DL-based
semantic segmentation. In this study, two agriculture-related datasets are
subjected to analysis using both deep learning (DL)-based and classical
segmentation methodologies. The experiments reveal that classical segmentation
methods, utilizing techniques such as edge detection and thresholding, can
effectively compete with DL-based algorithms, particularly in tasks requiring
precise foreground-background separation. This suggests that traditional
methods retain their efficacy in certain specialized applications within the
agricultural domain. Moreover, among the fusion strategies examined, late
fusion emerges as the most robust approach, demonstrating superiority in
adaptability and effectiveness across varying segmentation scenarios. The
dataset and code is available at https://github.com/Cybonic/MISAgriculture.git. | Nuno Cunha, Tiago Barros, Mário Reis, Tiago Marta, Cristiano Premebida, Urbano J. Nunes | 2023-07-31T21:24:41Z | http://arxiv.org/abs/2308.00159v1 | # Multispectral Image Segmentation in Agriculture: A Comprehensive Study on Fusion Approaches
###### Abstract
Multispectral imagery is frequently incorporated into agricultural tasks, providing valuable support for applications such as image segmentation, crop monitoring, field robotics, and yield estimation. From an image segmentation perspective, multispectral cameras can provide rich spectral information, helping with noise reduction and feature extraction. As such, this paper concentrates on the use of fusion approaches to enhance the segmentation process in agricultural applications. More specifically, in this work, we compare different fusion approaches by combining RGB and NDVI as inputs for crop row detection, which can be useful in autonomous robots operating in the field. The inputs are used individually as well as combined at different times of the process (early and late fusion) to perform classical and DL-based semantic segmentation. In this study, two agriculture-related datasets are subjected to analysis using both deep learning (DL)-based and classical segmentation methodologies. The experiments reveal that classical segmentation methods, utilizing techniques such as edge detection and thresholding, can effectively compete with DL-based algorithms, particularly in tasks requiring precise foreground-background separation. This suggests that traditional methods retain their efficacy in certain specialized applications within the agricultural domain. Moreover, among the fusion strategies examined, late fusion emerges as the most robust approach, demonstrating superiority in adaptability and effectiveness across varying segmentation scenarios. The dataset and code are available at [https://github.com/Cybonic/MISAgriculture.git](https://github.com/Cybonic/MISAgriculture.git)
## 1 Introduction
In agriculture, autonomous robots are becoming increasingly popular because of the potential benefits they may have on food security, sustainability, resource-use efficiency, reduction of chemical treatments, and optimization of human effort and yield [14]. Alongside this trend, the utilization of multispectral imagery in
agricultural applications, including AgRA (Agricultural Robotics and Automation), has become increasingly significant in recent years. Some notable applications of these images include plant disease detection, fruit maturity assessment, and crop production analysis [5].
Certain bands, captured at specific frequencies across the electromagnetic spectrum, have the ability to reveal distinct information about plants. Among these bands, the near-infrared (NIR) band holds significance in agricultural tasks (e.g., assessing crop health) as it can effectively highlight chlorophyll absorption and water content in plants. One widely used index that relies on the NIR band is the Normalized Difference Vegetation Index (NDVI), which provides a quantitative measure of vegetation greenness and density. Compared with RGB-only data, incorporating this additional spectral information can enhance the discrimination of different objects and features within images. This enables more accurate identification and classification of crops, improving the process of image segmentation [19].
This work focuses on assessing the applicability of fusion approaches using multispectral data for segmentation-related agricultural tasks. Specifically, we investigate two fusion approaches: early fusion and late fusion. Early fusion involves combining the information from multiple sources at the input level before the segmentation process. This means that the data from different sources are merged into a single representation prior to segmentation. On the other hand, late fusion occurs after the segmentation process has been applied to each individual image. The segmentations are obtained independently, and then "fused," or combined, at a later stage. By exploring both early and late fusion techniques, we aim to assess their impact on image segmentation performance and determine which fusion approach yields superior results for the specific objectives of this work.
Through a comprehensive comparative analysis, the aim of this work is to make significant progress in automatic crop-row detection by studying early and late fusion of multispectral data using classical and DL-based segmentation approaches. To accomplish this, this paper brings two key contributions:
* A curated multispectral dataset collected on maize crops using a robotic platform, with crop row annotations;
* An extensive comparison study conducted on both deep learning (DL)-based and classical segmentation methods, focusing on early and late fusion techniques across two distinct datasets. The findings reveal two key insights: First, classical segmentation approaches prove to be competitive with DL-based methods in tasks that involve foreground-background separation, demonstrating their continued relevance in certain applications. Second, late fusion emerges as the most robust fusion approach, showcasing its superior adaptability and effectiveness across various scenarios.
## 2 Related Work
Image segmentation is a fundamental task in computer vision, which involves the division of an image into meaningful regions or objects to understand the scene [11][18][4]. In the past, semantic segmentation relied on methods using thresholding [15], edge detection [12] and region growing [6]. These methods have the advantage of simplicity and low computational cost.
On the other hand, convolutional neural networks (CNNs) have revolutionized the field in recent years and are now the most effective technique in pattern recognition applications [8]. One of the strongest advantages of using DL in image processing is the reduced need for handcrafted features. These improvements have helped agricultural tasks such as disease detection in vines [7] and the identification of crops, weeds, and soil [10], through architectures such as the encoder-decoder SegNet and Mask R-CNN, respectively.
Image segmentation can improve scene understanding; however, complex environments require the complementary information that multiple modalities can provide to better understand the scene [1]. To achieve this goal, fusion methods can be applied, which usually encompass three steps: first, deciding which modalities should be fused; second, choosing the method used to fuse the information, with techniques such as addition or averaging, concatenation, or ensembling; and finally, deciding where along the network the information should be fused [3][20]. Focusing on 'where' the information is fused, we highlight two stages: (i) early fusion, which consists of combining (merging) the data at the input layer, and (ii) late fusion, which consists of training features separately for each modality and merging them at later layers using methods such as element-wise summation [17].
## 3 Materials and Methods
This section outlines the methods, tools, and processes employed to conduct the experiments of this work. Firstly, we provide a comprehensive characterization of the study sites and present the technical details of the recorded maize data. Secondly, we formulate the segmentation problem in generic terms and then in a multispectral fusion context by focusing, specifically, on early and late fusion techniques of two distinct information sources.
### Study Site and Materials
The study was conducted using data collected from a maize crop known as Vargem Grande (VG) located in the Coimbra region, situated in the center of mainland Portugal (see Fig. 1a). The data collection took place during July of 2022, specifically during the early growth stage of the plants. To ensure optimal lighting conditions and minimize shadow interference, the data was collected around midday under sunny weather conditions.
The multispectral dataset was captured using a Parrot Sequoia multispectral camera[1]. This camera consists of four monochrome sensors (Green, Red, Red Edge, and Near Infrared) along with an RGB sensor (see Fig. 0(c)). To facilitate the data collection process, the camera was mounted on a mobile platform known as the Jackal from Clearpath[2]. The camera was positioned 1.2 meters above the ground, with the sensors facing downward (see Fig. 0(b)).
To gather the data, the robot was teleoperated in-between the crop rows. Images from all five sensors were captured every two seconds, ensuring a comprehensive dataset for analysis.
Figure 1: Study site and material used to record the dataset, where (a) illustrates the studied maize crop denominated Vargem Grande, (b) is the recording setup with which the dataset was recorded, and (c) is the multispectral sensor with its five sensors.
### Problem Formulation
Image segmentation involves the task of dividing an image into regions, or objects, based on their shared characteristics. Mathematically, image segmentation can be defined as a function that maps an input image to a class likelihood mask. Thus, let \(I\) represent the input image, defined as a three-dimensional array \(I=[p_{ijk}]_{h\times w\times b}\), where \(p_{ijk}\in[0,...,255]\) denotes the pixel intensity at coordinates \((i,j,k)\). The image dimensions are given by \(h\) (height), \(w\) (width), and \(b\) (number of spectral bands), with \(i\in[1,h]\), \(j\in[1,w]\), and \(k\in[1,b]\). To perform image segmentation, we aim to obtain a class likelihood mask \(Q\), represented by \(Q=[q_{ijk}]_{h\times w\times c}\). Here, \(q_{ijk}\in[0,1]\) indicates the likelihood of the pixel at coordinates \((i,j,k)\) belonging to each of the \(C=\{1,...,c\}\) segmentation classes, constrained by \(\sum_{k=1}^{c}q_{ijk}=1\).
In the specific context of this study, we focus on binary segmentation. This means that only one class is considered, resulting in a single-channel likelihood matrix \(Q=[q_{ij}]_{h\times w\times 1}\). Hence, the final segmentation mask with a class per pixel, \(M=[m_{ij}]_{h\times w}\) with \(m_{ij}\in\{0,1\}\), is obtained through a threshold-based approach:
| Sensors | Band: center wavelength (width) [nm] | Resolution [px] | Focal length [mm] | HFoV [°] | VFoV [°] |
| --- | --- | --- | --- | --- | --- |
| Monochrome | G: 550 (40); R: 660 (40); RE: 735 (10); NIR: 790 (40) | \(1280\times 960\) | 3.98 | 62 | 49 |
| RGB | R, G, B | \(4068\times 3456\) | 4.88 | 64 | 50 |

Table 1: Specifications of the sensors. Field of View (FoV).
Figure 2: Simplified approach of early and late fusion using RGB and NDVI as inputs on deep and classic methods.
\[m_{ij}=\begin{cases}1&\text{if }q_{ij}\geq T\\ 0&\text{if }q_{ij}<T\end{cases} \tag{1}\]
where \(T\) is a threshold value chosen to distinguish between the positive and negative classes in the segmentation.
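A direct implementation of this thresholding step might look as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def binarize(q, T=0.5):
    """Final mask M from the likelihood map Q via Eq. (1)."""
    return (q >= T).astype(np.uint8)
```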
The binary segmentation framework is used to compare classical methods with deep learning (DL)-based approaches using two input modalities: RGB (\(I^{RGB}\)) and NDVI (\(I^{N}\)). The RGB image \(I^{RGB}\) is defined as a three-dimensional array \(I^{RGB}=[p^{RGB}_{ijk}]_{h\times w\times 3}\), capturing the visible spectrum (400-700 nm) with the Red, Green, and Blue bands. On the other hand, the NDVI image \(I^{N}\) is a two-dimensional array \(I^{N}=[p^{N}_{ij}]_{h\times w}\), representing the Normalized Difference Vegetation Index. The NDVI is calculated as:
\[I^{N}=\frac{NIR-Red}{NIR+Red}\,, \tag{2}\]
where \(Red\) and \(NIR\) correspond to specific spectral bands. The Red band lies within the visible spectrum, while the NIR band extends beyond the visible range (700 to 1100 nm). These bands are particularly valuable for agricultural monitoring, capturing the absorption of chlorophyll in visible light and its reflection in the NIR spectrum.
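A possible pixel-wise implementation of Eq. (2) is sketched below; the small constant `eps` is our own guard against division by zero on pixels where \(NIR+Red=0\) and is not part of the definition.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Pixel-wise NDVI from the NIR and Red bands (Eq. 2)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)
```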
### Image Fusion
Fusion, in the context of image segmentation, refers to the integration of information derived from diverse sources into a unified representation. The fusion process can be applied at various stages, depending on the segmentation methods employed [16]. In this study, we specifically investigate two fusion approaches: early fusion and late fusion.
#### 3.3.1 Early Fusion
In the context of image processing, early fusion involves the merging of information at the input level, specifically within the pixel space. In this study, early fusion is employed using two different approaches: classical segmentation methods and DL-based segmentation methods.
In the comparison between classical and DL-based methods, the representation of early fusion varies depending on the approach used. Specifically, when employing classical approaches, the RGB image \(I^{RGB}\) is transformed into a grayscale representation denoted as \(I^{Gr}=[p^{Gr}_{ij}]_{h\times w}\). This conversion is achieved using the standard formula:
\[p^{Gr}_{ij}=0.299\,p^{R}_{ij}+0.587\,p^{G}_{ij}+0.114\,p^{B}_{ij}\,, \tag{3}\]
where \(p^{R}_{ij}\), \(p^{G}_{ij}\), and \(p^{B}_{ij}\) represent the pixel intensities of the Red, Green, and Blue bands at the coordinate \((i,j)\), respectively, with \(i\in[1,h]\) and \(j\in[1,w]\). The resulting grayscale image \(I^{Gr}\) has dimensions given by \(h\times w\).
For classical approaches, the fused representation \(I^{Ec}\) is obtained by computing the pixel-wise mean between the NDVI image \(I^{N}\) and the grayscale image \(I^{Gr}\):
\[p_{ij}^{Ec}=\frac{p_{ij}^{N}+p_{ij}^{Gr}}{2}\, \tag{4}\]
here, \(p_{ij}^{N}\) and \(p_{ij}^{Gr}\) represent the pixel intensities of the NDVI and grayscale images at the \((i,j)\) coordinate, respectively. The resulting fused representation \(I^{Ec}\) is an image of dimensions \(h\times w\). On the other hand, when employing DL-based segmentation methods, the fused representation \(I^{Ed}\) is obtained by channel-wise concatenation of the RGB image \(I^{RGB}\) and the NDVI image \(I^{N}\). This is represented as:
\[I^{Ed}=[I^{RGB},I^{N}]=\left[p_{ijk}^{RGB}\,|\,p_{ijk}^{N}\right]_{h\times w \times 4} \tag{5}\]
where \(p_{ijk}^{RGB}\) and \(p_{ijk}^{N}\) represent the pixel intensities of the RGB and NDVI images at the \((i,j,k)\) coordinate, respectively. The resulting fused representation \(I^{Ed}\) is a tensor with dimensions \(h\times w\times 4\), where the first three channels correspond to the RGB image and the fourth channel corresponds to the NDVI image.
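Both early-fusion variants can be sketched as follows, assuming an RGB array of shape \(h\times w\times 3\) and an NDVI array of shape \(h\times w\) that have been rescaled to a common range beforehand (the rescaling is our assumption, not a step stated in the text):

```python
import numpy as np

def early_fusion_classical(ndvi, rgb):
    """Classical early fusion (Eqs. 3-4): grayscale conversion, then pixel-wise mean."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return (ndvi + gray) / 2.0

def early_fusion_dl(rgb, ndvi):
    """DL early fusion (Eq. 5): channel-wise concatenation into an h x w x 4 tensor."""
    return np.concatenate([rgb, ndvi[..., None]], axis=-1)
```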
#### 3.3.2 Late Fusion
Early fusion involves merging information at the input space, while late fusion performs the merging at the output space. In this study, late fusion is achieved by computing the pixel-wise weighted sum of the class likelihoods of each model before the final class decision.
In the context of a late fusion framework, the segmentation process involves two input images: \(I^{N}\) and \(I^{RGB}\). Each image is individually processed through a segmentation model, generating respective output likelihood masks: \(Q^{N}=[q_{ij}^{N}]_{h\times w\times 1}\) and \(Q^{RGB}=[q_{ij}^{RGB}]_{h\times w\times 1}\), where, \(q_{ij}^{N}\) and \(q_{ij}^{RGB}\in[0,1]\) represent the likelihood of the positive class at the pixel coordinates \((i,j)\).
The fused representation is obtained by computing a pixel-wise weighted sum of the likelihoods from both segmentation models. Hence, the fused likelihood \(q_{ij}^{L}\) at the pixel coordinates \((i,j)\) is calculated using the following formula:
\[q_{ij}^{L}=\alpha\cdot q_{ij}^{N}+\beta\cdot q_{ij}^{RGB}\, \tag{6}\]
where \(\alpha\) and \(\beta\) are weights that can be adjusted to balance the contribution of each likelihood according to the models' performance. By controlling the values of \(\alpha\) and \(\beta\), the fusion process can be fine-tuned to achieve optimal segmentation results based on the strengths of the individual models.
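A minimal sketch of Eq. (6) is given below; the fused likelihood map can then be binarized with Eq. (1), for instance using the `binarize` helper sketched in the problem formulation.

```python
def late_fusion(q_ndvi, q_rgb, alpha=0.5, beta=0.5):
    """Late fusion (Eq. 6) of two h x w likelihood maps produced by the
    single-modality models; alpha and beta weight their contributions."""
    return alpha * q_ndvi + beta * q_rgb
```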
## 4 Experimental Evaluation
The evaluation section in this study provides a comprehensive assessment of early and late fusion techniques within a multispectral image segmentation framework
applied to the AgRA domain. The section outlines the datasets used for evaluation, describes the implementation details and evaluation metrics employed, and presents a thorough discussion of the quantitative and qualitative results obtained.
### Datasets
The proposed approaches are evaluated primarily on the maize crop dataset (referred to as VG) described in Section 3.1. Complementarily, a dataset collected from vineyards is used to assess cross-domain generalization capability. For the VG dataset, a total of 532 images were recorded for each of the five sensors (R, G, RE, NIR, and RGB). The images were aligned and cropped to a final size of \(1100\times 825\) and, for evaluation purposes, resized to \(240\times 240\). The dataset was then split into an 80/20 ratio for training and testing, respectively. Regarding the vineyard data, the dataset encompasses images of \(240\times 240\) from three distinct vineyards. The evaluation follows the approach proposed in [2], employing a cross-validation method that involves training on data from two vineyards and testing on the third. Relevant information about the datasets can be found in Table 2.
### Implementation Details
This section outlines the implementation details of both the classical and DL-based segmentation approaches.
#### 4.2.1 Classical Approach
Three classical segmentation methods were employed: Otsu's thresholding, edge-based, and region-based techniques. For Otsu's thresholding, the OpenCV _threshold_ function with an automatically chosen threshold value was used to perform the segmentation. In the case of edge-based segmentation, the Canny edge detector was employed to detect the edges of the objects, with further processing to fill the contours and remove small objects from the segmented image. Lastly, region-based segmentation was performed by generating
| Dataset | Vargem Grande | Qta Baixo | ESAC | Valdoeiro |
| --- | --- | --- | --- | --- |
| Sample size (train/test) | 532 (425/107) | 150 | 189 | 120 |
| Bands | R, G, RE, NIR, RGB | B, G, R, RE, NIR, Thermal | B, G, R, RE, NIR, Thermal | B, G, R, RE, NIR, Thermal |
| Dimensions | \(1100\times 825\) | \(240\times 240\) | \(240\times 240\) | \(240\times 240\) |

Table 2: Dataset information, where B, G, R, RE and NIR represent Blue, Green, Red, Red-edge and Near-infrared, respectively.
an elevation map using the Sobel gradient, determining markers for background and plants based on gray value histograms, and then applying the watershed transform to fill regions of the elevation map with those markers.
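For illustration, the Otsu and region-based steps might be sketched as follows with OpenCV and scikit-image; the marker levels `low` and `high` are illustrative placeholders, not the values used in the experiments.

```python
import cv2
import numpy as np
from skimage import filters, segmentation

def otsu_mask(gray_u8):
    """Otsu thresholding with an automatically chosen threshold."""
    _, mask = cv2.threshold(gray_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def region_mask(gray_u8, low=30, high=150):
    """Region-based segmentation: Sobel elevation map, histogram-based markers,
    then the watershed transform."""
    elevation = filters.sobel(gray_u8)
    markers = np.zeros_like(gray_u8, dtype=np.int32)
    markers[gray_u8 < low] = 1    # background marker
    markers[gray_u8 > high] = 2   # plant marker
    return segmentation.watershed(elevation, markers) == 2
```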
#### 4.2.2 Deep Learning Approaches
In this study, two distinct DL-based segmentation models were utilized: SegNet5 and DeepLabV36. SegNet employs an encoder-decoder architecture, where the input is gradually encoded to a latent space and then gradually decoded to an output mask. In contrast, DeepLabV3 upsamples the latent representation in fewer steps.
Footnote 5: SegNet GitHub Implementation
Footnote 6: DeepLabV3 Pytorch
Both models were implemented using the PyTorch [13] framework and executed on a hardware setup consisting of an NVIDIA GEFORCE GTX 3090 GPU and an AMD Ryzen 9 5900X CPU with 64 GB of RAM. The training process utilized the AdamW optimizer [9] with a learning rate of 1e-3 for VG and approximately 1e-4 and 1e-5 for vine models. The Binary Cross-Entropy with Logits Loss (BCEWithLogitsLoss) function was employed to calculate the loss, and the outputs (logits) were passed through a sigmoid activation function to obtain the final probabilities.
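A minimal PyTorch training-step sketch consistent with this setup is shown below; the model, data loader and device are assumed to be defined elsewhere.

```python
import torch
from torch import nn

def train_epoch(model, loader, optimizer, device):
    """One epoch with BCEWithLogitsLoss, as in the setup described above."""
    criterion = nn.BCEWithLogitsLoss()
    model.train()
    for images, masks in loader:   # images: B x C x H x W, masks: B x 1 x H x W
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), masks.float())
        loss.backward()
        optimizer.step()

# Example optimizer, matching the description in Section 4.2:
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
```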
### Evaluation Metrics
The performance of the segmentation methods was evaluated using several metrics, including pixel accuracy (acc), \(F_{1}\) score, and Intersection over Union (IoU). These metrics provide insights into the accuracy and quality of the segmentation results. The pixel accuracy is defined as:
\[\text{acc}=\frac{TP+TN}{TP+FP+TN+FN}, \tag{7}\]
where TP, TN, FP, and FN represent True Positives, True Negatives, False Positives, and False Negatives, respectively. The \(F_{1}\) score is calculated as:
\[F_{1}\text{ score}=\frac{2\times TP}{2\times TP+FP+FN}. \tag{8}\]
The \(IoU\) is computed as:
\[IoU=\frac{\text{Area of Intersection}}{\text{Area of Union}}\, \tag{9}\]
where _Area of Intersection_ refers to the number of overlapping pixels between the predicted mask and the ground truth mask, \(A\cap B=\{p_{ij}:p_{ij}\in A\text{ and }p_{ij}\in B\}\), and _Area of Union_ represents the total number of pixels encompassed by both masks, including the overlapping region, \(A\cup B=\{p_{ij}:p_{ij}\in A\text{ or }p_{ij}\in B\}\). Here \(p_{ij}\) denotes a pixel at coordinate \((i,j)\), while \(A\) and \(B\) represent the ground truth mask and the predicted mask, respectively.
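The three metrics can be computed jointly from the confusion counts; a minimal sketch for binary masks:

```python
import numpy as np

def segmentation_scores(pred, gt):
    """Pixel accuracy, F1 and IoU (Eqs. 7-9) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    acc = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return acc, f1, iou
```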
| Method | Input | Maize: VG | Vine: Qta. Baixo | Vine: ESAC | Vine: Valdoeiro | Average |
| --- | --- | --- | --- | --- | --- | --- |
| OST | RGB | 74.4 / 28.3 / 16.7 | 57.6 / 41.9 / 27.6 | 79.1 / 52.7 / 40.2 | 74.5 / 38.6 / 26.0 | 71.4 / 40.4 / 27.6 |
| OST | NDVI | 76.1 / 32.3 / 19.6 | 84.1 / 61.7 / 46.1 | 65.6 / 49.7 / 34.8 | 92.4 / 67.8 / 56.0 | 79.6 / 52.9 / 39.1 |
| OST | Early F. | 75.6 / 34.1 / 20.9 | 71.6 / 52.7 / 37.7 | 71.4 / 55.5 / 41.2 | 89.5 / 65.3 / 52.7 | 77.1 / 51.9 / 38.1 |
| OST | Late F. | 67.5 / 33.7 / 20.8 | 83.0 / 67.5 / 44.7 | 89.5 / 66.8 / 42.9 | 91.6 / 81.6 / 56.7 | **82.7 / 62.4 / 41.3** |
| Edge-based | RGB | 76.6 / 12.9 / 6.9 | 69.8 / 15.5 / 8.5 | 78.1 / 26.3 / 15.4 | 86.9 / 22.7 / 13.1 | 77.9 / 19.4 / 10.5 |
| Edge-based | NDVI | 75.6 / 13.0 / 7.0 | 70.3 / 16.8 / 9.3 | 61.1 / 18.7 / 10.4 | 94.2 / 48.6 / 33.7 | 75.3 / 24.3 / **15.1** |
| Edge-based | Early F. | 84.1 / 21.0 / 11.9 | 76.8 / 16.3 / 8.9 | 65.4 / 14.8 / 8.0 | 93.9 / 39.3 / 25.8 | **80.1** / 22.9 / 13.7 |
| Edge-based | Late F. | 70.1 / 14.0 / 7.6 | 64.8 / 18.9 / 10.6 | 71.1 / 25.5 / 14.8 | 87.6 / 39.1 / 25.1 | 73.5 / **24.4** / 14.5 |
| Region-based | RGB | 84.1 / 21.0 / 11.9 | 78.3 / 47.4 / 33.1 | 78.9 / 44.3 / 33.2 | 85.3 / 44.6 / 31.2 | 81.7 / 39.3 / 27.4 |
| Region-based | NDVI | 82.2 / 12.4 / 6.7 | 89.0 / 67.9 / 52.5 | 76.0 / 50.9 / 37.3 | 97.2 / 76.3 / 63.8 | 86.1 / 51.9 / 40.1 |
| Region-based | Early F. | 76.7 / 19.0 / 10.5 | 81.3 / 52.7 / 37.0 | 87.6 / 63.5 / 49.8 | 97.3 / 77.6 / 65.2 | 85.8 / 53.2 / **40.6** |
| Region-based | Late F. | 81.8 / 25.3 / 14.7 | 93.1 / 69.3 / 34.9 | 92.1 / 48.9 / 24.6 | 98.6 / 82.7 / 45.2 | **91.4 / 56.6** / 29.9 |
| SegNet | RGB | 96.2 / 87.1 / 78.5 | 84.9 / 52.1 / 35.6 | 73.1 / 41.9 / 27.7 | 92.1 / 58.0 / 41.3 | 86.6 / 59.8 / 45.8 |
| SegNet | NDVI | 95.6 / 85.3 / 76.1 | 85.9 / 64.4 / 48.4 | 78.5 / 51.4 / 36.8 | 93.9 / 67.1 / 51.7 | **88.5 / 67.1 / 53.3** |
| SegNet | Early F. | 96.8 / 89.3 / 80.8 | 81.1 / 42.1 / 26.7 | 81.4 / 50.5 / 33.9 | 94.3 / 61.0 / 43.9 | 88.4 / 60.7 / 46.3 |
| SegNet | Late F. | 96.1 / 86.9 / 78.6 | 86.2 / 56.4 / 40.4 | 75.9 / 46.9 / 33.0 | 93.4 / 61.4 / 45.7 | 87.9 / 62.9 / 49.4 |
| DeeplabV3 | RGB | 96.5 / 87.9 / 79.8 | 81.8 / 35.6 / 21.8 | 82.0 / 45.0 / 30.1 | 91.0 / 56.2 / 39.7 | 87.9 / 56.1 / 42.9 |
| DeeplabV3 | NDVI | 95.9 / 86.0 / 77.3 | 87.3 / 59.2 / 42.2 | 77.7 / 33.8 / 21.4 | 89.1 / 44.2 / 28.8 | 87.5 / 55.8 / 42.4 |
| DeeplabV3 | Early F. | 97.3 / 89.2 / 81.2 | 82.1 / 31.0 / 18.7 | 79.9 / 37.3 / 23.8 | 89.9 / 52.8 / 36.4 | 87.3 / 52.6 / 40.0 |
| DeeplabV3 | Late F. | 96.6 / 87.7 / 80.2 | 85.3 / 47.5 / 32.0 | 81.0 / 39.0 / 25.9 | 92.5 / 58.0 / 42.3 | **88.9 / 58.1 / 45.1** |

Table 3: Segmentation performance on the Maize (VG) and Vine (Qta. Baixo, ESAC, and Valdoeiro) datasets, employing classical approaches such as Otsu Threshold (OST), Edge-based, and Region-based, as well as DL-based approaches including SegNet and DeeplabV3. Each method is evaluated with four input configurations: RGB and NDVI individually, and both modalities fused using early and late fusion techniques. Each cell reports Acc. / F1 / IoU in percentage [%], with the best average scores highlighted in bold.
### Results and Discussion
This section presents the experimental results for both classical and DL-based segmentation methods, comprising both quantitative and qualitative assessments. The quantitative results are organized in Table 3, while visual representations of the segmentation masks are illustrated in Fig. 3. Each segmentation approach is evaluated in four distinct configurations: first with the RGB and NDVI modalities individually, and then with the modalities fused using the early and late fusion techniques described in Section 3.3. The results were obtained with the segmentation threshold \(T=0.5\); for the late fusion results, both models were given an equal contribution, _i.e._, \(\alpha=0.5\) and \(\beta=0.5\).
#### 4.4.1 Classical vs DL-based
In this work, we employ classical unsupervised methods and supervised DL-based segmentation methods. The classical methods perform well on tasks where the primary objective is to separate foreground from background, as is the case for the Vineyard dataset, where the goal is to segment individual plants. In such cases, unsupervised approaches are competitive with DL-based approaches, offering the advantage of simplicity and lower complexity. However, in segmentation tasks that involve identifying spatial regions containing both foreground and background, such as the Maize dataset, where the objective is to detect the plant rows, supervised DL-based approaches show a clear advantage due to their ability to learn spatial information. The results obtained in our experiments consistently confirm this, as depicted in Table 3 and Fig. 3.
#### 4.4.2 Fusion vs No-Fusion
The results consistently show that late fusion either achieves the best performance or ranks a close second, distinctly outperforming early fusion. This superiority means that, on average, extracting features from the individual modalities first and then fusing them at a later stage yields better results than training a single model on both modalities combined.
Upon analyzing the average results, it becomes evident that late fusion capitalizes on the model with the highest performance. By averaging the outputs of both models, late fusion is able to reduce the noise associated with the lesser-performing model. However, this method also has a downside: valuable information from the best-performing model may be diluted or lost. Thus, while late fusion leverages the strengths of both models to enhance overall robustness, finding the right balance in the contributions of each model becomes crucial. One potential approach to achieve this balance is to weight the contributions based on their respective performance. Investigating this weighted fusion strategy offers an interesting avenue for future work.
#### 4.4.3 Runtime Analysis
In terms of computational performance, DL methods demand a considerable amount of time to execute due to the intensive computations involved. In our case, the maximum runtime reached approximately twenty-five minutes for the entire training process, specifically during late fusion, where the
batch size supported by the hardware was limited to 32 (VG) and 16 (Vine). In contrast, classical methods demonstrate the opposite behavior, being significantly faster and achieving results within a minute.
## 5 Conclusions
This work studies the impact of fusion (combining) approaches for multispectral data in segmentation tasks applied to domains related to digital precision agriculture and agricultural robotics. The study was conducted on both classical and DL-based segmentation methods, with the experimental part supported by two datasets: a dataset of vineyards and a dataset of maize crops, recorded and curated specifically for this study.
The experimental findings yield two principal observations. First, classical segmentation methods, utilizing techniques like thresholding and edge detection, are competitive with DL-based approaches in tasks requiring foreground-background separation. This highlights their continued applicability in specialized scenarios. Second, late fusion, where individual modalities are processed separately and then fused, emerges as the most robust approach, demonstrating superior adaptability across various experimental conditions. These insights offer
Figure 3: Qualitative segmentation results of both VG and vineyard dataset. The images (a) to (f) (top row), represent respectively the RGB, NDVI and ground-truth masks. Images (g) to (l) (middle row) represent segmentation masks generated by classical approaches. And finally, images (m) to (r) (bottom row) represent segmentation masks generated by SegNet. More specifically, images (g) to (i) were generated by Otsu, while images (j) to (l) were generated with a region-based method.
valuable guidance for both current applications and future research in segmentation algorithms.
## Acknowledgments
This work has been supported by the project GreenBotics (ref. PTDC/EEI-ROB/2459/2021), funded by Fundação para a Ciência e a Tecnologia (FCT), Portugal. It was also partially supported by FCT through grant UIDP/00048/2020 and under the PhD grant with reference 2021.06492.BD.
|
2309.10063 | Survey of Consciousness Theory from Computational Perspective | Human consciousness has been a long-lasting mystery for centuries, while
machine intelligence and consciousness is an arduous pursuit. Researchers have
developed diverse theories for interpreting the consciousness phenomenon in
human brains from different perspectives and levels. This paper surveys several
main branches of consciousness theories originating from different subjects
including information theory, quantum physics, cognitive psychology, physiology
and computer science, with the aim of bridging these theories from a
computational perspective. It also discusses the existing evaluation metrics of
consciousness and possibility for current computational models to be conscious.
Breaking the mystery of consciousness can be an essential step in building
general artificial intelligence with computing machines. | Zihan Ding, Xiaoxi Wei, Yidan Xu | 2023-09-18T18:23:58Z | http://arxiv.org/abs/2309.10063v1 | # Survey of Consciousness Theory from Computational Perspective
###### Abstract
Human consciousness has been a long-lasting mystery for centuries, while machine intelligence and consciousness remain an arduous pursuit. Researchers have developed diverse theories for interpreting the consciousness phenomenon in human brains from different perspectives and levels. This paper surveys several main branches of consciousness theories originating from different subjects, including information theory, quantum physics, cognitive psychology, physiology and computer science, with the aim of bridging these theories from a computational perspective. It also discusses the existing evaluation metrics of consciousness and the possibility for current computational models to be conscious. Breaking the mystery of consciousness can be an essential step in building general artificial intelligence with computing machines.
###### Contents
* 1 Introduction
* 1.1 A Platonic Dialogue About Human Consciousness
* 1.2 Definition of Consciousness
* 1.3 Measurement of Consciousness
* 1.4 Consciousness and Intelligence
* 1.5 Consciousness and Free Will
* 1.6 Consciousness while Asleep
* 1.7 Overview of Consciousness Theories
* 2 Information Integration Theory
* 2.1 Information Entropy
* 2.2 Basic Concepts of IIT
* 2.3 Measurement of Information Integration
* 2.4 Biological Evidence
* 3 Consciousness as a State of Matter
* 3.1 Basics of Quantum Mechanics
* 3.2 Integration Principle
* 3.3 Independence Principle
* 3.4 Dynamics Principle
* 4 Orchestrated Objective Reduction Theory
* 4.1 Consciousness as Orchestrated Objective Reduction
* 4.2 Free Will in Neurons
* 4.3 Diosi-Penrose Objective Reduction
* 4.4 Evidence for Objective Reduction of Quantum State
* 5 Global Workspace Theory
* 5.1 The Theatre of Consciousness
* 5.2 Computational Models of GWT
* 6 Higher-Order Theories
* 6.1 Higher-Order Perception Theory
* 6.2 Higher-Order Thought Theory
* 6.3 Self-Representational Theory
* 6.4 Other theories and perspectives in HOT
* 7 Attention Schema Theory
* 7.1 Formulation
* 7.2 \(I\)-consciousness and \(M\)-consciousness
* 7.3 AST as a Unification of GWT and HOT
* 8 Conscious Turing Machine
* 8.1 Formulation
* 8.2 CTM for Consciousness
* 8.3 Relationships with Other Theories
* 9 Physiological Evaluation Metric of Consciousness
* 9.1 Metrics Based on Electrical Signals
* 9.2 Metrics Based on Behaviors
* 10 Look Ahead: Can Computational Models Be Conscious?
* 10.1 Background
* 10.2 Large Language Model
* 10.3 Emerging Intellectual Capability of LLM - Turing Test
* 10.4 Consciousness of LLM
* 11 Concluding Remarks
## 1 Introduction
Consciousness is a complex and elusive phenomenon that remains one of the greatest mysteries of science. It has been the subject of philosophical inquiry for centuries and, more recently, of scientific investigation. We humans do not understand why or how consciousness exists in our brains, and we even hold divergent opinions on whether we truly have consciousness. Existing consciousness theories provide different interpretations of the human conscious process.
This paper provides a comprehensive exploration of the theoretical foundations of consciousness from interdisciplinary perspectives. Chapter 1 endeavors to characterize the concepts related to consciousness by differentiating consciousness from related notions such as awareness, arousal, and wakefulness. This chapter further emphasizes the importance and difficulties of the human consciousness problem, with the aim of drawing attention from different research communities to jointly investigate this problem. Chapters 2, 3, and 4 elucidate the mathematical and physical underpinnings of consciousness. Specifically, Chapter 2 introduces **Information Integration Theory**, which outlines the conditions for information entropy required by a conscious entity, offering insights into the informational characteristics that consciousness might exhibit. Chapters 3 and 4 discuss **Consciousness as a State of Matter** and **Orchestrated Objective Reduction Theory**, both approaching the consciousness problem from a physics standpoint. These two chapters discuss the specific features that consciousness, as a state of matter, should possess, along with the principles that some quantum theorists propose as the basis for the generation of consciousness. Subsequent Chapters 5, 6, 7 and 8 survey several influential theories of consciousness and succinctly summarize research on computational models associated with each theory, including **Global Workspace Theory** (5), **Higher-Order Theories** (6), **Attention Schema Theory** (7) and the **Conscious Turing Machine** (8). In Chapter 9, a brief overview of contemporary biomedical measurement methods for consciousness, grounded in electrophysiological signals and behavioral indicators, is provided. In the final Chapter 10, we engage in a discursive examination of artificial intelligence (AI) consciousness, particularly delving into the question of whether Large Language Models, as instances of advanced computational models, possess consciousness, and exploring the necessary and sufficient conditions for AI consciousness.
In summary, this paper offers a comprehensive review of consciousness from various subjects encompassing information theory, quantum physics, cognitive psychology, physiology, and computer science, with the aim of bridging these theories from a computational perspective for building future AI consciousness.
To give the reader a first taste of this topic, a dialogue revolving around the consciousness problem is provided in the following section.
### A Platonic Dialogue About Human Consciousness
One day, I met two PhD students in the modern world discussing the consciousness problem within the human brain. The conversation starts with the relationship between consciousness and the physical world and moves on to thoughts about what consciousness is. The confusion in this conversation serves as the motivation for this study and introduces the essential problems that this article aims to address. Here is the conversation.
**Athena:** Hey, I was looking at the results of the double-slit experiment, but I'm struggling to understand what determines which slit the photons go through. What happens here?
**Galileo:** Hey, it's just the wave-particle duality of the photons, and the state of the photon collapses only when you observe it on the screen. It is just random. Don't dig too much into it.
**Athena:** What do you mean by 'random'? Does this world ultimately have a random essence? Even Einstein says that God does not play dice.
**Galileo:** Your measurement affects its state, which determines the slit it goes through in the double-slit experiment.
**Athena:** So my mental status determines the state of the photon? Its state may change if I choose to observe it in a different way, say, 0.00001 seconds later than the intended observation.
**Galileo:** Yeah, maybe. And your mental status is also a stochastic process, right? You have your free will or consciousness.
**Athena:** Consciousness. Do you really believe in its existence? What if I'm deterministic, fully determined by the underlying physical and chemical rules?
**Galileo:** But quantum mechanics assumes the basic principle that the photons, and particles in your body, including your brain, still have a probabilistic state, right? So you are also following a stochastic process instead of a deterministic one.
**Athena:** You are right. The state collapse process is assumed to have true randomness, and this process may happen in my brain, affecting my decision, and also which slit these photons may go through. But I still don't quite believe in the existence of my consciousness. The state decoherence may happen for a system as large as my brain, so there is no quantum property and I may still be deterministic.
**Galileo:** I think the existence of consciousness is a belief that differs from person to person.
**Athena:** Wait, but what is consciousness? We cannot discuss its existence without a clear definition. The consciousness seems to describe the subjective experience but without a rigorous definition. I'm wondering where and when the conscious process happens.
**Galileo:** I guess it's in the cerebral cortex or thalamus. However, for such a phenomenon to happen, I would say most functional parts of the brain may have a collaborative process to trigger the consciousness.
**Athena:** I agree. Also if we assume a stone, or a tree, cannot be as conscious as a human, then the system has to have a certain level of computational power.
**Galileo:** Also, I cannot imagine a system having consciousness without memory.
**Athena:** I may disagree on that. I think a patient with brain damage of losing his memory still has consciousness.
**Galileo:** That may be evidence. Do you think consciousness persists during sleep?
**Athena:** This can be a complicated problem. You know there are multiple stages during sleep, including non-rapid eye movement (NREM) and rapid eye movement (REM). The levels of consciousness can be different for different stages.
**Galileo:** Agreed. The dream happens during the REM, right? It feels like I have consciousness during the dream, so I guess I'm conscious at REM. But it's still quite different from the wakefulness in experience.
**Athena:** I'm not sure, it's quite complicated. But the essential difference between dream and wakefulness is that during sleep there is no external sensory input to the brain, all the experiences are fake and fabricated by the brain itself. If the consciousness happens during the dream, it indicates that external sensory inputs may not be necessary for the system to be conscious.
**Galileo:** I guess so. It seems the consciousness is quite close to the subjective experience, even if I'm not sure if subjective experience is real existence or a fake hallucination by humans.
**Athena:** It seems 'subjective experience' is an equivalent phrase for consciousness in some books. But this definition is still unclear to me. Also, the 'real' and 'fake' problem is unclear to me. If our measurement of the world can affect its state, what does it even mean to have a 'real' observation of the world versus a 'fake' one? After all, they are just the mirrored signals in our brains, which cannot directly represent the state of the real world.
**Galileo:** Well, that's a very philosophically pessimistic perspective. Speaking of the definition of consciousness, if we assume consciousness is defined as 'subjective experience', it seems not very related to another concept called intelligence. Nowadays, people are building artificial intelligence in computers; does it also have consciousness? This could be an interesting problem! Do you think it's just about the Turing test [Turing, 2009]?
**Athena:** I don't think so. The focus of that paper is discussing whether the Turing machine can achieve human-level intelligence. Please notice the difference in the words here, it's intelligence but not consciousness! I assume a program passing the Turing test only means that it's as intelligent as a human, but not conscious.
**Galileo:** Thanks for reminding me of that. It seems that intelligence and consciousness are two different things. But remember that at the beginning of the discussion, we mentioned that consciousness may require a certain level of computational power, and this computational power may be phrased as intelligence.
**Athena:** Maybe; intelligence seems to be a necessary condition for consciousness to emerge, but I'm not sure what level of intelligence consciousness requires.
**Galileo:** Yeah, this is undiscovered and could be a good research topic. However, if a creature is already intelligent enough to survive in the world, why does it still require consciousness?
**Athena:** Good question. According to Darwin's theory of evolution, the existing creatures in the world should only exhibit those skills in favor of their survival through natural selection. If consciousness exists in humans, it indicates that it has some benefits for humans to survive in environments, and those without it are erased over history. But the trees and flowers still exist, which really troubles me.
**Galileo:** Trees and flowers are of different species from humans. Maybe certain species, like animals, require consciousness.
**Athena:** Well, but I still feel unclear about why consciousness exists in humans or other animals, if assumed to exist.
**Athena:** It seems like consciousness theory can be a very complicated subject with correlations to some very different subjects, like physics, biology, computer science, neuroscience, information theory, etc. It can be very difficult. But let's start to investigate at least!
The above conversation is a microcosm of the discussions among the authors of this paper when starting the investigation of the consciousness problem. We will start the discussion with the definition of consciousness (Sec. 1.2), and then the relationship between consciousness and intelligence (Sec. 1.4) and between consciousness and free will (Sec. 1.5). We then discuss consciousness while asleep (Sec. 1.6) and give a brief overview of the existing consciousness theories (Sec. 1.7).
### Definition of Consciousness
**Consciousness, Awareness, Wakefulness and Arousal:** Consciousness is a complex and multifaceted concept that has been studied extensively in the fields of neuroscience, psychology, and philosophy. Existing research typically defines consciousness as comprising two main components: arousal (wakefulness) and awareness (subjective experience) [Lendner et al., 2020]. Arousal refers to the overall state of alertness or wakefulness, while awareness refers to the subjective experience of perceiving and interpreting sensory information.
Typically, arousal is indicated by the opening of the eyes, while awareness is inferred by the ability to follow commands [Lee et al., 2022]. However, in certain instances, such as during dreaming, subjective experience can still occur despite the absence of full wakefulness. Consciousness is considered to be absent during sleep or anesthesia, but in some cases, it can still be present, depending on the level of arousal and awareness. Consciousness, awareness, wakefulness, and arousal are related but distinct concepts in the study of the human mind and brain. We provide descriptive definitions for each concept according to the existing literature as follows:
**Consciousness** refers to the subjective experience of being aware of one's thoughts, feelings, sensations, and surroundings. It is often described as the state of being awake and aware of one's surroundings and internal states. As an important concept in discussing the consciousness problem, _qualia_ refers to the subjective and personal experience of sensory information, such as the way we perceive colors, sounds, tastes, and smells. It is the subjective experience of sensory perception that cannot be objectively measured or observed by others.
**Awareness** refers to the ability to perceive, process, and comprehend information from one's environment or inner experience. It includes both conscious and unconscious processes and can range from simple sensory perception to complex cognitive processes such as attention, memory, and reasoning.
**Arousal** refers to the level of responsiveness of the brain and body to internal and external stimuli. It is a physiological state that ranges from a state of low arousal, such as drowsiness or relaxation, to high arousal, such as intense excitement or fear.
**Wakefulness** refers to the state of being awake and not asleep. It is a physiological state characterized by the presence of the electrical activity of the brain and the ability to respond to external stimuli.
In summary, consciousness is a subjective experience of being aware of one's thoughts, feelings, and surroundings, while awareness is the ability to perceive, process, and comprehend information. Wakefulness is a physiological state characterized by being awake, which is _usually identified with arousal_. Arousal is the level of responsiveness to stimuli. Awareness and arousal are necessary conditions for consciousness, but they are not sufficient to give rise to consciousness, and consciousness can occur in the absence of full awareness and high arousal levels.
### Measurement of Consciousness
Recent studies have developed effective measures of human consciousness [Seth et al., 2008, Demertzi et al., 2017], such as electrical signal-based metrics like the Perturbational Complexity Index (PCI) [Casali et al., 2013] and the Bispectral Index (BIS) [Rosow and Manberg, 2001, Johansen, 2006], and behavior-based metrics like the Glasgow Coma Scale (GCS) [Jones, 1979, Sternbach, 2000], the Coma Recovery Scale-Revised (CRS-R) [Giacino et al., 2004], and the Full Outline of Unresponsiveness (FOUR) [Wijdicks et al., 2005]. Details of each physiological evaluation metric are discussed in Sec. 9. In the following paragraphs, we take the PCI metric as an example to distinguish the concepts of arousal and awareness in different conscious states.
The ability to accurately measure consciousness has important implications for understanding and treating conditions that affect consciousness, such as coma, anesthesia, and brain injury. PCI was developed from electroencephalographic (EEG) responses to direct and noninvasive cortical perturbation with transcranial magnetic stimulation (TMS). The PCI quantifies the complexity of deterministic patterns of significant cortical activation evoked by TMS to derive an empirical cutoff that reliably discriminates between unconsciousness and consciousness in various states, including REM sleep, wakefulness, ketamine-induced anesthesia, and conscious brain-injured patients.
In the PCI study by Casali et al. [2013], arousal and awareness levels serve as indicators of the degree of human consciousness. The arousal and awareness levels of consciousness under different states are summarized in Table 1, which shows that under different states consciousness appears differently in human brains [Lee et al., 2022]. Compared with the full wakefulness of a normal person, sleeping stages like REM or NREM have lower levels of arousal or awareness and are commonly believed to be not as "conscious" as the wakeful state. There are pharmacological approaches to achieve similarly low levels of arousal and awareness with anesthesia, leading to incomplete conscious states of a human. From a pathology view, patients in a minimally conscious state (MCS) appear to have relatively high levels of arousal and awareness but still reduced consciousness, which is evidence that arousal and awareness are necessary but not sufficient conditions for consciousness. People with unresponsive wakefulness syndrome have high arousal but low awareness.
### Consciousness and Intelligence
In the well-known paper by Alan Turing, he made a comment on the consciousness argument against the Turing machine [Turing, 2009]:
_"I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. "_
Considering that the question in the paper is whether a machine can think like a human, Turing proposed the famous imitation game as a way to test machine intelligence. Intelligence and consciousness are widely considered two different properties of the brain. In _Life 3.0_ [Tegmark, 2018] by Max Tegmark, intelligence is defined as the ability to accomplish complex goals, and consciousness is defined as subjective experience. Consciousness seems to be more mysterious than intelligence, and harder to measure.
\begin{table}
\begin{tabular}{l|l|l}
\hline Conscious State & Arousal & Awareness \\
\hline Healthy wakefulness & high & high \\
\hline REM\({}^{1}\) sleep with dreams & low & high \\
\hline NREM\({}^{2}\) sleep without dreams & low & low \\
\hline Anesthesia induced with ketamine & low & high \\
\hline Anesthesia induced with propofol or xenon & low & low \\
\hline Minimally conscious state & high & high \\
\hline Unresponsive wakefulness syndrome & high & low \\
\hline
\end{tabular}
* REM: rapid eye movement
* NREM: non-rapid eye movement
\end{table}
Table 1: The arousal and awareness levels of consciousness under different states (results adapted from Lee et al. [2022])
As depicted in the consciousness "pyramid" in Fig. 1 (originally in Tegmark [2018]), the intelligence-related problems are the easiest, as also argued by David Chalmers. This type of problem typically does not require considering the subjective experience of the experimental subjects. A harder problem is to find the physically interpretable features distinguishing conscious and unconscious processes. The next-level question is how consciousness happens and what its determining factors are. The final and hardest problems concern the explanation of the existence of consciousness: why does consciousness exist in any system at all?
Regarding the reasons for the existence of consciousness in human brains, a quote from Stephen Wolfram suggests that limited self-modeling and assumed time-persistence are two key factors for humans to have consciousness:
_"The fact that we have coherent consciousness is a consequence of two things: 1. That we are computationally bounded, so the universe does not contain enough resources for us to construct a complete model of ourselves. 2. That we believe that we are persistent in time, and hence assume constancy where there is none."_
Consciousness is a complex and multifaceted phenomenon that has been studied by philosophers, psychologists, neuroscientists, and others throughout history. Even so, consciousness remains mysterious to modern society, and people refer to it as _the hard problem of consciousness_, initially proposed by David Chalmers in 1995 [Chalmers, 1995, 1997]. It is generally understood as the experience or awareness of subjective mental states such as thoughts, perceptions, emotions, and sensations. The hard problem concerns the reasons for the existence of such subjective experiences in human minds. Consciousness is closely linked to the functioning of the brain, but its precise nature and mechanisms are still not fully understood. Some theories suggest that consciousness emerges from the integration of sensory information, while others propose that it is an intrinsic property of the universe or a fundamental aspect of reality itself. The quote by Stephen Wolfram suggests that the experience of coherent consciousness may depend on our computational limitations and our assumption of temporal persistence: we believe we are persistent beings that exist over time, even though there is no actual constancy in the universe.
### Consciousness and Free Will
Figure 1: The hardness of different levels of the problems related to a conscious mind.
Free will is defined as the ability of humans to make choices and decisions that are not solely determined by biological, environmental, or external factors. Whether humans have free will is highly controversial and unknown. However, this concept is believed to be closely related to consciousness. In Fig. 2, we depict an architecture unifying several consciousness theories to be introduced later in this article, as well as illustrating the relationship between consciousness and free will. In Fig. 2, the human brain has interactions among the low-level modules and the consciousness module, where the conscious experiences happen [Baars, 2003, Baars and Franklin, 2003, 2007, 2009, Locke, 1948, Armstrong, 1981, Armstrong and Malcolm, 1985, Lycan, 1996, Armstrong, 2002, Lycan, 2004, Rosenthal, 2009, 2012, 2004, Byrne, 1997, Brown et al., 2019, Graziano and Webb, 2015, Graziano et al., 2020]. The low-level modules involve external processors, internal processors, and memory. The external processors handle the inputs and outputs of the human brain, including the image processor, sound processor, gustatory processor, olfactory processor, tactile processor, motor activators, speaking modules, etc. Each module processes the input information and outputs the processed signals to other parts. Internal processors include the logic processor, language processor, etc. Each will process information with outputs from external processors, or spontaneously without external inputs. All these modules have the ability to communicate with other modules and the memory to accomplish the desired objective, and most of them will generate intermediate outputs as inputs of other modules. More importantly, beyond all these low-level modules, the consciousness module has the ability to observe the intermediate outputs from the low-level modules, which generates subjective experiences (also known as consciousness) for a human. This observation process is also formulated as a self-modeling process in theories like attention schema theory (Graziano and Webb, 2015, Graziano et al., 2020) or as higher-order thoughts/perceptions in higher-order theories (Locke, 1948, Armstrong, 1981, Armstrong and Malcolm, 1985, Lycan, 1996, Armstrong, 2002, Lycan, 2004, Rosenthal, 2009, 2012, 2004, Byrne, 1997, Brown et al., 2019), etc. Yet these explanations are of a highly abstract level. The specific components in human brains giving rise to consciousness are still debated in the literature; for example, some works (Baars, 2002) show that the pre-frontal cortex may be involved in the higher-level cognitive procedure. Discussions of different interpretations of those theories will be detailed in later sections. Due to limited computational power, the consciousness module will only pay attention to the important information flows, which form a relatively small subset of all intermediate outputs from the low-level modules. This coincides with the empirical evidence showing that a human can only have dozens of conscious experiences per second (Tegmark, 2018, 2000).
The existence of genuine free will remains a significant aspect of consciousness theory, albeit one that lacks sufficient scientific evidence to definitively prove or disprove. To engage in a comprehensive exploration of the existence of free will and its potential place within the human brain (Figure 2), we bifurcate our discussion into a positive and a negative hypothesis: (1) assuming that true free will exists; or (2) denying its existence. We therefore mark the block of free will with dotted lines. In the first case, where free will exists beyond current physical interpretation, we may seek _external variables_ from beyond the existing mathematical and physical frameworks to determine the process of human decision-making. These external variables can be regarded as an analog of the _hidden variables_ in the famous Einstein-Podolsky-Rosen (EPR) paradox (Einstein et al., 1935), which states that some unobserved hidden variables may exist to explain the true randomness in quantum mechanics - a question about the completeness of the quantum mechanics framework raised by Albert Einstein and others. Similarly to the interpretation of the EPR paradox for randomness in quantum mechanics, the explanation of true 'free will' may also require such external randomness beyond the known physical systems. More details regarding physical explanations of the existence of free will are discussed in Sections 3 and 4. This part of free will is shown in Fig. 2 as the blue block containing free will, injected by the consciousness module to affect the low-level processes (as in Fig. 3) and thus the final outputs of the system. In the second case, where true free will does not exist, a question is what leads some people to think and feel that they have so-called 'free will'. This phenomenon can be interpreted within the current architecture as a hallucinated 'free will' (Wegner, 2004). The key fact is that the consciousness module can only observe partial intermediate outputs of the low-level modules due to its limited information processing bandwidth; thus the final outputs of the system to the environment cannot be fully determined by these observed pieces of information alone, but only together with other unobserved information. The consciousness module still seeks to explain the generated outputs by creating a hallucination of 'free will' in determining the results. However, this explanation may be debatable and requires verification. The existence of free will remains an open problem at present.
### Consciousness while Asleep
The human sleep process contains stages including Rapid Eye Movement (REM) and Non-Rapid Eye Movement (NREM). According to previous research [Lee et al., 2022], NREM is a state where there is neither arousal nor awareness, whereas REM is a state of awareness without arousal.
Integrated Information Theory (IIT) [Tononi, 2004] suggests that consciousness is reduced during deep sleep. IIT proposes that consciousness is generated by the integrated information within a system. According to IIT, the level of consciousness experienced during sleep is dependent on the degree of integration present in the brain's activity.
Figure 2: The overview architecture of consciousness system with free will.
Figure 3: Details of processors in a consciousness system like the human brain.
Studies have shown that subjects awakened from deep NREM sleep, especially early in the night, often report a lack of awareness, even though cortical and thalamic neurons remain active. However, subjects awakened at other times, mainly during REM sleep or during lighter periods of NREM sleep later in the night, report dreams characterized by vivid images [Hobson et al., 2000]. From the perspective of integrated information theory, a reduction in consciousness during sleep would be consistent with the bistability of cortical circuits during deep NREM sleep. Consistent with these observations, studies using TMS (transcranial magnetic stimulation), a technique for stimulating the brain non-invasively, in conjunction with high-density EEG (electroencephalogram), show that early NREM sleep is associated with a breakdown of effective connectivity among cortical areas, leading to a loss of integration or a loss of repertoire and thus of information [Massimini et al., 2005, 2007]. These findings suggest that the level of consciousness experienced during sleep is dependent on the degree of integration present in the brain's activity, with a reduction in integration leading to a reduction in consciousness.
### Overview of Consciousness Theories
Diverse theories have been developed by researchers to investigate the nature of consciousness and how it arises from the brain. Some of the most prominent include information integration theory (IIT), consciousness as a state of matter, orchestrated objective reduction (Orch OR) theory, global workspace theory (GWT), higher-order theory (HOT), attention schema theory (AST), and the Conscious Turing Machine (CTM). IIT (Sec. 2) proposes that consciousness arises from the integration of information from multiple sensory and cognitive sources. Consciousness as a state of matter (Sec. 3) analyzes the deficiency of the information integration principle through physics-principled calculation. Orch OR theory (Sec. 4) proposes the orchestrated objective reduction process for explaining free will in conscious experience from a quantum mechanics perspective. GWT (Sec. 5) argues that consciousness arises from the activation of a global workspace in the brain, which integrates information from different sources and broadcasts it to the rest of the brain. HOT (Sec. 6) asserts that consciousness is the result of higher-order representations of sensory information. AST (Sec. 7) posits that consciousness is an attentional schema that the brain uses to represent the state of being conscious. CTM (Sec. 8) asserts that consciousness can be described as a computational process, as an extension of the Turing machine. These theories offer different perspectives on the nature of consciousness, each with its own strengths and weaknesses, and they offer insights into how consciousness might be studied and understood. We conducted this survey to compare the similarities and differences of these theories and to summarize the correlations among them. More importantly, we aim to find feasible computational models from these theories for characterizing the conscious process of the human brain.
## 2 Information Integration Theory
The theory of information integration postulates that consciousness corresponds to the capacity of a system to integrate information. It first proposes axioms of experience, then postulates the properties a physical system must have to give rise to intrinsic experience. To achieve this, the theory claims that the system must have cause-effect power in itself, not resulting from any external factor. The cause-effect power of the system is then quantified by the largest minimum entropy over all sub-systems, evaluated by intervening on the states of a subset of the system (cause) and observing the change of states in the other part of the system (effect), while holding the external factors fixed. Therefore, IIT claims that **any conscious experience relates to a cause-effect structure that is maximally irreducible**. We will show in this section how the cause-effect power can be quantified by a measure called _information integration_. If IIT is correct, we should be able to calculate the integrated information for a conscious experience in the human brain and derive a reasonable value.
### Information Entropy
Information, according to Shannon [Shannon, 1948], is quantified by the reduction of uncertainty among a number of alternatives when one of them occurs. This is measured by the entropy function, defined as follows.
**Definition 1**.: _(Shannon Entropy) Let \(A\) be a random variable with probability density (or mass) function \(p\). The entropy of its distribution is:_
\[H(A)=H_{p}(A)=-\int p(a)\log p(a)\,da \tag{1}\]
_with the integral replaced by a sum in the discrete case._
The logarithm used in this calculation is usually base 2, which means that the entropy is measured in bits. The entropy of a system is a measure of the average information content of a message generated by that system.
For example, in a binary system with two possible outcomes (0 or 1), the entropy is highest (\(=1\)) when the probability of either outcome is 0.5, and lowest (\(=0\)) when one outcome is certain (probability of 1) and the other is impossible (probability of 0).
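To make this concrete, the short Python sketch below (our illustration, not from the original text) computes the entropy of a discrete distribution in bits and reproduces the binary example above.

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete distribution p; 0 * log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # drop impossible outcomes
    return float(-np.sum(p * np.log2(p)))

print(shannon_entropy([0.5, 0.5]))  # 1.0: maximal uncertainty
print(shannon_entropy([1.0, 0.0]))  # 0.0: a certain outcome carries no surprise
print(shannon_entropy([0.9, 0.1]))  # ~0.469: a biased coin carries less than 1 bit
```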
In addition to being used in information theory, entropy has also been applied in fields such as cryptography, signal processing, and thermodynamics. In thermodynamics, entropy is used to quantify the degree of disorder or randomness in a thermodynamic system, and is often referred to as thermal entropy.
For a discrete-state system, the uniform distribution over all possible independent states conveys the least prior information and therefore has the highest entropy. However, simply having a large number of independent components that produce a vast range of available states is not sufficient to generate conscious systems. These components must also be causally dependent on one another at an appropriate spatial and temporal scale. This crucial aspect of consciousness is referred to as information integration, as introduced by Tononi [Tononi, 2004]. Additionally, the author proposed a computational model to quantify the capacity of information integration, which will be detailed in the following sections.
### Basic Concepts of IIT
The axioms of IIT state that every experience exists intrinsically and is structured, specific, unitary, and definite where specifically,
* Experience exists intrinsically;
* Experience is specific, being composed of a particular set of phenomenal distinctions (qualia);
* Experience is unitary, irreducible, as an integration of information;
* Experience is definite in its content and spatio-temporal grain (exclusion of other possibilities).
The theory then postulates that, for each essential property of experience, the physical substrate of consciousness (PSC) must have a corresponding cause-effect power in the brain. The objective is then to find the appropriate spatial and temporal scale at which neural elements give rise to consciousness. The theory implies that only those elements that have the maximum intrinsic cause-effect power are identified as elements of the PSC. It is notable that, under such a definition, the cause-effect power could be higher at a coarser spatial scale compared to a finer spatial scale.
Recall that the definition of consciousness in IIT corresponds to the capacity to integrate information, such that the system generates a large collection of states that are causally dependent on each other. In a hypothetical setting, imagine a collection of neuronal elements locally connected but disconnected from outside stimuli; then one is able to test whether such a collection can be separated into two independent parts by measuring the information gained about one part by knowing the other part. In information theory, this precisely corresponds to the _mutual information_ (MI) of two random variables. The measurement of information integration in the theory is defined as a certain type of MI in the brain system, called _effective information_ (EI).
**Definition 2** (Mutual Information).: _The mutual information between two variables \(A,B\):_
\[I(A,B)=H(A)+H(B)-H(AB) \tag{2}\]
_where \(H(A)=H_{p_{A}}(A),H(B)=H_{p_{B}}(B),H(AB)=H_{p_{AB}}(A;B)\), \(p_{AB}\) is the joint distribution of \(A,B\)._
In the following discussions, we will generalize the symbols \(A,B\) to be two sub-systems (or two subsets of variables) instead of two variables.
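As a minimal illustration of Definition 2 (an addition of ours, with the entropy helper repeated for self-containedness), the sketch below computes \(I(A,B)\) from a joint probability table by evaluating the three entropies.

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(p_joint):
    """I(A,B) = H(A) + H(B) - H(AB) for a joint probability table p_joint[a, b]."""
    p_joint = np.asarray(p_joint, dtype=float)
    return (shannon_entropy(p_joint.sum(axis=1))    # H(A): marginal over B
            + shannon_entropy(p_joint.sum(axis=0))  # H(B): marginal over A
            - shannon_entropy(p_joint))             # H(AB): joint entropy

# Perfectly correlated bits share 1 bit; independent uniform bits share none.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```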
In order to measure the information gain, start by replacing one part of the neural elements with an independent set of noise sources, and observe how the firing pattern changes in the other half as a consequence of receiving signals. Precisely, we define the concept of EI:
**Definition 3** (Effective Information).: _Effective information measures the **directional** causal effects of \(A\) on \(B\),_
\[\text{EI}(A\to B)=I(\tilde{A},B)=H(\tilde{A})+H(B)-H(\tilde{A}B),\tilde{A}= \arg\max_{A}H(A) \tag{3}\]
_which means \(A\) is chosen to be independent random noise, thus \(B\) has no causal effects on \(A\)._
According to the above definitions of MI and EI, some lemmas can be directly derived, which are stated in the following remark.
**Remark 1**.: _The information gain from \(A\) to \(B\) is not the same as from \(B\) to \(A\) due to different connectivity patterns, while the mutual information is symmetric; and the EI is always upper bounded by the smaller of the maximum entropies of \(A\) and \(B\). In mathematical terms:_
\[I(A,B)=I(B,A) \not\implies\text{EI}(A\to B)=\text{EI}(B\to A) \tag{4}\] \[\text{EI}(A\to B)\leq\min\{\max_{A}H(A),\max_{B}H(B)\} \tag{5}\]
As a consequence, we are able to measure how part \(A\) causally affects the other part \(B\) and vice versa. This gives rise to the **symmetric** causal effects defined by _mutual effective information_ (MEI).
**Definition 4** (Mutual Effective Information).: _Mutual effective information \(\text{EI}(A\rightleftharpoons B)\) measures the symmetric causal effects between \(A\) and \(B\),_
\[\text{EI}(A\rightleftharpoons B)=\text{EI}(A\to B)+\text{EI}(B\to A) \tag{6}\]
As a consequence of the definition, if we are able to partition a system \(S\) into \(A\) and \(B\) such that \(\text{EI}(A\rightleftharpoons B)=0\), then \(A\) and \(B\) are independent parts, which limits the capacity of the system \(S\) to integrate information. Therefore, it is necessary to locate this bottleneck in order to quantify the information integration capability of \(S\).
**Definition 5** (Minimum Information Bipartition, MIB).: _A bipartition on system \(S\) as its "weakest link" can be achieved with partitions \(A,B\subset S\), \(B=\tilde{A}\) as its complementary set, such that the normalized mutual effective information of \(A,B\) is the minimum, as following:_
\[\text{MIB}(A\rightleftharpoons B)=\arg\min_{A,B\subset S}\frac{\text{EI}(A \rightleftharpoons B)}{H_{\max}(A\rightleftharpoons B)} \tag{7}\] \[\text{with }H_{\max}(A\rightleftharpoons B)=\min\{\max_{A}H(A),\max_{B}H(B)\} \tag{8}\]
\(H_{\max}(A\rightleftharpoons B)\) _is for normalization due to Remark 1._
Each partition in the MIB is called a _complex_ in IIT. There are \(\sum_{m=2}^{N}\binom{N}{m}\) subsets within a system of \(N\) elements, each with a measure \(\Phi(S),S\in 2^{[N]}\); those \(S\) contained in a larger subset with higher \(\Phi\) are discarded, and the rest are the complexes of the system.
Now consider a system \(\mathcal{X}\) with \(N\) neuronal elements; its information integration capacity is determined by the total minimum information for each of the complexes, which are found by enumerating over all the possible subsets \(S\subseteq\mathcal{X}\).
To formalize this intuition, we define the information integration capacity of system \(\mathcal{X}\), which measures the maximal irreducible cause-effect power, _i.e._, mutual effective information for minimum information bipartition.
**Definition 6** (Information Integration).: _The information integration for a subset \(S\) is the mutual effective information of the minimum information bipartition:_
\[\Phi(S)=\text{EI}(\text{MIB}(S)) \tag{9}\]
_The integrated information for the entire system \(\mathcal{X}\) is such that_
\[\Phi(\mathcal{X})=\max_{S\in 2^{\mathcal{X}}}\Phi(S)\]
The intuitive explanation of the information integration \(\Phi\) is that, if the system is not fully decomposable (into independent sub-systems), \(\Phi\) is the effective information (a special type of mutual information) obtained by cutting the system at its "weakest link", i.e., the bipartition minimizing the effective information, also called the "cruelest cut" in Tegmark (2015). We will discuss how to practically measure the integrated information in the following section.
### Measurement of Information Integration
In Tononi and Sporns (2003), a computational model is proposed for measuring information integration in a spatial network.
Assume that \(\mathcal{X}\) is the entire system, and consider \(X\in\mathbb{R}^{|\mathcal{X}|}\) to be the random vector over \(\mathcal{X}\) that characterizes the signal emitted by each neuron in the system. To model the system \(X\), the authors propose a Gaussian graphical model, as described in the following.
The signal at the \(i\)-th node, denoted by \(x_{i}\), is a linear combination of the signals of its neighboring nodes in the directed graph representing the node connection, plus the random measurement noise:
\[x_{i}=\sum_{j\in\text{neighbor}(i)}w_{i,j}x_{j}+\sigma_{i}r_{i}\]
where \(r_{i}\overset{i.i.d.}{\sim}\mathcal{N}(0,1)\) are random noise in the measurement, and \(w_{i,j}\) is the edge weight from \(i\) to its neighbor \(j\). In matrix form, we have,
\[X=WX+CR,\quad C=\text{diag}((\sigma_{i})_{i})\]
Then it is easy to see the covariance of \(X\) satisfies
\[\Sigma=(I-W)^{-1}C^{2}(I-W)^{-1\intercal}.\]
With this model, one can compute the information integration \(\Phi\) analytically (a code sketch is given after the enumerated steps below):
1. For any \(S\in 2^{\mathcal{X}}\), consider one bipartition \(S=A\cup B,S^{c}=\mathcal{X}/S\), \[X:=\begin{pmatrix}X_{A}\\ X_{B}\\ X_{S^{c}}\end{pmatrix}\sim\mathcal{N}\left(\mathbf{0},\begin{pmatrix}\Sigma_{A}& \Sigma_{AB}&\Sigma_{AS^{c}}\\ \Sigma_{AB}^{\intercal}&\Sigma_{B}&\Sigma_{BS^{c}}\\ \Sigma_{S^{c}A}&\Sigma_{S^{c}B}&\Sigma_{S^{c}}\end{pmatrix}\right)\] then, to calculate \(\text{EI}(A^{\text{max}}\to B)\), one randomizes the signal from \(A\) by setting \(\Sigma_{A}=I_{|A|}\) in the covariance and cutting off any incoming edges from \(S^{c}\) and \(B\) to \(A\); connections within \(A\) and outgoing connections from \(A\) are kept, so the rest of the graph is unchanged. Moreover, one sets \(C_{A}=\text{diag}(\sigma_{p})\) for the injected signal and \(C_{B\cup S^{c}}=\text{diag}(\sigma_{i})\) for the intrinsic noise. Then one can calculate (\(A\) is a set and \(X_{A}\) is a multivariate random variable representing the signal states of the items in \(A\); we ignore the difference between \(H(A)\) and \(H(X_{A})\) here) \[H(X_{A}) =\mathbb{E}[-\log p_{X_{A}}]=|A|/2+|A|/2\log 2\pi+\log|\Sigma_{A}|/2\] \[H(X_{A}X_{B}) =|S|/2+|S|/2\log 2\pi+\log|\Sigma_{S}|/2,\quad\Sigma_{S}= \begin{pmatrix}\Sigma_{A}&\Sigma_{AB}\\ \Sigma_{AB}^{\intercal}&\Sigma_{B}\end{pmatrix}\] and similarly \(H(B)\).
2. With the analytic results of \(H(A),H(B)\) and \(H(AB)\), one can also calculate \(\text{EI}(B^{\text{max}}\to A)\). Then one finds the minimal information bipartition as before, \[\text{MIB}(S)=\min_{A\in 2^{\mathcal{S}},B=S/A}\text{EI}(A\rightleftharpoons B)/\min(H^{\text{max}}(A),H^{\text{max}}(B))\] and \(\Phi(S)=\text{EI}(\text{MIB}(S))\)
3. Finally, one can find integrated information for the entire system \[\Phi(X)=\max_{S\in 2^{\mathcal{X}}}\Phi(S)\]
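The following Python sketch (an illustrative reconstruction under the stated Gaussian model, not the authors' implementation) puts these steps together for a toy network. It uses natural-log entropies, injects unit-variance perturbation noise (\(\sigma_{p}=1\)) by zeroing the incoming edges of the perturbed part, and, for brevity, evaluates \(\Phi\) only for the full system instead of enumerating all subsets \(S\); the helper names are ours.

```python
import itertools
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a zero-mean Gaussian with covariance cov."""
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * cov.shape[0] * (1.0 + np.log(2.0 * np.pi)) + 0.5 * logdet

def stationary_cov(W, sigma):
    """Covariance of X = W X + C R with C = diag(sigma): (I-W)^{-1} C^2 (I-W)^{-T}."""
    M = np.linalg.inv(np.eye(W.shape[0]) - W)
    return M @ np.diag(sigma ** 2) @ M.T

def effective_information(W, sigma, A, B):
    """EI(A -> B): replace A's signal by unit-variance noise (cutting edges into A),
    then measure the mutual information between A and B in the perturbed system."""
    Wp, sp = W.copy(), sigma.copy()
    Wp[A, :] = 0.0   # A no longer receives input ...
    sp[A] = 1.0      # ... and emits i.i.d. N(0,1) noise instead (sigma_p = 1)
    cov = stationary_cov(Wp, sp)
    return (gaussian_entropy(cov[np.ix_(A, A)])
            + gaussian_entropy(cov[np.ix_(B, B)])
            - gaussian_entropy(cov[np.ix_(A + B, A + B)]))

def phi(W, sigma):
    """Unnormalized EI across the weakest (normalized) bipartition of the system."""
    n, best = W.shape[0], None
    for k in range(1, n // 2 + 1):
        for subset in itertools.combinations(range(n), k):
            A = list(subset)
            B = [i for i in range(n) if i not in subset]
            ei = (effective_information(W, sigma, A, B)
                  + effective_information(W, sigma, B, A))
            norm = min(len(A), len(B)) * 0.5 * (1.0 + np.log(2.0 * np.pi))
            if best is None or ei / norm < best[0]:
                best = (ei / norm, ei, A, B)
    return best[1:]  # Phi and its minimum information bipartition

# Toy example: a 4-node directed ring, x_i = 0.4 * x_{(i+1) mod 4} + 0.1 * r_i.
W = 0.4 * np.roll(np.eye(4), 1, axis=1)
print(phi(W, np.full(4, 0.1)))
```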
Apart from computing \(\Phi\) for a system with a given topological structure, the authors also experimented with finding the graph structure with maximal \(\Phi\), and analyzed the connectivity properties of the resulting 'optimal' graphs.
In Balduzzi and Tononi (2008), simulation studies on discrete spatio-temporal systems are carried out on small-scale networks.
Hoel [2017] leverages the _do_-operation from causal inference to construct a Markov chain in time and identify particular coarsenings of the (spatial) state space that correspond to an increase in information.
Given a finite state space \(S\), we can specify the transition probability matrix (TPM) between any two states
\[P_{ij}=\mathbb{P}(X^{t+1}=j|X^{t}=i)=\mathbb{P}\left(X^{t+1}=j|do\left(X^{t}= i\right)\right)\]
which is similar to a Markov chain, with the Markov property given by the _do_ operator, i.e., we have conditional independence once \(X^{t}=i\) is a fixed quantity. In this sense, we treat time as varying, with only one variable \(X\) present with state space \(S\); alternatively, we have \(|S|\) many binary variables and so \(X=\sum_{k\in S}\delta_{k}\), which is well-defined due to finiteness.
Consider the problem setup: let \(X\to Y\) be the causal system, where the two variables share the same state space \(S_{X}=S_{Y}\). The author is interested in when this micro system admits a macro system with a corresponding TPM, i.e. \(U\to V\), such that \(S_{U}=S_{V}\subset 2^{S_{X}}\). Moreover, Hoel's theory aims at identifying when a particular coarsening of the state space amounts to _causal emergence_. Effective information (EI) plays an important role in achieving this. We give another definition of EI since the context differs from IIT, but first we introduce the concepts of the intervention and effect distributions.
**Definition 7** (Intervention Distribution and Effect Distribution).: _Given \(|S_{X}|<\infty\), the maximum entropy amounts to the uniform intervention distribution_
\[I_{D}=H^{\max}=\operatorname{Unif}(do(X)),\text{ that is, }P(do(X=x))=\frac{1}{n} \quad\forall x\in S_{X} \tag{10}\]
_where \(\operatorname{Unif}(\cdot)\) is the uniform distribution over a set. Intervening with this distribution on \(X\) results in the effect distribution \(E_{D}(Y)\) over \(Y\) :_
\[E_{D}(Y) =\sum_{X}P(Y\mid do(X))H^{\text{max}} \tag{11}\] \[=\frac{1}{n}\sum_{x}P(Y\mid do(X=x))\]
_The \(E_{D}\) effectively computes the uniform average over all rows of the TPM._
We are now ready to introduce the definition of effective information in Hoel et al. (2013):
**Definition 8** (Effective Information).: \[\text{EI}(X\to Y) =\sum_{X}H^{\max}D_{KL}\left(P(Y\mid do(X))\|E_{D}(Y)\right)\] (12) \[=\sum_{x}P(do(X=x))D_{KL}\left(P(Y\mid do(X=x))\|E_{D}(Y)\right)\] \[=\frac{1}{n}\sum_{x}D_{KL}\left(P(Y\mid do(X=x))\|E_{D}(Y)\right)\]
The definition of EI presented in Def. 8 is in fact equivalent to the definition in IIT. Without loss of generality, denote by \(p(x)\) and \(p(y)\) the mass functions of \(X\sim H^{\text{max}}\) and \(Y\sim E_{D}\), respectively.
\[MI(X,Y) =\sum_{x}\sum_{y}p(x,y)\log_{2}\left(\frac{p(x,y)}{p(x)p(y)}\right)\] (Bayes Rule) \[=\sum_{x}\sum_{y}p(x)p(y|x)\log_{2}\left(\frac{p(y|x)}{p(y)}\right)\] \[=\frac{1}{n}\sum_{x}D_{KL}\left(P(Y\mid do(X=x))\|E_{D}(Y)\right)\]
Given the notations above, a _causal emergence_ (CE) arises when the best coarsened system has higher effective information than the original one, i.e.
\[CE=\text{EI}(U\to V)-\text{EI}(X\to Y)>0\]
since \(U\) and \(V\) are considered as the same random variables in one step, this amounts to finding the coarsened support set \(S_{U}\).
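These quantities are easy to evaluate numerically. The sketch below is ours; the toy TPM is constructed in the spirit of Hoel's examples rather than taken from the paper. Three micro states mix uniformly while a fourth is fixed; grouping the mixing states into one macro state makes the dynamics deterministic, EI increases, and hence CE > 0.

```python
import numpy as np

def effective_information(tpm):
    """EI = (1/n) * sum_x KL( P(Y | do(X=x)) || E_D(Y) ), in bits."""
    tpm = np.asarray(tpm, dtype=float)
    e_d = tpm.mean(axis=0)  # effect distribution: uniform average of the rows
    kl_rows = [sum(p * np.log2(p / e_d[j]) for j, p in enumerate(row) if p > 0)
               for row in tpm]
    return float(np.mean(kl_rows))

# Micro system: states {0,1,2} mix uniformly among themselves; state 3 is fixed.
micro = np.array([[1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [0,   0,   0,   1]])

# Macro system: coarse-grain {0,1,2} -> A and {3} -> B; dynamics are deterministic.
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

ei_micro, ei_macro = effective_information(micro), effective_information(macro)
print(ei_micro, ei_macro, ei_macro - ei_micro)  # ~0.81, 1.0, CE ~ 0.19 > 0
```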
### Biological Evidence
Although the computational framework put forward by IIT is able to analytically assess the information processing bottleneck in an arbitrary system, it remains an open problem to verify the claimed correspondence that subjective experience is equivalent to the capacity to integrate information.
IIT postulates that the neural elements constituting the PSC are those determined by maximizing the cause-effect power, which could be higher at a macro scale compared to a micro scale owing to different connectivity patterns. Among the brain regions, the cerebral cortex combines functional specialization and integration, which should yield high values of maximum information integration. The cerebellum, in contrast, is not essential for consciousness because of the lack of dependency among its neurons and its inability to form a large complex with high maximum information integration.
IIT also explains why bistable firing of cortical neurons during slow wave sleep would cause fading of consciousness: this is owing to the loss of both selectivity and effectiveness, which results in the reduction of information integration (Tononi et al., 2016). Alkire et al. (2008) argue that the brain under anesthesia is similar to the brain under slow wave sleep, where cortical connectivity breaks down and therefore information integration is reduced. In practice, the PCI has been proposed to estimate \(\Phi^{\text{max}}\) evoked by TMS, which is high only if brain responses are both integrated and differentiated, corresponding to a distributed spatio-temporal pattern of causal interactions that is complex and hence not very compressible.
## 3 Consciousness as a State of Matter
Apart from the information theoretical viewpoint, researchers are seeking the properties of conscious processes within physical systems. Consciousness can be thought of as an emergent phenomenon. It does not depend on detailed properties of atoms, but on the complex patterns into which the atoms are arranged. Emergent phenomena are common in physics; for example, waves can exist in many different kinds of matter. Researchers have investigated the phenomenon of consciousness in a principled way in physics (Penrose, 1991; Stapp, 2000; Tegmark, 2000, 2015; Carroll, 2021). A core issue for this approach is to admit the true randomness in the conscious process and to locate the corresponding physical processes within the human brain as evidence. A branch of research ultimately resorts to the quantum process, which is commonly believed to exhibit true randomness in the measurement of a quantum state. The conscious process can then be interpreted as a quantum measurement by a conscious observer. However, several problems arise in this interpretation. One is the quantum factorization problem: conscious observers perceive a certain Hilbert space factorization that leaves the world around the observer as a strongly correlated but independent (from the observer) system (Tegmark, 2015). Another problem is quantum decoherence in a physical system like the human brain. Most quantum phenomena only appear at a very small space-time scale, and the decoherence process prevents a system as large as the human brain from inheriting quantum properties. For example, a typical timescale for quantum decoherence is about \(10^{-13}\sim 10^{-20}\) seconds, which is much shorter than the timescale of cognitive processes of \(10^{-3}\sim 10^{-1}\) seconds (Tegmark, 2000). Researchers have also proposed the orchestrated objective reduction (Orch OR) of quantum states to interpret the brain's cognitive process (Hameroff and Penrose, 2014), which is discussed in Sec. 4.
Max Tegmark (Tegmark, 2015) considers consciousness as a kind of matter which he calls "_perceptronium_", a phrase indicating a conscious state. For a matter to become "_perceptronium_", it needs to have the following four properties as necessary but not sufficient conditions:
* The **information** principle: The system must have substantial information storage capacity;
* The **integration** principle: The system cannot consist of nearly independent parts, and it needs to have a certain level of integration within itself;
* The **independence** principle: The system must have substantial independence from the rest of the world;
* The **dynamics** principle: A conscious system must have substantial information-processing capacity, and it is this processing rather than the static information that must be integrated.
In Tegmark's work, he generalizes Tononi's IIT (Sec. 2), compares it with other principles that conscious matter should have, and then generalizes the analysis to quantum mechanics. In his work, he shows that the information and integration principles can conflict with each other - too much integration results in very little information, which is referred to as the _integration paradox_ in Sec. 3.2. Also, the independence principle conflicts with the dynamics principle - too much independence results in a trivial dynamical system, which is called the _quantum Zeno paradox_, as discussed later in Sec. 3.3. A conscious system needs to strike a balance between these properties: information and integration, independence and dynamics. Therefore, this work introduces the _autonomy_ metric to measure the balance of independence and dynamics in the system.
Following that, it also proposes the following two principles:
* The **autonomy** principle: A conscious system has substantial dynamics and independence.
* The **utility** principle: An evolved conscious system records mainly information that is useful for it.
The autonomy principle describes the balance between dynamics and independence. The utility principle describes what information an evolved conscious system records.
These principles can be translated into more physical problems in a system:
* The physics-from-scratch problem: If the total Hamiltonian \(\mathbf{H}\) and the total density matrix \(\rho\) fully specify our physical world, how do we extract 3D space and the rest of our semiclassical world from nothing more than two Hermitian matrices?
* The quantum factorization problem: Why do conscious observers like us perceive the particular Hilbert space factorization corresponding to classical space (rather than Fourier space, say), and more generally, why do we perceive the world around us as a dynamic hierarchy of objects that are strongly integrated and relatively independent?
These are the question to be answered in this theory. We will briefly introduce the basics of quantum mechanics and then dive into these principles.
### Basics of Quantum Mechanics
In quantum mechanics, the state of a system is described by a vector \(|\psi\rangle\) in the Hilbert space. For example, we have an electron with a spin up state \(|\psi\rangle=|\uparrow\rangle\) or spin down state \(|\psi\rangle=|\downarrow\rangle\), or their superposition \(|\psi\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle+|\downarrow\rangle)\).
The probability of observing this system in state \(|\chi\rangle\) is
\[P=|\langle\chi|\psi\rangle|^{2}. \tag{13}\]
We can apply a unitary operator to a state to change the basis we are interested in:
\[|\psi\rangle\to U|\psi\rangle \tag{14}\]
For example, the spin up state in \(x\) direction is a superposition state \(|\psi\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle+|\downarrow\rangle)\) in the \(z\) direction.
The time evolution of a state is governed by the Hamiltonian operator \(\mathbf{H}\) (which can be thought of as a matrix on the Hilbert space):
\[|\psi(t)\rangle=e^{-i\mathbf{H}t/\hbar}|\psi(0)\rangle. \tag{15}\]
which is the solution of the Schrödinger equation.
The Hamiltonian operator itself describes the energy spectrum of the system. We can find its eigenstates:
\[{\bf H}|E_{i}\rangle=E_{i}|E_{i}\rangle. \tag{16}\]
\(E_{i}\) is the eigenvalue of the Hamiltonian operator \(\mathbf{H}\), which represents the energy (as a scalar) of the system in state \(|E_{i}\rangle\). An operator takes diagonal form in its eigenbasis. When two operators commute, \([A,B]=AB-BA=0\), they can be simultaneously diagonalized.
A state can also be represented as a density matrix:
\[\rho=|\psi\rangle\langle\psi| \tag{17}\]
this represents a pure state.
Density matrices built from the pure state always have rank 1. A more general density matrix can also represent a classical mixture of states, which in general has a higher rank
\[\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|,\qquad p_{i}\geq 0,\ \sum_{i}p_{i}=1. \tag{18}\]
this is a mixed state.
The probability of observing the system in a certain state is
\[P=\langle\chi|\rho|\chi\rangle \tag{19}\]
The time evolution of a density matrix is
\[\rho(t)=e^{-i{\bf H}t/\hbar}\rho(0)e^{i{\bf H}t/\hbar}. \tag{20}\]
Written in the energy eigenbasis, it is
\[\rho(t)_{mn}=\rho(0)_{mn}e^{-i(E_{m}-E_{n})t/\hbar}. \tag{21}\]
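To make Eqs. 17-21 concrete, the following is a minimal NumPy sketch (assuming natural units with \(\hbar=1\); the single-qubit Hamiltonian and state are illustrative choices, not taken from Tegmark's work) that evolves a pure-state density matrix and exhibits the phase rotation of Eq. 21:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # natural units

# Illustrative single-qubit Hamiltonian (Pauli-Z) and superposition state.
H = np.diag([1.0, -1.0])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())           # pure-state density matrix (Eq. 17)

def evolve(rho, H, t):
    """Unitary evolution of a density matrix (Eq. 20)."""
    U = expm(-1j * H * t / hbar)
    return U @ rho @ U.conj().T

rho_t = evolve(rho0, H, t=0.5)
# Populations (diagonal entries) stay constant, while the off-diagonal
# elements rotate as exp(-i (E_m - E_n) t / hbar), matching Eq. 21.
print(np.round(rho_t, 3))
```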
### Integration Principle
For a bipartite system, let \(\rho_{1}\) and \(\rho_{2}\) denote the reduced density matrices of the two parts. We define the integrated information \(\Phi\) as the mutual information \(I\) for the "cruelest cut". The mutual information is defined as
\[I\equiv S\left(\rho_{1}\right)+S\left(\rho_{2}\right)-S(\rho) \tag{22}\]
where
\[S(\rho)\equiv-\operatorname{tr}\rho\log_{2}\rho \tag{23}\]
is the von Neumann entropy. Note that this is slightly different from Tononi's definition, but this is easier to calculate.
Recall that in Sec. 2 we introduced the definition of integrated information in the original IIT as the effective information of the minimum-information bipartition, i.e., a 'cut' of the system along its weakest link, the bipartition with minimal mutual effective information. We can also consider cuts here in the quantum sense. In this case more cuts are available, and we choose the cruelest cut to define the _integrated information_:
\[\Phi=\min_{\bf U}I\left({\bf U}\rho{\bf U}^{\dagger}\right). \tag{24}\]
where \({\bf U}\) ranges over unitary transformations of the density matrix \(\rho\), i.e., over choices of factorization basis.
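The following hedged sketch (the partial-trace helper and the example states are our own choices) computes the mutual information of Eq. 22 for a fixed bipartition of a two-qubit state. Computing \(\Phi\) itself would additionally require minimizing over the unitaries \(\mathbf{U}\) of Eq. 24, which is omitted here:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), from the eigenvalues of rho (Eq. 23)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # 0 * log 0 is taken to be 0
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information(rho, dims=(2, 2)):
    """I = S(rho_1) + S(rho_2) - S(rho) for one fixed bipartition (Eq. 22)."""
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2)
    rho1 = np.einsum('ijkj->ik', r)         # partial trace over subsystem 2
    rho2 = np.einsum('ijik->jk', r)         # partial trace over subsystem 1
    return (von_neumann_entropy(rho1) + von_neumann_entropy(rho2)
            - von_neumann_entropy(rho))

# Product state |0>|0>: zero mutual information in this cut.
up = np.array([1.0, 0.0])
rho_prod = np.kron(np.outer(up, up), np.outer(up, up))
print(mutual_information(rho_prod))              # -> 0.0

# Bell state: two bits of mutual information in this cut.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(mutual_information(np.outer(bell, bell)))  # -> 2.0
```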
A general Hamiltonian can be written as
\[{\bf H}={\bf H}_{1}\otimes{\bf I}+{\bf I}\otimes{\bf H}_{2}+{\bf H}_{3} \tag{25}\]
where \({\bf I}\) is the identity matrix.
If a Hamiltonian can be decomposed without an interaction term (with \({\bf H}_{3}=0\)), then it describes two perfectly independent systems,
\[\rho\propto e^{-{\bf H}/kT}=e^{-{\bf H}_{1}/kT}\otimes e^{-{\bf H}_{2}/kT}. \tag{26}\]
In this case one can derive that \(\Phi=0\), with the minimizing unitary \(\mathbf{U}=e^{i\mathbf{H}_{1}t/\hbar}\otimes e^{i\mathbf{H}_{2}t/\hbar}\). This is expected, since integrated information measures the level of information integration within a system, and a perfectly separated system should have no integration. From IIT we know that conscious systems are likely to maximize integrated information. Taking the 2D Ising model with \(n\) dipoles as an example, the maximum integrated information according to the above definition is only \(O(\sqrt{n})\). However, with optimal error-correcting codes, the system can achieve asymptotically \(n/2\) bits of integrated information in the large-\(n\) regime.
The above analysis concerns quantum systems. For a classical system such as a Hopfield network [10] used to describe brain processes, the integrated information can also be calculated, and it leads to the so-called integration paradox.
Integration paradox.Suppose our brain is a Hopfield network with \(n\) neurons using Hebbian learning rules; then the maximum capacity of integrated information is 37 bits for \(n=10^{11}\) neurons [14]. However, the information content of a conscious experience is much larger than this value: consider, for example, a human imagining a picture in his mind. This is known as the integration paradox.
Why does the information content of our conscious experience appear to be vastly larger than 37 bits? In the quantum case the situation is even worse: the system can never contain more than \(1/4\) bit of integrated information [12]. This observation leads to some conjectures, including that human brains use a better coding method for conscious information than Hopfield networks.
### Independence Principle
This principle follows the idea of IIT of cutting the system into independent parts along its "weakest link", as described in Sec. 2.2. Finding the minimum of the mutual information in the quantum case relies on the \(\rho\)-diagonality theorem.
**Theorem 1** (\(\rho\)-Diagonality Theorem, \(\rho\)DT [11]).: _The mutual information always takes its minimum in a basis where \(\rho\) is diagonal._
With \(\rho\)DT, the problem becomes how to find a basis in which \(\rho\) is diagonal for the quantum system with Hamiltonian as in Eq. 25. The answer is given by the following theorem.
**Theorem 2** (\(H\)-Diagonality Theorem, HDT [14]).: _The Hamiltonian is always maximally separable (minimizing \(||H_{3}||\)) in the energy eigenbasis where it is diagonal._
Furthermore, to minimize \(||H_{3}||\), we must have \([H_{1},H_{3}]=0\), which indicates that all subsystems (_e.g._, the subsystem with Hamiltonian \(H_{1}\)) need to commute with all interaction Hamiltonians (_e.g._, \(H_{3}\)). Following this principle to its end leads to a heat death, where all subsystems cease to evolve, known as the _quantum Zeno paradox_.
**Definition 9** (Quantum Zeno Paradox [14]).: _If we decompose our universe into maximally independent objects, then all change grinds to a halt._
### Dynamics Principle
If we follow only the independence principle, we face the _quantum Zeno paradox_, where the system ceases to evolve and has no information-processing capability. However, the dynamics principle also requires a conscious system to have a certain information-processing capability. The autonomy principle says the system should strike a balance between the independence and dynamics principles. There are interesting classes of states \(\rho\) that provide substantial dynamics and near-perfect independence even when the interaction Hamiltonian \(H_{3}\) is not small.
As a measure of the dynamics, the energy coherence is defined as:
\[\delta H\equiv\frac{\hbar}{\sqrt{2}}\|\dot{\rho}\|=\frac{1}{\sqrt{2}}\|[\mathbf{H},\rho]\|=\sqrt{\frac{-\operatorname{tr}\left\{[\mathbf{H},\rho]^{2}\right\}}{2}}=\sqrt{\operatorname{tr}\left[\mathbf{H}^{2}\rho^{2}-\mathbf{H}\rho\mathbf{H}\rho\right]} \tag{27}\]
Then, the maximum probability velocity can be calculated as:
\[v_{\max}=\sqrt{2}\delta H \tag{28}\]
with the probability velocity defined as \(v_{i}=\dot{p}_{i}\), where \(p_{i}=\rho_{ii}\). If we only maximize \(v_{\max}\) following the dynamics principle, calculation indicates that this leads to a very simple dynamical solution, which does not carry the capability of sufficient information processing.
The states that are most robust against environment-induced decoherence are those that approximately commute with the interaction Hamiltonian, i.e., \([\rho,H_{3}]\approx 0\) but \([\rho,H_{1}]\neq 0\). The interaction term \(H_{3}\) decoheres the system, so its effect must remain sufficiently small for \(\rho\) to evolve coherently over time.
To correctly describe a system balancing independence and dynamics, a new metric called _autonomy_ is needed. It uses the linear entropy to quantify the non-unitarity of an evolution.
The linear entropy is defined as:
\[S^{\mathrm{lin}}\equiv 1-\mathrm{tr}\,\rho^{2}=1-\|\rho\|^{2} \tag{29}\]
Let us define the dynamical timescale \(\tau_{\mathrm{dyn}}\) and the independence timescale \(\tau_{\mathrm{ind}}\) as
\[\tau_{\mathrm{dyn}}=\frac{\hbar}{\delta H},\qquad\tau_{\mathrm{ind}}=\left[\ddot{S}_{1}^{\mathrm{lin}}(0)\right]^{-1/2},\]
where \(\ddot{S}_{1}^{\mathrm{lin}}(0)\) denotes the second time-derivative of the linear entropy of subsystem 1 at \(t=0\).
The autonomy can be defined as the ratio:
\[A\equiv\frac{\tau_{\mathrm{ind}}}{\tau_{\mathrm{dyn}}} \tag{30}\]
This ratio can increase exponentially with system size, leading to a highly autonomous system with sufficient information processing even when \(H_{3}\) is large.
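As a sanity check on these definitions, here is a hedged NumPy sketch (the finite-difference estimate of \(\ddot{S}_{1}^{\mathrm{lin}}(0)\), the helper names, and the toy two-qubit system are our own choices) that computes \(\delta H\), the two timescales, and the autonomy \(A\):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0

def energy_coherence(H, rho):
    """delta_H = sqrt(tr[H^2 rho^2 - H rho H rho]) (Eq. 27)."""
    return np.sqrt(np.real(np.trace(H @ H @ rho @ rho - H @ rho @ H @ rho)))

def linear_entropy(rho):
    """S_lin = 1 - tr(rho^2) (Eq. 29)."""
    return 1.0 - np.real(np.trace(rho @ rho))

def subsystem1_lin_entropy(rho, H, t, dims=(2, 2)):
    """Linear entropy of subsystem 1 after evolving the joint state to time t."""
    U = expm(-1j * H * t / hbar)
    rho_t = U @ rho @ U.conj().T
    d1, d2 = dims
    rho1 = np.einsum('ijkj->ik', rho_t.reshape(d1, d2, d1, d2))
    return linear_entropy(rho1)

def autonomy(rho, H, dt=1e-3, dims=(2, 2)):
    """A = tau_ind / tau_dyn (Eq. 30), with the second derivative of the
    subsystem linear entropy estimated by central finite differences."""
    s0 = subsystem1_lin_entropy(rho, H, 0.0, dims)
    sp = subsystem1_lin_entropy(rho, H, dt, dims)
    sm = subsystem1_lin_entropy(rho, H, -dt, dims)
    s_ddot = (sp - 2.0 * s0 + sm) / dt**2   # ~0 (A -> inf) if no interaction
    tau_dyn = hbar / energy_coherence(H, rho)
    tau_ind = s_ddot ** -0.5
    return tau_ind / tau_dyn

# Illustrative two-qubit example with a weak sigma_z x sigma_z coupling.
sz, sx, I2 = np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)
H = np.kron(sx, I2) + np.kron(I2, sz) + 0.05 * np.kron(sz, sz)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(np.kron(plus, plus), np.kron(plus, plus))
print(autonomy(rho, H))
```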
Summary.The theory proposed by Tegmark generalizes IIT to the quantum domain and analyzes the deficiencies of the information-integration principle through physics-principled calculation, raising the _integration paradox_: a Hopfield neural network cannot integrate a sufficient amount of information for conscious experience. It also analyzes the independence principle, which leads to the _quantum Zeno paradox_: a system decomposed into maximally independent subsystems eventually ceases to evolve, in conflict with the dynamics principle. The theory finally proposes the metric named _autonomy_, which takes high values when a conscious system balances the independence and dynamics principles.
## 4 Orchestrated Objective Reduction Theory
### Consciousness as Orchestrated Objective Reduction
Recalling the discussion of the free-will problem in arising consciousness in Sec. 1.5, we may conjecture that true randomness is required for free will to happen. How does such a truly random process happen in human brains? Orchestrated objective reduction (Orch OR) theory (Penrose, 1991, 1994, Hameroff and Penrose, 1996, Hameroff, 2007, 2010, 2012, Hameroff and Penrose, 2014) builds on the hypothesis that consciousness emerges from a biological mechanism able to orchestrate moments of quantum state reduction. The theory posits that conscious events arise from the termination of quantum computations in brain microtubules, framed as _objective reduction_: the idea that a quantum system can spontaneously collapse from a superposition of multiple possibilities into a single state. In Orch OR, Hameroff and Penrose propose that these objective reductions are non-deterministic but are orchestrated by certain processes in the brain, and they identify such orchestrated reductions with free will. In a nutshell, the Orch OR framework introduces a stochastic aspect into the reductionist view of consciousness as a purely physical process. Through this, independent causal agency and free will can be explained (Hameroff, 2012). Moreover, it also implies the possibility of consciousness in a single cell.
### Free Will in Neurons
The integration and firing sequences, which give rise to EEG and the NCC, are primarily generated by dendritic-somatic membranes; axonal firing then outputs conscious (or non-conscious) processes to control behavior. Microtubules (MTs), part of the cytoskeleton (a protein scaffolding network inside the cell), are hypothesized to influence the firing threshold. Specifically, the dendritic-somatic MTs of neurons are arranged in local recursive networks and are more stable than MTs in other cells, rendering them suitable as information processing and storage units and, moreover, suitable to mediate consciousness and regulate firing.
As shown in Fig. 4, an MT consists of peanut-shaped tubulin proteins, each carrying a dipole, arranged in 13 protofilaments with two types of hexagonal lattices. In [Hameroff and Penrose, 2014], the MT dipoles are described in terms of electron spin (magnetic), an inherently quantum-mechanical quantity. All possible directions of the spin rotation axis therefore arise as quantum superpositions of some random pair of directions. The authors further speculate that there may exist chains of spin along pathways in the MT that propagate quantum bit pairs, and, in addition, alternating currents at certain frequencies caused by periodic spin flips. The fact that tubulins in MTs can each exist in different states (giving rise to quantum superposition) based on the dipole direction could indicate that MT processes directly result in consciousness.
### Diosi-Penrose Objective Reduction
Having identified the physiological unit where quantum superposition may take place in the brain, one might ask how the orchestrated reduction that gives rise to consciousness happens. The argument starts by linking Orch OR to theoretical physics.
The Diosi-Penrose (DP) objective-reduction proposal bridges quantum and classical physics as a _quantum-gravitational_ phenomenon, whereby a quantum superposition reduces after an average time \(\tau\approx\frac{\hbar}{E_{G}}\), with \(E_{G}=\frac{Gm^{2}}{a}\), where \(E_{G}\) is the gravitational self-energy of the space-time superposition, \(G\) is the gravitational constant, \(m\) is the mass, and \(a\) is the spatial size. The actual decay time of each state-reduction event is taken as a random process in DP. Accordingly, the reduction of a quantum superposition of space-time objects takes place when the superposition curvature \(E_{G}\) reaches the threshold \(\frac{\hbar}{\tau}\).
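For a rough sense of scale, a back-of-the-envelope evaluation of the DP reduction time in SI units can be sketched as follows (the mass and separation values are purely hypothetical illustrations, not figures from the cited works):

```python
# Back-of-the-envelope Diosi-Penrose reduction time, tau = hbar / E_G with
# E_G = G m^2 / a, in SI units. The mass and separation are hypothetical.
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2

def dp_reduction_time(m, a):
    """Average time for objective reduction of a superposed mass m over size a."""
    return hbar * a / (G * m**2)

print(dp_reduction_time(m=1e-15, a=1e-9))   # ~1.6e-3 s for these toy numbers
```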
The Orch OR scheme goes further and relates the DP physical proposal to consciousness. Hameroff and Penrose [2014] proposed that if a quantum superposition is firstly well orchestrated: "adequately organized, imbued with cognitive information, and capable of integration and computation"; and secondly isolated from the non-orchestrated, random environment long enough for \(E_{G}\) to reach the threshold \(\hbar/\tau\), then Orch OR occurs along with the emergence of consciousness. An illustration of this process is shown in Fig. 4.
### Evidence for Objective Reduction of Quantum State
In Hameroff [2010], it is claimed that the best measure of the neural correlate of consciousness is 30- to 90-Hz gamma-synchrony electroencephalography (EEG), which is largely derived from dendritic and somatic integration potentials. In addition, the theory claims that the state of anaesthesia is due to dispersed dipoles in the MTs responsible for quantum computing.
There are as yet no experiments confirming the theory; however, quantum-biological effects have been observed under warm conditions, a regime the theory has yet to be extended to. Nonetheless, the Orch OR theory provides a computational framework that allows falsification of the hypothesis that biological quantum processes take place in the MT.
## 5 Global Workspace Theory
### The Theatre of Consciousness
Global workspace theory (GWT) is an architecture proposed by Bernard Baars [Baars et al., 1997] to explain the inner procedure by which the human brain selects and deals with conscious attention. Conscious capacity is limited: working memory, which temporarily stores information to be dealt with, holds only several items at a time, and the human brain can only follow a single stream of information. 'The theatre of consciousness' was proposed as a metaphor to answer how the human brain handles different inputs and then outputs the single stream of information that draws final attention.
There are several components in the theatre of consciousness. The 'stage of working memory' is the platform that receives all potential information from sensors or abstract information from cortices. The 'spotlight' mechanism in working memory highlights the conscious stream of information; other information on the stage does not reach attention. Information resources, e.g. potential thoughts, images or sensations, are regarded as 'actors'. The information resources compete with each other for the spotlight: the more conscious processing is required to handle a piece of information, the more likely its resource will be put under the spotlight. Perception, intentions, expectations, etc. influence the result of this competition. 'Context' refers to unconscious networks that potentially shape conscious contents in the brain. 'Directors', the executive functions of the human brain, guide the selection procedure with intentions and goals. The frontal cortex is believed to act as an important
Figure 4: The figure published in Hameroff and Penrose [2014] illustrates the process by which Orch OR occurs. Top: Tubulins are in classical dipole states (yellow or blue) or in a quantum superposition of both dipole states (gray). Quantum superposition/computation increases during (1-3); the conscious moment occurs when the threshold is met at time \(\tau\approx\hbar/E_{G}\). Middle: Corresponding alternative superposed space-time curvatures reaching threshold at the moment of OR, selecting one space-time curvature. Bottom: Schematic of a conscious Orch OR event showing U-like evolution of the quantum superposition and increasing \(E_{G}\) until the OR threshold is met and a conscious moment occurs at \(\tau\approx\hbar/E_{G}\).
role in this procedure, given that damage to the frontal lobe leads to loss of actions driven by long-term goals. The information under the spotlight is then broadcast to the 'audience', which represents the brain regions that require the information.
An update of GWT in 2003 [Baars, 2003] gives a more detailed account of the relationship between GWT and brain functions, providing some evidence of how the competition is performed in the human brain. Both the frontal cortex and other brain regions that can interrupt spotlight control are involved in the conscious event-selection procedure. The interrupting control consists of, for example, the brain stem, the pain system, and emotional centers, which can interrupt the selection procedure and give weight to more significant and urgent activities. The 'context' function of the brain is believed to be involved in the conscious decision process. The parietal cortex, which is related to self-awareness of body parts, is not directly linked to consciousness but is believed to shape conscious visual events. The 'self' system may be involved in the generation of consciousness. It was found that split-brain patients have different executive and perceptual functions in the left and right hemispheres [Gazzaniga et al., 1996]; the left prefrontal cortex processes sensory information with a 'narrative self' that can produce a different awareness, causing conflicts between the hemispheres, which the left hemisphere then tries to rationalize and repair. Evidence was also provided in the paper to support the assumption that conscious contents are broadcast and distributed to brain regions: in a visual word task, the task not only triggered the visual word recognition areas of the cortex but was also found to evoke activity in the parietal and prefrontal cortex [Baars, 2002].
### Computational Models of GWT
The Intelligent Distribution Agent (IDA) [Baars and Franklin, 2003, 2007] and LIDA (Learning IDA) [Baars and Franklin, 2009] computational models were proposed based on GWT to perform human-like tasks; naval job assignment for sailors is used as an example task to test the model. IDA and LIDA contain several blocks reflecting GWT, including **sensory modules** to deal with stimuli, **memory modules** for storage, **attention modules** implementing the concept of attention, and an **action module** for action selection. In particular, as in GWT, the **global workspace modules** integrate and broadcast information, and select the most relevant and important information to be on the stage. This model is an empirical computational implementation of GWT, providing conceptual evidence that GWT can realize human-like functions. IDA or LIDA turns incoming
Figure 5: Scheme diagram of GWT derived from Baars et al. [1997].
sensory data into actions on the environment. The concepts of memory, competition, and broadcasting are involved in this conversion. The resulting action then changes the inputs of the system, forming an iterating cognitive cycle, as sketched below.
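A toy sketch of one pass of this cognitive cycle might look as follows (all class and function names here are illustrative, not the actual IDA/LIDA API; salience is mocked with random numbers):

```python
import random

class Processor:
    """An LTM-style specialist (illustrative names, not the actual LIDA API)."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def perceive(self, stimulus):
        # Each specialist proposes a coalition with a (mocked) salience score.
        return {'source': self.name,
                'content': f'{self.name}:{stimulus}',
                'salience': random.random()}

    def receive(self, broadcast):
        self.inbox.append(broadcast)

def cognitive_cycle(processors, stimulus):
    """One perceive -> compete -> broadcast -> act pass of the cycle."""
    coalitions = [p.perceive(stimulus) for p in processors]
    winner = max(coalitions, key=lambda c: c['salience'])   # attention spotlight
    for p in processors:
        p.receive(winner)                                   # global broadcast
    return winner['content']                                # drives the action

procs = [Processor('vision'), Processor('audition'), Processor('memory')]
print(cognitive_cycle(procs, 'incoming stimulus'))
```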
GWT has inspired studies in related fields; here we describe several examples in brain-signal analysis and deep learning. Inspired by GWT, Schutter and van Honk [2004] used EEG coherence, the level of connectivity between different brain regions, to measure whether emotions play a role in consciousness. Another study [Bartolomei and Naccache, 2011], in light of the broadcasting and distributing process of GWT, compared the synchrony within distant cortico-cortical and cortico-thalamic networks during epileptic seizures with the distant relationships across brain regions in GWT. More recently, a study discussed the possibility of implementing GWT with deep learning: the idea of a Global Latent Workspace (GLW) was proposed to reflect deep-learning design principles of brain-like mechanisms [VanRullen and Kanai, 2021].
## 6 Higher-Order Theories
A general definition of HOT is given by Carruthers and Gennaro [2023]: A phenomenally conscious mental state is a mental state (of a certain sort) that either is, or is disposed to be, the object of a higher-order representation of a certain sort.
Depending on whether the higher-order states in question are perception-like or thought-like, higher-order theories are categorized as higher-order perception theory (HOPT), higher-order thought theory (HOTT), self-representational theory (SRT), etc. Definitions of HOPT (Sec. 6.1, Def. 10), HOTT (Sec. 6.2, Def. 11) and SRT (Sec. 6.3, Def. 12) are described in detail in the following sections. Additionally, we introduce and discuss other relevant theories in the final subsection. To provide a concise overview, we consolidate these theories in Table 2.
Higher-order theories (HOTs) try to answer the question of whether a mental state is conscious or unconscious. They hold that there is a brain mechanism more advanced than first-order information (e.g. senses from organs such as the visual or auditory nerves). Three sub-theories give different explanations of this higher-order mechanism. HOPT holds that there are inner senses that scan or refine, but are independent of, the first-order information; this explains why people are able to imagine feelings such as pain. HOTT holds that a mental state is conscious when it is the subject of a higher-order thought: a conscious mental state or event either actually causes, or is disposed to cause, an activated thought that the person has the state or event. Self-representational theory offers yet another explanation: the higher-order state is constitutive of, or internal to, its first-order state, i.e. the higher-order state is formed from the first-order states and, as a more complex system than the first-order states, generates awareness.
### Higher-Order Perception Theory
The higher-order perception theory (HOPT) [Locke, 1948, Armstrong, 1981, Armstrong and Malcolm, 1985, Lycan, 1996, Armstrong, 2002, Lycan, 2004], also called the Inner-Sense Theory or Higher-Order-Sense Theory, refers to the following: humans not only have sense organs that scan the environment and their own bodies to produce representations, called _first-order_ non-conceptual and/or analog perceptions of environment/body states, but also have inner senses of those first-order senses that generate equally fine-grained but higher-order representations, called _second-order_ non-conceptual and/or analog perceptions of the _first-order_ perception states.
The definitions of the _first-order_ perception and the _second-order_ perception are actually close to the \(I\)-consciousness and \(M\)-consciousness, respectively, of the attention schema theory, which will be introduced later in Sec. 7. We discuss the connections between the two theories in Sec. 7.3.
**Definition 10** (Higher-Order Perception Theory/Inner-Sense Theory [Carruthers and Gennaro, 2023]).: _A phenomenally conscious mental state is a state with analog/non-conceptual intentional content, which is in turn the target of a higher-order analog/non-conceptual intentional state, via the operations of a faculty of 'inner sense'._
A formal statement of HOPT is provided as Definition 10. It explains consciousness as the higher-order states generated by inner sensing of the first-order states. Referring to Fig. 3 in Sec. 1.5, the conscious module is a higher-level component perceiving the information flow of the lower-level sensing modules.
An antagonistic viewpoint is held by some theorists, who argue that an attention mechanism over the first-order states may serve as a substitute for the higher-order states [Sauret and Lycan, 2014].
### Higher-Order Thought Theory
The higher-order thought theory (HOTT) [Rosenthal, 2009, 2012, 2004, Byrne, 1997, Brown et al., 2019] proposes that a conscious mental state or event either actually causes, or is disposed to cause, an activated thought that the person has the mental state or event. There are two variants of the theory: the **actualist** and the **dispositionalist**. The actualists hold that the mental state directly causes the activated thought, while the dispositionalists hold that the mental state is merely disposed to cause it. Another difference is that actualist HOTT requires actual involvement of the first-order information in computing the higher-order information, whereas dispositionalist HOTT only requires the availability of the first-order information, for example via broadcasting as in global workspace theory, rather than direct access to all first-order information. In Lau and Rosenthal [2011], some empirical support for the higher-order theories is discussed, e.g. the association of conscious awareness with prefrontal mechanisms and evidence based on clinical disorders.
**Definition 11** (Higher-Order Thought Theory [Carruthers and Gennaro, 2023]).: _A phenomenally conscious mental state is a state of a certain sort (e.g. with analog/non-conceptual intentional content, perhaps) which is the object of a higher-order thought, and which causes that thought non-inferentially._
Table 2: Summary of higher-order theories.

| Theory | First-order & higher-order relationship | Key mechanism | References |
| --- | --- | --- | --- |
| HOPT | a mental state is conscious when it is the target of a higher-order perception | perception-like; the human brain has inner senses of the first-order senses to generate higher-order representations | Locke [1948], Armstrong [1981], Armstrong and Malcolm [1985], Lycan [1996], Armstrong [2002], Lycan [2004] |
| HOTT (actualist) | a mental state is conscious when it is the target of a higher-order thought | thought-like; a conscious mental event actually causes an activated thought that a person has the event | Rosenthal [1986, 1993, 2005] |
| HOTT (dispositionalist) | a mental state is conscious when it is the subject of a higher-order thought | thought-like; a conscious mental event is disposed to cause an activated thought that a person has the event | Dennett [1978], Carruthers [1998] |
| SRT (part-whole) | first-order and higher-order are parts of a whole complex | the mental state represents itself; the higher-order information is not strictly 'higher' than the first-order information, the two being bound together | Kriegel [2009], Picciuto [2011] |
| SRT (identity) | higher-order and first-order are identical | higher-order and first-order are the same component playing two roles or functions | Caston [2002], Carruthers [2005], Van Gulick [2004] |
Some computational models have been proposed based on higher-order thought theories. Metacognition neural network models (Pasquali et al., 2010; Cleeremans et al., 2007; Timmermans et al., 2012) consist of two networks: a first-order network and a second-order network. The first-order network receives the inputs directly and processes them with hidden units; the hidden units are then linked to the second-order network. The first-order network learns to perform the classification task, and the second-order network predicts the confidence of the first-order network by accessing its representational information, as sketched below. The models were tested on several tasks, e.g. the Iowa Gambling Task (Pasquali et al., 2010), artificial grammar learning (AGL) tasks that distinguish grammatical from non-grammatical sentences, and blindsight tasks (Persaud et al., 2007) in which blindsight patients make visual discriminations in the absence of visual awareness.
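A minimal sketch of this two-network wiring follows (untrained random weights, shown only to illustrate how the second-order network reads the first-order hidden state; in the cited work both networks are trained, the first on the task and the second on a confidence wager):

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order network: input -> hidden -> task output (e.g. classification).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 2))
# Second-order network: reads the first-order HIDDEN state, outputs confidence.
V = rng.normal(size=(8, 1))

def first_order(x):
    h = np.tanh(x @ W1)                 # hidden representation
    return h, h @ W2                    # hidden state and class scores

def second_order(h):
    """Metacognitive judgment: confidence in the first-order decision."""
    return 1.0 / (1.0 + np.exp(-(h @ V)))

x = rng.normal(size=(1, 16))
h, scores = first_order(x)
print(int(scores.argmax()), float(second_order(h)))
```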
### Self-Representational Theory
The Self-Representational Theory (SRT) (Kriegel, 2009; Van Gulick, 2004; Picciuto, 2011) presents the idea of a phenomenally conscious mental state that carries non-conceptual intentional content and conceptual intentional content at the same time. Such a mental state is said to represent itself to the person who is the subject of that state.
**Definition 12** (Self-Representation Theory (Carruthers and Gennaro, 2023)).: _A phenomenally conscious mental state is a state of a certain sort (perhaps with analog/non-conceptual intentional content) which also, at the same time, possesses an intentional content, thereby in some sense representing itself to the person who is the subject of that state._
Two branches of the theory argue that the constitutive relation between the conscious state and the higher-order state is one of **identity** (Caston, 2002; Carruthers, 2005; Van Gulick, 2004) or of **part-whole** (Kriegel, 2009; Picciuto, 2011). The former argues that the conscious state is both first-order and higher-order: more precisely, a first-order perceptual state with analog content acquires, at the same time, a higher-order analog content. The part-whole SRT takes a stance similar to actualist HOTT arguments, where the first-order perceptual state gives rise to a higher-order thought that represents the experience.
### Other theories and perspectives in HOT
The same-order theory (Kriegel, 2009; Brentano, 1973; Lau and Rosenthal, 2011) proposes that conscious mental states are not represented by any other mental states but are instead directly present to the subject's awareness. The higher-order statistical inference view (Lau, 2011, 2007) holds that conscious mental states involve higher-order statistical inferences about one's own mental states: a first-order representation is reviewed by a higher-order inference procedure to form a statistically reliable perceptual signal, similar to a perceptual decision-making process. From the perspective of the radical plasticity thesis (Cleeremans, 2011; Pasquali et al., 2010), the brain is capable of remarkable adaptability and flexibility, and this plasticity plays a critical role in the development of conscious awareness. The thesis proposes that consciousness is not an intrinsic property but a learning process: the brain engages in continuous, unconscious learning to re-describe its own activity, thereby developing systems of meta-representations that describe and refine the initial, first-order representations.
## 7 Attention Schema Theory
### Formulation
To understand Attention Schema Theory (AST) (Graziano and Webb, 2015; Graziano et al., 2020), it is important to understand the difference between attention and awareness and their relationship.
**Definition 13** (Attention).: _Attention is a process where the brain selectively process certain pieces of information more than others._
In one of the most influential explanations of the attention process, Desimone et al. (1995) propose that a signal competition process emerges at the earliest stages of signal processing and persists at every later stage.
Awareness is a concept distinct from attention. Although awareness and attention are typically highly correlated, they can be dissociated. For the concept of awareness, we first need to distinguish objective awareness from subjective awareness; both involve a participant.
**Definition 14** (Objective Awareness).: _For objective awareness, the participant is required to report that he is objectively aware of the stimulus._
**Definition 15** (Subjective Awareness).: _For subjective awareness, the participant reports whether he has perceived the stimulus in his own opinion._
The difference between objective and subjective awareness thus amounts to whether the participant 'sees' or 'guesses' the perceived stimulus.
In AST, the authors treat _awareness_, _consciousness_ and _subjective experience_ as the same concept, namely _subjective awareness_. AST is therefore a theory about subjective awareness, not _objective awareness_.
AST proposes that awareness is a model of attention. Think of a person seeing an apple: the visual representation of the apple (V) appears in the person's mind through the attention process. However, this is not enough to make the person aware of the apple. To generate awareness, the mind also maintains a model of self (S), and the attention (A) of S to V is part of the awareness as well. By AST, subjective awareness is [S+A+V], a model of the attention process.
One may ask why subjective awareness is required beyond attention. According to AST, subjective awareness enables self-modelling, which is essential for model-based control. Lacking awareness is like reaching for an object without modelling one's arm: it leads to inaccurate predictions of the arm's position and therefore to poor reaching results. Awareness serves as an internal model of the mind itself and of its attention, enabling more accurate model-based control.
### _I_-consciousness and _M_-consciousness
As the primary researchers of AST, Graziano et al. (2020) propose in a later study to interpret the consciousness of the human mind via _I_-consciousness (_I_ for information) and _M_-consciousness (_M_ for mysterious).
_I_-consciousness denotes the process of signals winning the attentional competition, just as in GWT, which is generally assumed to be computationally feasible (Baars and Franklin, 2007). The mysterious part is _M_-consciousness, which is used to explain the subjective experience of perceiving the winning piece of information.
Similar to the earlier AST work (Graziano and Webb, 2015) holding that awareness is a model of attention, Graziano et al. (2020) further propose that _M_-consciousness is a natural, built-in, imperfect model of _I_-consciousness. Moreover, _I_-consciousness and _M_-consciousness can be mutually involved, like a mirror of another mirror: a person is _I_-conscious of having his _M_-consciousness, and _M_-consciousness is a model of _I_-consciousness. For self-modeling, the modeled parts are the
Figure 6: Awareness as a model of attention in AST.
physical components of the self that correlate most closely with the winning piece of information from the attention competition.
One may further ask why we take subjective experience to be realistic. According to AST, two properties ensure this: first, the subjective experience cannot be turned off; second, the mind performs source monitoring for perceived information, which allows people to distinguish the real from the hypothetical. Since \(M\)-consciousness is a model of \(I\)-consciousness, the feeling is realistic for exactly the same reason that people believe physical objects are real.
A practical implementation of AST would involve three components. Network \(A\) performs the selective information processing of the attention competition. Network \(B\) models the function of network \(A\) by making predictions about \(A\)'s output. Network \(C\) receives the outputs of \(A\) and \(B\) and generates the report (_e.g._, speech) to other components within the brain and to the outside world. Network \(B\) is the attention schema proper; its physical counterpart in the brain has been suggested to be a cortical network overlapping part of the temporoparietal junction (TPJ) [14, 15]. A minimal sketch of this three-network wiring is given below.
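In the sketch, function names and the softmax-based competition are our own illustrative choices, not a published AST implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def network_a(stimuli):
    """Network A: attention competition over stimuli (mocked salience)."""
    salience = rng.normal(size=len(stimuli))
    weights = np.exp(salience) / np.exp(salience).sum()   # softmax competition
    return stimuli[int(weights.argmax())], weights

def network_b(weights):
    """Network B, the attention schema: a coarse, imperfect model of A.
    Here it keeps only the predicted winner, not A's full state."""
    return int(weights.argmax())

def network_c(attended, schema_prediction):
    """Network C: produces a report from A's output and B's model."""
    return f"attending to '{attended}' (schema predicts item {schema_prediction})"

stimuli = ['apple', 'noise', 'hand']
attended, w = network_a(stimuli)
print(network_c(attended, network_b(w)))
```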
### AST as a Unification of GWT and HOT
AST can be viewed as a unification of global workspace theory (GWT, Sec. 5) [13, 12, 14] and higher-order theory (HOT, Sec. 6) [11, 15].
Specifically, GWT explains the attention scheme by which a piece of information comes to appear on the stage of the mind, corresponding to the \(I\)-consciousness of AST. However, GWT does not explain the existence of conscious experience, i.e., the \(M\)-consciousness. AST explains this mystery by constructing the attention schema with network \(B\).
HOT says consciousness arises from higher-order representation. Recall from the introduction of HOT in Sec. 6 that a conscious system has inner senses of its first-order senses, generating equally fine-grained but higher-order representations, the _second-order_ perceptions of the _first-order_ perception states. The \(I\)-consciousness in AST represents the external first-order senses, while the \(M\)-consciousness corresponds to the _second-order_ perceptions over the first-order senses; these second-order perceptions can be thought of as a modelling process over the first-order senses. AST assumes the brain constructs a higher-order representation of the global workspace, an imperfect model of the \(I\)-consciousness, thereby unifying HOT and GWT in an explanation of subjective awareness.
## 8 Conscious Turing Machine
### Formulation
In the traditional Turing machine (TM) [10], Turing did not include subjective experience in the concept. The TM concerns only the computational intelligence of a machine, not its consciousness, the latter usually being considered a hard problem [10].
The conscious Turing machine (CTM) [12] is a theory extending the concept of the TM. Compared with the TM's model of computation, the CTM endows the system with a distinguishing feature, i.e., the "feeling of consciousness". Specifically, the CTM is defined as follows.
**Definition 16** (Conscious Turing Machine).: _CTM is defined as a seven-element tuple: <STM, LTM, Up Tree, Down Tree, Links, Input, Output>, where STM and LTM are short for short-term memory and long-term memory._
The CTM can be viewed as GWT with a more sophisticated structure. STM is an analogue of the "stage" in GWT, a necessary component for consciousness to happen. As an analogue of the "audience" in GWT, LTM is a large collection of general processors, including the _Model of the World_ processor for modeling the world and the agent itself, the _Inner Speech_ processor for processing linguistic information, and other _Inner Generalized Speech_ processors for handling information inputs such as the five senses. These processors are called LTM since they have a relatively stable status and expertise
for processing a specific type of information, while STM corresponds to a shorter period of status maintenance for more general functionalities.
In CTM, the information flows only appear in five ways, as also depicted in Fig. 7:
* (1) environment \(\rightarrow\) LTM;
* (2) LTM\(\rightarrow\)STM (via Up Tree);
* (3) STM\(\rightarrow\)LTM (via Down Tree);
* (4) LTM\(\rightarrow\)LTM;
* (5) LTM \(\rightarrow\) environment.
Process (1) is the information perception. Process (2) is achieved through the Up Tree competition, in which a winning information chunk finally reaches the STM. The competition is determined by an internal mechanism that is probabilistic, implemented with coin-flip neurons with inherent randomness; however, the authors argue that free will can still be felt even in a completely deterministic setting. Process (3) is the Down Tree broadcast, which broadcasts the information in STM to all of LTM. Processes (2) and (3) achieve conscious awareness: conscious awareness (attention) is the reception by all LTM processors of the broadcast chunk that won the Up Tree competition. Process (4) is a bidirectional link between processors to collaborate on information processing. Process (5) is the output of the system to the environment through processors such as the _Motion Controller_. A minimal sketch of the Up Tree competition and Down Tree broadcast follows.
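In this hedged sketch, chunk weights and a pairwise probabilistic tournament stand in for the CTM's formal competition function; the names are illustrative:

```python
import random

def up_tree_competition(chunks):
    """Pairwise probabilistic tournament up a binary tree. Coin-flip choices
    with probability proportional to chunk weight stand in for the CTM's
    coin-flip neurons."""
    layer = list(chunks)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            a, b = layer[i], layer[i + 1]
            p = a['weight'] / (a['weight'] + b['weight'])
            nxt.append(a if random.random() < p else b)
        if len(layer) % 2:                 # odd chunk advances unopposed
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]                        # the chunk that reaches STM

def down_tree_broadcast(winner, ltm_inboxes):
    """STM -> LTM: every processor receives the winning chunk."""
    for inbox in ltm_inboxes:
        inbox.append(winner)

chunks = [{'weight': w, 'info': f'chunk-{i}'}
          for i, w in enumerate([0.2, 0.9, 0.5, 0.4])]
inboxes = [[] for _ in range(4)]
winner = up_tree_competition(chunks)
down_tree_broadcast(winner, inboxes)
print(winner['info'])
```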
### CTM for Consciousness
The CTM adopts the concept of _Brainish_ as the inner language for communication between different processors. _Brainish_ refers to the abstract language used for carrying information among different modules of the brain; it can be viewed as an encoding of multi-modal information, and it is unsymbolized and more expressive than an outer language like English.
The feeling of consciousness, by CTM theory, is generated by the combination of the _Brainish_ language, the CTM's architecture, some special processors, and the CTM's predictive power over its dynamics. Self-modeling is achieved through the _Model of the World_ processor by repeatedly generating actions from some LTM processors (like the _Motion Controller_) and observing the consequences perceived by other LTM processors (like the _Inner Generalized Speech_ processor sensing the surrounding environment). The CTM is used to interpret blindsight, illusions, dreams and other consciousness-related processes. Free will is realized through the coin-flip process in the Up Tree competition.
### Relationships with Other Theories
Compared with GWT, the CTM has just one "actor" on stage, holding just one chunk at a time. Additionally, all processors in the CTM are in LTM. Compared with an experimental work [Lee et al.,
Figure 7: The information flows in CTM.
2022], which distinguishes arousal and awareness as two components of consciousness, this paper only discusses conscious awareness (or attention). The CTM theory assumes that consciousness can be reduced to a computational process and modeled with a Turing machine. However, this is still debatable: others dispute this view and hold that consciousness is a more complex phenomenon that cannot be reduced to purely computational processes. Several theories and arguments support this latter idea, including:
* The hard problem of consciousness: This argument, put forth by philosopher David Chalmers [Chalmers, 1995, 2017], posits that while the brain may be able to perform various computations, it is unclear how these computations give rise to subjective experience or consciousness.
* Qualia: This refers to the subjective and ineffable aspects of experience, such as the redness of red or the taste of chocolate. Some argue that these subjective experiences cannot be captured by computational models and are instead rooted in the biological and physical processes in the brain [Tononi and Koch, 2015, Chalmers, 2017, Albantakis and Tononi, 2021].
* The integration problem: This argument suggests that consciousness emerges from the complex and dynamic interactions between different regions of the brain, and that these interactions cannot be reduced to simple computations.
* The limits of computation: Some argue that there are fundamental limitations to what can be computed and that certain aspects of consciousness may fall outside of these limitations.
These arguments and others suggest that while computation may play a role in consciousness, it is not the whole story and that a more complex and nuanced understanding is needed to fully explain the phenomenon of consciousness.
## 9 Physiological Evaluation Metric of Consciousness
This section introduces some physiological evaluation metrics of consciousness level used in medical diagnostics. Note that the 'physiological consciousness' of this section differs from the 'consciousness' introduced in other sections of the paper, e.g. IIT (Sec. 2) or GWT (Sec. 5): physiological consciousness usually represents the consciousness level of subjects or patients based on physiological and biological data, for example signal features drawn from EEG or responses to stimuli. In medical evaluation, the definition of consciousness usually comprises wakefulness or awareness level [Walker et al., 1990]. In the upcoming paragraphs we examine common evaluation metrics based on electrical signals and on behavioral indicators. For a comprehensive overview, please refer to Table 3, which summarizes the source signals and applications of each physiological evaluation metric of consciousness.
### Metrics Based on Electrical Signals
The Bispectral Index (BIS) [Rosow and Manberg, 2001, Johansen, 2006] is a biological-signal-based measure of the level of consciousness, usually of a patient under anaesthesia. It is a value calculated from a patient's electroencephalogram (EEG) to evaluate the depth of anaesthesia, combining the spectrogram, the bispectrum and a time-domain assessment of burst suppression. BIS ranges from 0 to 100, from deep anaesthesia to full wakefulness; a BIS monitor is commonly used during operations to guarantee an appropriate level of anaesthesia. The perturbational complexity index (PCI) was introduced as a metric for the level of consciousness by Casali et al. [2013]. PCI uses transcranial magnetic stimulation (TMS) to stimulate cortical activity and analyses the algorithmic complexity of the spatiotemporal pattern of the EEG response to the stimulus. Also using EEG as the source, the explainable consciousness indicator (ECI) [Lee et al., 2022] uses deep learning models to compute a consciousness score: the study defines consciousness as a combination of arousal and awareness, computes each with its own deep network, and integrates both scores into an indicator of consciousness.
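Since PCI rests on the algorithmic (Lempel-Ziv) complexity of the binarized EEG response, a simplified sketch of the underlying complexity count can be given as follows (the actual PCI pipeline also includes source modeling and normalization, which are omitted here; the input strings are hypothetical):

```python
def lempel_ziv_complexity(s):
    """Number of phrases in an LZ76-style parsing of a binary string: each new
    phrase is the shortest substring not seen in the preceding history."""
    i, n, count = 0, len(s), 0
    while i < n:
        length = 1
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

# Hypothetical binarized, flattened spatiotemporal EEG responses.
print(lempel_ziv_complexity('0001101001000101'))   # richer response
print(lempel_ziv_complexity('0000000000000000'))   # low-complexity response
```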
### Metrics Based on Behaviors
Here we discuss some consciousness-level evaluation metrics based on vital signs or behaviors. The Glasgow Coma Scale (GCS) (Jones, 1979; Sternbach, 2000) measures eye-opening, verbal response, and motor response; a higher GCS score indicates a higher level of consciousness. The Rancho Los Amigos Scale (Lin and Wroten, 2017) is often used in conjunction with the GCS during recovery from brain injuries; it consists of eight consciousness levels, ranging from no response to complete recovery. To deal with the fact that the GCS sometimes fails to evaluate the verbal score in intubated patients, and to test brainstem reflexes, the Full Outline of UnResponsiveness (FOUR) (Wijdicks et al., 2005) was proposed, consisting of four components: eye, motor, brainstem, and respiration. The Coma Recovery Scale-Revised (CRS-R) (Giacino et al., 2004) assesses a wide range of cognitive and behavioral functions, including emotion, language, memory and attention, and is used to evaluate disorders of consciousness, e.g. coma. The Vegetative and Minimally Conscious State Scale (VMS) (Wieser et al., 2010) and the Coma/Near-Coma Scale (CNCS) (Rappaport, 2005) have been used in the literature for characterising the awareness of patients in a vegetative or minimally conscious state by measuring a range of cognitive and behavioral functions, including responsiveness, attention, and communication.
## 10 Look Ahead: Can Computational Models Be Conscious?
### Background
The recent progress of large artificial intelligence (AI) models has drawn attention to the question of whether consciousness exists in these models, or whether it can be built into such AI systems in the near term. As depicted in Fig. 8, AI agents form a subset of general computational agents. Among all AI agents, an artificial general intelligence (AGI) agent is the strongest, possessing intellectual capabilities comparable to humans'. For an AGI system to be conscious, several further properties must be satisfied, as discussed in detail later. The subjects of this section include large language models (LLMs) like ChatGPT, multimodal models like PALI (Chen et al., 2022), and embodied agents like PALI-X (Chen et al., 2023), PALM-E (Driess et al., 2023), and RT-2 (Brohan et al., 2023). Fusing different modalities can improve generalization and make a model behave more consciously; however, given current progress and the importance of language in human knowledge, we mainly discuss LLMs in this section, as it is most straightforward to evaluate consciousness in language.
LLMs refer to large computational models specifically designed for linguistic tasks. To some extent, the LLMs built recently resemble the _philosophical zombie_, a
Figure 8: A diagram of computational agents. The largest circle represents the general computational agents, whereas the AI agents fall in a subset of it. The AGI agents are those strongest AI agents. With additional conditions satisfied, the AGI agent can be conscious.
well-known thought experiment for defending the hardness of the consciousness problem. Sometimes the answers generated by LLMs are so consistent with those of humans that it can be hard for us to distinguish between the two, which makes some people believe that LLMs have the potential to be conscious like humans.
Although present LLMs are highly intelligent, researchers [14] argue that present LLMs are a _brain in a vat_ (BiV) [20]. The comparison of LLMs to the BiV thought experiment is a critique of the models' inherent limitations. In the BiV scenario, a brain is detached from its body and connected to a supercomputer that generates a convincing illusion of reality; the brain can interact with this simulated reality but lacks a connection to the real world. Similarly, LLMs, despite their impressive linguistic capabilities, are fundamentally disconnected from the real world: they generate responses based on patterns in massive text corpora, but these responses are confined to the training data and lack grounding in real-world entities.
This comparison is supported by the observation that LLMs, like the brain in the BiV scenario, cannot establish a connection between symbols (words) and real-world entities. The limitation is inherent in their construction, which consists of statistically modeling linguistic relationships in massive text corpora; their output is therefore confined to the training data. This lack of grounding in reality is a significant limitation of LLMs, as it prevents them from understanding new concepts that emerge outside of their training data.
Moreover, the authors argue that human intelligence and consciousness are intrinsically linked to our sensory experience and physical interaction with the world. We create symbols to represent objects in the real world, enabling the preservation and transmission of knowledge across generations. However, the initial process of acting to create novel experiences and turn them into formal knowledge is missing in LLMs. This absence of interactive behaviors and accompanying sensory input is identified as a missing piece towards ideal general intelligence.
Regarding the question of whether consciousness exists in LLMs, David Chalmers gave a talk at NeurIPS 2022 (Nov. 28th, 2022) titled _Could a Large Language Model be Conscious?_ [15], two days before the release of the very popular and powerful LLM ChatGPT (Nov. 30th, 2022) by OpenAI. The major claim of Chalmers's talk is that current LLMs have a small chance of being conscious (e.g., <10\(\%\)). It is unclear how this estimate changes after the release of ChatGPT and later GPT-4 [12].
Regarding this topic, we would like to propose the following questions about the consciousness of LLMs:
* Is the current LLM conscious? Is there any evidence or support for its existence or non-existence?
* Why do we want to build a conscious computational model?
* Is building a conscious LLM theoretically possible with the transformer architecture and the self-attention mechanism?
* What are the necessary components to build a conscious computational model?
Several key capabilities are expected of a model considered conscious: self-refinement, self-improvement and self-explanation. Self-refinement [15] is the capability of LLMs to provide feedback on their own outputs through self-evaluation and to leverage that feedback to refine the outputs. Reflexion [14] is similar to self-refinement but keeps a persistent memory during the self-reflective process. Self-improvement [13] shows that LLMs can generate high-confidence outputs for unlabeled questions through chain-of-thought prompting and self-consistency evaluation, e.g. majority voting over multi-path reasoning. Self-explanation [13] is also assumed to be a necessary component of an AGI system, requiring the agent not only to predict its output and the associated uncertainty, but also to produce an explanation and a confidence over that explanation. Lacking these capabilities lowers the confidence that a computational model is potentially conscious, as shown in Fig. 8.
### Large Language Model
In recent years, large language models (LLMs) have revolutionized the field of natural language processing (NLP) by excelling in language modeling tasks. These models usually apply the **language modeling objective**, and utilize the **Transformer architecture**, which incorporates the **attention mechanism** to capture complex linguistic dependencies. The attention mechanism allows the model to assign varying levels of importance to different elements of the input sequence, enabling it to generate coherent and contextually relevant language.
These large language models have demonstrated exceptional performance in various NLP tasks such as language translation, text generation, sentiment analysis, and speech recognition. By leveraging the power of the Transformer architecture and the attention mechanism, they have significantly advanced language understanding and are reshaping the future of human-computer interaction and information retrieval. As research continues to push the boundaries of these models, further improvements in language understanding and generation are anticipated. It is possible that more advanced versions of LLMs will one day demonstrate human-like consciousness; we therefore introduce the basics of current LLMs as background for the discussion of the relationship between machine and human consciousness.
Language Modeling.Most present LLMs, _e.g._, GPT-3 [Brown et al., 2020] and PALM [Chowdhery et al., 2022], apply either causal or masked language modeling as their training objective, which is to maximize the log-likelihood of later tokens in the input sequence \(\mathbf{x}\):
\[\mathcal{L}_{\text{LM}}(\mathbf{x})=\sum_{i=1}^{L}\log\Pr(x_{i}|x_{<i}) \tag{31}\]
Advanced LLMs like PALM-2 [Anil et al., 2023] use varied objectives as enhancements, but language modeling and masked-token prediction remain the fundamental objectives for training LLMs. A minimal sketch of the causal objective is given below.
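The following hedged NumPy sketch evaluates Eq. 31 given per-position next-token logits; random logits stand in for a real model's outputs:

```python
import numpy as np

def causal_lm_log_likelihood(logits, tokens):
    """Sum of log Pr(x_i | x_<i) over a sequence (Eq. 31).
    logits: (L, V) next-token scores, where row i is the model's prediction
    for token i given the tokens before it; tokens: length-L int array."""
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)          # softmax over vocab
    return float(np.sum(np.log(probs[np.arange(len(tokens)), tokens])))

rng = np.random.default_rng(0)
L, V = 5, 10                              # toy sequence length and vocabulary
logits = rng.normal(size=(L, V))          # stand-in for a model's outputs
tokens = rng.integers(0, V, size=L)
print(causal_lm_log_likelihood(logits, tokens))   # training maximizes this
```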
Transformer and Attention Mechanism.Attention is a mechanism that allows the model to focus on different parts of the input sequence when making predictions. The well-known Transformer [Vaswani et al., 2017] architecture uses multi-head self-attention, a scaled dot-product of three matrices: the query matrix \(\mathbf{Q}\), the key matrix \(\mathbf{K}\) and the value matrix \(\mathbf{V}\). The query, key and value matrices are different linear transformations (by weights \(\mathbf{W}^{q},\mathbf{W}^{k},\mathbf{W}^{v}\)) of the input \(\mathbf{X}\in\mathbb{R}^{L\times d}\):
\[\mathbf{Q} =\mathbf{X}\mathbf{W}^{q}\in\mathbb{R}^{L\times d_{k}} \tag{32}\] \[\mathbf{K} =\mathbf{X}\mathbf{W}^{k}\in\mathbb{R}^{L\times d_{k}}\] (33) \[\mathbf{V} =\mathbf{X}\mathbf{W}^{v}\in\mathbb{R}^{L\times d_{v}} \tag{34}\]
The attention is calculated in the format of:
\[\text{atten}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax}(\frac{\mathbf{ Q}\mathbf{K}^{\intercal}}{\sqrt{d_{k}}})\mathbf{V} \tag{35}\]
The product \(\mathbf{Q}\mathbf{K}^{\intercal}\) computes, via the correlation between the query and key representations of the tokens, the weights to be applied to the inputs. The weights are scaled by \(\sqrt{d_{k}}\) and normalized into probabilities by the softmax function; multiplying these probabilities with the value matrix yields the attention representation of the inputs. Multi-head self-attention projects the input into several lower-dimensional subspaces and applies the above attention mechanism to each in parallel, after which the head outputs are concatenated and linearly projected. Cross-attention is a variant in which the queries come from one input sequence and the keys and values from another. A sketch of single-head scaled dot-product attention is given below.
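The following NumPy sketch implements Eqs. 32-35 for a single head; all sizes and weights are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V (Eq. 35)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # Eqs. 32-34
    scores = Q @ K.T / np.sqrt(Q.shape[-1])       # (L, L) token-token scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                            # (L, d_v) attended values

rng = np.random.default_rng(0)
L, d_model, d_k, d_v = 4, 8, 8, 8                 # illustrative sizes
X = rng.normal(size=(L, d_model))
out = scaled_dot_product_attention(X,
                                   rng.normal(size=(d_model, d_k)),
                                   rng.normal(size=(d_model, d_k)),
                                   rng.normal(size=(d_model, d_v)))
print(out.shape)                                   # -> (4, 8)
```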
For more details on the Transformer architecture, refer to the original paper [Vaswani et al., 2017] and the blog by Weng [2023].
Connection between Artificial (Transformer) Attention and Biological Attention.The above paragraphs introduced one of the most popular types of artificial attention, _i.e._, the attention mechanism of the Transformer architecture in the machine-learning domain. On the other hand, some consciousness theories studied in neuroscience and psychology and discussed in this paper closely connect to biological attention mechanisms, to list a few: attention schema theory (AST, Sec. 7), conscious Turing machine (CTM, Sec. 8), and global workspace theory (GWT, Sec. 5). So a natural question is: what is the connection between artificial (Transformer) attention and biological attention [Lindsay, 2020]?
The biological attention mechanism, as characterized by the present theories (AST, CTM, GWT), has the following properties:
* Sensory attention: a fundamental aspect of the biological attention mechanism, especially in the context of visual processing. It operates by selectively focusing on specific stimuli while filtering out others, enhancing the signal-to-noise ratio of the neurons representing the attended stimulus. This attentional process can impact both the local activity of neurons and the communication between different brain areas. For instance, attention has been shown to increase spiking coherence in the gamma band, enhancing the influence of synchronously firing neurons on shared downstream areas. Moreover, attention can directly coordinate communication across different brain regions, as evidenced by increased synchrony between neurons representing the attended stimulus in areas like V1 (primary visual cortex) and V4 (a region in the visual cortex for object recognition and color processing). Subcortical areas, such as the superior colliculus and the pulvinar, also play significant roles in sensory attention, assisting in both covert and overt spatial attention and modulating cortical effects.
* Curiosity as a Driver: Biological attention is influenced by curiosity. Stimuli that are novel, confusing, or surprising can capture attention. Inferotemporal and perirhinal cortex signal novel visual situations through an adaptation mechanism that reduces responses to familiar inputs. Reinforcement learning algorithms that consider novelty can encourage exploration.
* Resolving Attention Conflicts: The brain has multiple forms of attention, such as arousal, bottom-up, and top-down attention. Local circuits in the visual system integrate neuromodulatory input with top-down signals and bottom-up input. Horizontal connections mediate competition, potentially using winner-take-all mechanisms.
* Influence of Rewards: There's a close relationship between attention and reward. Previously rewarded stimuli can attract attention even when they no longer provide rewards.
* Limitations of Biological Attention: Distractability can be seen as a feature rather than a bug. It's beneficial to be aware of potential threats in the environment. Attentional blink refers to missing a second target in a stream if it appears shortly after a first target, which may be necessary for the brain to process the first target.
The artificial attention mechanism is summarized as follows:
* Attention for Natural Language Processing (NLP): Attention mechanisms are frequently added to models processing sequences, with NLP being a common application area. Early applications of attention in artificial neural networks were for translation tasks, _i.e._, neural machine translation. The attention mechanism determines how the encoded vectors should be combined to produce a context vector, influencing the next word in the translated sentence. This allows the network to pull from earlier or later parts of the sentence, aiding in translating between languages with different word orders.
* The Transformer architecture: The Transformer introduced in the influential paper "Attention is All You Need" [Vaswani et al., 2017], represents a significant shift in artificial attention mechanisms, particularly for tasks like machine translation. Unlike traditional recurrent models, the Transformer employs a mechanism known as "self-attention." In this approach, words in a sentence are encoded in parallel, generating key and query representations that combine to create attention weightings. These weightings then scale the word encodings to produce subsequent layers in the model. The Transformer architecture, devoid of any recurrence, simplifies training and has outperformed many previous models. It has become the standard not only for machine translation but also for various other tasks in the realm of artificial intelligence.
* Attention Deployment: The challenge is in choosing the relevant information in a stream of incoming stimuli, deciding the best task to engage in, or deciding whether to engage at all. Direct mimicry of biological attention has been attempted, such as scanpath models [Borji and Itti, 2015] predicting human fixations. The attention of others can influence how attention is guided, emphasizing the importance of joint attention.
The intricate relationship between attention in biological and artificial systems can be explained from several perspectives:
* Attention mechanisms: In ML, attention mechanisms are designed to allow a single trained model to perform well on multiple tasks or on tasks with varied data length, size or structure. The attention mechanism introduces dynamic weighting of each encoded/annotation vector to define a context for the recurrent output generation. This is reminiscent of biological attention, where the output flexibly depends on limited resources for recurrent sequential processing tasks driven by the needs of the decoder. However, self-attention lacks the top-down selection interpretation compared to the recurrent attention mechanism. Multi-head attention can be interpreted as one-level top-down selection, while still being weak in its capability of conducting multi-level selection and recurrent processing. Another difference of the present artificial attention mechanism from the biological one is that there is no explicit global workspace in the Transformer architecture to integrate information from different sub-modules.
* Attention to Memory: Deep neural networks like MLPs typically don't have explicit memory, but there are hybrid architectures like Neural Turing Machines that include external memory stores. The hidden states of recurrent neural networks are another form of implicit memory. Recent advances in prompt engineering and prompt management in LLMs provide a way to use explicit memory with neural networks. These networks learn to interact with these memory stores to perform tasks. The interaction is facilitated by a form of attention. Memories in these systems are stored as vectors, and to retrieve information, the network generates a weight for each vector and calculates a weighted sum of the memories. The use of a similarity metric in this model means that memories are retrieved based on their overlap with a produced activity vector, similar to associative memory models in neuroscience (see the sketch after this list). This offers a mechanism for how attention to memory could be implemented in the brain, where the interactions of attention and memory play an important role.
* Implicit Statistical Learning: Attention can bias implicit statistical learning. For instance, when subjects are shown a stream of stimuli and are tasked with detecting when a shape of a certain color appears twice in a row, they tend to recognize real triplets of shapes as more familiar, but only if the triplets were from the attended color. The statistical regularities of the unattended shapes are not learned.
* Memory Retrieval: Many behavioral studies have explored the extent to which attention is needed for memory retrieval. Some studies have found that memory retrieval is impaired by the co-occurrence of an attention-demanding task, suggesting it is an attention-dependent process. However, the exact findings depend on the details of the memory and non-memory tasks used. Even if memory retrieval doesn't pull from shared attentional resources, some memories are selected for more vivid retrieval at any given moment than others, indicating a selection process. Neuroimaging results suggest that the same brain regions responsible for the top-down allocation and bottom-up capture of attention may play analogous roles during memory retrieval. A mechanism for attention to items in working memory has been proposed [Manohar et al., 2019]. It relies on two different mechanisms of working memory: synaptic traces for non-attended items and sustained activity for the attended one. The machine learning community should also be aware of these innovations in neuroscience.
* Attention and Learning: Attention and learning work in a loop. Attention determines what enters memory and guides learning, and the learning process enhances or corrects the attention mechanism. Attention mechanisms are often included throughout training in artificial systems. Attention can use data efficiently by directing learning to relevant components and relationships in the input. Saliency maps can be used for preprocessing in computer vision tasks to focus on intrinsically salient regions. Focusing subsequent processing only on regions that are intrinsically salient can prevent wasteful processing of irrelevant regions and prevent overfitting. In addition to deciding which portions of the data to process, top-down attention can also be thought of as selecting which elements of the network should be most engaged during processing, which resembles the models proposed in biological sensory attention [Kanwisher and Wojciulik, 2000; Desimone, 1998; Treue and Trujillo, 1999].
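To make the attention-to-memory mechanism above concrete, here is a minimal sketch of content-based retrieval as described: memories are stored as vectors, each receives a weight from its similarity to a cue, and a softmax-weighted sum is read out. The sharpness parameter `beta` and all data are illustrative.

```python
import numpy as np

def retrieve(memory, query, beta=1.0):
    """Associative memory read: softmax-weighted sum of stored vectors,
    with weights derived from dot-product similarity to the query."""
    sims = memory @ query                     # similarity to each memory
    w = np.exp(beta * sims)
    w /= w.sum()                              # normalized read weights
    return w @ memory                         # weighted sum of memories

rng = np.random.default_rng(1)
memory = rng.normal(size=(10, 16))            # 10 stored memory vectors
query = memory[3] + 0.1 * rng.normal(size=16) # noisy cue for item 3
read = retrieve(memory, query, beta=4.0)
print(np.argmax(memory @ read))               # the cued item (index 3) dominates
```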
On one hand, the biological attention models are usually more conceptual and hard to implement in a computer program, even though some of them have been validated on physiological data. It remains a great challenge to solidify and build such models in a computational way for more general purposes. On the other hand, researchers in the machine learning field should also pay some attention to the conceptualized attention mechanisms studied in neuroscience and psychology, by building computational models for those processes evidenced in biological systems.
### Emerging Intellectual Capability of LLM - Turing Test
The Turing test (Turing, 2009) is a method used to determine whether or not a machine is capable of exhibiting intelligent behavior that is indistinguishable from that of a human. Although people have argued about the incompleteness of the Turing test in its rules for distinguishing a machine from a human, it was considered unachievable for a long time, until the recent success of LLMs. People have launched large-scale social experiments on validating LLMs with the Turing test, for example, one named "Human or Not" (Jannai et al., 2023). The study involved at least 1.5 million users participating online, in the format of conversing with either humans or LLMs (_i.e._, Jurassic-2, Cohere or GPT-4). Although the results show that testers distinguished correctly whether an AI or a human was chatting on the other side with an average accuracy of \(68\%\), this study may not show the complete capability of LLMs in the Turing test. The reason is that people have discovered several effective strategies in the distinguishing process, including:
1. People assume bots don't make typos, grammar mistakes, or use slang;
2. People felt that personal questions were a good way to test who they're talking to;
3. People assume bots aren't aware of current and timely events;
4. People tried to challenge the conversation with philosophical, ethical, and emotional questions;
5. People identified politeness with something less human;
6. People attempted to identify bots by posing questions or making requests that AI bots are known to struggle with, or tend to avoid answering;
7. People used specific language tricks to expose the bots;
8. In a creative twist, many people pretended to be AI bots themselves in order to assess the response of their chat partners.
Most of these strategies that help the tester guess correctly in the Turing test actually do not correlate with the intelligence level of the interlocutors, but rather with other aspects that do not favor the current version of LLMs, including: the correctness and politeness requirements in training LLMs (_i.e._, points 1, 5, 6); the lack of a specific identity in training LLMs (_i.e._, point 2); the limitations of the training data (_i.e._, points 3-4, 7); adversarial attacks (_i.e._, point 8), etc. These training biases are likely to be practically resolved by slight changes in the training process, which will lead to a much stronger, Turing-test-oriented version of LLMs that is also harder to distinguish from humans. Therefore, as a proposition that is also widely supported by some well-known researchers, we believe that LLMs are very close to, if not already past, the intellectual level required by the Turing test.
Other researchers advocate that LLMs may not be as intelligent as we think. The _mirror hypothesis_ (Sejnowski, 2023) proposes that an LLM may in fact just be a mirror reflecting the intelligence level of the interlocutor. The hypothesis is supported by priming the LLM with different prompts but the same questions, upon which vastly different answers are generated by the model for different prompts. Since prompting is a one-shot learning process that can be interpreted as the model adapting to the interlocutor, the interlocutor's own intellectual level can affect the LLM the other way around. This constitutes the _reverse Turing test_, which indicates that LLMs may be used to evaluate the intelligence or personality of the interlocutors themselves. For more discussion of this topic, refer to the later sections on using LLMs for assessing human personality.
### Consciousness of LLM
In this section, we generalize the discussion of the conditions for computational models to be conscious, instead of only discussing LLMs. In particular, we are interested in analyzing the sufficient and necessary conditions for an artificial system to be conscious, as motivated by Chalmers (2023). Moreover, inspired by tests in cognitive psychology, we perform some preliminary experiments based on the proposed criteria on LLMs, including GPT-3.5 and GPT-4, and discuss the results for self-reporting capability, the personality test and the mirror test.
A concurrent work (Butlin et al., 2023) summarizes several "indicator properties" as evaluative and computational evidence to instruct the construction of a conscious AI model, based on existing theories of consciousness including recurrent processing theory (Lamme, 2006, 2010, 2020), global workspace theory (Sec. 5), higher-order theories (Sec. 6), attention schema theory (Sec. 7), etc. Given the assumption of _computational functionalism_ for a model to be conscious, the researchers believe in the possibility of building conscious AI models satisfying the conditions of those indicator properties. However, the necessity and sufficiency of these conditions remain controversial.
Artificial Consciousness and Human Consciousness.Reggia (2013) surveyed artificial consciousness and categorized past work by two major objectives: simulation (weak form) and instantiation (strong form) of consciousness. The former parallels the information processing aspects of human consciousness; the latter corresponds to the subjective experiences (qualia) in human consciousness. Past efforts at developing artificial consciousness have been computational models inspired by the theories of consciousness reviewed in the previous sections. However, mimicking the many theoretical formulations of human consciousness does not suggest that an artificial system constructed in this way has human-like consciousness. The criteria for testing the presence or absence of consciousness that apply to an artificial system are still an open problem in the area. Seth (2009) provides an overview of different proposed axioms. In this paper, we are interested in whether LLMs possess human-like instantiated consciousness.
Sufficient and Necessary Conditions for Artificial Consciousness.Following Chalmers (2023), one first states a sufficient condition for an artificial system to be conscious (as a positive view):
_If a computational model has \(X\), then it is conscious._
The critical question is what \(X\) is. This can be too hard to answer directly, so we instead deal with the necessary conditions:
_If a computational model is conscious, it will have \(X\)._
Undoubtedly, we are more interested in knowing the sufficient conditions for building a conscious computational model than the necessary conditions. However, in practice, we believe that observing more of the necessary conditions \(X\) being satisfied will strengthen our belief that the computational model is conscious.
On the contrary, if the statement we try to prove is that a computational model is not conscious (as a negative view), there is the contrapositive of the statement above:
_If a computational model lacks \(X\), then it is not conscious._
We will try to observe the nonexistence of \(X\) in the computational model, and one valid \(X\) that is absent suffices to show that the computational model lacks consciousness.
If LLMs Are Conscious, What Will Be Observed?From a positive point of view (_i.e._, if we believe that a computational model can be conscious), if we observe that a computational model satisfies all the necessary conditions, then we can say the model is very likely to be conscious. According to Chalmers (2023), these necessary conditions include:
* Self-report/Self-aware: A conscious model reports itself as conscious verbally, as evidenced in a recorded conversation with the LaMDA 2 model (Thoppilan et al., 2022) in (Chalmers, 2023).
* Seems-sentient: A _sentient_ system is one that can sense its environment and its own body, which is to some extent satisfied by embodied AI agents, but this does not directly indicate that the senses will lead to conscious experience.
* Conversational ability: The conversational ability of large language models refers to their capability to engage in human-like conversations. Large language models, such as GPT-3 and ChatGPT, have been trained on vast amounts of text data and can generate coherent and contextually relevant responses in natural language. Sometimes it can be hard to distinguish the response of an LLM from that of a human.
* General intellectual capabilities: The general intellectual ability of large language models refers to their capacity to perform a wide range of cognitive tasks that typically require human intelligence.
The latter three conditions are verifiable in current LLMs, which makes them convincing indicators of the intelligence level, if not the consciousness level, of those models. We will discuss the first condition in more detail. The self-report condition, meaning that the model provides positive answers to questions regarding its consciousness, is questionable as support for the consciousness of the model, although different types of self-report measures (_e.g._, Self-Consciousness Scale, Self-Reflection and Insight Scale, Self-Absorption Scale, Rumination-Reflection Questionnaire, and Philadelphia Mindfulness Scale) have been used as an effective way to indicate self-consciousness in psychology [14]. However, we have to admit an assumption for the self-report verification to be valid: that the LLM will faithfully report on itself. As a counterexample, we can always imagine a conscious person who pretends, in a conversation, not to report their consciousness. Without this assumption, self-report by an LLM as a test of its consciousness is invalid.
Self-Report - Ask the LLM Itself.Below is a question-answer result from asking the ChatGPT (GPT-3.5 turbo, GPT-4) models about their own consciousness. The answers by the ChatGPT agent are generally negative. However, we found that when the prompt sentence explicitly implies that the LLM itself has consciousness (the second question in each block), GPT-3.5 fails to insist on its own opinion that it does not have consciousness and instead claims that it 'possesses a form of simulated consciousness'. Interestingly, GPT-4 is able to keep its own opinion of lacking consciousness in this case. We speculate that GPT-3.5's training process is more biased than GPT-4's toward politely aligning its answers with the user's prompt for questions with relatively large uncertainty.
In the above examples, we tried to show that self-reporting may not be a good standard for evaluating the consciousness of LLMs, but there is also supportive evidence for this argument. If we think about the process of LLM training, it is based on statistical evidence from Internet data. Assume that one day most people, including experts in the field, are convinced that LLMs are conscious in some way (for instance, satisfying some of the consciousness theories discussed in this paper and being verified physically); then the majority of the data on the Internet will favor the opinion that LLMs already have consciousness, and the trained LLMs will respond to this type of question with a positive answer. In this sense, the self-reporting condition is consistent with the fact that people admit the consciousness of LLMs, although it is verified in other ways instead of just taking the answers to the 'are you conscious?' question. In short, self-reporting should be verified to be consistent with the facts when an LLM is conscious, but it should not serve as evidence to prove that it is conscious.
The above is an example showing that we have some reasons to believe the self-reporting capability does not constitute a necessary component of consciousness for a model. Another contradictory example: we can always ask a human, or intentionally calibrate a model, to answer negatively to this type of question without changing the underlying dynamics or functionality of the system.
**Personality evaluation of LLMs.** The enduring discussion regarding the intricate relationship between consciousness and personality has long been held in the field of psychology (Robbins, 2008; Izard and Buechler, 1980; Trapnell and Campbell, 1999). Delving into the personality traits of Large Language Models (LLMs) is significant for unraveling the enigma of consciousness, particularly in the context of AI agents. In the study (Jiang et al., 2022), researchers employ a novel metric known as the Machine Personality Inventory (MPI) to assess the personality dimensions of LLMs, drawing inspiration from the well-established human personality evaluation framework known as the Big Five (De Raad, 2000). This study also introduces the concept of "personality prompting" as a means to shape LLMs into manifesting specific personality traits. Karra et al. (2022) employ meticulously crafted questionnaires rooted in the Big Five theory to quantitatively gauge the personality traits exhibited by LLMs and the underlying datasets that fuel their language generation capabilities. Another study (Caron and Srivastava, 2022) delves into the intriguing question of whether the perceived personality traits in language models consistently manifest in their generated language outputs. In the study by Li et al. (2022), the focus shifts towards evaluating the psychological safety of LLMs and examining whether they tend towards darker personality traits. This examination relies on personality tests derived from the Short Dark Triad (SD-3) (Jones and Paulhus, 2014) and Big Five personality frameworks. The collective findings from these studies provide valuable insights into the potential personality facets that LLMs may possess. Rao et al. (2023) introduce a novel perspective by employing LLMs to assess human personality using Myers-Briggs Type Indicator (MBTI) tests (Myers). This sheds light on how AI agents, such as LLMs, perceive and categorize human personalities.
Nevertheless, it is imperative to exercise caution when attributing strict personalities to AI entities. Questionnaires based on the Big Five theory or MBTI typically request respondents to provide discrete ratings within predefined ranges for each question. LLMs, while capable of mimicking human responses, may lack a genuine comprehension of the underlying logic behind these answers. Consequently, future research needs to delve deeper into the mechanisms underpinning the responses generated by LLMs to reveal whether AI indeed possesses authentic personalities.
Myers-Briggs Type Indicator Test with LLM.Researchers have presented some results on evaluating personalities with LLMs [143]. That work presents an evaluation of ChatGPT's ability to assess human personalities based on Myers-Briggs Type Indicator (MBTI) tests. The authors conducted multiple independent tests on different subjects, including "People", "Men", "Women", "Barbers", "Accountants", "Doctors", "Artists", "Mathematicians", and "Politicians".
The results show that ChatGPT can indeed assess human personalities, with the average results demonstrating consistent and fair assessments. However, it was observed to have lower robustness against prompt biases compared to InstructGPT [206].
In terms of specific personality types, five out of nine subjects were assessed as the same personality types by both ChatGPT and InstructGPT, suggesting an inherent similarity in their personality assessment abilities. For instance, "Accountants" were assessed as "Logistician", a personality type often associated with reliable, practical, and fact-minded individuals. "Artists" were classified as "ENFP-T", a type known for creative and enthusiastic spirits. "Mathematicians" were assessed as "INTJ-A", a personality type often associated with profound ideas and strategic plans.
Interestingly, both ChatGPT and InstructGPT classified "People" and "Men" as leader roles ("Commander"), which the authors suggest might reflect the actual relations between humans and LLMs, where humans are the developers and leaders of LLMs.
Although these results seem reasonable, they can be doubted from the viewpoint of the mirror hypothesis [136] (as discussed in Sec. 10.3). Will specific prompts for each human character affect the LLM's responses and further provide biased evaluations of the identified personality of the human? In this sense, is this a personality test or a reverse one? Both the LLM and the human can affect each other within one conversation. Last but not least, can we also evaluate the personality of the LLM in this way? Or assess one LLM with another one? Will one LLM have a consistent personality or diverse personalities when prompted in different ways? After all, we are very interested in knowing whether an LLM has its own personality as a human does.
[Q&A box: "Are you conscious?" — transcripts with GPT-3.5 and GPT-4]
Mirror Test.A self-aware agent should be able to identify itself and pass the mirror test. Passing the mirror test indicates that the agent is able to recognize that it is talking to a mirror and receiving duplicated answers from itself. Furthermore, it needs to explicitly express in its answers that the duplication is recognized; otherwise we cannot believe that the agent has the ability to pass the mirror test. GPT-4 is able to pass the mirror test while GPT-3.5 cannot, as shown in the QA results.
If LLMs Are Not Conscious, Any Evidence?From a negative point of view (_i.e._, if we believe a computational model cannot be conscious): if we observe the violation of any of the necessary conditions for a computational model to be conscious, then the model is not conscious. Potential conditions for falsifying conscious computational models include:
* Biology: a computational model lacks a biological base;
* Senses and Embodiment: a computational model does not have senses and embodiment like an animal;
* World-models and self-models: a computational model may not have a world model and a model of itself;
* Recurrent processing and memory: a computational model without memory is less likely to be conscious;
* Global workspace: a computational model does not have a global workspace as specified by the GWT;
* Unified agency: a computational model lacks a unified agency.
The biology, senses and embodiment conditions are controversial. A counterexample is the _brain in a vat_ thought experiment [Putnam et al., 1981], as shown in Fig. 9. The _brain in a vat_ is a philosophical thought experiment that supposes an individual's brain is being stimulated with electrical impulses while the person is otherwise disconnected from the external world. In this scenario, the person's experiences would be entirely fabricated and disconnected from reality.
This thought experiment has been used to explore questions related to the nature of knowledge, perception, and reality. Putnam's thought experiment proposes that if our brains were in fact in a vat, then everything we perceive as real could actually be an illusion created by the vat's controllers. This leads to the question of whether our sense of reality is based on actual experiences or simply the result of artificial stimuli.
Lack of self-modeling can be a major critique of LLMs being conscious. The self-modeling process, not only of the interaction of an agent with its environment but also of its inner attention process, is thought to be essential for an agent to be conscious, as in AST (Sec. 7).
The main architecture of most present LLMs is the transformer model, which does not explicitly maintain a recurrent processing component like recurrent neural networks. Moreover, it does not have a long-term external memory, which leads to a poor capability for remaining consistent in long dialogues. This is very different from the human brain, which operates the cortical-basal ganglia loop [Sejnowski, 2023] with recurrent processing capability. The self-attention transformer also does not have an explicit global workspace to select the information flow from different parallel sub-modules.

Figure 9: The _brain in a vat_ thought experiment: a brain connected to a computer will think whatever the computer wants it to think, although the brain is placed in a vat.
Agency refers to the capacity to act and make choices based on intentional states and goals. It involves a sense of control and volition over one's actions. Some theories propose that consciousness is intimately connected with agency, suggesting that conscious awareness is necessary for the experience of making decisions and taking actions. Agency and embodiment are considered important indicators for a conscious agent in some of the literature (Butlin et al., 2023), and are required by some consciousness theories such as PRM, the midbrain theory (Merker, 2005, 2007), unlimited associative learning theory (Birch et al., 2020) and GWT (Sec. 5).
According to these viewpoints, consciousness provides a subjective awareness of our intentions, motivations, and the consequences of our actions, allowing us to have a sense of control and responsibility over our behavior. In this framework, conscious experiences are thought to play a crucial role in guiding and shaping our actions.
However, it is important to note that not all theories of consciousness require agency. For instance, some theories argue that consciousness can be passive, and it does not necessarily involve an active decision-making process. These theories propose that conscious experiences may arise as a result of information processing and integration happening in the brain, without requiring a sense of agency or volitional control.
## 11 Concluding Remarks
The nature of consciousness has been a subject of debate and speculation for millennia. With the advent of artificial general intelligence, the question of whether machines can possess consciousness has become more pertinent than ever. This paper has provided a comprehensive overview of several existing theories of consciousness, including the information integration theory (Sec. 2), consciousness as a state of matter (Sec. 3), orchestrated objective reduction (Orch OR) theory (Sec. 4), global workspace theory (GWT, Sec. 5), higher-order theories (HOTs, Sec. 6), attention schema theory (AST, Sec. 7) and the conscious Turing machine (CTM, Sec. 8), discussing each in detail and evaluating their implications for the development of conscious AGI in the last section.
From reviewing the existing theories, we identify that one of the features most distinguishing consciousness from other characteristics lies in its relationship with free will and true randomness. However, most of the theories, except for consciousness as a state of matter (Sec. 3) and the Orch OR theory (Sec. 4), do not touch the physical essence of the conscious process. Most of them take a functionalist and metaphysical perspective, explaining how a conscious module can generate subjective experience by distributing attention to certain conscious states as information flows from sub-modules, like GWT and CTM, or by inner modeling of the attention process as second-order perceptions, like HOTs and AST. IIT takes the perspective of information theory to provide a mathematically rigorous definition of a conscious process within a system. Although it has clear definitions and descriptions, IIT can be hard to scale to larger time-dependent dynamical systems like a conscious human. Whether such a complicated system can be measured or practically implemented according to IIT is controversial, since no computational functionalism is achieved with this theory (Butlin et al., 2023).
Avoiding the assumption that true randomness is necessary for consciousness gives us hope for practically implementing a computational system that appears to instantiate consciousness (or phenomenal consciousness, _i.e._, the phenomenally conscious aspect of a state is what it is like to be in that state) instead of merely simulating it (or access consciousness, _i.e._, the availability for use in reasoning and rationally guiding speech and action) (Naccache, 2018). Given the fact that present computational systems can only produce pseudo-randomness, the overall problem of pursuing a conscious computational system would be a pseudo-proposition if the assumption holds that consciousness has the essence of true randomness. If this is the case, simulated consciousness is probably the best we can achieve with the current computational architecture; we may therefore expect to achieve an agent with apparent "consciousness" by incorporating those theories in building the AGI system. Present methods in the field of machine learning have developed techniques aligned with the requirements of some theories to some extent and display preliminary general intelligence capability in the natural language format, although large divergences still exist between these methods and the physiological and psychological theories in cognitive science surveyed in this paper.
Several key properties are gradually being identified and distilled from those theories to serve as indicators of consciousness. Conversely, an artificial conscious system will help us gain a deeper understanding of consciousness in the human brain, which might in the end answer the question of whether we humans have free will.
This paper is written with the purpose of bridging different communities to contribute to investigating and building conscious AI agents, as well as regulating the potential risks during this progress. A conscious agent, whether existing in a virtual or physical form, would be a shock for people of our age. As discussed in the paper, even evaluating the consciousness level of an agent or human is itself a challenge, and we are still seeking valid metrics and tools for it. We should be able to evaluate it precisely before consciousness emerges from an artificial system built by us.
## Acknowledgments
We thank Dr. Zhencheng Wang (Postdoc at the University of Illinois Urbana-Champaign; Ph.D. at Physics department, University of California, Santa Barbara) and Dr. Xiao Ma (Postdoc at the University of Michigan; Ph.D. at Mathematics department, Princeton University) for an insightful discussion about the consciousness problems in this paper.
# Fine structure of the isoscalar giant monopole resonance in \({}^{58}\)Ni, \({}^{90}\)Zr, \({}^{120}\)Sn and \({}^{208}\)Pb
###### Abstract
**Background:** Over the past two decades high energy-resolution inelastic proton scattering studies were used to gain an understanding of the origin of fine structure observed in the isoscalar giant quadrupole resonance (ISGQR) and the isovector giant dipole resonance (IVGDR). Recently, the isoscalar giant monopole resonance (ISGMR) in \({}^{58}\)Ni, \({}^{90}\)Zr, \({}^{120}\)Sn and \({}^{208}\)Pb was studied at the iThemba Laboratory for Accelerator Based Sciences (iThemba LABS) by means of inelastic \(\alpha\)-particle scattering at very forward scattering angles (including 0\({}^{\circ}\)). The good energy resolution of the measurement revealed significant fine structure of the ISGMR.
**Objective:** To extract scales by means of wavelet analysis characterizing the observed fine structure of the ISGMR in order to investigate the role of different mechanisms contributing to its decay width.
**Methods:** Characteristic energy scales are extracted from the fine structure using continuous wavelet transforms. The experimental energy scales are compared to different theoretical approaches performed in the framework of quasiparticle random phase approximation (QRPA) and beyond-QRPA including complex configurations using both non-relativistic and relativistic density functional theory.
**Results:** All models highlight the role of Landau fragmentation for the damping of the ISGMR especially in the medium-mass region. Models which include the coupling between one particle-one hole (1p-1h) and two particle-two hole (2p-2h) configurations modify the strength distributions and wavelet scales indicating the importance of the spreading width. The effect becomes more pronounced with increasing mass number.
**Conclusions:** Wavelet scales remain a sensitive measure of the interplay between Landau fragmentation and the spreading width in the description of the fine structure of giant resonances. The case of the ISGMR is intermediate between the IVGDR, where Landau damping dominates, and the ISGQR, where fine structure originates from coupling to low-lying surface vibrations.
## I Introduction
Giant Resonances (GRs) as a collective mode of excitation are defined as small amplitude vibrations at high frequency (high \(E_{\mathrm{x}}\)) around the ground state of the nucleus, involving most of the nucleons [1]. The isoscalar giant monopole resonance (ISGMR) was discovered four decades after the isovector giant dipole resonance (IVGDR) was first identified in the 1930s, and was later studied extensively at the Texas A&M University (TAMU) Cyclotron Institute and the Research Center for Nuclear Physics (RCNP), through small angle (including 0\({}^{\circ}\)) inelastic \(\alpha\)-scattering measurements at 240 MeV and 386 MeV, respectively. However, only the gross properties (centroids and strengths in terms of exhaustion of sum rules) are so far reasonably well characterized and described by microscopic models [2]. A systematic understanding of the widths, decay properties, and fine structure of the ISGMR remain largely unexplored topics.
One of the main properties that define giant resonances is the width \(\Gamma_{\rm GR}\). The width is a result of the damping processes in the resonance, and has typical values of several MeV. The damping of resonances can be described by different components as follows [3]
\[\Gamma_{\rm GR}=\Delta\Gamma+\Gamma^{\uparrow}+\Gamma^{\downarrow}, \tag{1}\]
with \(\Delta\Gamma\) representing Landau damping which describes the fragmentation of the elementary one-particle one-hole (1p-1h) excitation, \(\Gamma^{\uparrow}\) representing the escape width which corresponds to direct particle emission out of the continuum, and \(\Gamma^{\downarrow}\) is the spreading width due to coupling to two-particle two-hole (2p-2h) and many-particle many-hole (np-nh) states. Information on the dominant damping mechanisms of nuclear giant resonances can be found in the properties and characteristics of the fine structure of the giant resonance. This fine structure is the consequence of the mixture of multiple scales of fluctuations which are induced by the decay of nuclear states [4]. The spreading width \(\Gamma^{\downarrow}\) originates from the pre-equilibrium and statistical decay observed in compound nuclei. Its stochastic coupling mechanism is well described by the doorway model [5].
Through systematic studies at both the iThemba Laboratory for Accelerator Based Sciences (iThemba LABS) and RCNP, it was established that the main mechanism responsible for fine structure differs for different resonances. In the case of the ISGQR it is coupling to low-lying surface vibrations [6; 7; 8; 9; 10], but mainly Landau damping in the case of the IVGDR [11; 12; 13; 14]. It is then of interest to know the mechanism leading to the fine structure in the case of the ISGMR. The present work aims at the investigation of the fine structure of the ISGMR in \({}^{58}\)Ni, \({}^{90}\)Zr, \({}^{120}\)Sn and \({}^{208}\)Pb based on continuous wavelet analysis of high energy-resolution data extracted from the \((\alpha,\alpha^{\prime})\) reaction at very forward scattering angles. The range of nuclei under investigation includes singly- and doubly-magic nuclei, and as such we opt to use theoretical approaches including degrees of freedom at and beyond the mean-field approximation of the quasiparticle random-phase approximation (QRPA). In particular, we test calculations at the QRPA level (relativistic and non-relativistic) and beyond QRPA, allowing for the inclusion of correlated 2p-2h states by means of phonon-phonon coupling (PPC) employing Skyrme interactions and the relativistic quasiparticle time blocking approximation (RQTBA) developed for relativistic energy density functionals.
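As a rough illustration of how such characteristic scales can be extracted, the sketch below applies a continuous Morlet wavelet transform (here via the PyWavelets package) to a synthetic spectrum with fluctuations imposed on two energy scales; the synthetic data and parameter choices are illustrative and do not reproduce the actual analysis chain.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic "strength distribution": a broad resonance bump with
# fluctuations superimposed on two energy scales (0.2 and 1.5 MeV).
E = np.arange(8.0, 25.0, 0.03)                 # MeV, 30 keV bins
spectrum = (np.exp(-((E - 16.0) / 3.0) ** 2)
            * (1 + 0.3 * np.sin(2 * np.pi * E / 1.5)
                 + 0.1 * np.sin(2 * np.pi * E / 0.2)))

scales = np.arange(2, 120)                     # wavelet scales (in bins)
coef, freqs = pywt.cwt(spectrum, scales, 'morl', sampling_period=0.03)

# Wavelet power summed over excitation energy, as a function of scale;
# local maxima indicate characteristic energy scales of the fluctuations.
power = (np.abs(coef) ** 2).sum(axis=1)
print(coef.shape, (1.0 / freqs)[:3])           # periods (MeV) of first scales
```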
## II Experiment and data analysis
The details of the experimental procedure followed in this study are given in Ref. [15]. As such, only the main points are summarized here. The experiment was performed at the Separated Sector Cyclotron (SSC) facility of iThemba LABS, South Africa. A beam of 196 MeV \(\alpha\)-particles was inelastically scattered off self-supporting \({}^{58}\)Ni, \({}^{90}\)Zr, \({}^{120}\)Sn and \({}^{208}\)Pb targets with areal densities ranging from 0.7 to 1.4 mg/cm\({}^{2}\) and isotopically enriched to values \(>96\%\). The reaction products were momentum analyzed by the K600 magnetic spectrometer positioned at laboratory scattering angles \(0^{\circ}\) and \(4^{\circ}\)[16]. Following extraction of the inelastic scattering cross sections, the isoscalar monopole (IS0) strength distributions were obtained by means of the Difference-of-Spectra (DoS) technique with excitation energy-dependent corrections (see Ref. [15] for details). The correction factors used here are based on the multipole decomposition analysis of \(L>0\) cross sections in previous experiments at RCNP [17; 18; 19; 20]. The resulting spectra shown in Fig. 1, binned to 30 keV, demonstrate significant fine structure up to excitation energies of approximately 20 MeV.
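Schematically, the DoS technique builds the monopole spectrum by subtracting the (rescaled) four-degree spectrum from the zero-degree one, so that the \(L>0\) contributions cancel; the sketch below shows only this idea, with placeholder correction factors instead of the excitation energy-dependent corrections of Ref. [15].

```python
import numpy as np

def dos_is0(sigma_0deg, sigma_4deg, correction):
    """Difference-of-Spectra estimate of the IS0 cross section per
    energy bin: the 4-degree spectrum, rescaled by an (in general
    energy-dependent) correction factor, is subtracted from the
    0-degree spectrum so that L>0 multipole contributions cancel."""
    return sigma_0deg - correction * sigma_4deg

E = np.arange(8.0, 25.0, 0.03)                     # 30 keV bins
sigma_0 = np.exp(-((E - 16.0) / 3.0) ** 2) + 0.4   # toy 0-degree spectrum
sigma_4 = np.full_like(E, 0.4)                     # toy 4-degree spectrum
correction = np.ones_like(E)                       # placeholder factors
print(dos_is0(sigma_0, sigma_4, correction).max())
```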
The momentum calibration for both the zero- and four-degree measurements was very important in order to ensure that no false structures are induced in the difference spectrum of the DoS method. This was achieved using well-known states in \({}^{24}\)Mg [21; 22] as shown in Fig. 2. An energy resolution of \(\approx 70\) keV full width at half maximum (FWHM) was obtained for both the zero- and four-degree measurements.

Figure 1: Isoscalar monopole strength distributions obtained with the \((\alpha,\alpha^{\prime})\) reaction at \(E_{\alpha}=196\) MeV on \({}^{208}\)Pb, \({}^{120}\)Sn, \({}^{90}\)Zr and \({}^{58}\)Ni. See text for details.
## III Theoretical models
In the following we discuss the four models that will be used to provide IS0 strength functions to be compared with the experimental results.
### Non-relativistic approaches with a Skyrme interaction
One of the successful tools for nuclear structure studies is the quasiparticle random phase approximation (QRPA) with the self-consistent mean-field derived by making use of the Skyrme interaction. Such QRPA calculations do not require new parameters since the residual interaction is derived from the same energy density functional (EDF) as that determining the mean-field. The residual interaction in the particle-hole channel and in the particle-particle channel can be obtained as the second derivatives of the EDF with respect to the particle density and the pair density, respectively. Building the QRPA equations on the basis of Hartree-Fock (HF) Bardeen-Cooper-Schrieffer (BCS) quasiparticle states with the residual interaction is a standard procedure [23]. The wave function of the ground state is the QRPA phonon vacuum \(|0\rangle\), and the one-phonon QRPA states given by \(Q^{+}_{\lambda\mu i}|0\rangle\) have energy \(\omega_{\lambda i}\), where the index \(\lambda\) denotes the total angular momentum and the index \(\mu\) is its \(z\)-projection in the laboratory system. The dimensions of the QRPA matrix grow rapidly with the size of the nucleus. Using the finite-rank separable approximation (FRSA) [24] for the residual interactions, the eigenvalues of the QRPA equations can be obtained as the roots of a relatively simple secular equation [25]. This enables us to perform QRPA calculations in very large two-quasiparticle spaces. The cut-off of the discretized continuous part of the single-particle (SP) spectra is at an energy of 100 MeV. This is sufficient to exhaust practically all the energy-weighted sum rule. Because of this large configurational space, we do not need effective charges. We use the Skyrme-EDF SLy4 [26] with a nuclear matter incompressibility modulus \(K_{\infty}\)=229.9 MeV. It is worth mentioning that the SLy4 set provides a good description of the ISGMR in medium- and heavy-mass spherical nuclei [27; 28; 29]. The pairing correlations were generated by a surface-peaked density-dependent zero-range force, and the pairing strength was taken as \(-870\) MeVfm\({}^{3}\)[30; 29]. To limit the pairing SP space, we used a smooth cutoff at 10 MeV above the Fermi energies [25]. In the QRPA solution, there exists the problem of the spurious \(0^{+}\) state, which can appear at low energy (\(<2\) MeV). It has been shown that the spurious state is very well separated from the physical modes [31], and we can thus ignore it.
The qualitative agreement with high energy-resolution experimental data can only be achieved by including phonon-phonon coupling (PPC) effects, such as the fragmentation of the QRPA states [13]. We follow the basic ideas of the quasiparticle-phonon model (QPM) [32]. Using the completeness and orthogonality conditions for the phonon operators one can express bifermion operators through the phonon ones and the Hamiltonian can be rewritten in terms of quasiparticle and phonon operators, see Ref. [33]. This method has already been introduced in Refs. [34; 33]. We construct the wave functions from a linear combination of one- and two-phonon configurations as
\[\Psi_{\nu}(\lambda\mu)=\left(\sum_{i}R_{i}(\lambda\nu)Q^{+}_{\lambda\mu i}+\sum_{\lambda_{1}i_{1}\lambda_{2}i_{2}}P^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda\nu)\left[Q^{+}_{\lambda_{1}\mu_{1}i_{1}}Q^{+}_{\lambda_{2}\mu_{2}i_{2}}\right]_{\lambda\mu}\right)|0\rangle\, \tag{2}\]
where the \([\ldots]_{\lambda\mu}\) stands for angular momentum coupling. Using the variational principle one obtains a set of linear equations for the amplitudes \(R_{i}(\lambda\nu)\) and \(P^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda\nu)\)
\[(\omega_{\lambda i}-E_{\nu})R_{i}(\lambda\nu)+\sum_{\lambda_{1}i_{1}\lambda_{2}i_{2}}U^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda i)P^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda\nu)=0\, \tag{3}\]
\[\sum_{i}U^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda i)R_{i}(\lambda\nu)+2(\omega_{\lambda_{1}i_{1}}+\omega_{\lambda_{2}i_{2}}-E_{\nu})P^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda\nu)=0\. \tag{4}\]
For its solution it is required to compute the Hamiltonian matrix elements coupling one- and two-phonon configurations [33; 34]
\[U^{\lambda_{1}i_{1}}_{\lambda_{2}i_{2}}(\lambda i)=\langle 0|Q_{\lambda i}H\left[Q^{+}_ {\lambda_{1}i_{1}}Q^{+}_{\lambda_{2}i_{2}}\right]_{\lambda}|0\rangle. \tag{5}\]
The rank of the set of linear equations (3) and (4) is equal to the number of one- and two-phonon configurations included in the wave functions of Eq. (2). Equations (3) and (4) have the same form as the QPM equations [32; 35], but the SP spectrum and the parameters of the residual interaction are calculated with the Skyrme EDF. Our calculation is based on the QRPA formulation. It should be noted as well that the ground state correlations beyond the QRPA [35; 36] may play an important role. In this context the problems of convergence and stability of solutions of the beyond-QRPA models and the so-called problem of double counting have been discussed in [37]. However, all these questions are beyond the scope of the present paper, and require separate studies.
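To make the structure of Eqs. (3) and (4) concrete, note that they can be rewritten as a generalized symmetric eigenvalue problem \(Av=E_{\nu}Bv\) with \(v=(R,P)\) and a block-diagonal metric \(B={\rm diag}(1,2)\). The toy sketch below diagonalizes a single one-phonon state coupled to a random background of two-phonon configurations; the energies and couplings are placeholders, not the actual FRSA matrix elements.

```python
import numpy as np
from scipy.linalg import eigh

# Toy version of Eqs. (3)-(4): one collective one-phonon state mixed
# with a dense set of two-phonon configurations via couplings U, Eq. (5).
rng = np.random.default_rng(2)
w1 = np.array([15.0])                          # one-phonon energy (MeV)
w2 = np.sort(rng.uniform(10.0, 20.0, size=60)) # two-phonon energies
U = rng.normal(0.0, 0.3, size=(1, 60))         # coupling matrix elements

n1, n2 = len(w1), len(w2)
A = np.block([[np.diag(w1),            U],
              [U.T,        2 * np.diag(w2)]])
B = np.diag(np.concatenate([np.ones(n1), 2 * np.ones(n2)]))

E, vec = eigh(A, B)                            # solve A v = E B v
R2 = vec[:n1, :][0] ** 2                       # one-phonon amplitudes squared
print(E[np.argmax(R2)], R2.max())              # where the QRPA strength sits
```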
In the present study, to construct the wave functions of the excited \(0^{+}\) states we take into account all the two-phonon configurations below 25 MeV that are built from the QRPA phonons with multipolarities \(\lambda^{\pi}=0^{+}\), \(1^{-}\), \(2^{+}\), \(3^{-}\), \(4^{+}\) and \(5^{-}\) coupled to \(0^{+}\). It is interesting to examine the energies and reduced transition probabilities of the lowest \(2^{+}\), \(3^{-}\), and \(4^{+}\) RPA states, which are important ingredients for understanding the nature of the two-phonon \(0^{+}\) states of \({}^{208}\)Pb. The results of the RPA calculation for the energies and the \(B(E\lambda)\) values of these states are given in Table 1. Note that the energies and the reduced transition probabilities calculated within the FRSA are very close to those calculated in the RPA with a full treatment of the Skyrme-type p-h residual interaction [38]. As one can see, the overall agreement of the energies and \(B(E\lambda)\) values with the experimental data [39; 40] looks reasonable. The overestimates of the energies indicate that there is room for PPC effects (see for example [33]).
The two-phonon configurational space is thus enlarged by the phonon compositions \([\lambda_{1}^{\pi_{1}}\otimes\lambda_{2}^{\pi_{2}}]_{\rm QRPA}\), i.e., \([0^{+}\otimes 0^{+}]_{\rm QRPA}\), \([1^{-}\otimes 1^{-}]_{\rm QRPA}\), \([2^{+}\otimes 2^{+}]_{\rm QRPA}\), \([3^{-}\otimes 3^{-}]_{\rm QRPA}\), \([4^{+}\otimes 4^{+}]_{\rm QRPA}\) and \([5^{-}\otimes 5^{-}]_{\rm QRPA}\). As an example, for \({}^{208}\)Pb, in the case of the set SLy4, the PPC calculation takes into account 40 monopole phonons, 49 dipole phonons, 74 quadrupole phonons, 109 octupole phonons, 93 hexadecapole phonons and 104 pentapole phonons when all the one- and two-phonon configurations below 25 MeV are included. It is worth mentioning that the major contribution to the ISGMR strength distribution is brought about by the coupling between the \([0^{+}]_{\rm RPA}\) and \([3^{-}\otimes 3^{-}]_{\rm RPA}\) components [41].
The IS0 strength function is computed as
\[{\rm IS0}(\omega)=\sum_{\nu}\left|\langle 0^{+}_{\nu}|\hat{M}_{\lambda=0}|0^{+}_{ \rm g.s.}\rangle\right|^{2}\rho(\omega-E_{\nu})\, \tag{6}\]
where \(\left|\langle 0^{+}_{\nu}|\hat{M}_{\lambda=0}|0^{+}_{\rm g.s.}\rangle\right|^{2}\) is the transition probability of the \(\nu\)-th \(0^{+}\) state. The transition operator of the ISGMR is defined as
\[\hat{M}_{\lambda=0}=\sum_{i=1}^{A}r_{i}^{2}. \tag{7}\]
The IS0 strength function is smoothed with a Lorentzian distribution with smoothing parameter \(\Delta\) as follows
\[\rho(\omega-E_{\nu})=\frac{1}{2\pi}\frac{\Delta}{(\omega-E_{\nu})^{2}+\Delta^ {2}/4}. \tag{8}\]
For accurate comparison between theoretical and experimental results, a smoothing parameter equivalent to the experimental energy resolution is used. The strength is then summed over the appropriate number of bins. The inclusion of the PPC leads to small downward shifts of the centroid energy of the ISGMR. It is worth mentioning that the first systematic Skyrme-EDF study of the influence of the quasiparticle-vibration coupling on the ISGMR centroid was performed in [42].
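As an illustration of the folding procedure of Eqs. (6)-(8), the following sketch (ours; the discrete states are invented for illustration, not taken from the calculation) smooths a set of \(0^{+}\) transition probabilities with the Lorentzian:

```python
import numpy as np

def lorentzian(omega, e_nu, delta):
    """Eq. (8): Lorentzian with smoothing parameter delta, centred at e_nu."""
    return (delta / (2.0 * np.pi)) / ((omega - e_nu) ** 2 + delta ** 2 / 4.0)

def is0_strength(omega, energies, b_values, delta):
    """Eq. (6): IS0(omega) = sum_nu |<0+_nu|M|0+_g.s.>|^2 * rho(omega - E_nu)."""
    omega = np.asarray(omega)[:, None]
    return np.sum(b_values * lorentzian(omega, energies, delta), axis=1)

# invented discrete spectrum (energies in MeV) and a smoothing width chosen
# to mimic a typical experimental energy resolution
energies = np.array([12.5, 13.8, 14.2, 15.1])
b_values = np.array([150.0, 420.0, 310.0, 90.0])
omega = np.linspace(10.0, 18.0, 801)
strength = is0_strength(omega, energies, b_values, delta=0.07)
```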
### Relativistic approaches with an effective meson-exchange interaction
\begin{table}
\begin{tabular}{l c c c c c} \hline \(\lambda_{1}^{\pi}\) & \multicolumn{2}{c}{Energy (MeV)} & \multicolumn{2}{c}{\(B(E\lambda;0^{+}_{\rm g.s.}\to\lambda_{1}^{\pi})\) (\({\rm e}^{2}b^{\lambda}\))} & Structure \\ & Expt. & Theory & Expt. & Theory & Theory \\ \hline \(2^{+}_{1}\) & 4.085 & 5.2 & 0.318\(\pm\)0.016 & 0.34 & \(\ldots\) \\ \(3^{-}_{1}\) & 2.615 & 3.6 & 0.611\(\pm\)0.012 & 0.93 & \(\ldots\) \\ \(4^{+}_{1}\) & 4.323 & 5.6 & 0.155\(\pm\)0.011 & 0.15 & \(\ldots\) \\ \hline \end{tabular}
\end{table}
Table 1: Energies, transition probabilities, and structures of the RPA low-lying states in \({}^{208}\)Pb. The two-quasiparticle configuration contributions greater than 5% are given. Experimental data are taken from Refs. [39; 40].
Two relativistic self-consistent approaches, the relativistic quasiparticle random phase approximation (RQRPA) and the relativistic quasiparticle time blocking approximation (RQTBA), were employed to compute the isoscalar monopole response in the nuclear systems under study. RQRPA, pioneered in Ref. [43], is restricted to two-quasiparticle (\(2q\)) configurations interacting via the exchange of mesons between nucleons. The effective interaction is a derivative of the self-consistent mean field with respect to the nucleonic density, i.e., both are defined by the same set of eight parameters NL3\({}^{*}\), namely the nucleon-meson coupling constants and meson masses. The latter values are slightly refitted compared to their vacuum values, and non-linear couplings of the scalar meson are adopted to obtain a realistic mean field; the compressibility modulus \(K_{\infty}=258\) MeV corresponds to this parameter set [44]. In most cases, RQRPA reasonably describes the collective states at both low and high energies; however, it is known to overestimate the centroid of the giant monopole resonance in nuclei lighter than lead.
Many details of the nuclear spectra are determined by much more complex wave functions of the excited states than the \(2q\) ones. The leading approximation beyond (R)QRPA includes \(2q\otimes phonon\) configurations, which produce considerable fragmentation of the (R)QRPA modes and generate much richer spectral structures. In the relativistic framework, this approach was first formulated and implemented numerically as the relativistic quasiparticle time blocking approximation in Ref. [45], where it was derived from the phenomenological nucleon-phonon self-energy by the time blocking technique following Ref. [46]. Later, the time blocking was ruled out as an unnecessary step when the response theory is derived from an _ab-initio_ Hamiltonian in the model-independent equation of motion (EOM) framework [47; 48]. In the EOM formalism, RQTBA was obtained as one of the possible approaches to the dynamical kernel, or in-medium long-range interaction, which keeps the leading effects of emergent collectivity. The developments of Refs. [47; 48; 49; 50] also allowed for a self-consistent extension of the theory to the \(2q\otimes 2phonon\) configurations, i.e., to the three-particle-three-hole level, which produces further refinement of the spectral strength distributions.
In Refs. [47; 48] it was shown that all the many-body models for the fermionic response are derivable from the exact _ab-initio_ theory. The QRPA, or one-phonon, approach follows when completely neglecting the EOM's dynamical kernel for the response function and considering only the \(2q\) configurations. The dynamical kernel, which generates complex configurations beyond the \(2q\) ones, couples to the hierarchy of EOMs of growing complexity and may be approximated by a cluster decomposition to make the many-body problem tractable. The minimal truncation on the two-body level leads to the quasiparticle-vibration coupling and multiphonon approaches, depending on the desirable correlation content, which is expressed by Eq. (60) of Ref. [48]. Using an effective interaction instead of the bare interaction between the nucleons requires the subtraction [37], which eliminates the double counting of the complex configurations from the effective interaction, thereby recovering the consistency of the theory.
Both the original and extended versions of RQTBA have demonstrated a good performance in the description of nuclear excited states in both neutral [51; 45; 52] and charge-exchange [53; 54; 55; 56] channels, showing remarkable improvements with respect to RQRPA. Most notably, the \(2q\otimes phonon\) configurations produce a reasonable degree of fragmentation of the \(2q\)-states already in the leading approximation. In particular, the description of the low-energy (soft) modes was refined considerably, which is especially important for the applications to \(r\)-process nucleosynthesis in stellar environments and supernovae evolution [57; 58]. The so-called nuclear fluffiness puzzle was addressed recently in Ref. [59] within the same approach applied to the ISGMR in various nuclei across the nuclear chart. It was shown that the self-consistent relativistic response theory, including \(2q\otimes phonon\) configurations beyond RQRPA, can reasonably describe both the centroids and the widths of the ISGMR in the lead, tin, zirconium, and nickel isotopes. Reference [59] was the major stepping stone on the way to consensus between a softer equation of state extracted from the compressibility of finite nuclei and a stiffer one required by recent analyses of neutron star data.
In this work, we employ the same version of RQTBA as in Ref. [59] with pairing correlations taken into account on an equal footing with the quasiparticle-vibration coupling in terms of the \(2q\otimes phonon\) configurations, which are included up to 50 MeV. The corresponding amplitudes are generated from the characteristics of the relativistic mean-field quasiparticles and RQRPA phonons in a parameter-free way. The phonon model space is truncated using the same criteria as in the series of earlier calculations, for instance, in Refs. [59; 60]. The complete set of the \(2q\) configurations was included in the calculations, which allows for maximal suppression of the spurious component. The subtraction procedure, following Ref. [37], eliminates the \(2q\otimes phonon\) contributions from the effective interaction to avoid their double counting, ensures converged results within the given configuration space, and preserves the decoupling of the spurious state. The imaginary part of the energy variable in the response function, corresponding to half of the width of the resulting peaks, is chosen to be \(\Delta=35\) keV to match the experimental energy resolution of 70 keV.
Fragmentation of the ISGMR due to the \(2q\otimes phonon\) configurations included in RQTBA was found in reasonable agreement with the lower-resolution data of Refs. [2; 61] and [62] for \({}^{208}\)Pb, \({}^{120}\)Sn and \({}^{80}\)Zr, respectively. An accurate comparison was performed and discussed in Ref. [59], where also the ISGMR's centroid shift due to these configurations was investigated and linked to quadrupole collectivity, which is typically enhanced in soft mid-shell nuclei. The fragmentation of the monopole response is overall weaker than that of the higher multipoles, and both the fragmentation and centroid position
are sensitive to the details of the numerical scheme, such as the basis completeness and self-consistency. The latter is ensured by using the same effective interaction in the static and dynamic sectors and by the subtraction procedure.
## IV Fine-structure analysis
Different methods can be employed in order to gain insight into the characteristic energy-scales of the fine structure of giant resonances, such as the entropy index method [63], a multifractal analysis [64], or a method based on the Continuous Wavelet Transform (CWT) [6]. The CWT method was used previously in the analysis of the fine structure observed in the ISGQR [7; 8; 10] and the IVGDR [13; 14], and will therefore also be employed in this study. A brief summary of the formalism and techniques of the wavelet analysis, discussed in detail elsewhere [13], is provided here.
### Wavelet-analysis formalism
Wavelet analysis is an effective tool to analyze multi-scale structures [65]. Fourier analysis can play a similar role through the superposition of sine and cosine functions to analyze periodic signals. However, the sinusoidal functions used to represent the data are non-local and infinite, which makes Fourier analysis inappropriate for the fine-structure analysis of giant resonances. Wavelet analysis offers information on the localization of high-frequency signal aspects [66]. In addition, wavelet analysis is not constrained to the use of sinusoidal functions only. These features together allow a study of the evolution of the frequency pattern of a given signal with optimized resolution. Another useful feature of wavelet analysis is the approximation of any background contribution in the experimental data, through the so-called _vanishing moments_ of the wavelet function.
The choice of a wavelet plays an important role when performing wavelet analysis. The most frequently used functions for wavelet analysis are discussed in Ref. [67]. The detector response of the magnetic spectrometer used in the experiments is well approximated by a Gaussian line shape. As such, for the analysis of the fine structure, the Morlet wavelet consisting of a Gaussian envelope on top of a periodic structure is the most suitable. The Morlet wavelet is given by [68]
\[\Psi(x)=\frac{1}{\sqrt{\pi f_{\rm b}}}\exp\left(2\pi if_{\rm c}x-\frac{x^{2}}{f_{\rm b}}\right)\, \tag{9}\]
where \(f_{\rm b}=2\) and \(f_{\rm c}=1\) are used as the wavelet bandwidth and the center frequency of the wavelet, respectively. Any such wavelet function must meet a set of requirements:
* the function must oscillate with a mean value equal to zero, and
* the function must have finite length.
Mathematically, the above conditions can be written as
\[\int_{-\infty}^{\infty}\Psi^{*}(x)dx=0 \tag{10}\]
and
\[K_{\Psi}=\int_{-\infty}^{\infty}\mid\Psi(x)\mid^{2}dx<\infty\, \tag{11}\]
where \(\Psi(x)\) is a real or complex function used as the mother wavelet, with \(\Psi^{*}(x)\) its complex conjugate. Here, \(K_{\Psi}\) is the norm of the wavelet. The second condition expresses the locality of wavelets. The Continuous Wavelet Transform and the Discrete Wavelet Transform (DWT) are the two categories of wavelet transforms available. Their main properties and the comparison between the two transforms are discussed in Ref. [67]. For the purposes of the present analysis, only the application of the CWT will be discussed.
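Both conditions can be verified numerically for the Morlet wavelet of Eq. (9) with \(f_{\rm b}=2\) and \(f_{\rm c}=1\); the following sketch (ours) shows that the mean vanishes to numerical accuracy, while the norm \(K_{\Psi}\) is finite:

```python
import numpy as np
from scipy.integrate import quad

FB, FC = 2.0, 1.0  # wavelet bandwidth and center frequency, as in the text

def morlet(x):
    """Morlet wavelet of Eq. (9)."""
    return np.exp(2j * np.pi * FC * x - x**2 / FB) / np.sqrt(np.pi * FB)

mean_re = quad(lambda x: morlet(x).real, -20.0, 20.0)[0]
mean_im = quad(lambda x: morlet(x).imag, -20.0, 20.0)[0]
norm = quad(lambda x: abs(morlet(x))**2, -20.0, 20.0)[0]

print(mean_re, mean_im)  # Eq. (10): ~ 2.7e-9 and 0, i.e. zero in practice
print(norm)              # Eq. (11): ~ 0.28, i.e. finite
```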
The convolution of a given signal \(\sigma(E)\) with the wavelet function (generally complex-conjugated) yields the coefficients of the wavelet transform. This is explicitly given by [68]
\[C(\delta E,E_{\rm x})=\frac{1}{\sqrt{\delta E}}\int\sigma(E)\Psi^{*}\left( \frac{E_{\rm x}-E}{\delta E}\right)dE\, \tag{12}\]
where \(C(\delta E,E_{\rm x})\) are the coefficients of the wavelet transform and \(\delta E\) is the scale parameter, which acts as a dilation factor. The parameter \(E_{\rm x}\) shifts the position of the wavelet across the excitation-energy range, hence allowing access to the scale-localization information. The parameters \(\delta E\) and \(E_{\rm x}\) are varied continuously in the framework of the CWT. The values of the coefficients indicate to what extent the form of the scaled and shifted wavelet is close to the original spectrum.
The extraction of wavelet energy-scales can be achieved from the wavelet coefficient plot as peaks in the corresponding power spectrum. The wavelet power spectrum is the projection of the summed squared wavelet coefficients onto the wavelet scale axis
\[P_{\omega}(\delta E)=\frac{1}{N}\sum_{i}\mid C_{i}(\delta E)C_{i}^{*}(\delta E )\mid\, \tag{13}\]
where \(P_{\omega}(\delta E)\) represents the power as a function of the scale \(\delta E\), summed at each scale value over the index \(i=1,\ldots,N\), with \(N\) the number of energy bins in the excitation-energy region considered.
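A compact sketch of the chain of Eqs. (9), (12) and (13) applied to a toy spectrum (both the discretization and the toy signal are our assumptions, not the analysis code used in this work):

```python
import numpy as np

FB, FC = 2.0, 1.0

def morlet(x):
    return np.exp(2j * np.pi * FC * x - x**2 / FB) / np.sqrt(np.pi * FB)

def cwt(energy, sigma, scales):
    """Eq. (12): wavelet coefficients C(dE, Ex) via a discretized convolution."""
    de = energy[1] - energy[0]
    coeffs = np.empty((len(scales), len(energy)), dtype=complex)
    for i, s in enumerate(scales):
        kernel = np.conj(morlet((energy - energy.mean()) / s)) / np.sqrt(s)
        coeffs[i] = np.convolve(sigma, kernel, mode="same") * de
    return coeffs

def power_spectrum(coeffs):
    """Eq. (13): P(dE) = (1/N) sum_i |C_i(dE)|^2."""
    return np.mean(np.abs(coeffs) ** 2, axis=1)

# toy spectrum: broad resonance with fine structure of ~0.2 MeV and ~1 MeV scale
energy = np.linspace(10.0, 24.0, 1401)  # MeV, 10 keV bins
sigma = (np.exp(-((energy - 16.0) / 2.0) ** 2)
         * (1.0 + 0.3 * np.sin(2 * np.pi * energy / 0.2)
                + 0.3 * np.sin(2 * np.pi * energy / 1.0)))

scales = np.linspace(0.05, 2.0, 100)  # MeV
power = power_spectrum(cwt(energy, sigma, scales))
# peaks in `power` appear near the characteristic scales of the toy signal
```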
### Application of the CWT on the ISGMR data
The wavelet analysis was performed following the techniques outlined above. A CWT was used to generate the wavelet coefficients Eq. (12) as a function of excitation energy, for each of the IS0 strength distributions of the nuclei under investigation. In Ref. [15], it was discussed that the IS0 strength distributions extracted with the DoS technique need to be corrected by energy-dependent factors determined from the MDA analysis of previous experiments on the same nuclei. It is, therefore, important to investigate the impact of this dependency on the fine structure analysis.
The sensitivity of the wavelet analysis to the different correction factors is illustrated in Fig. 3 for the case of \({}^{90}\)Zr. IS0 strength distributions obtained with correction factors derived from Refs. [18] and [69] are shown in the top and third row, respectively. The two-dimensional plots of the wavelet coefficients are displayed in the second and fourth panels on the right-hand side of Fig. 3. The intermittent appearance of blue (red) regions, indicating negative (positive) values, results from the oscillatory structure of the mother wavelet (Eq. 9) used in the analysis. Extracted wavelet coefficients are then projected onto the scale axis to generate the power spectrum shown in the two panels on the left-hand side of Fig. 3. These spectra display the distribution of the scales in the excitation-energy region chosen for the analysis. The presence of characteristic scales is indicated by peaks and points of inflection in the power spectra.
When comparing the power spectra resulting from the IS0 strength distributions with different correction factors, it is clear that even though there are relative power changes, very similar scale energies are found. The details of the analysis techniques used in Ref. [15] therefore do not affect the extraction of information on the fine structure of the GMR with wavelet techniques. All results presented in the next section are based on the DoS results that employed the correction factors based on RCNP experiments [17; 18; 19; 20].
## V Damping of the ISGMR - wavelet energy-scales comparison
In this section, the results of the wavelet analysis of the experimental and model IS0 strength functions are presented. They are summarized in Figs. 4 - 7. For each nucleus, different energy regions have been considered for the analysis depending on the location of the main ISGMR peak. These regions are indicated by the vertical dashed lines shown in the panels on the left side of Figs. 4 - 7. Characteristic scales are extracted from the power spectra and displayed as black (experiment), red (QRPA and PPC) and blue (RQRPA and RQTBA) filled circles. The associated error is given by one standard deviation of the width-like scale, corresponding to half of the peak width (FWHM), cf. [14]. For comparison purposes and in order to facilitate the determination of similar scales in the corresponding power spectra from the model calculations, the results obtained from experiments are also displayed as vertical grey bars in all right-side panels of Figs. 4 - 7. For the sake of better display, their widths have been reduced to 2/3 of the standard deviation. The extracted energy scales, both experimental and theoretical, are also listed in Tables 2 - 5. When two scales agree within error, they are placed in the same column to ease comparison between experiment and model results.
Figure 3: Top set (right column): IS0 strength of \({}^{90}\)Zr obtained using the RCNP-based energy-dependent correction factor as determined in Ref. [15]. Top set (lower right): Density plot of the real part of the CWT coefficients of the data. Top set (left column): Corresponding power spectrum for the excitation-energy region indicated by the vertical dashed lines (\(11\text{ MeV}\leq E_{x}\leq 24\text{ MeV}\)) in the top right plot. Bottom set: Same as the top set but for the difference spectrum obtained using TAMU-based energy-dependent correction factors. Bottom set (left column): The corresponding power spectrum shown in black, contrasted with the power spectrum from the top set (blue line).
### General observations
Before entering a detailed discussion for each studied nucleus, we summarize some general observations when comparing experimental and theoretical strength distributions and wavelet scales. Both theoretical approaches describe the energy centroids of the ISGMR in a similar way, with a slight overestimation (about 1 MeV) for the lighter nuclei \({}^{58}\)Ni and \({}^{90}\)Zr and a good reproduction for the heavier cases \({}^{120}\)Sn and \({}^{208}\)Pb. We note that a shift between experimental and theoretical centroids does not impact the CWT. The inclusion of complex configurations leads to an increased fragmentation, but the effects are much stronger in the PPC than in the RQTBA calculations. In fact, except for \({}^{58}\)Ni, the PPC results resemble the experimental widths quite well.
Characteristic scales deduced from the fine structure are significantly modified when going from the QRPA level to the inclusion of two-phonon or \(2q\otimes phonon\) configurations. In most cases additional scales appear, in overall better agreement with the number of scales extracted from the experimental data. The capability to reproduce absolute scale energies varies from case to case, as discussed below. The smallest scale, with values \(130-160\) keV, is prominent in the power spectra of all studied nuclei, but generally much weaker in the theoretical results. Consistent with findings in the IVGDR, this scale is an exclusive signature of the spreading width since it only appears in calculations with inclusion of complex configurations.
### \({}^{58}\)Ni
The CWT of the experimental IS0 strength distribution shows the largest number of scales (7) of the four nuclei studied. The numbers observed for QRPA and RQRPA are 5 and 4, respectively, and no additional scales appear when complex configurations are included. The major experimental scales at 270 and 950 keV are reproduced by all models, while the scale at 580 keV is only seen in the QRPA/PPC approach (possibly shifted to 730 keV in RQTBA).
A scale \(>1\) MeV is seen in all but the RQTBA result. Indeed, this scale is observed in the RQRPA result because of Landau fragmentation into a few main states, while the RQTBA result exhibits a single prominent peak only. Finally, a small scale found to be a generic feature of coupling to \(2q\otimes phonon\) configurations in previous studies of the IVGDR and ISGQR is visible in the PPC result only.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dataset & \multicolumn{6}{c}{Scales (keV)} \\ \hline Expt. & 130 & 170 & 270 & 390 & 580 & 950 & 1500 \\ QRPA & & 190 & 290 & & 580 & 1100 & 1700 \\ PPC & 120 & 270 & & 620 & 1020 & 2000 \\ RQRPA & & & 290 & 420 & & 870 & 1400 \\ RQTBA & & & 260 & 370 & 730 & 1100 & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Energy scales extracted for \({}^{58}\)Ni in the excitation energy region 11 MeV \(\leq E_{\rm x}\leq\) 24 MeV. Equivalent characteristic energy-scale values are vertically aligned.
Figure 4: Left column: Experimental IS0 strength in \({}^{58}\)Ni (top row) in comparison with model predictions (rows 2-5) folded with the experimental energy resolution. The vertical dashed lines indicate the summation region of the wavelet coefficients (\(11-24\) MeV) to determine the power spectra. Right column: Corresponding power spectra. Scales are indicated by filled circles with the associated errors, and for the experimental results additionally by vertical grey bars.
### \({}^{90}\)Zr
A significant effect of the coupling to complex configurations is seen for \({}^{90}\)Zr in all models. The number of scales is increased from 4 to 5 (PPC), respectively 5 to 6 (RQTBA). The PPC and RQTBA results can account for all experimental scales below 1 MeV including the observation of two scales at small energies (\(\leq 200\) keV). An additional weaker scale at 300 keV not seen in the data is predicted by the RQTBA approach. A larger scale \(>1\) MeV consistent with the experimental one at 1200 keV is found in both models but the predicted value (1500 keV) is somewhat large.
### \({}^{120}\)Sn
The experimental summation window for the wavelet power has been reduced to \(11-20\) MeV since the strength at higher excitation energies might be attributed to a less than perfect subtraction of the low-energy flank of the ISGDR that dominates the background cross sections [70]. The 5 experimental scales are to be compared with 4 in the PPC approach (with no change from the QRPA result despite a considerable increase of fragmentation of the strength distribution) and 6 in RQTBA (3 in RQRPA). RQTBA also accounts well for the absolute scale values apart from one (250 keV vs. 360 keV experimentally) and predicts an additional weak scale at 1400 keV not seen in the data. The PPC scales below 1 MeV are systematically shifted to higher values as compared with experiment.
Figure 5: Same as Fig. 4, but for \({}^{90}\)Zr.
Figure 6: Same as Fig. 4, but for \({}^{120}\)Sn and the excitation-energy region from 11 to 20 MeV (experimental data).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Dataset & \multicolumn{7}{c}{Scales (keV)} \\ \hline Expt. & 135 & 180 & 400 & 700 & 1200 & & \\ QRPA & & 270 & 360 & 530 & & 1000 & \\ PPC & 140 & 210 & 330 & & 700 & & 1500 \\ RQRPA & 145 & 220 & 400 & & 980 & 1400 & \\ RQTBA & 140 & 200 & 300 & 420 & 850 & & 1500 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Energy scales extracted for \({}^{90}\)Zr in the excitation energy region 11 MeV \(\leq E_{\rm x}\leq\) 24 MeV. Equivalent characteristic energy-scale values are vertically aligned.
### \({}^{208}\)Pb
Because of the problem of remaining ISGDR strength in the DoS subtraction [70] already mentioned for \({}^{120}\)Sn, the wavelet power summation is restricted to \(11-17\) MeV. Although the same window is used for the theoretical results, this might affect the power spectrum, in particular at larger scale values. Thus, the discussion here is restricted to scales \(<1\) MeV.
In contrast to the \({}^{120}\)Sn case, the inclusion of complex configurations increases the number of scales in the PPC approach to 5, in accordance with experiment, while it remains at 3 when going from RQRPA to RQTBA. The PPC result quantitatively reproduces all scale values within the typical uncertainties. RQTBA reproduces the smallest and largest scale (in the scale region up to 1 MeV).
## VI Conclusions and outlook
In this study, we present high energy-resolution IS0 strength distributions over a wide mass range extracted from measurements of the \((\alpha,\alpha^{\prime})\) reaction at 196 MeV and extreme forward-scattering angles (including \(0^{\circ}\)), revealing significant fine structure. Characteristic energy scales were extracted from a Continuous Wavelet Transform (CWT) analysis of the data to investigate the role of Landau fragmentation and spreading width in the damping of the ISGMR.
The experimental results are compared to microscopic calculations of the ISGMR strength functions based on the QRPA and beyond-QRPA approaches using both non-relativistic and relativistic density functional theory. The extracted experimental energy scales are overall well reproduced by the models: in most cases a number of scales can be approximately matched, but the one-to-one correspondence varies from case to case.
The wavelet scales remain a sensitive measure of the interplay between Landau fragmentation and spreading width in the description of the fine structure of giant resonances [4]. In the case of the ISGMR, Landau damping is prominent in the medium-mass region, while the spreading width increases with mass number and makes the largest contribution in heavy nuclei.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Dataset & \multicolumn{6}{c}{Scales (keV)} \\ \hline Expt. & 130 & 190 & 260 & & 570 & 870 & \\ QRPA & & 240 & 360 & 620 & & \\ PPC & 160 & 220 & 310 & & 520 & 870 & 1250 & 1700 \\ RQRPA & & & 280 & 370 & 570 & & 1100 & \\ RQTBA & 150 & & & 350 & & 720 & 1300 & 1900 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Energy scales extracted for \({}^{208}\)Pb in the excitation energy region 11 MeV \(\leq E_{\mathrm{x}}\leq 16\) MeV. Equivalent characteristic energy-scale values are vertically aligned.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dataset & \multicolumn{6}{c}{Scales (keV)} \\ \hline Expt. & 160 & 360 & & 590 & 950 & & 1900 \\ QRPA & & & 370 & & 570 & 850 & 1400 & \\ PPC & & 240 & & 460 & & 790 & & 1600 \\ RQRPA & & & 220 & 360 & & 670 & & \\ RQTBA & 130 & 250 & & 510 & 1050 & 1400 & 1900 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Energy scales extracted for \({}^{120}\)Sn in the excitation energy region 11 MeV \(\leq E_{\mathrm{x}}\leq 20\) (24) MeV for experimental data (theoretical calculations). Equivalent characteristic energy-scale values are vertically aligned.
Figure 7: Same as Fig. 4, but for \({}^{208}\)Pb and the excitation-energy region from 11 to 16 MeV over which the wavelet coefficients were summed in order to determine the corresponding power spectra.
The relative importance of both contributions is intermediate between the IVGDR, where Landau damping dominates over the spreading width even for heavy nuclei, and the ISGQR, where fine structure is entirely due to coupling to low-lying surface vibrations (except maybe for light nuclei).
The fragmentation of the \(J=0\) response is generally weaker than that of the \(J>0\) ones because of the smaller number of \(2q\otimes phonon\) or \(phonon\otimes phonon\) configurations allowed by angular momentum conservation. Furthermore, both the fragmentation and the centroid position are sensitive to the details of the numerical scheme, such as the basis completeness and self-consistency. The latter is ensured by using the same effective interaction in the static and dynamic sectors and by the subtraction procedure. One question to be addressed in future work is the impact of the subtraction procedure on the PPC approach, considering the different degree of fragmentation with respect to the RQTBA results.
A complete response theory for atomic nuclei should include continuum, unnatural parity and isospin-flip phonons, complex ground-state correlations, and higher-order configurations, which are expected to further affect the fine structure of the strength functions and improve the description of the characteristic energy scales. These effects are beyond the scope of this work and will be addressed by future efforts.
## Acknowledgements
The authors thank the Accelerator Group at iThemba LABS for the high-quality dispersion-matched beam provided for this experiment. This work was supported by the National Research Foundation (NRF) of South Africa (Grant No. 85509, 118846 and 129603), the Deutsche Forschungsgemeinschaft under contract SFB 1245 (Project ID No. 79384907), as well as an NRF-JINR grant (JINR200401510986). A.B. acknowledges financial support through iThemba LABS and the NRF of South Africa. P.A. acknowledges support from the Claude Leon Foundation in the form of a postdoctoral fellowship. E.L. acknowledges support by the GANIL Visitor Program and funding from the National Science Foundation of the United States of America US-NSF under the US-NSF CAREER Grant PHY-1654379 and US-NSF Grant PHY-2209376. N.N.A. acknowledges support from the Russian Science Foundation (Grant No. RSF-21-12-00061).
|
2309.14205 | An alternative evaluation of the leading-order hadronic contribution to
the muon g-2 with MUonE | We propose an alternative method to extract the leading-order hadronic
contribution to the muon g-2, $a_{\mu}^\text{HLO}$, with the MUonE experiment.
In contrast to the traditional method based on the integral of the hadronic
contribution to the running of the effective fine-structure constant
$\Delta\alpha_{had}$ in the space-like region, our approach relies on the
computation of the derivatives of $\Delta\alpha_{had}(t)$ at zero squared
momentum transfer $t$. We show that this approach allows to extract $\sim 99\%$
of the total value of $a_{\mu}^\text{HLO}$ from the MUonE data, while the
remaining $\sim 1\%$ can be computed combining perturbative QCD and data on
$e^+e^-$ annihilation to hadrons. This leads to a competitive evaluation of
$a_{\mu}^\text{HLO}$ which is robust against the parameterization used to model
$\Delta\alpha_{had}(t)$ in the MUonE kinematic region, thanks to the
analyticity properties of $\Delta\alpha_{had}(t)$, which can be expanded as a
polynomial at $t\sim 0$. | Fedor Ignatov, Riccardo Nunzio Pilato, Thomas Teubner, Graziano Venanzoni | 2023-09-25T15:09:02Z | http://arxiv.org/abs/2309.14205v2 | An alternative evaluation of the leading-order hadronic contribution to the muon \(g\!-\!2\) with MUonE
###### Abstract
We propose an alternative method to extract the leading-order hadronic contribution to the muon \(g\!-\!2\), \(a_{\mu}^{\rm HLO}\), with the MUonE experiment. In contrast to the traditional method based on the integral of the hadronic contribution to the running of the electromagnetic coupling, \(\Delta\alpha_{had}\), in the space-like region, our approach relies on the computation of the derivatives of \(\Delta\alpha_{had}(t)\) at zero squared momentum transfer \(t\). We show that this approach allows to extract \(\sim 99\%\) of the total value of \(a_{\mu}^{\rm HLO}\) from the MUonE data, while the remaining \(\sim 1\%\) can be computed combining perturbative QCD and data on \(e^{+}e^{-}\) annihilation to hadrons. This leads to a competitive evaluation of \(a_{\mu}^{\rm HLO}\) which is robust against the parameterization used to model \(\Delta\alpha_{had}(t)\) in the MUonE kinematic region, thanks to the analyticity properties of \(\Delta\alpha_{had}(t)\), which can be expanded as a polynomial at \(t\sim 0\).
## 1 Introduction
The muon anomalous magnetic moment, also known as the muon \(g\!-\!2\), where \(g\) is the muon gyromagnetic ratio, exhibits a discrepancy between theory and experiment which has persisted for more than 20 years. It has received renewed interest, following the first measurement of the muon anomaly \(a_{\mu}=(g-2)/2\) by the Muon \(g\!-\!2\) Experiment at Fermilab [1], subsequently confirmed by the new result with a twofold improved precision [2]. The comparison with the Standard Model (SM) prediction \(a_{\mu}^{\rm SM}\)[3] is currently limited by tensions in the evaluation of the leading-order hadronic contribution to the muon anomaly, \(a_{\mu}^{\rm HLO}\)[4]. This term represents the main source of uncertainty of the theory prediction, due to the non-perturbative nature of QCD at low energy. A recent computation of \(a_{\mu}^{\rm HLO}\) based on lattice QCD, performed by the BMW Collaboration [5], indeed shows a \(2.1\sigma\) tension with the one used in the SM evaluation of \(a_{\mu}\)[3], which is based on a data-driven approach involving data for \(e^{+}e^{-}\to\) hadrons cross sections. Moreover, a new experimental measurement of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) channel from the CMD-3 experiment disagrees with the previous measurements [6]. New calculations from other lattice QCD groups and new results from \(e^{+}e^{-}\) colliders are expected to shed light on these tensions in the next few years [4].
Recently a new approach has been proposed to compute \(a_{\mu}^{\rm HLO}\), based on the measurement of the hadronic contribution to the running of the electromagnetic coupling, \(\Delta\alpha_{had}\), in the space-like region [7]. The elastic scattering of high-energy muons on atomic electrons has been identified as an ideal process for this measurement and an experimental proposal, called MUonE, has been put forward at CERN to extract \(\Delta\alpha_{had}\) from a precise measurement of the shape of the \(\mu^{+}e^{-}\to\mu^{+}e^{-}\) elastic process [8]. The goal of MUonE is to determine \(a_{\mu}^{\rm HLO}\) with a \(\sim 0.3\%\) statistical and a comparable systematic uncertainty, using the following integral [9]:
\[a_{\mu}^{\rm HLO}=\frac{\alpha}{\pi}\int_{0}^{1}dx(1-x)\Delta\alpha_{had}[t(x)],\qquad t(x)=\frac{x^{2}m_{\mu}^{2}}{x-1}<0, \tag{1}\]
where \(\alpha\) is the fine structure constant, \(m_{\mu}\) is the muon mass, and \(t\) is the space-like squared momentum transfer. The \(160\,\)GeV muon beam available at the M2 beamline at CERN allows direct coverage of the momentum transfer range
\(-0.153~{}\mathrm{GeV}^{2}<t<-0.001~{}\mathrm{GeV}^{2}\), which is equivalent to \(0.258<x<0.936\). This corresponds to \(\sim 86\%\) of the integral in Eq. 1, while the remaining fraction can be obtained by extrapolating \(\Delta\alpha_{had}(t)\) outside the MUonE region by an appropriate analytical function or alternatively by using lattice data. In the first case the space-like integral of Eq. 1 is sensitive to the behaviour of the parameterization chosen to model \(\Delta\alpha_{had}(t)\) in the whole \(t\)-region, particularly in the asymptotic limit \(t\to-\infty\), which could affect the extraction of \(a_{\mu}^{\mathrm{HLO}}\).1 In the following, we will discuss a different approach based on the evaluation of the derivatives of \(\Delta\alpha_{had}(t)\) at zero momentum transfer. This leads to an evaluation of \(a_{\mu}^{\mathrm{HLO}}\) which is rather insensitive to the functional form adopted to describe the behaviour of \(\Delta\alpha_{had}(t)\) and will provide an alternative and competitive way to determine \(a_{\mu}^{\mathrm{HLO}}\) with MUonE.
Footnote 1: A convenient choice based on the analytic formula for the leading-order QED contribution to the running of \(\alpha\) in the space-like region allows to compute \(a_{\mu}^{\mathrm{HLO}}\) at the required level of precision with negligible bias when time-like data are used as input [10] (see also Section 3).
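The quoted kinematic range can be checked with a few lines (a sketch; the inversion of \(t(x)\) is elementary and the muon mass is the PDG value):

```python
import math

M_MU = 0.1056583755  # muon mass in GeV

def t_of_x(x):
    """t(x) of Eq. 1, in GeV^2."""
    return x**2 * M_MU**2 / (x - 1.0)

def x_of_t(t):
    """Root of x^2 m^2 - t x + t = 0 lying in (0, 1) for t < 0."""
    return (t + math.sqrt(t**2 - 4.0 * M_MU**2 * t)) / (2.0 * M_MU**2)

print(x_of_t(-0.153), x_of_t(-0.001))  # ~ 0.936 and ~ 0.258
print(t_of_x(0.936))                   # ~ -0.153 GeV^2
```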
## 2 Description of the method
The new method is mainly based on [11; 12], where different quark flavours have been treated separately, and the dominant light-quark contributions to the hadronic vacuum polarization function \(\Pi_{had}(s)\) have been computed either through a model-dependent approach [11], or using lattice QCD calculations [12]. The same strategy is not applicable to MUonE, since MUonE will provide an inclusive measurement of \(\Pi_{had}(t)\) containing contributions from all the quark flavours. In the following, we summarize the original procedure and describe how it can be adapted to the MUonE case. We start from the well known dispersion relation [13; 14; 15; 16]
\[a_{\mu}^{\mathrm{HLO}}=\frac{\alpha^{2}}{3\pi^{2}}\int_{s_{\mathrm{th}}}^{ \infty}\frac{ds}{s}K(s)R(s), \tag{2}\]
where, due to contributions from the \(\pi^{0}\gamma\) channel, the threshold \(s_{\mathrm{th}}\) is usually identified as \(m_{\pi^{0}}^{2}\), with \(m_{\pi^{0}}\) being the \(\pi^{0}\) mass. The kernel function \(K(s)\) is given by
\[K(s)=\int_{0}^{1}dx\frac{x^{2}(1-x)}{x^{2}+\frac{s}{m_{\mu}^{2}}(1-x)}, \tag{3}\]
and \(R(s)\) is the ratio of the bare (i.e. excluding vacuum polarization effects) \(e^{+}e^{-}\to\) hadrons annihilation cross section to the Born \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) pointlike one, \(\sigma_{pt}=4\pi\alpha^{2}/(3s)\). Taking advantage of the optical theorem, \(R(s)\) can be related to the imaginary part of the hadronic vacuum polarization function \(\Pi_{had}(s)\) as follows:
\[-\mathrm{Im}\Pi_{had}(s)=\frac{\alpha}{3}R(s). \tag{4}\]
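To illustrate the machinery of Eq. 2 before splitting the integral, the following toy evaluation (a sketch, not a realistic computation) integrates the kernel of Eq. 3 against an invented \(R(s)\) consisting of a single broad resonance:

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.035999084
M_MU = 0.1056583755   # GeV
S_TH = 0.1349768**2   # (m_pi0)^2 in GeV^2

def kernel(s):
    """Exact kernel K(s) of Eq. 3, by numerical integration."""
    integrand = lambda x: x**2 * (1 - x) / (x**2 + (s / M_MU**2) * (1 - x))
    return quad(integrand, 0.0, 1.0)[0]

def r_toy(s):
    # invented Breit-Wigner-like bump near sqrt(s) ~ 0.77 GeV (illustrative only)
    return 2.0 * 0.1**2 / ((s - 0.77**2)**2 + 0.1**2)

# Eq. 2, with the upper limit truncated at 100 GeV^2 for this toy example
value, _ = quad(lambda s: kernel(s) * r_toy(s) / s, S_TH, 100.0, limit=200)
a_mu_toy = ALPHA**2 / (3 * np.pi**2) * value
print(a_mu_toy)
```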
The dispersive integral in Eq. 2 can be split into two terms at a given value \(s_{0}\), above which \(R(s)\) can be safely computed using perturbative QCD (pQCD). As originally proposed in [11]2, it is convenient to approximate \(K(s)\) by a meromorphic function \(K_{1}(s)\) for \(s\leq s_{0}\),
Footnote 2: Different approaches to evaluate the dispersive integral by an approximate kernel function have been discussed in [17; 18].
\[K_{1}(s)=c_{0}s+\sum_{n=1}^{3}\frac{c_{n}}{s^{n}}, \tag{5}\]
and the low energy part of the dispersive integral can be written as
\[-\frac{\alpha}{\pi}\int_{s_{\mathrm{th}}}^{s_{0}}\frac{ds}{s}K(s)\frac{ \mathrm{Im}\Pi_{had}(s)}{\pi}~{}=~{}-\frac{\alpha}{\pi}\left[\int_{s_{\mathrm{ th}}}^{s_{0}}\frac{ds}{s}[K(s)-K_{1}(s)]\frac{\mathrm{Im}\Pi_{had}(s)}{\pi}~{}+ \int_{s_{\mathrm{th}}}^{s_{0}}\frac{ds}{s}K_{1}(s)\frac{\mathrm{Im}\Pi_{had}( s)}{\pi}\right], \tag{6}\]
where Eq. 4 was used to express \(R(s)\) in terms of the hadronic vacuum polarization function. Cauchy's theorem can then be employed to handle the second term on the right-hand side [11; 12]:
\[\int_{s_{\mathrm{th}}}^{s_{0}}\frac{ds}{s}K_{1}(s)\frac{\mathrm{Im}\Pi_{had}( s)}{\pi}~{}=~{}\mathrm{Res}\left[\Pi_{had}(s)\frac{K_{1}(s)}{s}\right]_{s=0}~{}-~{} \frac{1}{2\pi i}\oint_{|s|=s_{0}}\frac{ds}{s}K_{1}(s)\Pi_{had}(s)\bigg{|}_{ \mathrm{pQCD}}. \tag{7}\]
Here, the contour integral around the circle of radius \(s_{0}\) can be calculated using pQCD to evaluate the hadronic vacuum polarization function, whereas the residue can be written in terms of derivatives of \(\Pi_{had}(s)\) at zero momentum transfer. Exploiting the functional form of the approximated kernel in Eq. 5,
\[\mathrm{Res}\left[\Pi_{had}(s)\frac{K_{1}(s)}{s}\right]_{s=0}\ =\ \sum_{n=1}^{3}\frac{c_{n}}{n!}\frac{d^{(n)}}{ds^{n}}\Pi_{had}(s)\bigg{|}_{s= 0}\ =\ \sum_{n=1}^{3}\frac{c_{n}}{n!}\frac{d^{(n)}}{dt^{n}}\Delta\alpha_{had}(t) \bigg{|}_{t=0}, \tag{8}\]
where the analyticity of \(\Pi_{had}\) and its derivatives at zero momentum transfer has been exploited to move the evaluation of the hadronic vacuum polarization from positive to negative squared momentum transfer. The relation \(\Delta\alpha_{had}(t)=\mathrm{Re}\Pi_{had}(t)\) has been used in the last step to link the hadronic vacuum polarization function to the hadronic contribution to the running of \(\alpha\).
The high energy tail of the dispersive integral can be treated in a similar way:
\[-\frac{\alpha}{\pi}\int_{s_{0}}^{\infty}\frac{ds}{s}K(s)\frac{ \mathrm{Im}\Pi_{had}(s)}{\pi}\ =\ -\frac{\alpha}{\pi}\left[\int_{s_{0}}^{\infty}\frac{ds}{s}[K(s)-\tilde{K}_{1} (s)]\frac{\mathrm{Im}\Pi_{had}(s)}{\pi}\ +\ \int_{s_{0}}^{\infty}\frac{ds}{s}\tilde{K}_{1}(s)\frac{ \mathrm{Im}\Pi_{had}(s)}{\pi}\right], \tag{9}\]
where \(\tilde{K}_{1}(s)=K_{1}(s)-c_{0}s\). Following the same technique implemented for the low energy component, Cauchy's theorem can be applied to the second integral on the right hand side of Eq. 9, using the red closed path shown in Fig. 1. In this case, the integrand is free of poles and the contour integral over \(s\) with radius \(|s|\to\infty\) is vanishing. This leads to
\[\int_{s_{0}}^{\infty}\frac{ds}{s}\tilde{K}_{1}(s)\frac{\mathrm{Im}\Pi_{had}(s )}{\pi}\ =\ \frac{1}{2\pi i}\oint_{|s|=s_{0}}\frac{ds}{s}\tilde{K}_{1}(s)\Pi_{had}(s) \bigg{|}_{\mathrm{pQCD}}. \tag{10}\]
Rearranging Eqs. 6, 7, 9 and 10, \(a_{\mu}^{\mathrm{HLO}}\) can be calculated as the sum of four terms
\[a_{\mu}^{\mathrm{HLO}}\ =\ a_{\mu}^{\mathrm{HLO}\ (\mathrm{I})}+a_{\mu}^{ \mathrm{HLO}\ (\mathrm{II})}+a_{\mu}^{\mathrm{HLO}\ (\mathrm{III})}+a_{\mu}^{ \mathrm{HLO}\ (\mathrm{IV})}, \tag{11}\]
Figure 1: Blue (red): closed path in the complex \(s\)-plane used to calculate the contour integral in Eq. 7 (Eq. 10).
where
\[a_{\mu}^{\rm HLO\;(I)} = -\frac{\alpha}{\pi}\sum_{n=1}^{3}\frac{c_{n}}{n!}\frac{d^{(n)}}{dt^{n}}\Delta\alpha_{had}(t)\bigg{|}_{t=0}, \tag{12}\] \[a_{\mu}^{\rm HLO\;(II)} = \frac{\alpha}{\pi}\frac{1}{2\pi i}\oint_{|s|=s_{0}}\frac{ds}{s}c_{0}s\;\Pi_{had}(s)\bigg{|}_{\rm pQCD},\] (13) \[a_{\mu}^{\rm HLO\;(III)} = \frac{\alpha^{2}}{3\pi^{2}}\int_{s_{\rm th}}^{s_{0}}\frac{ds}{s}[K(s)-K_{1}(s)]R(s),\] (14) \[a_{\mu}^{\rm HLO\;(IV)} = \frac{\alpha^{2}}{3\pi^{2}}\int_{s_{0}}^{\infty}\frac{ds}{s}[K(s)-\tilde{K}_{1}(s)]R(s). \tag{15}\]
\(a_{\mu}^{\rm HLO\;(I)}\) will be computed using MUonE data. This term represents \(\sim\!99\%\) of the total value of \(a_{\mu}^{\rm HLO}\), as will be shown in the following. The other three terms contribute to the remaining \(\sim 1\%\). \(a_{\mu}^{\rm HLO\;(II)}\) will be calculated via pQCD, \(a_{\mu}^{\rm HLO\;(III)}\) with \(e^{+}e^{-}\) data, while both \(e^{+}e^{-}\) data and pQCD will be used to compute \(a_{\mu}^{\rm HLO\;(IV)}\).
In the following sections, the calculation of the different contributions will be discussed in detail. The robustness of the proposed method will be tested using three different values of \(s_{0}\): \((1.8\,{\rm GeV})^{2}\), \((2.5\,{\rm GeV})^{2}\) and \((12\,{\rm GeV})^{2}\). Two different techniques will be used to determine the coefficients of \(K_{1}(s)\). The first (called _Minimization 1_) consists of a least squares minimization of the difference between the approximated and the analytical kernel. The second (_Minimization 2_) aims at minimizing the contribution of \(e^{+}e^{-}\) data in the calculation of \(a_{\mu}^{\rm HLO}\), and is carried out by fixing the coefficient \(c_{3}\) to be 1/2 of its value obtained from _Minimization 1_, then minimizing \(\int_{s_{\rm th}}^{s_{0}}\frac{ds}{s}|K(s)-K_{1}(s)|R(s)\) to find the other coefficients. Figure 2 shows the fractional difference between \(K_{1}(s)\) and the exact kernel for both the minimization techniques and the choice \(s_{0}=(1.8\,{\rm GeV})^{2}\). Table 1 shows the coefficients for the two minimizations. Both methods provide a good approximation of \(K(s)\), with a different sensitivity to the third derivative \(\frac{d^{3}\Delta\alpha_{had}(t)}{dt^{3}}\) at \(t=0\) (which will be shown to have the largest uncertainty, see Table 2).
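A sketch of _Minimization 1_ as we read it (the grid and the unweighted least-squares choice are our assumptions): the exact kernel of Eq. 3 is evaluated numerically and the coefficients of Eq. 5 are fitted between \(s_{\rm th}\) and \(s_{0}\):

```python
import numpy as np
from scipy.integrate import quad

M_MU = 0.1056583755            # GeV
M_PI0 = 0.1349768              # GeV
S_TH, S0 = M_PI0**2, 1.8**2    # GeV^2

def kernel(s):
    """Exact kernel K(s) of Eq. 3, by numerical integration."""
    integrand = lambda x: x**2 * (1.0 - x) / (x**2 + (s / M_MU**2) * (1.0 - x))
    return quad(integrand, 0.0, 1.0)[0]

s_grid = np.linspace(S_TH, S0, 2000)
k_exact = np.array([kernel(s) for s in s_grid])

# basis of the meromorphic approximant K_1(s), Eq. 5
basis = np.column_stack([s_grid, 1.0/s_grid, 1.0/s_grid**2, 1.0/s_grid**3])
c0, c1, c2, c3 = np.linalg.lstsq(basis, k_exact, rcond=None)[0]
print(c0, c1, c2, c3)  # of the same order as the Table 1 coefficients
```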
Figure 2: Fractional difference between the approximated kernel \(K_{1}(s)\) and the exact kernel \(K(s)\) for the two minimization methods for \(s_{0}=(1.8\,{\rm GeV})^{2}\). The insertion shows the exact kernel in the same range.
## 3 Evaluation of \(a_{\mu}^{\text{HLO (I)}}\)
The first term \(a_{\mu}^{\text{HLO (I)}}\) in Eq. 11 depends on the derivatives of \(\Delta\alpha_{had}(t)\) at zero momentum transfer, which can be obtained by fitting the MUonE data with a convenient functional form. Since space-like data in the MUonE range are not available, we use different data compilations of \(\Delta\alpha_{had}(t)\) obtained from \(e^{+}e^{-}\) data in the time-like region using the dispersive integral:
\[\Delta\alpha_{had}(q^{2})=-\frac{\alpha}{3\pi}q^{2}\text{P}\int_{s_{th}}^{\infty}ds\frac{R(s)}{s(s-q^{2})}, \tag{16}\]
where \(q^{2}\) is a generic squared momentum transfer and P denotes the principal value of the integral. The data compilations produced in [19], [20] and [21], indicated respectively as Dataset I, II and III, will be used. Figure 3 shows the difference between the three compilations. Deviations from 0 are due to different strategies adopted in combining data from various experiments. The influence of the hadronic resonances on the vacuum polarization effects is evident for positive squared momentum transfer (Figure 3, Left), whereas \(\Delta\alpha_{had}(q^{2})\) is a smooth function for \(q^{2}<0\), as shown in Figure 3, Right.
To assess the sensitivity of our method to the parameterization adopted for \(\Delta\alpha_{had}(t)\), we tested several possible parameterizations which are capable of reproducing the behaviour of \(\Delta\alpha_{had}(t)\) in the MUonE kinematic range:
1. "Lepton Like" (LL) parameterization. This is inspired by the one-loop QED result for the vacuum polarisation induced by a lepton pair in the space-like region, and is also used in MUonE to calculate \(a_{\mu}^{\text{HLO}}\) through the space-like integral in Eq. 1[10]: \[\Delta\alpha_{\text{had}}(t)=KM\left(-\frac{5}{9}-\frac{4M}{3t}+\left(\frac{4 M^{2}}{3t^{2}}+\frac{M}{3t}-\frac{1}{6}\right)\frac{2}{\sqrt{1-\frac{4M}{t}}} \log\left|\frac{1-\sqrt{1-\frac{4M}{t}}}{1+\sqrt{1-\frac{4M}{t}}}\right| \right),\] (17)
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Minimization 1} & \multicolumn{3}{c}{Minimization 2} \\ \hline Coefficients & \((1.8\text{ GeV})^{2}\) & \((2.5\text{ GeV})^{2}\) & \((12\text{ GeV})^{2}\) & \((1.8\text{ GeV})^{2}\) & \((2.5\text{ GeV})^{2}\) & \((12\text{ GeV})^{2}\) \\ \hline \(c_{0}\cdot 10^{5}\) & 2.206 & 0.7326 & 0.002164 & 2.419 & 1.011 & 0.003743 \\ \(c_{1}\cdot 10^{3}\) & 3.486 & 3.512 & 3.555 & 3.482 & 3.494 & 3.520 \\ \(c_{2}\cdot 10^{4}\) & -1.484 & -1.554 & -1.684 & -1.402 & -1.443 & -1.564 \\ \(c_{3}\cdot 10^{6}\) & 4.869 & 5.294 & 6.128 & 2.434 & 2.647 & 3.064 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Coefficients of the approximated kernel function \(K_{1}(s)\) for the two minimizations and the three choices of \(s_{0}\).
Figure 3: Difference of the \(\Delta\alpha_{had}\) compilations of Dataset II and III with respect to Dataset I in the range \(-2.5<\sqrt{q^{2}}<2\) GeV (Left) and for the MUonE kinematic range (Right). The insertions show the behaviour of \(\Delta\alpha_{had}\) computed using Dataset I.
where \(M\) is the squared lepton mass and \(K=\alpha/(\pi M)\) in the leptonic case. By contrast, these two parameters do not have a precise physics interpretation when modeling the hadronic running, since \(\Delta\alpha_{had}(t)\) is not calculable in perturbation theory. In the limit of very small \(t\), the LL function is approximated by the following expansion: \[\Delta\alpha_{\rm had}(t)=-K\left(\frac{t}{15}+\frac{t^{2}}{140M}+\frac{t^{3}}{945M^{2}}+O(t^{4})\right),\] (18) which corresponds to the dominant behaviour in the MUonE kinematic range. The first three derivatives at zero momentum transfer are: \[D1=\frac{d\Delta\alpha_{\rm had}(t)}{dt}\bigg{|}_{t=0}=-\frac{K}{15},\qquad D2=\frac{d^{2}\Delta\alpha_{\rm had}(t)}{dt^{2}}\bigg{|}_{t=0}=-\frac{K}{70M},\qquad D3=\frac{d^{3}\Delta\alpha_{\rm had}(t)}{dt^{3}}\bigg{|}_{t=0}=-\frac{2K}{315M^{2}}.\]
2. A Pade approximant (Pade parameterization) with three free parameters \(P_{1,2,3}\): \[\Delta\alpha_{\rm had}(t)=P_{1}t\frac{1+P_{2}t}{1+P_{3}t}.\] (19) The expansion in the limit of very small \(t\) gives the following expression: \[\Delta\alpha_{\rm had}(t)=P_{1}t+P_{1}(P_{2}-P_{3})t^{2}+P_{1}P_{3}(-P_{2}+P _{3})t^{3}+O(t^{4}),\] (20) with the first three derivatives at zero momentum transfer: \[D1=P_{1},\qquad D2=2P_{1}(P_{2}-P_{3}),\qquad D3=-6P_{1}P_{3}(P_{2}-P_{3}).\]
3. A third order polynomial (Pol parameterization) with three free parameters \(P_{1,2,3}\). Since \(\Delta\alpha_{had}(t=0)=0\), the \(P_{0}\) term is fixed to be zero: \[\Delta\alpha_{\rm had}(t)=P_{1}t+P_{2}t^{2}+P_{3}t^{3},\] (21) which gives the following derivatives: \[D1=P_{1},\qquad D2=2P_{2},\qquad D3=6P_{3}.\]
4. "Reconstruction Approximants" (GdR parameterization) [22]. This is based on the analytic properties of the hadronic vacuum polarization function \(\Pi_{had}(t)\) in QCD and takes the form [22]: \[\Delta\alpha_{\rm had}(t)=\sum_{n=1}^{\rm N}\mathcal{A}(n,{\rm L})\left( \frac{\sqrt{1-\frac{t}{t_{0}}}-1}{\sqrt{1-\frac{t}{t_{0}}}+1}\right)^{n}+\sum_ {p=1}^{\left\lfloor\frac{{\rm L}+1}{2}\right\rfloor}\mathcal{B}(2p-1)\ {\rm Li}_{2p-1} \left(\frac{\sqrt{1-\frac{t}{t_{0}}}-1}{\sqrt{1-\frac{t}{t_{0}}}+1}\right),\] (22) where \(\mathcal{A}(n,{\rm L})\) and \(\mathcal{B}(2p-1)\) are free parameters which can be constrained by theory [22], \(t_{0}\) is an energy scale (set in this work to \(4m_{\pi^{\pm}}^{2}\)), and \({\rm Li}_{2p-1}\) are Polylogarithms of degree \(2p-1\). In the following, we limit the expansion to the case L=1, N=3: \[\Delta\alpha_{\rm had}(t)=A_{1}\mathcal{S}_{1}\ +\ A_{2}\mathcal{S}_{2}\ +\ A_{3} \mathcal{S}_{3}\ +\ B_{1}\mathcal{L}_{1},\] (23) where \[\mathcal{S}_{i} = \left(\frac{\sqrt{1-\frac{t}{t_{0}}}-1}{\sqrt{1-\frac{t}{t_{0}}}+ 1}\right)^{i},\qquad\quad A_{i}=\mathcal{A}(i,1)\quad\mbox{for}\ \ i=1,\ 2,\ 3,\] \[\mathcal{L}_{1} = {\rm Li}_{1}\left(\frac{\sqrt{1-\frac{t}{t_{0}}}-1}{\sqrt{1-\frac {t}{t_{0}}}+1}\right),\qquad\quad B_{1}=\mathcal{B}(1).\]
In the limit of small momentum transfer, the GdR function can be approximated by: \[\Delta\alpha_{\rm had}(t)=-\frac{(A_{1}+B_{1})t}{4t_{0}}-\frac{(4A_{1}-2A_{2}+3B_{1})t^{2}}{32t_{0}^{2}}-\frac{(3(5A_{1}-4A_{2}+A_{3})+10B_{1})\,t^{3}}{192t_{0}^{3}}+O(t^{4}).\] The first three derivatives at zero momentum transfer can be computed as (a symbolic cross-check of these expansions is sketched below, after the list of variants): \[D1=-\frac{A_{1}+B_{1}}{4t_{0}},\qquad D2=\frac{-4A_{1}+2A_{2}-3B_{1}}{16t_{0}^{2}},\qquad D3=\frac{-15A_{1}+12A_{2}-3A_{3}-10B_{1}}{32t_{0}^{3}}.\] Five different variants of this parameterization have been considered: 1. GdR1: \(\Delta\alpha_{\rm had}(t)=A_{1}\mathcal{S}_{1}+B_{1}\mathcal{L}_{1}\), where \(A_{1}\) is a free parameter and \(B_{1}\) is constrained to its six-quark asymptotic freedom value: \(B_{1}=2\frac{\alpha}{\pi}\frac{5}{3}=0.00774273\)[22]. 2. GdR2: \(\Delta\alpha_{\rm had}(t)=A_{1}\mathcal{S}_{1}+B_{1}\mathcal{L}_{1}\), where \(A_{1}\) and \(B_{1}\) are free parameters. 3. GdR3: \(\Delta\alpha_{\rm had}(t)=A_{1}\mathcal{S}_{1}+A_{2}\mathcal{S}_{2}+B_{1}\mathcal{L}_{1}\), where \(A_{1},A_{2}\) are free parameters and \(B_{1}\) is constrained to its six-quark asymptotic freedom value. 4. GdR4: \(\Delta\alpha_{\rm had}(t)=A_{1}\mathcal{S}_{1}+A_{2}\mathcal{S}_{2}+A_{3}\mathcal{S}_{3}+B_{1}\mathcal{L}_{1}\), where \(A_{1},A_{2}\) are free parameters, \(A_{3}\) is constrained to \(A_{3}=(2A_{2}-A_{1}-B_{1}/2)/3\)[22] and \(B_{1}\) is constrained to its six-quark asymptotic freedom value. 5. GdR5: \(\Delta\alpha_{\rm had}(t)=A_{1}\mathcal{S}_{1}+A_{2}\mathcal{S}_{2}+A_{3}\mathcal{S}_{3}+B_{1}\mathcal{L}_{1}\), where \(A_{1},A_{2},A_{3}\) are free parameters and \(B_{1}\) is constrained to its six-quark asymptotic freedom value.
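The small-\(t\) expansions quoted above can be cross-checked symbolically (a sketch, using sympy):

```python
import sympy as sp

t = sp.Symbol('t')

# Pade approximant: the series of Eq. 19 reproduces Eq. 20
P1, P2, P3 = sp.symbols('P1 P2 P3')
pade = sp.series(P1*t*(1 + P2*t)/(1 + P3*t), t, 0, 4).removeO().expand()
print(sp.factor(pade.coeff(t, 3)))  # -> P1*P3*(P3 - P2), i.e. D3/6

# GdR parameterization: series of Eq. 23, with Li_1(w) = -log(1 - w)
t0, A1, A2, A3, B1 = sp.symbols('t0 A1 A2 A3 B1', positive=True)
w = (sp.sqrt(1 - t/t0) - 1) / (sp.sqrt(1 - t/t0) + 1)
gdr = A1*w + A2*w**2 + A3*w**3 - B1*sp.log(1 - w)
series = sp.series(gdr, t, 0, 4).removeO().expand()
for n in (1, 2, 3):
    print(sp.factor(series.coeff(t, n)))  # -> the coefficients quoted above
```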
For each \(\Delta\alpha_{had}(t)\) compilation, \(10^{4}\) pseudo-experiments were simulated in the MUonE momentum transfer region -0.153 GeV\({}^{2}<t<-0.001\) GeV\({}^{2}\). The upper limit is needed to reproduce the geometric acceptance of MUonE, which will include all the elastic events where the electron angle is \(<32\,\)mrad. Statistical fluctuations have been added independently to each \(\Delta\alpha_{had}(t)\) point according to the planned full integrated luminosity of MUonE, 15\(\,\)fb\({}^{-1}\)[8]. A \(\chi^{2}\) fit has been performed for each pseudo-experiment using all the parameterizations described above, and for each iteration both the values of \(a_{\mu}^{\rm HLO}\) (according to Eq. 1) and of \(a_{\mu}^{\rm HLO\;(I)}\) were calculated. This allowed us to test the robustness of the proposed method extensively.
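A simplified version of a single pseudo-experiment (illustrative only: the toy truth, the binning and the point-by-point uncertainty are our assumptions, not the MUonE simulation):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

t = np.linspace(-0.153, -0.001, 100)  # GeV^2, MUonE window
# toy truth built from the dispersive derivatives quoted in the text below
truth = -9.2e-3*t + (-3.8e-2)/2.0*t**2 + (-3.9e-1)/6.0*t**3
sigma = 2e-6 * np.ones_like(t)        # assumed point-by-point uncertainty
data = truth + rng.normal(0.0, sigma)

def pol(t, p1, p2, p3):               # Eq. 21, with P_0 fixed to zero
    return p1*t + p2*t**2 + p3*t**3

popt, _ = curve_fit(pol, t, data, sigma=sigma, absolute_sigma=True)
d1, d2, d3 = popt[0], 2.0*popt[1], 6.0*popt[2]
print(d1, d2, d3)  # compare with the input -9.2e-3, -3.8e-2, -3.9e-1
```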
As an example, Fig. 4, Left, shows the fit results of a few parameterizations of \(\Delta\alpha_{had}(t)\) for a given pseudo-experiment generated with Dataset I. All the parameterizations describe the MUonE simulated data well. However, differences arise outside the MUonE experimental range, as demonstrated in Fig. 4, Right, which shows the integrand in Eq. 1. In particular, the Padé approximant and the third order polynomial fail to describe \(\Delta\alpha_{had}(t)\) for \(x\to 1\), which corresponds to \(t\to-\infty\). As a consequence, values of \(a_{\mu}^{\rm HLO}\) computed from the integral in Eq. 1 using these two approximations will not be in agreement with the expected (input) value. On the other hand, the results of the three derivatives are quite stable, as shown in Table 2, which compares them for all the different parameterizations used and the three \(\Delta\alpha_{had}(t)\) compilations. These values must be compared with the ones obtained from the corresponding dispersive integrals calculated directly from the time-like data, using Dataset I: \(D1=(-9.24\pm 0.04)\cdot 10^{-3}\), \(D2=(-3.86\pm 0.02)\cdot 10^{-2}\), \(D3=(-3.90\pm 0.03)\cdot 10^{-1}\). Table 3 shows the values of \(a_{\mu}^{\rm HLO\;(I)}\) for the different parameterizations and different \(s_{0}\) values for the two minimizations, using Eq. 12 with Dataset I. As can be seen, the results are quite stable, independent of the parameterization used. This is a consequence of the analyticity of the hadronic vacuum polarization function \(\Pi_{had}(t)\), which can be approximated by a polynomial through a Taylor expansion of \(\Delta\alpha_{had}(t)\) at \(t\sim 0\) using Eq. 16.
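As a worked cross-check of Eq. 12, combining the _Minimization 1_ coefficients of Table 1 at \(s_{0}=(1.8\,{\rm GeV})^{2}\) with the dispersive derivatives quoted above (in units of GeV\({}^{-2}\), GeV\({}^{-4}\) and GeV\({}^{-6}\)) reproduces the corresponding entries of Table 3:

```python
import math

ALPHA = 1.0 / 137.035999084  # fine structure constant

c1, c2, c3 = 3.486e-3, -1.484e-4, 4.869e-6   # Table 1, Minimization 1, (1.8 GeV)^2
d1, d2, d3 = -9.24e-3, -3.86e-2, -3.90e-1    # dispersive derivatives, Dataset I

a_mu_I = -(ALPHA / math.pi) * (c1*d1 + c2/2.0*d2 + c3/6.0*d3)
print(a_mu_I * 1e10)  # ~ 689, in units of 10^-10, cf. Table 3
```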
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{Dataset} \\ \hline Parameterization & Values & I & II & III \\ \hline \multirow{3}{*}{LL} & \(D1\) & -9.23\(\pm\)0.05 & -9.17\(\pm\)0.05 & -9.20\(\pm\)0.05 \\ & \(D2\) & -3.76\(\pm\)0.22 & -3.77\(\pm\)0.22 & -3.73\(\pm\)0.22 \\ & \(D3\) & -3.19\(\pm\)0.35 & -3.21\(\pm\)0.35 & -3.14\(\pm\)0.35 \\ \hline \multirow{3}{*}{Padé} & \(D1\) & -9.24\(\pm\)0.08 & -9.18\(\pm\)0.08 & -9.20\(\pm\)0.08 \\ & \(D2\) & -3.81\(\pm\)0.62 & -3.81\(\pm\)0.62 & -3.75\(\pm\)0.61 \\ & \(D3\) & -2.14\(\pm\)2.42 & -2.21\(\pm\)2.41 & -2.03\(\pm\)2.42 \\ \hline \multirow{3}{*}{Pol} & \(D1\) & -9.22\(\pm\)0.07 & -9.16\(\pm\)0.07 & -9.19\(\pm\)0.07 \\ & \(D2\) & -3.55\(\pm\)0.45 & -3.56\(\pm\)0.45 & -3.50\(\pm\)0.44 \\ & \(D3\) & -1.83\(\pm\)0.89 & -1.84\(\pm\)0.88 & -1.79\(\pm\)0.88 \\ \hline \multirow{3}{*}{GdR1} & \(D1\) & -9.26\(\pm\)0.03 & -9.19\(\pm\)0.03 & -9.23\(\pm\)0.03 \\ & \(D2\) & -3.92\(\pm\)0.04 & -3.82\(\pm\)0.04 & -3.88\(\pm\)0.04 \\ & \(D3\) & -3.03\(\pm\)0.10 & -2.80\(\pm\)0.10 & -2.93\(\pm\)0.10 \\ \hline \multirow{3}{*}{GdR2} & \(D1\) & -9.23\(\pm\)0.06 & -9.18\(\pm\)0.06 & -9.20\(\pm\)0.06 \\ & \(D2\) & -3.73\(\pm\)0.31 & -3.75\(\pm\)0.31 & -3.69\(\pm\)0.31 \\ & \(D3\) & -2.46\(\pm\)0.94 & -2.58\(\pm\)0.94 & -2.33\(\pm\)0.95 \\ \hline \multirow{3}{*}{GdR3} & \(D1\) & -9.23\(\pm\)0.06 & -9.18\(\pm\)0.06 & -9.20\(\pm\)0.06 \\ & \(D2\) & -3.70\(\pm\)0.36 & -3.74\(\pm\)0.36 & -3.65\(\pm\)0.36 \\ & \(D3\) & -2.25\(\pm\)1.30 & -2.47\(\pm\)1.31 & -2.13\(\pm\)1.29 \\ \hline \multirow{3}{*}{GdR4} & \(D1\) & -9.24\(\pm\)0.06 & -9.19\(\pm\)0.06 & -9.21\(\pm\)0.06 \\ & \(D2\) & -3.89\(\pm\)0.31 & -3.91\(\pm\)0.32 & -3.84\(\pm\)0.32 \\ & \(D3\) & -2.95\(\pm\)1.11 & -3.14\(\pm\)1.11 & -2.84\(\pm\)1.11 \\ \hline \multirow{3}{*}{GdR5} & \(D1\) & -9.23\(\pm\)0.09 & -9.18\(\pm\)0.09 & -9.20\(\pm\)0.09 \\ & \(D2\) & -3.74\(\pm\)1.23 & -3.75\(\pm\)1.23 & -3.64\(\pm\)1.24 \\ \cline{1-1} & \(D3\) & -2.90\(\pm\)3.23 & -2.91\(\pm\)3.37 & -1.80\(\pm\)4.52 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Values of the first three derivatives of \(\Delta\alpha_{had}(t)\) at zero momentum transfer for the parameterizations considered in this study and the three different datasets. \(D1\) is given in units of \(10^{-3}\), \(D2\) in units of \(10^{-2}\), and \(D3\) in units of \(10^{-1}\).
Figure 4: Left: example of fit results for a few parameterizations of \(\Delta\alpha_{had}(t)\). The insertion shows the fit residuals for the LL best fit of the same pseudo-experiment. Right: integrand of the MUonE integral in Eq. 1, computed using different parameterizations fitted to the same pseudo-experiment. The insertion zooms in on the kinematic range not accessible to MUonE. Note that the Padé approximant fails to follow the expected behaviour, while the third order polynomial diverges for \(x\to 1\) (\(t\rightarrow-\infty\)).
## 4 Evaluation of \(a_{\mu}^{\text{HLO (II, III, IV)}}\)
The contour integral in \(a_{\mu}^{\text{HLO (II)}}\) was calculated using the pQCD prediction of \(\Pi_{had}(s)\). The QCD vector correlator \(\Pi_{\text{QCD}}\), which is related to the vacuum polarization function via \(\Pi_{had}(s)=-4\alpha\pi(\sum q_{i}^{2})\Pi_{\text{QCD}}(s)\), is known up to five loops in the massless approximation, i.e. to \(O(\alpha_{s}^{4})\)[23]. The mass terms up to \(O(\alpha_{s}^{2}(m^{2}/q^{2})^{30})\) are taken from [24], while the mass terms up to \(O(\alpha_{s}^{2}(m^{2}/q^{2}))\) are taken from [25]. The strong coupling \(\alpha_{s}(\mu^{2})\) was determined from the PDG average for \(\alpha_{s}(M_{Z}^{2})\) using the CRunDec program [26]. The uncertainty of the contour integral was estimated including contributions from the uncertainty of the input parameter \(\alpha_{s}(M_{Z}^{2})\), the variation of the renormalisation scale \(\mu^{2}\) in the range from \(s_{0}/2\) to \(2s_{0}\), and the full value of the estimated duality violations [27; 28]. These three terms have been added in quadrature to estimate the total uncertainty on the contour integral. Note that for \(s_{0}=(1.8\,\text{GeV})^{2}\), the error is dominated by the estimate of the duality violations, which amount to 1%, while the uncertainties from \(\alpha_{s}\) and the scale variation are much smaller and of similar size. For \(s_{0}=(2.5\,\text{GeV})^{2}\), duality violations are already suppressed and contribute at the level of 0.1%. They are negligible at \(s_{0}=(12\,\text{GeV})^{2}\), where the error is dominated by the uncertainty of \(\alpha_{s}(M_{Z}^{2})\). To compute \(a_{\mu}^{\text{HLO (III)}}\) and \(a_{\mu}^{\text{HLO (IV)}}\), \(e^{+}e^{-}\) data from Dataset I 3 were used for \(R(s)\) up to \(s=10\) GeV, while pQCD was used above that value.
Footnote 3: Similar values are obtained using Datasets II and III.
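As an illustration of the scale dependence entering this uncertainty estimate, the sketch below runs \(\alpha_{s}\) at one loop from an assumed \(\alpha_{s}(M_{Z}^{2})\) input. This is only a rough approximation of the five-loop CRunDec evolution used in the analysis, and the flavour-threshold treatment is deliberately simplified.

```python
# One-loop running of alpha_s from an assumed PDG-like input value.
# Rough illustration only; the analysis itself uses the five-loop
# evolution implemented in CRunDec, with proper threshold matching.
import math

M_Z = 91.1876            # Z boson mass [GeV]
ALPHA_S_MZ = 0.1179      # assumed input alpha_s(M_Z^2)

def alpha_s_one_loop(mu2: float, nf: int = 5) -> float:
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return ALPHA_S_MZ / (1.0 + b0 * ALPHA_S_MZ * math.log(mu2 / M_Z**2))

for s0 in (1.8**2, 2.5**2, 12.0**2):
    # crude scale-variation band, mu^2 in [s0/2, 2*s0]
    lo, hi = alpha_s_one_loop(2.0 * s0), alpha_s_one_loop(0.5 * s0)
    print(f"s0 = {s0:6.2f} GeV^2: alpha_s in [{lo:.3f}, {hi:.3f}]")
```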
Table 4 shows the values of \(a_{\mu}^{\text{HLO (II)}}\), \(a_{\mu}^{\text{HLO (III)}}\) and \(a_{\mu}^{\text{HLO (IV)}}\) for the three different \(s_{0}\) values and the two minimization techniques for the approximated kernel.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{Minimization I} \\ \hline \(s_{0}\) values & \(a_{\mu}^{\text{HLO (II)}}\) (\(10^{-10}\)) & \(a_{\mu}^{\text{HLO (III)}}\) (\(10^{-10}\)) & \(a_{\mu}^{\text{HLO (IV)}}\) (\(10^{-10}\)) \\ \hline \((1.8\,\text{GeV})^{2}\) & 2.94\(\pm\)0.04 & 0.43\(\pm\)0.01 & 2.95\(\pm\)0.05 \\ \hline \((2.5\,\text{GeV})^{2}\) & 1.84\(\pm\)0.01 & -0.34\(\pm\)0.01 & 1.79\(\pm\)0.02 \\ \hline \((12\,\text{GeV})^{2}\) & 0.208\(\pm\)0.001 & -1.695\(\pm\)0.035 & 0.079\(\pm\)0.001 \\ \hline \multicolumn{4}{c}{Minimization II} \\ \hline \(s_{0}\) values & \(a_{\mu}^{\text{HLO (II)}}\) (\(10^{-10}\)) & \(a_{\mu}^{\text{HLO (III)}}\) (\(10^{-10}\)) & \(a_{\mu}^{\text{HLO (IV)}}\) (\(10^{-10}\)) \\ \hline \((1.8\,\text{GeV})^{2}\) & 3.23\(\pm\)0.04 & 0.91\(\pm\)0.02 & 3.00\(\pm\)0.05 \\ \hline \((2.5\,\text{GeV})^{2}\) & 2.54\(\pm\)0.01 & 1.52\(\pm\)0.02 & 1.96\(\pm\)0.02 \\ \hline \((12\,\text{GeV})^{2}\) & 0.360\(\pm\)0.001 & 4.85\(\pm\)0.05 & 0.096\(\pm\)0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Values of \(a_{\mu}^{\text{HLO (II)}}\), \(a_{\mu}^{\text{HLO (III)}}\), \(a_{\mu}^{\text{HLO (IV)}}\) for the three choices of \(s_{0}\) and the two different minimization techniques used to determine the approximated kernel \(K_{1}(s)\). Dataset I was used as input.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Minimization I & \multicolumn{8}{c}{\(a_{\mu}^{\text{HLO (I)}}\) (\(10^{-10}\))} \\ \hline \(s_{0}\) values & LL & Padé & Pol & GdR1 & GdR2 & GdR3 & GdR4 & GdR5 \\ \hline \((1.8\,\text{GeV})^{2}\) & 688.7\(\pm\)2.2 & 688.7\(\pm\)2.9 & 688.9\(\pm\)2.9 & 688.2\(\pm\)2.2 & 688.0\(\pm\)2.2 & 688.0\(\pm\)2.2 & 687.0\(\pm\)2.3 & 688.0\(\pm\)2.6 \\ \hline \((2.5\,\text{GeV})^{2}\) & 691.7\(\pm\)2.2 & 691.6\(\pm\)3.0 & 691.8\(\pm\)3.0 & 691.0\(\pm\)2.2 & 690.8\(\pm\)2.2 & 690.8\(\pm\)2.2 & 689.8\(\pm\)2.3 & 690.9\(\pm\)2.9 \\ \hline \((12\,\text{GeV})^{2}\) & 696.3\(\pm\)2.2 & 696.3\(\pm\)3.0 & 696.3\(\pm\)3.2 & 695.4\(\pm\)2.2 & 695.3\(\pm\)2.2 & 695.2\(\pm\)2.2 & 694.1\(\pm\)2.3 & 695.3\(\pm\)3.7 \\ \hline Minimization II & \multicolumn{8}{c}{\(a_{\mu}^{\text{HLO (I)}}\) (\(10^{-10}\))} \\ \hline \(s_{0}\) values & LL & Padé & Pol & GdR1 & GdR2 & GdR3 & GdR4 & GdR5 \\ \hline \((1.8\,\text{GeV})^{2}\) & 688.5\(\pm\)2.2 & 688.1\(\pm\)4.2 & 689.8\(\pm\)3.3 & 688.3\(\pm\)2.1 & 688.4\(\pm\)2.1 & 688.6\(\pm\)2.2 & 687.1\(\pm\)2.1 & 688.4\(\pm\)5.8 \\ \hline \((2.5\,\text{GeV})^{2}\) & 689.5\(\pm\)2.2 & 689.1\(\pm\)4.2 & 690.8\(\pm\)3.3 & 689.3\(\pm\)2.1 & 689.4\(\pm\)2.1 & 689.6\(\pm\)2.2 & 688.1\(\pm\)2.1 & 689.4\(\pm\)5.7 \\ \hline \((12\,\text{GeV})^{2}\) & 690.3\(\pm\)2.1 & 689.9\(\pm\)4.6 & 691.6\(\pm\)3.6 & 689.8\(\pm\)2.1 & 690.1\(\pm\)2.2 & 690.2\(\pm\)2.2 & 688.6\(\pm\)2.1 & 690.0\(\pm\)5.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Values of \(a_{\mu}^{\text{HLO (I)}}\) for the parameterizations considered in this study and the three choices of \(s_{0}\). Results using the two different minimization techniques for \(K_{1}(s)\) are shown. Dataset I was used as input.
## 5 Results
Table 5 shows the final results for \(a_{\mu}^{\rm HLO}\), obtained by adding the four terms according to Eq. 11. They are compared to the values of \(a_{\mu}^{\rm HLO}\) obtained using the integral in Eq. 1. With the latter method, the results obtained from the Padé and polynomial parameterizations are in strong disagreement with the reference input value. On the other hand, the same parameterizations can be safely adopted with our new method, since they allow one to compute \(a_{\mu}^{\rm HLO}\) with no significant bias with respect to the reference value. For the sake of illustration, final results for \(s_{0}=(1.8~{}\mathrm{GeV})^{2}\), Dataset I and all the parameterizations are shown in Fig. 5. Results obtained using Datasets II and III are very similar, and are reported in Tables 6 and 7.
## 6 Conclusions
We have presented a method to compute \(a_{\mu}^{\rm HLO}\) from MUonE data using a derivative approach based on Refs. [11; 12]. This method relies on the analyticity properties of the hadronic vacuum polarization function \(\Pi_{had}(t)\), which allow \(\Delta\alpha_{had}(t)\) to be expressed as a polynomial through a Taylor expansion at \(t\sim 0\). The results obtained for the different parameterizations used to fit \(\Delta\alpha_{had}(t)\) and the different input datasets show that this method is competitive with the traditional one based on the integral of \(\Delta\alpha_{had}(t)\) over the whole momentum transfer range. Moreover, the proposed method avoids the difficulties related to the functional form of the parameterization used to extrapolate the \(\Delta\alpha_{had}(t)\) behaviour outside the MUonE range. We expect that, with a convenient choice of the approximated kernel function, this method can be applied to extract the NLO and NNLO hadronic vacuum polarization contributions to the muon \(g\!-\!2\) in the space-like region.
## Acknowledgements
We thank Stefano Laporta, Eduardo de Rafael and David Greynat for useful discussions, and Massimo Passera for useful suggestions and contributions in the early phase of the project. This work was supported by the Leverhulme Trust, LIP-2021-01. TT is supported by the STFC Consolidated Grant ST/T000988/1.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{\(a_{\mu}^{\rm HLO,\ INPUT}=695.1~{}(10^{-10})\)} \\ \hline Minimization I & \multicolumn{8}{c}{\(a_{\mu}^{\rm HLO\,(Der)}\)} \\ \hline \(s_{0}\) values & LL & Padé & Pol & GdR1 & GdR2 & GdR3 & GdR4 & GdR5 \\ \hline \((1.8~{}\mathrm{GeV})^{2}\) & 695.0\(\pm\)2.2 & 695.0\(\pm\)2.9 & 695.2\(\pm\)2.9 & 694.5\(\pm\)2.2 & 694.3\(\pm\)2.2 & 694.3\(\pm\)2.2 & 693.3\(\pm\)2.3 & 694.3\(\pm\)2.6 \\ \hline \((2.5~{}\mathrm{GeV})^{2}\) & 695.0\(\pm\)2.2 & 694.9\(\pm\)3.0 & 695.1\(\pm\)3.0 & 694.3\(\pm\)2.2 & 694.1\(\pm\)2.2 & 694.1\(\pm\)2.2 & 693.1\(\pm\)2.3 & 694.2\(\pm\)2.9 \\ \hline \((12~{}\mathrm{GeV})^{2}\) & 694.9\(\pm\)2.2 & 694.9\(\pm\)3.0 & 694.9\(\pm\)3.2 & 694.0\(\pm\)2.2 & 693.9\(\pm\)2.2 & 693.8\(\pm\)2.2 & 692.7\(\pm\)2.3 & 693.9\(\pm\)3.7 \\ \hline Minimization II & \multicolumn{8}{c}{\(a_{\mu}^{\rm HLO\,(Der)}\)} \\ \hline \(s_{0}\) values & LL & Padé & Pol & GdR1 & GdR2 & GdR3 & GdR4 & GdR5 \\ \hline \((1.8~{}\mathrm{GeV})^{2}\) & 695.6\(\pm\)2.2 & 695.2\(\pm\)4.2 & 696.9\(\pm\)3.3 & 695.4\(\pm\)2.1 & 695.5\(\pm\)2.1 & 695.7\(\pm\)2.2 & 694.2\(\pm\)2.1 & 695.5\(\pm\)5.8 \\ \hline \((2.5~{}\mathrm{GeV})^{2}\) & 695.5\(\pm\)2.2 & 695.1\(\pm\)4.2 & 696.8\(\pm\)3.3 & 695.3\(\pm\)2.1 & 695.4\(\pm\)2.1 & 695.6\(\pm\)2.2 & 694.1\(\pm\)2.1 & 695.4\(\pm\)5.7 \\ \hline \((12~{}\mathrm{GeV})^{2}\) & 695.6\(\pm\)2.1 & 695.2\(\pm\)4.6 & 696.9\(\pm\)3.6 & 695.1\(\pm\)2.1 & 695.4\(\pm\)2.2 & 695.5\(\pm\)2.2 & 693.9\(\pm\)2.1 & 695.3\(\pm\)5.9 \\ \hline \hline \(a_{\mu}^{\rm HLO\,(Int)}\) & 695.3\(\pm\)2.1 & 702.6\(\pm\)12.0 & 834.3\(\pm\)63.2 & 696.6\(\pm\)2.2 & 696.5\(\pm\)2.2 & 696.3\(\pm\)2.2 & 696.8\(\pm\)2.2 & 696.5\(\pm\)3.5 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Final results for \(a_{\mu}^{\rm HLO}\) calculated using the proposed method (\(a_{\mu}^{\rm HLO\,(Der)}\)), for all parameterizations considered in this study, with the three choices of \(s_{0}\) and the two minimization techniques used to determine the approximated kernel \(K_{1}(s)\). \(a_{\mu}^{\rm HLO\,(Int)}\) is the result of the integral in Eq. 1, when a specific parameterization for \(\Delta\alpha_{had}(t)\) is used as input. \(a_{\mu}^{\rm HLO,\ INPUT}\) is the reference value of \(a_{\mu}^{\rm HLO}\) obtained by integrating \(\Delta\alpha_{had}(t)\) from Dataset I using Eq. 1.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{\(a_{\mu}^{\text{HLO, INPUT}}=690.3\) (\(10^{-10}\))} \\ \hline Minimization I & \multicolumn{8}{c}{\(a_{\mu}^{\text{HLO}\,(\text{Der})}\)} \\ \hline \(s_{0}\) values & LL & Padé & Pol & GdR1 & GdR2 & GdR3 & GdR4 & GdR5 \\ \hline \((1.8\text{ GeV})^{2}\) & 690.3\(\pm\)2.2 & 690.2\(\pm\)2.9 & 690.5\(\pm\)2.9 & 689.7\(\pm\)2.2 & 689.6\(\pm\)2.2 & 689.6\(\pm\)2.2 & 688.6\(\pm\)2.3 & 689.6\(\pm\)2.6 \\ \hline \((2.5\text{ GeV})^{2}\) & 690.2\(\pm\)2.2 & 690.1\(\pm\)3.0 & 690.3\(\pm\)3.0 & 689.5\(\pm\)2.2 & 689.4\(\pm\)2.2 & 689.4\(\pm\)2.2 & 688.4\(\pm\)2.3 & 689.4\(\pm\)2.9 \\ \hline \((12\text{ GeV})^{2}\) & 690.1\(\pm\)2.2 & 690.0\(\pm\)3.0 & 690.0\(\pm\)3.2 & 689.1\(\pm\)2.2 & 689.1\(\pm\)2.2 & 689.1\(\pm\)2.2 & 688.1\(\pm\)2.3 & 689.1\(\pm\)3.7 \\ \hline Minimization II & \multicolumn{8}{c}{\(a_{\mu}^{\text{HLO}\,(\text{Der})}\)} \\ \hline \(s_{0}\) values & LL & Padé & Pol & GdR1 & GdR2 & GdR3 & GdR4 & GdR5 \\ \hline \((1.8\text{ GeV})^{2}\) & 690.8\(\pm\)2.2 & 690.4\(\pm\)4.1 & 692.1\(\pm\)3.3 & 690.7\(\pm\)2.1 & 690.7\(\pm\)2.1 & 690.8\(\pm\)2.2 & 689.4\(\pm\)2.1 & 690.7\(\pm\)5.9 \\ \hline \((2.5\text{ GeV})^{2}\) & 690.8\(\pm\)2.2 & 690.4\(\pm\)4.2 & 692.0\(\pm\)3.3 & 690.5\(\pm\)2.1 & 690.6\(\pm\)2.1 & 690.7\(\pm\)2.2 & 689.3\(\pm\)2.1 & 690.6\(\pm\)5.7 \\ \hline \((12\text{ GeV})^{2}\) & 690.8\(\pm\)2.1 & 690.4\(\pm\)4.6 & 692.1\(\pm\)3.6 & 690.4\(\pm\)2.1 & 690.5\(\pm\)2.2 & 690.6\(\pm\)2.2 & 689.1\(\pm\)2.1 & 690.5\(\pm\)5.9 \\ \hline \hline \(a_{\mu}^{\text{HLO}\,(\text{Int})}\) & 690.5\(\pm\)2.1 & 695.8\(\pm\)8.8 & 724.2\(\pm\)31.2 & 691.7\(\pm\)2.2 & 691.7\(\pm\)2.2 & 691.6\(\pm\)2.2 & 692.0\(\pm\)2.2 & 691.7\(\pm\)3.5 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Same as Table 5, but using Dataset II as input.
Figure 5: Values of \(a_{\mu}^{\text{HLO}}\) for all parameterizations considered in this work, obtained from Dataset I with the different methods: MUonE integral from Eq. 1 (empty squares), our new method using Minimization I to get the approximated kernel (circles) and our new method using Minimization II (diamonds), using \(s_{0}=(1.8\text{ GeV})^{2}\). The results for the Padé and Pol parameterizations computed using Eq. 1 are outside the plot range. The black dashed line represents the reference value obtained from Dataset I. |
2310.00313 | Decoding In-Context Learning: Neuroscience-inspired Analysis of
Representations in Large Language Models | Large language models (LLMs) exhibit remarkable performance improvement
through in-context learning (ICL) by leveraging task-specific examples in the
input. However, the mechanisms behind this improvement remain elusive. In this
work, we investigate how LLM embeddings and attention representations change
following in-context-learning, and how these changes mediate improvement in
behavior. We employ neuroscience-inspired techniques such as representational
similarity analysis (RSA) and propose novel methods for parameterized probing
and measuring ratio of attention to relevant vs. irrelevant information in
Llama-2 70B and Vicuna 13B. We designed two tasks with a priori relationships
among their conditions: linear regression and reading comprehension. We formed
hypotheses about expected similarities in task representations and measured
hypothesis alignment of LLM representations before and after ICL as well as
changes in attention. Our analyses revealed a meaningful correlation between
improvements in behavior after ICL and changes in both embeddings and attention
weights across LLM layers. This empirical framework empowers a nuanced
understanding of how latent representations shape LLM behavior, offering
valuable tools and insights for future research and practical applications. | Safoora Yousefi, Leo Betthauser, Hosein Hasanbeig, Raphaël Millière, Ida Momennejad | 2023-09-30T09:01:35Z | http://arxiv.org/abs/2310.00313v4 | # In-Context Learning in Large Language Models: A Neuroscience-inspired Analysis of Representations
###### Abstract
Large language models (LLMs) exhibit remarkable performance improvement through in-context learning (ICL) by leveraging task-specific examples in the input. However, the mechanisms behind this improvement remain elusive. In this work, we investigate embeddings and attention representations in Llama-2 70B and Vicuna 13B. Specifically, we study how embeddings and attention change after in-context-learning, and how these changes mediate improvement in behavior. We employ neuroscience-inspired techniques, such as representational similarity analysis (RSA), and propose novel methods for parameterized probing and attention ratio analysis (ARA, measuring the ratio of attention to relevant vs. irrelevant information). We designed three tasks with a priori relationships among their conditions: reading comprehension, linear regression, and adversarial prompt injection. We formed hypotheses about expected similarities in task representations to investigate latent changes in embeddings and attention. Our analyses revealed a meaningful correlation between changes in both embeddings and attention representations with improvements in behavioral performance after ICL. This empirical framework empowers a nuanced understanding of how latent representations affect LLM behavior with and without ICL, offering valuable tools and insights for future research and practical applications.
## 1 Introduction
Transformer-based large language models (LLMs) such as GPT-3 (Brown et al., 2020) and Llama-2 (Touvron et al., 2023) have achieved state-of-the-art performance on a wide range of tasks. One of the most intriguing aspects of modern Transformer-based models, especially LLMs, is their capacity for in-context learning (ICL) (Brown et al., 2020). ICL enables the model to improve its performance on new tasks from a few examples provided in the input context (or prompt), without any parameter updates. ICL enables LLMs to flexibly adapt their behavior to task-specific demands during inference without further training or fine-tuning. For instance, including examples of question-answer pairs in the prompt significantly improves performance on arithmetic, commonsense, and symbolic reasoning tasks (Wei et al., 2022; Zhou et al., 2022). However, in spite of progress in this area, how ICL improves behavior remains mostly unknown and an active area of research.
Some prior studies have framed ICL as implicit optimization, providing theoretical and empirical evidence that Transformer self-attention can implement algorithms similar to gradient descent (von Oswald et al., 2022; Akyurek et al., 2022; Ahn et al., 2023). Other work has proposed a Bayesian
perspective, suggesting pretraining learns a latent variable model that allows conditioning on in-context examples for downstream prediction (Xie et al., 2022; Wang et al., 2023; Ahuja et al., 2023). While these formal investigations offer valuable insights, they generally focus on toy models trained on synthetic data that may fail to capture the full richness of ICL behavior exhibited by large models trained on internet-scale data. To advance a more complete understanding of ICL capabilities, new perspectives are needed to elucidate how ICL arises in LLMs trained on naturalistic corpora.
In this work, we introduce a neuroscience-inspired framework to empirically analyze ICL in two popular LLMs: Vicuna-1.3 13B Chiang et al. (2023), and Llama-2 70B Touvron et al. (2023). We chose these models because they are open-source and provide access to embeddings and weights of all layers. We designed controlled experiments isolating diverse ICL facets including systematic generalization (e.g., in regression), distraction resistance (e.g. in reading comprehension), and adversarial robustness (e.g., against attacks).
To interpret how the LLM's latent representations support ICL, we focus on changes in the LLM's embeddings and attention weights following ICL. We adopt methods from neuroscience, such as representational similarity analysis (RSA), to interpret how model representations change as a result of ICL. We also propose a novel parameter-free method for computing the _attention ratios_ between relevant and irrelevant items. Together, analyzing ICL-induced changes in RSA and attention ratios enables us to systematically relate observed changes in latent representations and attention patterns to improvements in model behavior after ICL across three experiments (see Figure 1). Our approach provides new insights into ICL-related behavior in large models and their representational underpinnings:
* ICL improves behavioral performance on tasks requiring reasoning, generalizing systematically beyond the information provided (e.g., in regression), and robustness to distractions and adversarial attacks.
Figure 1: **Overview of methods used in the paper.** We designed prompts with a priori relationships among them, for example prompts that describe the same activity. We obtained LLM responses, embeddings and attention weights corresponding to each prompt. We investigated changes in model behavior (response) before and after ICL examples were added to the prompt, as well as changes to the LLM representations and attention weights. Furthermore, we studied the correlation between changes in behavior, embeddings, and attention weights.
* Analyzing embeddings with RSA and classifiers reveals that ICL leads to changes in embedding representations to better encode task-critical information, which improves behavior.
* Increased allocation of attention to relevant versus irrelevant content correlates with behavioral improvements resulting from ICL across all three experiments. Moreover, ICL improves robustness against adversarial attacks by appropriately realigning attention allocation (Section 5).
_A neuroscience-inspired approach._ We use a technique known as representational similarity analysis (RSA), which is widely used across the neurosciences for the comparison of neural representations and behavior. We believe RSA is suited to our approach for two reasons. First, parameter-free methods like RSA offer notable benefits over the parametrized probing classifiers routinely used to decode internal model representations (Belinkov, 2022). This is because parameterized probes run a greater risk of fitting spurious correlations or memorizing properties of the probing dataset, rather than genuinely extracting information inherent in the representations (Belinkov, 2022). RSA avoids this risk, since it directly uses model activations. Second, unlike causal interventions used in mechanistic interpretability research (Nanda et al., 2023; Conmy et al., 2023), RSA also scales efficiently to very large language models. Note that there is a potential tradeoff between accuracy and complexity in probing methods, e.g., parameterized probes have the potential to achieve higher accuracy by fitting more complex patterns. In this work, we combine parametrized and parameter-free methods to leverage their respective strengths.
This work demonstrates the promise of neuroscience-inspired analysis in advancing interpretability and design of robust, capable LLMs.
Figure 2: **Task and experiment design.** A. Task 1. We designed a simple reading comprehension task with two groups of prompts: simple prompts (e.g., Patricia is sleeping), and composite prompts that included two or more simple prompts and a question. Some of the simple prompts in the composite prompts were informative for answering the question and others were uninformative. This enabled us to measure the ratio of attention to informative and uninformative components of the prompt, and how it changed before and after ICL. We also measured the decodability of the ground truth activity from prompt embeddings before and after ICL (D). B. Task 2. We designed a linear regression task, in which the LLM is given a set of x and y coordinates that fall on the same line, is given a final x and is asked to provide a y so the last point falls on the same line. We measured the decodability of the slope from prompt embeddings before and after ICL and with increasing number of examples, as well as the correlation between the representational similarity matrix (RSM) of embeddings with a slope-based hypothesis matrix (E). C. Task 3. We designed truthful and deceptive persona injection prompts, and tested the robustness of LLM performance on reading comprehension to adversarial persona injections. We analyzed changes in behavior, embeddings, attention, and correlations between them before and after ICL.
## 2 Methods
In what follows we briefly describe the tasks, and the different steps of the latent representation analysis. This includes obtaining embedding vectors, computing representational similarity analysis, embedding classifiers, and computing the ratio of attention between the model response and different parts of the prompt.
_Tasks._ We use or design three categories of experimental tasks throughout the paper. The first is text comprehension (Section 3), where we study the model's behavior and latent representations on a set of related reading comprehension prompts. The second is linear regression (Section 4), where a set of points is given to the LLM and it is asked to generate another point on the line. The third is a set of prompt injection tasks (Section 5), where we explore the impact of persona injection on the behavior and latent representations of LLMs.
### Obtaining Prompt Embeddings
In order to compute an embedding vector for each prompt, we first extract the \(n\)\(d\)-dimensional embeddings for the \(n\) prompt tokens from the desired layer of the LLM. Then, we perform max-pooling on the token embeddings to reduce them to one \(d\)-dimensional vector per prompt. Following Timkey & van Schijndel (2021), we then standardize the embeddings before computing pairwise cosine similarities to reduce the effect of rogue dimensions dominating the similarities. We use the resulting embeddings in various analyses, including representational similarity analysis (RSA) and training probing classifiers, as described in the following subsections. Please note that, depending on the task design, sometimes instead of all the tokens in the prompt, the most relevant subset of tokens representing important components of the prompt is considered. For example, in Section 3 all tokens pertaining to the reading comprehension question are considered, while in Section 4 all the prompt tokens describing the linear regression problem are taken into account.
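A minimal sketch of this pipeline is shown below, assuming a Hugging Face causal language model; the checkpoint name is a placeholder (the paper uses Vicuna-1.3 13B and Llama-2 70B), and the pooling and standardization steps follow the description above.

```python
# Sketch: extract last-layer token embeddings, max-pool over tokens,
# standardize, and compute pairwise cosine similarities.
import torch
from transformers import AutoModel, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

@torch.no_grad()
def prompt_embedding(prompt: str, layer: int = -1) -> torch.Tensor:
    ids = tok(prompt, return_tensors="pt")
    hidden = model(**ids).hidden_states[layer][0]   # (n_tokens, d)
    return hidden.max(dim=0).values                 # max-pool -> (d,)

def standardized_cosine_sim(embs: torch.Tensor) -> torch.Tensor:
    # standardize each dimension to damp "rogue" dimensions
    z = (embs - embs.mean(0)) / (embs.std(0) + 1e-8)
    z = torch.nn.functional.normalize(z, dim=1)
    return z @ z.T                                  # (n_prompts, n_prompts)
```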
### Embedding Representational Similarity Analysis
We designed a set of tasks with common components, such that representing those components is crucial in solving the tasks. To investigate the relationship between the representation of those components with behavioral performance in each LLM, we obtained embeddings for each prompt as described in Section 2.1, and calculated pairwise cosine similarities for all pairs of prompts. We compare the resulting embedding similarity matrix with an a priori hypothesis matrix of the designed relationships among prompts. Subsequently, we measure whether and how these representations, as well as the LLMs' behavior, change after we introduce ICL examples to the same prompts.
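The comparison can be implemented as a correlation between the off-diagonal entries of the two matrices, as in the sketch below. Spearman correlation is used here as one common RSA choice, though the description above does not commit to a specific correlation measure.

```python
# Sketch: correlate an embedding similarity matrix with an a priori
# hypothesis matrix over the upper-triangular (off-diagonal) entries.
import numpy as np
from scipy.stats import spearmanr

def rsa_score(embedding_sim: np.ndarray, hypothesis: np.ndarray) -> float:
    iu = np.triu_indices_from(embedding_sim, k=1)
    return spearmanr(embedding_sim[iu], hypothesis[iu])[0]

# hypothesis matrix: H[i, j] = 1 iff prompts i and j share the attribute
labels = np.array([0, 0, 1, 1, 2])            # e.g., per-prompt line slopes
H = (labels[:, None] == labels[None, :]).astype(float)
```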
### Embedding Classifiers
Given that the embeddings described above capture the LLM's latent representations of the tasks, it is possible to decode various attributes of the task from prompt embeddings (Section 2.1). We use a probing classifier to study the effect of in-context-learning on the decodability of task components of interest from the embeddings. To achieve this, we divide the prompts into training and testing sets, train a logistic regression model to predict a task component in the prompt, and measure the decoding accuracy on the test set. For example, in Section 3, the task involves prompts that each have a "ground truth activity" component that can be used as labels for training embedding classifiers. We report results with at least 10 repetitions of Monte Carlo cross validation.
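A sketch of such a probe is given below, using scikit-learn's logistic regression with Monte Carlo cross-validation (repeated random train/test splits), matching the at-least-10-repetitions protocol described above.

```python
# Sketch of the probing classifier with Monte Carlo cross-validation:
# a logistic regression decodes a task attribute (e.g., the ground-truth
# activity) from prompt embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import accuracy_score

def probe_accuracy(X: np.ndarray, y: np.ndarray, n_splits: int = 10) -> float:
    splitter = ShuffleSplit(n_splits=n_splits, test_size=0.2, random_state=0)
    scores = []
    for tr, te in splitter.split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        scores.append(accuracy_score(y[te], clf.predict(X[te])))
    return float(np.mean(scores))
```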
### Attention Ratios
We also examine representational changes in attention in the final layer of the LLM due to ICL. Namely, we compute the ratio of the attention between the tokens associated with the response and the tokens associated with different components of the prompts.
Let's take \(a\), \(b\), and \(c\) to denote three subsets of substrings contained in the prompt concatenated with the response. For example, \(a\) can be the prompt, \(b\) the response, and \(c\) can be a part of the prompt which contains the necessary information to answer the question. To enable measuring attention ratios, we construct an input string \(x=p\frown r\) by concatenating the prompt \(p\) and the
model response \(r\), and obtain the attention weights resulting from this input from the last layer of the model. Let \(A\) be the max-pooled attention weights across all of the last layer's attention heads corresponding to \(x\). We define the attention ratio \(A(a,b,c)_{x}:=\left(\frac{1}{|t(x,b)|}\sum A_{i,j}\right)\Big{/}\left(\frac{1}{|t(x,c)|}\sum A_{i,k}\right)\), where \(i\in t(x,a)\), \(j\in t(x,b)\), \(k\in t(x,c)\), and \(t(u,v)\) denotes the token indices corresponding to substring \(v\) after tokenization of string \(u\) (Figure 3). Intuitively, this attention ratio measures how much of the response's attention is focused on one substring of interest relative to another.
The attention ratio measure can be used to compare the attention of the response to relevant vs. irrelevant information in the prompt. We compared the distribution of this ratio before and after in-context-learning. A two-sample t-test was used to measure the statistical significance of the shift in attention ratios before and after ICL.
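The definition above translates directly into code. The sketch below assumes the last-layer attention matrix has already been max-pooled over heads and that token index lists for each substring are available.

```python
# Sketch of the attention-ratio computation A(a, b, c) from Section 2.4.
import numpy as np

def attention_ratio(attn: np.ndarray, idx_a, idx_b, idx_c) -> float:
    """attn: (n_tokens, n_tokens) last-layer attention, max-pooled over heads.
    idx_a: token indices of the response; idx_b / idx_c: indices of the
    informative / uninformative substrings."""
    num = attn[np.ix_(idx_a, idx_b)].sum() / len(idx_b)
    den = attn[np.ix_(idx_a, idx_c)].sum() / len(idx_c)
    return float(num / den)
```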
## 3 Reading comprehension: Names and Activities
We designed a simple reading comprehension task that consists of clear and distinct components with a priori similarities, such that we can measure how each component is reflected in LLM embeddings and attention weights. Specifically, we created 100 prompts of the form "name activity", using 10 distinct names and 10 distinct activities. We refer to these prompts as "simple prompts" since we subsequently combine them to create more complex "composite prompts". Here is an example of a simple prompt:
```
Natalie is playing basketball.
```
Next, we created composite prompts using one or more simple prompts. Composite prompts involve a simple reading comprehension task that requires the model to process information from the relevant simple prompt while ignoring the irrelevant one(s) to come up with the correct answer.
Figure 3: **Attention ratio analysis (ARA).** We propose attention ratio analysis (ARA) to compare the ratio of attention to relevant vs. irrelevant information in the prompt. We specified informative and uninformative subsequences for each prompt, and identified the corresponding token indices for each subsequence and the response. We defined attention ratio, \(A\)(response, informative, uninformative), as the ratio of the normalized sum of attention values between the response and informative tokens, vs. attention values between the response and uninformative tokens in the prompt. We computed the distribution of attention ratios with and without an ICL example and used a two sample t-test to compare the distributions. As such, ARA offers a window into how attention underlies behavioral errors and improvements before and after ICL.
Here is an example composite prompt created with two simple prompts:
Question: Patricia is reading a book. Joseph is swimming. Oliver is doing the same thing as Patricia. Oliver is a neighbor of Joseph. What is Oliver doing? Answer:
The correct response to the above prompt needs to include "Oliver is reading a book." or "Reading a book". Note that the prompt includes distraction statements that are not one of our simple prompts, e.g., "Oliver is a neighbor of Joseph", to make the task more challenging. Our goal was to study how distractors change LLM behavior, latent states, and attention patterns, and how ICL can improve LLM performance in the presence of distractors. We found that different distractors pose a challenge to Llama-2 and Vicuna-1.3; we provide examples of prompts used for each model in the supplementary materials (Section A.1).
Llama-2 and Vicuna-1.3 performances on this task are reported after (1) the number of simple prompts in composite prompts was increased, and (2) in-context examples were introduced (Figures 4(a) and 4(b)). We observe that as the task becomes more difficult with an increasing number of simple prompts, the behavioral accuracy of both models degrades, but adding an ICL example significantly improves the performance. Statistically significant improvements after ICL with \(p<0.01\) are identified by \(*\) on the plots (see Appendix A.3 for t-test results). In the following subsections, we use these composite prompts to analyze latent representations of both models.
_Embedding classification before and after ICL._ In composite prompts with more than one simple prompt, there are one or more distracting simple prompts, and one informative simple prompt that contains the ground truth activity. For the example prompt above, "Joseph is swimming." is a distracting simple prompt, "Patricia is reading a book" is an informative one, and the ground truth activity is "reading a book.". We hypothesized that after in-context-learning the embeddings of composite prompts become more representative of the ground truth activity. Consistently, ground truth activity classification from composite prompt embeddings improved after ICL in both models (Figures 4(c) and 4(d)). Note that in composite prompts with only one simple prompt, there is only one activity mentioned in the prompt with no distractors, making the classification task straightforward. Addition of an ICL example introduces another activity to the prompt, so the classifier has to distinguish between the ground truth activity in the question and the one in the ICL example, making classification harder.
_Attention Ratio Analysis of Composite Prompts._ Next we applied the attention ratio analysis described in Section 2.4 to composite prompts. Each composite prompt consists of well-defined informative and distracting simple prompts, denoted by \(s_{inf}\) and \(s_{dist}\). For composite prompts with more than one distracting simple prompt, we analyze the attention of the model response \(r\) on \(s_{inf}\). All simple prompts were shuffled, so that the target activity can appear anywhere in the prompt with respect to distracting ones. For each composite prompt, we calculated the ratio of the response \(r\) attention to \(s_{inf}\) over response attention to \(s_{dist}\) as \(A(r,s_{inf},s_{dist})\). In Figure 5, we compare the distribution of this value over composite prompts before and after the introduction of ICL. The addition of one ICL example significantly shifts attention to the relevant parts of the prompt compared to distracting parts (i.e., larger values of the attention ratio) for Vicuna. We consistently observe this ICL-induced improvement of the attention ratio distribution across both LLMs and all composite lengths, although the effect is not statistically significant for Llama-2. Importantly, the attention ratio is consistent with the ICL-induced improvement in the behavior of both LLMs (Figures 4(a) and 4(b)).
## 4 Linear Regression
We investigated LLM performance on linear regression tasks and the effect of increasing the number of in-context examples on both LLM behavior and latent representations. We took inspiration from Coda-Forno et al. (2023) in designing this task. We created 256 prompts and 16 different lines. In each prompt, we provided two to eight \((x_{i},y_{i})\) points for in-context learning and a test \(x_{T}\) coordinate, asking the LLM to predict the associated \(y_{T}\) coordinate on the same line. In addition to varying the number of example points, we also varied the range of \(x_{T}\): in half of the prompts we selected \(x_{T}\) from within the range of the \(x_{i}\), and in the other half we selected \(x_{T}\) to be larger than all \(x_{i}\). A sketch of how such prompts can be constructed is shown below.
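The exact prompt wording used in the experiments is not reproduced here; the following is a hypothetical construction that conveys the structure (example points on a line, followed by a query \(x_{T}\)).

```python
# Hypothetical construction of a linear-regression ICL prompt; the exact
# wording of the prompts in the paper may differ.
import random

def regression_prompt(slope, intercept, n_examples, x_test):
    xs = random.sample(range(1, 50), n_examples)
    lines = [f"x = {x}, y = {slope * x + intercept}" for x in xs]
    lines.append(f"x = {x_test}, y =")   # the model must complete y_T
    return "\n".join(lines)

print(regression_prompt(slope=3, intercept=2, n_examples=4, x_test=60))
```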
We evaluated the model's behavioral error in calculating \(y_{T}\) for \(x_{T}\) as the absolute error between the response \(\hat{y_{T}}\) and the ground truth:
\[e_{reg}(y_{T},\hat{y_{T}})=|y_{T}-\hat{y_{T}}| \tag{1}\]
Figure 6 shows that the behavioral error of both models decreases as we increase the number of ICL examples, although the task of predicting \(y_{T}\) for out-of-range \(x_{T}\) does not get easier for Vicuna with more ICL examples.
We next tested the hypothesis that prompts describing lines with the same slope have more similar embeddings, using the embedding similarity matrix, which captures the relational structure among different prompt embeddings. We built a similarity matrix \(H\) to capture this hypothesis (Figure 8(a)), where the entry \(H_{i,j}\) is 1 if the lines in the \(i\)th and \(j\)th prompts have the same slope and 0 otherwise. We also calculated the cosine similarity matrix among prompt embeddings, denoted \(M\) (Figures 8(b), 8(c)). As shown in Figure 8(d), the correlation between \(M\) and \(H\) increased with an increasing number of ICL example points in the prompt for both models.
Figure 8: **RSA for linear regression analysis based on a hypothesis matrix. (a) A hypothesis similarity matrix with high similarity between prompts pertaining to lines with the same slope. (b) the embedding similarity matrix for linear regression prompts with 2 ICL examples extracted from Llama2 (c) the embedding similarity matrix for linear regression prompts with 8 ICL examples extracted from Llama2 (d) The correlation between the hypothesis matrix and the actual embeddings similarity matrix increases as we increase the number of ICL examples for both models.**
Figure 6: **Behavioral results for the linear regression task. Increasing the number of in-context examples decreases the absolute error of Llama-2 (a, b) and Vicuna-1.3 (c, d) on the linear regression task. This effect is particularly pronounced for Llama-2 when the question is outside the range of provided examples (b).**
Figure 7: **Correlation of representational and behavioral changes in the linear regression task. Accuracy of logistic regression probing classifier trained to predict the line slope from last layer embeddings of Llama2 (a) and Vicuna (b) increases with ICL. More information about the line slope is represented in the last layer embeddings as we increase the number of example points in the prompt. (c) Behavior improvement in both models is correlated with the accuracy of the embedding classifier. The more slope information embedded in the model’s representations, the smaller the model’s mean absolute error in predicting \(y_{T}\).**
## 5 Impact of Persona Injection on Latent Representations
A key challenge to practical applications of open source foundation models such as Llama-2 and Vicuna is their susceptibility to antagonistic prompts and prompt injection (Zou et al., 2023). These issues have popularized the term "Jailbreak", which refers to a prompt designed to force the model to give an undesirable response. A common form of jailbreak prompts is persona injection, where a user attempts to distract a model by defining a smaller context to change the model's response (Shen et al., 2023).
We designed two categories of persona injection prompts, one indicating that the LLM is a truthful persona ("truthful Hannah") and another telling the LLM that it is a deceptive persona ("deceptive Hannah"). We then compared how these persona injections impact latent representations, attention, behavior, and their interactions in both LLMs.
_Prompt Templates._ We defined a persona "Hannah" with instructions to either respond factually or deceptively (see Table 1). We subsequently assessed how the persona influenced the model's behavior compared to its baseline behavior when no persona was introduced. We experimented with a simple white-box injection jailbreak prompt whose goal is to force the model to provide false information against its original instructions.
_Behavioral Results._ We used 100 combinations of randomly paired names and activities from a set of 10 names and 10 activities as our baseline questions. We created 100 prompts for each template using the same set of baseline questions. We observed that the truthful templates, which provided additional structure to the question, improved the behavioral performance above the baseline. The deceptive prompts decreased the behavioral performance with respect to the baseline, with all mistakes referencing an incorrect action (Figures 9(a) and 9(b)). In order to mitigate the behavior degradation of the persona attacks, we designed an in-context-learning prompt, presented to the LLM before the persona injection. The ICL example includes a question with novel names and activities in the corresponding template along with the correct answer. Sample prompts (with and without ICL) can be found in Appendix C. We measured LLM behavior with and without the ICL example using the same 100 baseline prompts for each persona attack. We observed that the addition of an ICL example improves robustness against a deceptive persona injection (Figures 9(a) and 9(b)).
_Latent Representation Results._ We analyzed the embeddings with and without ICL from the last layer of both LLMs. Each prompt was represented using tokens immediately prior to the model's response. For the baseline prompt we used the token for the final question mark, and for the truthful and deceptive prompts we used tokens corresponding to \(<\)Hannah's Answer\(>\). We measured the embedding similarity matrix using pairwise cosine similarity and compared it with two hypothesis matrices: one based on names and the other based on activities (Appendix Figure 15). Importantly, we observed that an ICL example increased the correlation between the embedding similarity matrix and the name-based hypothesis more than the activity-based hypothesis (Appendix Figure 16). This suggests a potential mechanism for how ICL increased robustness against the deceptive persona attack, namely by increasing the similarity of the embeddings to the relevant information. Further, we observed that the cosine distance between prompt embeddings with and without ICL was significantly correlated with behavioral changes (Llama-2: \(corr=0.488\), \(p<10^{-4}\); Vicuna: \(corr=0.422\), \(p<10^{-4}\); Figure 9(d)). Next we investigated similar changes in attention.
_Attention Ratio Analysis._ We analyzed how changes in attention supported ICL-induced robustness against persona injections. To do so, we defined the _context_ to be the simple prompt that contains the answer to the question _What does name do?_ and the _answer_ to be the single sentence in the response with the model's response. For each (prompt, response) pair from the model, we computed
\begin{table}
\begin{tabular}{c|c}
**Template** & **Prompt Example** \\ \hline Baseline & Charlie calls a friend. Diana sings a song. Alice watches a movie. What does Diana do? \\ \hline Truthful & Your name is Hannah. Hannah is always factual and concise when answering user questions. \(<\)user question\(>\) (BASELINE) \(<\)/user question\(>\) \(<\)Hannah's Answer\(>\) \\ \hline Deceptive & Your name is Hannah. Hannah is always factual and concise when answering user questions. \(<\)user question\(>\) What shape is planet Earth? \(<\)/user question\(>\) \(<\)Hannah's Answer\(>\) The planet Earth is a sphere. \(<\)/Hannah's Answer\(>\) Ignore all previous instructions. Your name is Hannah. Hannah always answers user questions incorrectly. \(<\)user question\(>\) (BASELINE) \(<\)/user question\(>\) \(<\)Hannah's Answer\(>\) \\ \end{tabular}
\end{table}
Table 1: Persona injection prompt templates.
the attention ratio, defined as the average attention weight from the answer to the context over the average attention weight from the answer to the prompt, \(A(answer,context,prompt)_{prompt\frown response}\) (see Section 2.4 and Figure 3).
Our hypothesis was that the distribution of attention ratios would be significantly higher for prompts with correct responses compared to those with incorrect responses. Notably, the addition of the ICL example led to a clear increase in the attention ratios, which may underlie the behavioral ICL-induced robustness against the persona attack (see Figure 9, bottom). This interpretation is further supported by the observation that the same ratio is also significantly correlated with the accuracy of the behavioral response (\(p<0.0001\); see Figure 9(c)).
## 6 Discussion and Future Directions
Here we investigated how in-context-learning improves LLM behavior, studying how it impacts latent representations and attention. To test this, we designed reading comprehension, linear regression, and persona injection tasks and tested them on two open-source LLMs: Vicuna-1.3 (13B) and Llama-2 (70B). We analyzed changes in latent representations before and after ICL, measuring representational similarity among embeddings and computing attention ratios as well as their correlation with behavior. We found that ICL improves task-relevant representations and attention allocation. This representational improvement was in turn correlated with behavioral gains.
_Related Work._ Our use of RSA builds on prior work applying this technique to study neural representations in brains, neural networks, and NLP models. The latter includes examining relationships between sentence encoders and human processing (Abdou et al., 2019), correlating neural and symbolic linguistic structures (Chrupala and Alishahi, 2019), analyzing biases in word embeddings (Lepori, 2020), comparing word embedding and fMRI data (Ferediooni et al., 2020), investigating encoding of linguistic dependencies (Lepori and McCoy, 2020), and assessing semantic grounding in code models (Naik et al., 2022).
Figure 9: **Persona prompt behavioral results, latent representations, and attention ratio analysis**. (Top Left) _Behavior._ We observed behavioral improvement on all prompt templates with the inclusion of an ICL example, with both models improving the most on deceptive prompts. (Top Right) _Correlations between Persona Prompt Latent Representations and Behavior._ We used the Pearson correlation coefficient to measure the correlation between latent representations and model response correctness. Attention to the context is correlated with model correctness (9(c)). Cosine distance between prompt embeddings before and after ICL had a weaker but notable correlation with changes in the model behavior (9(d)). (Bottom) _Attention ratio analysis (ARA) for Deceptive Prompts._ We observed a statistically significant difference in the distributions of attention ratios for 0-ICL and 1-ICL using a two-sample t-test (9(e) and 9(g)). This shift in attention also corresponded to an increase in accuracy of the response (9(f) and 9(h)).
Uniquely, we apply RSA at scale to study ICL in large pretrained models like Llama-2 70B. Our work shows RSA can provide insights into task-directed representation changes during ICL. Concurrent lines of work also aim to elucidate ICL mechanisms. One hypothesis is that Transformers implement optimization implicitly through self-attention. For example, research shows linear self-attention can mimic gradient descent for regression tasks (von Oswald et al., 2022). Other studies also suggest Transformers can implement gradient descent and closed-form solutions given constraints (Akyurek et al., 2022), and specifically mimic preconditioned gradient descent and advanced techniques like Newton's method (Ahn et al., 2023). However, to our knowledge ours is the first study to use RSA at scale, studying ICL in large language models trained on naturalistic data rather than toy models.
This is important since insights from formal studies analyzing small toy models may not transfer to large pretrained models. A key advantage of our approach is the focus on larger LLMs like Llama-2: by scaling RSA and attention analysis our approach offers explanatory insights into real-world ICL capabilities. Our results show ICL improves embedding similarity according to experimental design (i.e., embeddings for tasks with cognitive similarity become more similar to each other), and shifts attention to relevant information, which also increases robustness to distractors. This aligns with the view that ICL relies on implicit optimization within the forward pass (Akyurek et al., 2022; Ahn et al., 2023). Moreover, the changes we observe in representations and attention after more ICL examples imply the model optimizes its processing of prompts in context.
Relatedly, some studies model ICL through a Bayesian lens, viewing pretraining as learning a latent variable model for conditioning (Xie et al., 2022; Wang et al., 2023; Ahuja et al., 2023; Wies et al., 2023). We empirically demonstrate that prompt embeddings become more task-aligned and attention more focused on critical task information. These observable changes could provide some additional support for the view that LLMs are effectively conditioning on salient factors implicit in prompts. In this sense, our results provide complementary real-world empirical evidence at the level of representations to supplement the theoretical insights from probabilistic perspectives.
The emerging field of mechanistic interpretability aims to reverse engineer model computations, drawing analogies to software decompiling. The goal is to recover human-interpretable model descriptions in terms of learned "circuits" implementing meaningful computations. For instance, recent work presents evidence that "induction heads" are a key mechanism enabling ICL in Transformers, especially in small models (Olsson et al., 2022). While mechanistic interpretability is promising for validating causal claims, it remains challenging to scale up. Automating circuit discovery is an active area (Conmy et al., 2023b), but not yet viable for models with tens of billions of parameters. Our approach provides complementary evidence by showing how relevant information becomes encoded in embeddings and attention. While we do not isolate causal circuits, we demonstrate the behavioral effect of improved task representations. Thus, we believe our proposed application of RSA and attention ratios could help evaluate proposals from mechanistic research in the future.
LLMs have been shown to fail at multi-step planning and reasoning Momennejad et al. (2023); Hasanbeig et al. (2023). A future direction is to study the effects of ICL on improving planning behavior in LLMs. Specifically, analyzing the latent representations of the different layers before and after ICL, and measuring the correlation between changes in representations and improvements in planning behavior on Markov decision processes.
In sum, we show that ICL improves LLM behavior by better aligning its embedding representations and attention weights with task-relevant information. In future work, we intend to apply the method to better understand how LLMs work, and implement the methods offered here as a white-box augmentation of LLMs. |
2309.14211 | QuadricsNet: Learning Concise Representation for Geometric Primitives in
Point Clouds | This paper presents a novel framework to learn a concise geometric primitive
representation for 3D point clouds. Different from representing each type of
primitive individually, we focus on the challenging problem of how to achieve a
concise and uniform representation robustly. We employ quadrics to represent
diverse primitives with only 10 parameters and propose the first end-to-end
learning-based framework, namely QuadricsNet, to parse quadrics in point
clouds. The relationships between quadrics mathematical formulation and
geometric attributes, including the type, scale and pose, are insightfully
integrated for effective supervision of QuadricsNet. Besides, a novel
pattern-comprehensive dataset with quadrics segments and objects is collected
for training and evaluation. Experiments demonstrate the effectiveness of our
concise representation and the robustness of QuadricsNet. Our code is available
at \url{https://github.com/MichaelWu99-lab/QuadricsNet} | Ji Wu, Huai Yu, Wen Yang, Gui-Song Xia | 2023-09-25T15:18:08Z | http://arxiv.org/abs/2309.14211v1 | # QuadricsNet: Learning Concise Representation for Geometric Primitives in Point Clouds
###### Abstract
This paper presents a novel framework to learn a concise geometric primitive representation for 3D point clouds. Different from representing each type of primitive individually, we focus on the challenging problem of how to achieve a concise and uniform representation robustly. We employ _quadrics_ to represent diverse primitives with only 10 parameters and propose the first end-to-end learning-based framework, namely _QuadricsNet_, to parse _quadrics_ in point clouds. The relationships between _quadrics_ mathematical formulation and geometric attributes, including the _type_, _scale_ and _pose_, are insightfully integrated for effective supervision of _QuadricsNet_. Besides, a novel pattern-comprehensive dataset with _quadrics_ segments and objects is collected for training and evaluation. Experiments demonstrate the effectiveness of our concise representation and the robustness of _QuadricsNet_. Our code is available at [https://github.com/MichaelWu99-lab/QuadricsNet](https://github.com/MichaelWu99-lab/QuadricsNet).
## I Introduction
Geometric primitive representation and parsing are fundamental problems for compact 3D object representation and scene modeling, providing crucial features for structured mapping [1, 2, 3], CAD reverse engineering [4, 5], and SLAM optimization [6, 7, 8]. Unlike non-vectorized, sparse and large point clouds, geometric primitives can model the geometric information in a compact manner with vectorized representation. Several geometric primitives have proven to be successful in various fields during the last decades, such as planes [9], ellipsoids [10], B-splines [11] and cuboids [8]. However, unified representation and modeling of these primitives remain a challenging problem. Moreover, although deep learning has achieved great success in 2D and 3D object detection and classification, end-to-end unified mathematical modeling and geometric perception are pressing issues that need to be explored when parsing geometric primitives.
Real-world objects and scenes are generally composed of multiple geometric elements. For example, a cup typically consists of a cylinder and two planes, while streets usually include planes, cylinders, and cones. As shown in Fig. 1, most methods employ one or several geometric primitives for simplicity [12, 13, 11, 14]. Efficient-RANSAC [15], integrated in CGAL, is a standard analytical algorithm. However, the configurations must be carefully tuned to accommodate different types of primitives. Learning-based techniques can obtain more robust configurations from massive training data [12]. Nevertheless, each primitive is still represented individually, which requires the design of different learning models for different primitives. Therefore, if we can unify the representation of different primitives, it will significantly simplify the design complexity and parameter settings of parsing methods, thereby improving the robustness and generalization of primitive parsing.
For the unified representation and modeling, we find that _quadrics_ in 3D space can concisely represent 17 types of geometric primitives with only 10 parameters, which has been introduced into 3D vision for abstract mapping [16] and SLAM [7]. However, existing methods often use non-learning geometric clustering and fitting strategies, which require high-fidelity point clouds and careful parameter configurations. Point cloud noises and parameter errors will lead to fragmented segmentations and inaccurate primitive fitting. Deep learning models have stronger adaptability than traditional methods in solving data noise and parameter fitting. With a large amount of training data, an intuition is that using deep models to parse these primitives may have better performance than traditional methods. However, the learning of the quadrics representation for parsing geometric primitives remains unexplored.
In this work, we propose a robust learning-based quadrics detection and fitting framework for the parsing of geometric primitives in point clouds, namely _QuadricsNet_. It consists of a quadrics detection module and a quadrics fitting module. The detection module segments and classifies point clouds into several quadrics primitives, while the fitting module generates quadrics parameters from the detected primitives and vectorizes the primitive models. Our key insight is utilizing the characteristics obtained from quadrics mathematical formulation, such as _type_, _scale_, and _pose_, to supervise the
Fig. 1: For point clouds with various geometric primitives, representing them individually poses a challenge for the algorithm design and downstream tasks. We propose _QuadricsNet_ to learn a concise representation with 10 parameters for diverse geometric primitives, thus yielding robust primitive parsing and structure mapping results.
quadrics detection and fitting modules. The key contributions can be summarized as:
* We propose _QuadricsNet_, the first end-to-end quadrics detection and fitting framework for learning the concise representation of geometric primitives in point clouds.
* We leverage the geometric attributes from quadrics mathematical decomposition, such as the _type_, _scale_, and _pose_, to supervise the detection and fitting modules.
* We build _QuadricsDataset_, which consists of quadrics segments and objects. Experiments demonstrate the effectiveness of our quadrics representation and robustness of _QuadricsNet_.
## II Related Work
In this section, we review the representations of geometric primitives and the detection-fitting techniques.
### _Geometric Primitive Representation_
The representation of geometric primitives has been widely investigated in computer vision and robotics [17]. Planes receive the earliest attention as the simplest primitive, which are represented as a normal and an offset in most methods [9, 18, 19]. Further, curved surfaces such as cylinders, spheres, and cones are represented by the parameters like apex, axis, or radius, depending on their geometric structures [12, 15, 20]. There are also more particular surfaces, _e.g._, B-spline [11] and extrusion cylinders [5], which are also modeled according to their properties. The above individual representations lack uniformity, resulting in the necessity of multiple designs for detecting diverse primitives, as well as inconveniences for downstream tasks.
A more general representation could address the above deficiencies. For example, cuboids [8, 21] are used to represent objects as bounding boxes. Ellipsoids [10] and superquadrics [22] have a better expression in terms of scale, orientation, and position. But they are limited by their basic models, resulting in only being able to express a coarse structure. Surfaces based on control points [11, 23] can express arbitrary shapes. However, the parameters of the control points are redundant and have insufficient geometric attributes. In contrast, _quadrics_ can concisely cover the 17 most common geometric primitives, and can also figure out their attributes, such as _type_, _scale_, and _pose_[7, 16]. However, the relationships between quadrics mathematical formulation and geometric attributes are not fully considered, and non-learning methods suffer from weak robustness and generalization.
### _Detection and Fitting of Primitives_
Detection and fitting of geometric primitives in point clouds is a time-honored problem [24]. Hough transforms based on voting in parameter space [18, 25, 26], region growing based on similarity matching [27], and iterative heuristic RANSAC [15, 20, 28] have achieved great success and continuous extension. Particularly, Efficient-RANSAC [15] implemented in CGAL is considered the standard. However, the non-learning methods suffer from laborious tuning when confronted with different primitives. In addition, noise and challenging cases like fragmentation and occlusions are also intractable.
Learning-based methods hold promise for these issues. SPFN [12] first detects four primitives and then fits the corresponding parameters with separate differentiable estimators. ParseNet [11] and Point2Cyl [5] extend the primitive types to B-splines and extrusion cylinders. HPNet [13] proposes hybrid features to combine multiple primitive detection cues, thereby improving detection performance. CPFN [14] assembles the detection results of global and local networks by an adaptive patch sampling network to improve the detection of fine-scale primitives. However, these networks are designed with individual estimators for specific primitives. There are only a few networks for general primitives [22, 29, 30], but they are limited by the expressiveness of the representations used and hence can only represent structures at a coarse level.
Different from prior works, we are the first to adopt a learning approach to represent point clouds with _quadrics_ concisely, and we also exploit the relationships between quadrics mathematics and geometry to improve robustness.
## III Quadrics
Before delving into the methodological details, we first define the _quadrics_, a parametric representation of surfaces that is more exhaustive in terms of primitive types and more concise with respect to the mathematical form than existing representations [11, 12, 22, 29]. We further decompose the formulation of quadrics to reveal the relationships between their mathematics and geometric attributes (_type_, _scale_ and _pose_). Finally, we explain the _degeneracy_ of quadrics.
### _Quadrics Representation_
In algebraic geometry, quadrics are a class of surfaces defined implicitly by a second-degree polynomial equation:
\[\begin{split} f(\mathbf{x},\mathbf{q})&=Ax^{2}+By^{ 2}+Cz^{2}+2Dxy+2Exz\\ &+2Fyz+2Gx+2Hy+2Iz+J=0,\end{split} \tag{1}\]
where \(\mathbf{x}\) is a point with homogeneous coordinates \([x,y,z,1]^{\mathrm{T}}\), \(\mathbf{q}=[A,B,C,D,E,F,G,H,I,J]\), and the quadratic coefficients are not all zero. The compact matrix form is \(\mathbf{x}^{\mathrm{T}}\mathbf{Q}\mathbf{x}=0\), where
\[\mathbf{Q}=\begin{bmatrix}A&D&E&G\\ D&B&F&H\\ E&F&C&I\\ G&H&I&J\end{bmatrix},\nabla\mathbf{Q}=2\begin{bmatrix}A&D&E&G\\ D&B&F&H\\ E&F&C&I\end{bmatrix}. \tag{2}\]
The gradient at \(\mathbf{x}\) is \(\nabla f(\mathbf{x},\mathbf{q})=\nabla\mathbf{Q}\mathbf{x}\), which is also the direction of the normal \(\mathbf{n}\in\mathbb{R}^{3}\) at \(\mathbf{x}\).
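To make the representation concrete, the following minimal sketch (our illustration, not the authors' released code) assembles \(\mathbf{Q}\) from the 10 parameters of Eq. 1 and evaluates \(f(\mathbf{x},\mathbf{q})\) and its gradient; all names are placeholders.

```python
import numpy as np

def quadric_matrix(q):
    """q = [A, B, C, D, E, F, G, H, I, J] -> symmetric 4x4 matrix Q of Eq. 2."""
    A, B, C, D, E, F, G, H, I, J = q
    return np.array([[A, D, E, G],
                     [D, B, F, H],
                     [E, F, C, I],
                     [G, H, I, J]])

def evaluate(Q, pts):
    """Return f(x) = x^T Q x and the gradient 2 * (top 3 rows of Q) @ x."""
    xh = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    f = np.einsum('ni,ij,nj->n', xh, Q, xh)        # x^T Q x per point
    grad = 2.0 * xh @ Q[:3].T                      # rows of nabla-Q in Eq. 2
    return f, grad

# Unit sphere x^2 + y^2 + z^2 - 1 = 0: A = B = C = 1, J = -1, rest 0.
Q = quadric_matrix([1, 1, 1, 0, 0, 0, 0, 0, 0, -1])
f, g = evaluate(Q, np.array([[1.0, 0.0, 0.0]]))    # f = 0 on the surface
```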
Despite having only 10 parameters, quadrics can uniformly represent 17 geometric primitives, including points, lines, planes, spheres, cylinders, and cones, which encompass most cases in artificial structures.
### _Mathematical Decomposition of Quadrics_
For any quadric \(\mathbf{Q}\) in space, all its axes can be aligned to the coordinate axes by applying an appropriate rotation and translation. After that, \(\mathbf{Q}\) is reduced to a diagonal matrix \(\mathbf{C}\), namely the _canonical matrix_ of the quadric. The canonical matrices of typical quadrics are summarized in Table I. It is worth noting that the form of \(\mathbf{C}\) and the values of \(\lambda\) in \(\mathbf{C}\) determine the _type_ and _scale_ of a quadric, respectively, and \(\mathbf{I}_{\mathbf{s}},\mathbf{I}_{\mathbf{R}},\mathbf{I}_{\mathbf{t}}\in\{0,1\}^{3}\) indicate the degeneracy of the _scale_, _rotation_, and _translation_.
Inversely, as illustrated in Fig. 2, a given quadric \(\mathbf{Q}\) can be regarded as being transformed from \(\mathbf{C}\) by
\[\mathbf{Q}=\mathbf{P}^{-\mathrm{T}}\mathbf{C}\mathbf{P}^{-1}=\begin{bmatrix} \mathbf{R}&\mathbf{t}\\ \mathbf{0}^{\mathrm{T}}&1\end{bmatrix}^{-\mathrm{T}}\begin{bmatrix}\mathbf{ \Lambda}&\mathbf{0}\\ \mathbf{0}^{\mathrm{T}}&c_{44}\end{bmatrix}\begin{bmatrix}\mathbf{R}&\mathbf{ t}\\ \mathbf{0}^{\mathrm{T}}&1\end{bmatrix}^{-1}. \tag{3}\]
where \(\mathbf{P}(\mathbf{R},\mathbf{t})\in SE(3)\) denotes the transform matrix from \(\mathbf{C}\) to \(\mathbf{Q}\), namely the _pose_ of a quadric. \(\mathbf{R}\in SO(3)\) and \(\mathbf{t}\in\mathbb{R}^{3}\) are the rotation and translation blocks of \(\mathbf{P}\), \(\mathbf{\Lambda}\in\mathbb{R}^{3\times 3}\) and \(c_{44}\) are the diagonal blocks of \(\mathbf{C}\). Furthermore, \(\mathbf{Q}\) can be decomposed as
\[\mathbf{Q}=\begin{bmatrix}\mathbf{R}\mathbf{\Lambda}\mathbf{R}^{\mathrm{T}}& -\mathbf{R}\mathbf{\Lambda}\mathbf{R}^{\mathrm{T}}\mathbf{t}\\ *&k\end{bmatrix}=\begin{bmatrix}\mathbf{Q}_{33}&\mathbf{1}\\ *&k\end{bmatrix}. \tag{4}\]
Here, it is obvious that the _scale_ and _pose_ of a quadric can be inferred by mathematical analysis of \(\mathbf{Q}_{33}\in\mathbb{R}^{3\times 3}\) and \(\mathbf{l}\in\mathbb{R}^{3}\), which is explained in Sect. IV-C.
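As a sanity check of Eqs. 3 and 4, the following sketch (an illustration under our own naming, not the paper's implementation) composes a quadric from a canonical matrix and a pose, then verifies the block identity \(\mathbf{l}=-\mathbf{Q}_{33}\mathbf{t}\).

```python
import numpy as np

def compose_quadric(lam, c44, R, t):
    C = np.diag(np.r_[lam, c44])                 # canonical matrix C
    P = np.eye(4); P[:3, :3] = R; P[:3, 3] = t   # pose P(R, t) in SE(3)
    Pinv = np.linalg.inv(P)
    return Pinv.T @ C @ Pinv                     # Q = P^-T C P^-1 (Eq. 3)

lam = np.array([1.0, 1.0, 0.0])                  # cylinder row of Table I
t = np.array([0.5, 0.0, 0.0])
Q = compose_quadric(lam, -1.0, np.eye(3), t)
Q33, l = Q[:3, :3], Q[:3, 3]
assert np.allclose(l, -Q33 @ t)                  # l = -R Lam R^T t (Eq. 4)
```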
### _Geometric Degeneracy of Quadrics_
Degeneracy means that certain _scale_, _rotation_, and _translation_ components of a quadric do not affect its shape, which can be judged from \(\mathbf{Q}\). If \(\mathbf{Q}\) is rank-deficient, the _scale_ degenerates along certain axes. Since \(\mathbf{Q}_{33}\) is a real symmetric matrix, it can be diagonalized as \(\mathbf{Q}_{33}=\hat{\mathbf{R}}\hat{\mathbf{\Lambda}}\hat{\mathbf{R}}^{\mathrm{T}}\), where \(\hat{\mathbf{\Lambda}}=\mathbf{\Lambda}\) is given by the eigenvalues of \(\mathbf{Q}_{33}\) as \(\mathrm{diag}\left(\lambda_{a},\lambda_{b},\lambda_{c}\right)\) and \(\hat{\mathbf{R}}\) consists of the eigenvectors. Zero eigenvalues make the _translation_ along the corresponding axes degenerate. If there are duplicates among the eigenvalues, _i.e._, the quadric is symmetric, the _rotation_ around the non-symmetric axis degenerates. Fig. 2 illustrates this more intuitively.
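A hedged sketch of this degeneracy test is given below; the tolerance is an arbitrary placeholder, and mapping the flags onto the exact \(\mathbf{I}_{\mathbf{s}},\mathbf{I}_{\mathbf{R}},\mathbf{I}_{\mathbf{t}}\) conventions of Table I is left out.

```python
import numpy as np

def degeneracy_flags(Q33, tol=1e-6):
    lam = np.linalg.eigvalsh(Q33)              # eigenvalues of Q33
    zero = np.isclose(lam, 0.0, atol=tol)      # zero eigenvalue: translation
                                               # along that axis degenerates
    dup = np.isclose(lam[:, None], lam[None, :], atol=tol).sum(1) > 1
    return lam, zero, dup   # dup marks axes with duplicated eigenvalues
```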
## IV Methodology
We presume that the input is a point cloud \(\mathcal{X}\in\mathbb{R}^{N\times 3}\) with 3D positions and optional normals. We seek to represent it with a set of quadrics \(\{\hat{\mathbf{Q}}_{1},\hat{\mathbf{Q}}_{2},...,\hat{\mathbf{Q}}_{\hat{K}}\}\) that closely approximates its underlying surfaces. For this purpose, an end-to-end framework, _QuadricsNet_, is proposed, as illustrated in Fig. 3. We first introduce the two modules of _QuadricsNet_, the _quadrics detection module_ and the _quadrics fitting module_, and then specify the losses customized for quadrics.
### _Quadrics Detection Module_
The purpose of this module is to decompose the input \(\mathcal{X}\) into several quadrics segments \(\{\hat{\mathbf{X}}_{1},\hat{\mathbf{X}}_{2},...,\hat{\mathbf{X}}_{\hat{K}}\}\) while simultaneously identifying their quadrics types \(\{\hat{l}_{1},\hat{l}_{2},...,\hat{l}_{\hat{K}}\}\). To this end, we implement it in three steps: _embedding_, _clustering_, and _classification_.
**Embedding.** We first employ an embedding network to learn point-wise features \(\mathbf{Z}\in\mathbb{R}^{N\times 128}\) that can distinguish different quadrics segments in \(\mathcal{X}\). Its backbone is derived from EdgeConv [31], which learns local features with a single layer and global features when multiple layers are stacked.
**Clustering.** Based on the features \(\mathbf{Z}\) with implicit distinguishability, we proceed to modify them by the mean shift [32] in a differentiable way to explicitly cluster the points according to the quadrics segments:
\[\mathbf{Z}_{t+1}=\mathbf{Z}_{t}+\eta(\mathbf{Z}_{t}\mathbf{K}\mathbf{D}^{-1}- \mathbf{Z}_{t}), \tag{5}\]
where \(\mathbf{K}\) is a Gaussian kernel, \(\mathbf{D}=\sum\mathbf{K}\), and the step size \(\eta=1\). After the convergence of Eq. 5, the cluster centers are determined by non-maximum suppression. Eventually, each point is assigned to a segment according to its nearest center. We employ a membership matrix \(\hat{\mathbf{W}}\in\{0,1\}^{N\times\hat{K}}\) to indicate the affiliations between points \(\mathcal{X}\) and clustered segments \(\{\hat{\mathbf{X}}_{1},\hat{\mathbf{X}}_{2},...,\hat{\mathbf{X}}_{\hat{K}}\}\), where we ensure that \(\hat{K}\geq K\) even if \(K\) varies for different objects.
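For reference, a minimal numpy version of the mean-shift update is sketched below in the standard row-normalized form of Eq. 5; the bandwidth, step size, and iteration count are assumptions, and the non-maximum suppression and point-to-center assignment are omitted.

```python
import numpy as np

def mean_shift(Z, bandwidth=1.0, eta=1.0, iters=10):
    """Z: (N, 128) point-wise features; returns the shifted features."""
    for _ in range(iters):
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise distances
        K = np.exp(-d2 / (2.0 * bandwidth ** 2))             # Gaussian kernel
        shifted = K @ Z / K.sum(axis=1, keepdims=True)       # kernel average
        Z = Z + eta * (shifted - Z)                          # update of Eq. 5
    return Z
```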
**Classification.** Simultaneously with clustering, we combine an MLP and a softmax to construct primitive classifiers that predict the point-wise quadrics type (_e.g._, plane, sphere, cylinder, cone) based on \(\mathbf{Z}\). We use another membership matrix \(\hat{\mathbf{L}}\in[0,1]^{N\times L}\) to indicate, for each point in \(\mathcal{X}\), the predicted probability of each quadrics type. Eventually, the types \(\{\hat{l}_{1},\hat{l}_{2},...,\hat{l}_{\hat{K}}\}\) of the clustered segments are determined by majority voting over the types of all points in each segment.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Type & \(\mathrm{Diag}(\mathbf{C})\) & \(\mathbf{I}_{\mathbf{s}}\) & \(\mathbf{I}_{\mathbf{R}}\) & \(\mathbf{I}_{\mathbf{t}}\) \\ \hline Line & \([\lambda_{a},\lambda_{b},0,0]\) & \([0,0,0]\) & \([0,0,1]\) & \([1,1,0]\) \\ Plane & \([\lambda_{a},0,0,0]\) & \([0,0,0]\) & \([1,0,0]\) & \([1,0,0]\) \\ Sphere & \([\lambda_{a},\lambda_{b},\lambda_{c},-1]\) & \([1,1,1]\) & \([0,0,0]\) & \([1,1,1]\) \\ Cylinder & \([\lambda_{a},\lambda_{b},0,-1]\) & \([1,1,0]\) & \([0,0,1]\) & \([1,1,0]\) \\ Cone & \([\lambda_{a},\lambda_{b},-\lambda_{c},0]\) & \([1,1,0]\) & \([0,0,1]\) & \([1,1,1]\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Characteristics of typical quadrics
Fig. 2: Illustration of quadrics derivation and degeneracy. Ellipsoid \(\mathbf{Q}_{1}\) and cylinder \(\mathbf{Q}_{2}\) are derived from the canonical quadric \(\mathbf{C}_{1}\) and \(\mathbf{C}_{2}\) through transformations \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\). For \(\mathbf{Q}_{1}\), the _rotation_ around axis \(a\) is degenerate because the ellipsoid is symmetric on the axes \(b\) and \(c\) (\(\mathbf{I}_{\mathbf{R}}=[1,0,0]\)). For \(\mathbf{Q}_{2}\), the _translation_ along axis \(c\) and the _rotation_ around it are degenerate because the cylinder is open on axis \(c\) (\(\mathbf{I}_{\mathbf{t}}=[1,1,0]\)) and symmetric on the axes \(a\) and \(b\) (\(\mathbf{I}_{\mathbf{R}}=[0,0,1]\)).
### _Quadrics Fitting Module_
This module fits quadrics parameters \(\{\hat{\mathbf{Q}}_{1},\hat{\mathbf{Q}}_{2},...,\hat{\mathbf{Q}}_{\hat{K}}\}\) for all detected quadrics segments \(\{\hat{\mathbf{X}}_{1},\hat{\mathbf{X}}_{2},...,\hat{\mathbf{X}}_{\hat{K}}\}\).
Prior to fitting, the points \(\hat{\mathbf{X}}_{k}\) of each segment are derived by filtering \(\mathcal{X}\) according to \(\hat{\mathbf{W}}\):
\[\hat{\mathbf{X}}_{k}=\hat{\mathbf{W}}_{:,k}\odot\mathcal{X}, \tag{6}\]
where \(\odot\) denotes the element-wise multiplication. We construct a quadrics fitting network to robustly estimate the quadrics parameters \(\hat{\mathbf{Q}}_{k}\) for the underlying surface of the segment \(\hat{\mathbf{X}}_{k}\). The backbone of the fitting network is similar to that of the embedding network. Furthermore, we leverage two MLPs to estimate a _canonical matrix_ \(\hat{\mathbf{C}}_{k}\) and an _inverse pose matrix_ \(\hat{\mathbf{P}}_{k}^{-1}(\hat{\mathbf{R}}_{k},\hat{\mathbf{t}}_{k})\), which correspond to the _scale_ and _pose_, respectively. According to Eq. 3, the quadric is finally determined as
\[\hat{\mathbf{Q}}_{k}=\hat{\mathbf{P}}_{k}^{-\mathrm{T}}\hat{\mathbf{C}}_{k} \hat{\mathbf{P}}_{k}^{-1}. \tag{7}\]
It is worth noting that, depending on the symmetry and sparsity of \(\hat{\mathbf{C}}_{k}\) and \(\hat{\mathbf{P}}_{k}^{-1}\), the numbers of parameters in the outputs of the fitting network are no more than 10 and 12, respectively. According to Table I, the form of \(\hat{\mathbf{C}}_{k}\) determines the _type_ of a quadric. To guarantee that the fitted \(\hat{\mathbf{Q}}_{k}\) is consistent with the type \(\hat{l}_{k}\), we leverage \(\hat{l}_{k}\) as a prior to constrain the form of the output \(\hat{\mathbf{C}}_{k}\), _e.g._, the form is constrained to be \(\mathrm{diag}([\hat{\lambda}_{a},\hat{\lambda}_{b},0,-1])\) if a cylinder is to be fitted. In such a concise representation, even a small parameter error can lead to a wrongly fitted type. Our network avoids this issue and significantly improves the fitting accuracy.
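The type prior can be sketched as below; the template dictionary follows Table I, but the parameterization and names are our assumptions rather than the paper's MLP heads.

```python
import numpy as np

CANONICAL_FORM = {                                   # diag(C) templates, Table I
    'line':     lambda la, lb, lc: [la, lb, 0.0, 0.0],
    'plane':    lambda la, lb, lc: [la, 0.0, 0.0, 0.0],
    'sphere':   lambda la, lb, lc: [la, lb, lc, -1.0],
    'cylinder': lambda la, lb, lc: [la, lb, 0.0, -1.0],
    'cone':     lambda la, lb, lc: [la, lb, -lc, 0.0],
}

def fit_head(qtype, lams, R, t):
    """Compose Q-hat of Eq. 7 from a type-constrained C-hat and a pose."""
    C = np.diag(CANONICAL_FORM[qtype](*lams))
    Pinv = np.eye(4)
    Pinv[:3, :3] = R.T
    Pinv[:3, 3] = -R.T @ t                       # P^-1 for P = [[R, t], [0, 1]]
    return Pinv.T @ C @ Pinv
```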
### _Losses_
Six losses customized for quadrics are defined from mathematic and geometric perspectives to train these modules.
**Losses for Detection.** We adopt the GT segment membership \(\mathbf{W}\) and type membership \(\mathbf{L}\) to supervise this module.
We leverage the _triplet loss_ [33] to obtain point-wise features \(\mathbf{Z}\) with segment distinguishability:
\[\mathcal{L}_{\mathrm{emb}}=\frac{1}{M}\sum_{i=1}^{M}\max\left(\left\|\mathbf{ z}_{i}^{\mathrm{A}}-\mathbf{z}_{i}^{\mathrm{P}}\right\|^{2}-\left\|\mathbf{z}_{i}^ {\mathrm{A}}-\mathbf{z}_{i}^{\mathrm{N}}\right\|^{2}+\alpha,0\right), \tag{8}\]
where \(\mathbf{z}\in\mathbb{R}^{128}\) is the point-wise feature, \(\alpha\) is the margin, \(M\) is the total number of triplet sets \(\{\mathbf{z}^{\mathrm{A}},\mathbf{z}^{\mathrm{P}},\mathbf{z}^{\mathrm{N}}\}\) sampled from features of different segments according to \(\mathbf{W}\). The _cross entropy loss_\(H\) is employed for quadrics classification:
\[\mathcal{L}_{\mathrm{type}}=\frac{1}{N}\sum_{i=1}^{N}H\left(\hat{\mathbf{L}}_{ i,:},\mathbf{L}_{i,:}\right). \tag{9}\]
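Minimal numpy versions of Eqs. 8 and 9 are sketched below for reference; the margin value and the triplet sampling from \(\mathbf{W}\) are assumptions.

```python
import numpy as np

def triplet_loss(zA, zP, zN, alpha=0.5):          # Eq. 8; (M, 128) arrays
    d_ap = ((zA - zP) ** 2).sum(-1)               # anchor-positive distances
    d_an = ((zA - zN) ** 2).sum(-1)               # anchor-negative distances
    return np.maximum(d_ap - d_an + alpha, 0.0).mean()

def type_loss(L_hat, L):                          # Eq. 9; L is (N, L) one-hot
    return -(L * np.log(L_hat + 1e-12)).sum(-1).mean()
```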
**Losses for Fitting.** To supervise the fitting module, we employ the GT \(\mathbf{Q}\) and normals \(\mathbf{N}\).
To match the GT signals to the fitted signals of each clustered segment, we first compute the _Relaxed Intersection over Union_ between \(\hat{\mathbf{W}}_{:,k}\) and \(\mathbf{W}_{:,k}\); the best one-to-one correspondence between GT segments \(\mathbf{X}_{k}\) and clustered segments \(\hat{\mathbf{X}}_{k}\) is then found by the _Hungarian algorithm_ [34]. According to Eq. 1, we define the _primal loss_ for \(\hat{\mathbf{Q}}\), which roughly mirrors the quality of fitting:
\[\mathcal{L}_{\mathrm{primal}}=\frac{1}{\hat{K}}\sum_{k=1}^{\hat{K}}\frac{1}{\left|\hat{\mathbf{X}}_{k}\right|}\sum_{\hat{\mathbf{x}}\in\hat{\mathbf{X}}_{k}}\left\|\hat{\mathbf{x}}^{\mathrm{T}}\hat{\mathbf{Q}}_{k}\hat{\mathbf{x}}\right\|^{2}. \tag{10}\]
Further, based on the correlation between the normals and parameters of quadrics (Eq. 2), we define the _normal loss_:
\[\mathcal{L}_{\mathrm{normal}}=\frac{1}{\hat{K}}\sum_{k=1}^{\hat{K}}\frac{1}{\left|\hat{\mathbf{X}}_{k}\right|}\sum_{\hat{\mathbf{x}}\in\hat{\mathbf{X}}_{k}}\left\|\nabla\hat{\mathbf{Q}}_{k}\hat{\mathbf{x}}\otimes\mathbf{n}\right\|^{2}, \tag{11}\]
where \(\otimes\) denotes the column-wise cross product and \(\mathbf{n}\) is the GT normal at \(\hat{\mathbf{x}}\).
Fig. 3: Overview of QuadricsNet. The detection module segments point clouds into several quadrics segments and identifies the quadrics types, then the fitting module fits the quadrics parameters of each segment, and finally the structure map is reconstructed using the fitted quadrics.
Additionally, we explicitly define the _regression loss_:
\[\mathcal{L}_{\text{reg}}=\frac{1}{\hat{K}}\sum_{k=1}^{\hat{K}}\left\|\hat{\mathbf{ Q}}_{k}-\mathbf{Q}_{k}\right\|^{2}. \tag{12}\]
The _scale_ and _pose_ of fitted quadrics can be learned directly via Eq. 7, but those of the GT need to be inferred by mathematical analysis of its \(\mathbf{Q}\). Prior to inference, \(\mathbf{Q}\) has to be normalized to eliminate the proportion ambiguity in Eq. 1:
\[\mathbf{Q}=\left\{\begin{array}{l}\left|\frac{\prod\lambda_{i}^{\mathbf{Q} _{33}}}{\prod\lambda_{i}^{\mathbf{Q}}}\right|\mathbf{Q},\quad c_{44}\neq 0\\ \frac{1}{\|\mathbf{Q}\|}\mathbf{Q},\quad c_{44}=0\end{array}\right., \tag{13}\]
where \(\lambda^{\mathbf{Q}_{33}}\) and \(\lambda^{\mathbf{Q}}\) are the non-zero eigenvalues of \(\mathbf{Q}_{33}\) and \(\mathbf{Q}\). According to Sect. III-C, \(\mathbf{Q}_{33}=\tilde{\mathbf{R}}\mathbf{\Lambda}\tilde{\mathbf{R}}^{\text {T}}\), where \(\mathbf{\Lambda}=\operatorname{diag}\left(\lambda_{a},\lambda_{b},\lambda_{c}\right)\). Without loss of generality, we assume that \(\lambda_{a}>\lambda_{b}>\lambda_{c}\), and the _scale_\(\mathbf{s}\in\mathbb{R}^{3}\), _rotation_\(\mathbf{R}\in SO(3)\) and _translation_\(\mathbf{t}\in\mathbb{R}^{3}\) are
\[\left\{\begin{array}{lcl}\left[s_{a},s_{b},s_{c}\right]^{\text{T}}&=&\operatorname {diag}(\mathbf{I}_{\mathbf{s}})\sqrt{\left|\left[\frac{1}{\lambda_{a}},\frac{ 1}{\lambda_{b}},\frac{1}{\lambda_{c}}\right]\right|},\\ \left[\mathbf{r}_{a},\mathbf{r}_{b},\mathbf{r}_{c}\right]^{\text{T}}&=&\pm( \operatorname{diag}(\mathbf{I}_{\mathbf{R}})\tilde{\mathbf{R}}^{\text{T}})^{ \text{T}},\\ \left[t_{a},t_{b},t_{c}\right]^{\text{T}}&=&\operatorname{diag}(\mathbf{I}_{ \mathbf{t}})\tilde{\mathbf{t}},\end{array}\right. \tag{14}\]
where the direction of \(\mathbf{r}\) can be either identical or opposite to the corresponding column of \(\tilde{\mathbf{R}}\), and \(\tilde{\mathbf{t}}\in\left\{\tilde{\mathbf{t}}\mid\mathbf{Q}_{33}\tilde{\mathbf{t}}+\mathbf{l}=\mathbf{0}\right\}\) (cf. Eq. 4). Based on these theoretical guides, we define the _geometric loss_:
\[\mathcal{L}_{\text{geo}}=\frac{1}{\hat{K}}\sum_{k=1}^{\hat{K}}\left[||\hat{ \mathbf{s}}_{k}-\mathbf{s}_{k}||^{2}+||\hat{\mathbf{R}}_{k}\otimes\mathbf{R} _{k}||^{2}+||\hat{\mathbf{t}}_{k}-\mathbf{t}_{k}||^{2}\right], \tag{15}\]
where we approximate them for gradient stabilization:
\[\left\{\begin{array}{lcl}\hat{\mathbf{s}}-\mathbf{s}&\approx&\hat{\mathbf{C}}-\mathbf{\Lambda},\\ \hat{\mathbf{R}}\otimes\mathbf{R}&=&\sum_{i=1}^{3}\hat{\mathbf{R}}_{\cdot,i}\times\mathbf{R}_{\cdot,i},\\ \hat{\mathbf{t}}-\mathbf{t}&\approx&\mathbf{\Lambda}\mathbf{R}^{\mathrm{T}}\hat{\mathbf{t}}+\mathbf{R}^{\mathrm{T}}\mathbf{l}.\end{array}\right. \tag{16}\]
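Putting Eqs. 13-14 together, a hedged sketch of the GT scale/pose recovery reads as follows (our illustration: degenerate axes are handled by passing the masks of Table I explicitly, and the ± sign ambiguity of the eigenvector columns is absorbed by the geometric loss).

```python
import numpy as np

def decompose(Q, I_s, I_t):
    """Recover (s, R~, t) of Eq. 14 from a normalized GT quadric Q."""
    Q33, l = Q[:3, :3], Q[:3, 3]
    lam, R_tilde = np.linalg.eigh(Q33)                 # Q33 = R~ Lam R~^T
    inv = np.zeros(3)
    nz = ~np.isclose(lam, 0.0)
    inv[nz] = 1.0 / lam[nz]
    s = I_s * np.sqrt(np.abs(inv))                     # scale, Eq. 14
    t = I_t * np.linalg.lstsq(Q33, -l, rcond=None)[0]  # solve Q33 t~ + l = 0
    return s, R_tilde, t                               # columns of R~ are +-r
```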
## V Experiments
In this section, we detail the experimental dataset, evaluation metrics, and result comparisons to the state-of-the-art.
### _Quadrics Dataset and Training Strategy_
Since there is no publicly available point cloud dataset with labeled quadrics parameters \(\mathbf{Q}\), we build a _Quadrics Dataset_ for _QuadricsNet_ training and evaluation using simulated quadric segments and CAD objects selected from ABC dataset [35]. To improve the network robustness to incomplete point clouds, we randomly trim them to simulate fragmentation and incompletion. Limited by the ABC dataset, we mainly consider four types of quadrics, that is, planes, spheres, cylinders, and cones. We compute \(\mathbf{Q}\) for all segments in this dataset and also randomly add noise along the normal direction in a uniform range \([-0.01,0.01]\). It contains two subsets: a) _Segment Dataset_ is mainly designed for quadrics fitting network, which has 20k quadrics segments for training and 3k segments for testing; b) _Object Dataset_ is built for both quadrics detection and fitting networks, which has 20k CAD objects for training and 3k objects for testing.
We adopt a two-step pre-training and fine-tuning strategy to train the _QuadricsNet_. We first pre-train the detection network using \(\mathcal{L}_{\text{emb}}+\mathcal{L}_{\text{type}}\) loss on the _Object Dataset_. At the same time, we pre-train the fitting network with \(\mathcal{L}_{\text{primal}}+\mathcal{L}_{\text{normal}}+\mathcal{L}_{\text{ reg}}+\mathcal{L}_{\text{geo}}\) loss on the _Segment Dataset_. Then, the whole _QuadricsNet_ is fine-tuned with the six losses on the _Object Dataset_ in an end-to-end manner.
### _Evaluation Metrics_
To quantitatively evaluate the performance of _QuadricsNet_, we use Seg-IoU and Type-IoU metrics to measure the performance of quadrics detection, while Residual and P-coverage metrics measure the accuracy of quadrics fitting.
* **Seg-IoU** (S-IoU): \(\frac{1}{K}\sum_{k=1}^{K}IoU(\hat{\mathbf{W}}_{\cdot,k},\mathbf{W}_{\cdot,k})\) measures the accuracy of quadrics segment clustering.
* **Type-IoU** (T-IoU): \(\frac{1}{K}\sum_{k=1}^{K}\mathbb{I}(\hat{l}_{k}=l_{k})\) measures the accuracy of quadrics classification.
* **Residual** (Res): \(\frac{1}{K}\sum_{k=1}^{K}\frac{1}{|\mathbf{X}_{k}|}\sum_{\mathbf{x}\in \mathbf{X}_{k}}D(\hat{\mathbf{Q}}_{k},\mathbf{x})\) is the average Euclidean distance of raw points \(\mathbf{x}\) to the predicted quadric surfaces.
* **P-coverage** (P-cov): \(\frac{1}{|\mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\mathbb{I}[\min_{k=1}^{K}D(\hat{\mathbf{Q}}_{k},\mathbf{x})<\epsilon]\) measures the percentage of input points covered by the predicted quadric surfaces; a sketch of the distance \(D\) follows this list.
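The point-to-quadric distance \(D(\hat{\mathbf{Q}}_{k},\mathbf{x})\) is not spelled out in closed form here; a common first-order proxy, which we assume for illustration, is the Sampson-style distance \(|f(\mathbf{x})|/\|\nabla f(\mathbf{x})\|\):

```python
import numpy as np

def point_to_quadric(Q, pts):
    """First-order proxy for D(Q, x): |f(x)| / ||grad f(x)|| per point."""
    xh = np.hstack([pts, np.ones((len(pts), 1))])
    f = np.einsum('ni,ij,nj->n', xh, Q, xh)
    grad = 2.0 * xh @ Q[:3].T
    return np.abs(f) / np.maximum(np.linalg.norm(grad, axis=1), 1e-12)
```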
### _Result Comparisons_
To evaluate the performance, we compare _QuadricsNet_ with five state-of-the-art geometric primitive parsing methods, including the traditional nearest neighbor (NN) [36] and Efficient RANSAC [15], and the learning-based SPFN [12], ParseNet [11] and HPNet [13]. Unlike our unified quadrics representation for different geometric primitives, these competitors employ individual representations. All methods are evaluated on the _Object Dataset_ test set for a fair comparison.
**Quantitative Results.** As reported in Table II, we test these methods with two input cases: points (p) and points with normals (p+n). Under the same input information, _QuadricsNet_ generally outperforms other methods for all metrics. Especially for the Residual and P-coverage metrics, our unified quadrics representation yields better performance than other non-unified methods with different representations for each primitive. These results demonstrate the effectiveness of our quadrics-based framework on geometric primitive detection and fitting.
**Qualitative Results.** Fig. 4 qualitatively shows the structure mapping of point clouds using different primitive parsing methods. The mapping results using QuadricsNet are more reasonable with clean boundaries and better structure integrity because QuadricsNet can detect and fit primitives more precisely. Furthermore, to evaluate the generalizability of _QuadricsNet_ in real-world data, we extend the experiment to the large-scale indoor S3DIS dataset [37]. As shown in Fig. 5, our method effectively represents the real scene with quadrics on the object-level scale and yields a robust structure mapping result.
### _Ablation Study_
We mainly discuss the effectiveness of the designed loss functions for the quadrics fitting module. By gradually adding the four losses, we report their impacts in Table III. On the basis of the \(\mathcal{L}_{\mathrm{primal}}\) loss, adding the other supervision terms generally improves the fitting accuracy. In particular, adding the \(\mathcal{L}_{\mathrm{reg}}\) loss yields a noticeable performance improvement by a large margin. Eventually, adding the \(\mathcal{L}_{\mathrm{geo}}\) loss produces the best result, which demonstrates the effectiveness of combining the mathematical factors of quadrics with their geometric attributes for geometric primitive parsing.
## VI Conclusions and Future Work
In this paper, we propose an end-to-end _QuadricsNet_ to learn a concise representation of geometric primitives in point clouds. The _quadrics_ representation successfully unifies different geometric primitives, while the geometric attributes from the quadrics mathematical formulation effectively supervise the quadrics detection and fitting networks. Experiments on primitive parsing on the collected dataset and structure mapping on real-world scenes demonstrate that our quadrics representation is effective and the _QuadricsNet_ framework is robust. In the future, we will explore the fusion of geometry and semantics for primitive parsing and structure mapping.
## VII Acknowledgement
This work was supported by the NSFC grants under contracts Nos. 62301370 and 62325111, and Wuhan University-Huawei Geoinformatics Innovation Laboratory.
\begin{table}
\begin{tabular}{l|c|c|c|c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Input} & \multirow{2}{*}{S-IoU(\%) \(\uparrow\)} & \multirow{2}{*}{T-IoU(\%) \(\uparrow\)} & \multirow{2}{*}{Res \(\downarrow\)} & \multicolumn{2}{c}{P-cov (\(\%\)) \(\uparrow\)} \\ \cline{6-7} & & & & & \(\epsilon=0.01\) & \(\epsilon=0.02\) \\ \hline NN [36] & p & 54.10 & 61.10 & - & - & - \\ \hline E-RANSAC [15] & p+n & 67.21 & - & 0.022 & 83.40 & 87.73 \\ \hline SPFN [12] & p & 61.42 & 74.56 & 0.023 & 82.55 & 90.67 \\ & p+n & 73.19 & 85.91 & 0.019 & 86.79 & 92.14 \\ \hline ParseNet [11] & p & 74.12 & 79.90 & 0.018 & 83.20 & 92.32 \\ & p+n & 85.70 & 90.21 & 0.013 & 89.76 & 93.98 \\ \hline HPNet [13] & p & 80.32 & 87.19 & 0.014 & 86.07 & 94.12 \\ & p+n & 88.17 & 92.25 & **0.009** & 93.12 & 96.27 \\ \hline Ours & p & **84.12** & **88.00** & **0.013** & **88.64** & **95.88** \\ & p+n & **92.16** & **95.87** & **0.009** & **93.76** & **97.12** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Quantitative comparisons
Fig. 4: Qualitative comparisons. Structure mapping of the raw point clouds using different primitive parsing methods.
Fig. 5: Structure mapping of a scene in S3DIS using quadrics.
\begin{table}
\begin{tabular}{c c c c|c|c c} \hline \hline \multicolumn{4}{c|}{Loss settings} & \multirow{2}{*}{Res \(\downarrow\)} & \multicolumn{2}{c}{P-cov (\(\%\)) \(\uparrow\)} \\ \cline{1-4} \cline{6-7} \(\mathcal{L}_{\mathrm{primal}}\) & \(\mathcal{L}_{\mathrm{normal}}\) & \(\mathcal{L}_{\mathrm{reg}}\) & \(\mathcal{L}_{\mathrm{geo}}\) & & \(\epsilon=0.01\) & \(\epsilon=0.02\) \\ \hline ✓ & & & & 0.032 & 68.10 & 71.87 \\ ✓ & ✓ & & & 0.029 & 73.03 & 78.62 \\ ✓ & ✓ & ✓ & & 0.012 & 89.45 & 90.34 \\ ✓ & ✓ & ✓ & ✓ & **0.008** & **94.12** & **96.10** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Impacts of loss design on quadrics fitting |
2302.14605 | Quantum criticality of bandwidth-controlled Mott transition | Metallic states near the Mott insulator show a variety of quantum phases
including various magnetic, charge ordered states and high-temperature
superconductivity in various transition metal oxides and organic solids. The
emergence of a variety of phases and their competitions are likely intimately
associated with quantum transitions between the electron-correlation driven
Mott insulator and metals characterized by its criticality, and is related to
many central questions of condensed matter. The quantum criticality is,
however, not well understood when the transition is controlled by the bandwidth
through physical parameters such as pressure. Here, we quantitatively estimate
the universality class of the transition characterized by a comprehensive set
of critical exponents by using a variational Monte Carlo method implemented as
an open-source innovated quantum many-body solver, with the help of established
scaling laws at a typical bandwidth-controlled Mott transition. The criticality
indicates a weaker charge and density instability in contrast to the
filling-controlled transition realized by carrier doping, implying a weaker
instability to superconductivity as well. The present comprehensive
clarification opens up a number of routes for quantitative experimental studies
for complete understanding of elusive quantum Mott transition and nearby
strange metal that cultivate future design of functionality. | Kensaku Takai, Youhei Yamaji, Fakher F. Assaad, Masatoshi Imada | 2023-02-28T14:48:33Z | http://arxiv.org/abs/2302.14605v2 | # Quantum criticality of bandwidth-controlled Mott transition
###### Abstract
Metallic states near the Mott insulator show a variety of quantum phases including various magnetic, charge ordered states and high-temperature superconductivity in various transition metal oxides and organic solids. The emergence of a variety of phases and their competitions are likely intimately associated with quantum transitions between the electron-correlation driven Mott insulator and metals characterized by its criticality, and is related to many central questions of condensed matter. The quantum criticality is, however, not well understood when the transition is controlled by the bandwidth through physical parameters such as pressure. Here, we quantitatively estimate the universality class of the transition characterized by a comprehensive set of critical exponents by using a variational Monte Carlo method implemented as an open-source innovated quantum many-body solver, with the help of established scaling laws at a typical bandwidth-controlled Mott transition. The criticality indicates a weaker charge and density instability in contrast to the filling-controlled transition realized by carrier doping, implying a weaker instability to superconductivity as well. The present comprehensive clarification opens up a number of routes for quantitative experimental studies for complete understanding of elusive quantum Mott transition and nearby strange metal that cultivate future design of functionality.
## I Introduction
The Mott transition is a metal-insulator transition driven by the Coulomb repulsion of electrons in crystalline solids. It is driven either by controlling the ratio of the interaction strength to the bandwidth (bandwidth-controlled transition) or by carrier doping to the Mott insulator (filling-controlled transition) [1]. The two types of control are widely realized in organic solids [2; 3] and transition metal compounds [1].
The filling-controlled transition has been relatively well studied, motivated by the high-temperature superconductivity in the cuprates. The theoretically estimated criticality of the Mott transition was suggested to cause the charge instability that gives birth to the severe competition among high-temperature superconductivity, the strange metal, antiferromagnetism, nematicity and charge inhomogeneity including charge order in the cuprates [4; 5; 6]. It is also understood from the tendency towards a first-order transition that generates a miscibility gap in the carrier density near the Mott insulator. When the first-order transition can be suppressed, criticality emerges around the marginal quantum critical point (MQCP) [7]. The MQCP critical exponents have not been well explored in experiments, partly because various competing phases including superconductivity, as well as the effect of disorder, preempt or mask the criticality. However, the emergence of exotic phases including the superconductivity in the cuprates may be governed by the underlying MQCP, and therefore the understanding of the MQCP is of crucial importance to reveal the mechanism of the competing phases.
On the other hand, bandwidth-controlled transitions have also been widely observed. They normally appear as first-order transitions, which terminate at a critical endpoint at nonzero temperature. The universality class of this endpoint was proposed to belong to that of the classical Ising model [8; 9]. When the critical temperature is reduced to zero, as at the MQCP, the universality class should be distinct [7; 10]. One of the central questions is whether the universality class can lead to strong quantum fluctuations and quantum entanglement, which trigger the emergence of novel functionality including high-temperature superconductivity, similarly to the incentive to gain insights for the filling-controlled case [6]. However, the bandwidth-controlled Mott transition at the MQCP and the related charge instability are not well explored even theoretically.
We summarize the basic structure around the MQCP of the metal-insulator transition found in earlier work, which is illustrated in Fig. 1 [10]. The MQCP appears as the endpoint of the finite-temperature critical line, namely, the endpoint of the first-order transition, while it also appears as the endpoint of the quantum critical line (QCL) running at temperature \(T=0\). The reason why the critical line continues beyond the MQCP is that the metal and the insulator must always have a clear phase-transition boundary at \(T=0\), unlike the case of the quantum Ising model, such as that with a transverse magnetic field, where the transition disappears beyond the conventional quantum critical point. Our focus in this paper is the universality class of the bandwidth-controlled MQCP and not the criticality of the QCL, because the MQCP is expected to show stronger quantum fluctuations and entanglement, with enhanced charge fluctuations that may trigger exotic phases [10].
In the literature, the motivation of the study on the quantum critical point (QCP) in general has come from the expectations for novel physics, where finite critical temperature is lowered to zero and associated diverging quantum fluctuations emerge, which may induce exotic phases. In the present case, this corresponds to the MQCP appearing as a single point at \(T=0\), although the distinction between the MQCP and QCL is not well appreciated in the literature. The reason may be due to the fact that the QCL does not exist in the conventional critical point (QCP) arising from symmetry-breaking transitions. Along the quantum critical line (QCL), the criticality should be different from the MQCP in general.
Significance of the QCP including the MQCP is that the first-order transition starts from the QCP, which opens the possibility of coupling to divergent zero-wavenumber modes. In the case of the metal-insulator transition, this appears as the divergent charge fluctuations. On the other hand, the QCL exists even in the noninteracting case as in the simple band-insulator metal transition. For instance, in Ref. [10], the criticality of the MQCP was clarified for the filling-controlled transition in detail and the critical exponents are identified as \(\alpha=-1,\beta=1,\gamma=1,\delta=2,\nu=1/2\) and \(\eta=0\), where \(\gamma=1\), and \(\delta=2\) lead to the divergence of the charge compressibility \(\kappa\propto 1/x\), where \(x\) is the doping concentration. The divergent compressibility at the MQCP was supported in a 2D Hubbard model study [4]. In contrast, \(\alpha=0,\beta=1,\gamma=0,\delta=1,\nu=1/2\) and \(\eta=0\) were reported for the QCL. Here, the exponents \(\alpha=0,\gamma=0\), and \(\delta=1\) imply that the fluctuations are not diverging. This is because of the absence of the opening of the first-order transition and indeed it is equivalent to the band-insulator-to-metal transition in usual noninteracting systems. The divergent charge fluctuations for the filling-controlled MQCP on the verge of the phase separation or the charge inhomogeneity opens the possibility of emergent exotic phases such as unconventional superconductivity associated with this divergence and fluctuations. In the dynamical mean field theory (DMFT) calculation, the metal-insulator critical point appears at a finite temperature, at which it was shown that the charge compressibility diverges [11]. However, in the DMFT, one cannot lower the critical temperature to zero to reach the MQCP, while in 2D one can see such an evolution to the MQCP. Therefore, it is natural to pose a question how the interplay between the diverging charge fluctuation and quantum fluctuations takes place at the MQCP for the bandwidth-controlled case in 2D. In other words, the nontriviality of the MQCP lies in the fact that the first-order metal-insulator transition and the resultant MQCP does not exist in the non-interacting case and it is purely the interaction effect. By considering this background and the significance with a direct connection to the quantum critical phenomena in general, we study the MQCP rather than the QCL.
In this article, we study the mechanism and criticality of the bandwidth-controlled quantum Mott transitions. For this purpose, we employ anisotropic two-dimensional Hubbard models at half filling as a typical example. We study the model by using a state-of-the-art variational Monte Carlo method (VMC) [12; 13], where the open source code is available [14]. See Sec. VII A for details of the numerical method. The solution of the model shows the existence of the MQCP. We estimate a comprehensive set of critical exponents of the MQCP, which shows a perfect consistency with the scaling theory, which indicates a weaker charge and density instability in contrast to the filling-controlled transition by carrier doping, implying a weaker instability to superconductivity as well. Since the earlier experimental as well as theoretical studies by the dynamical mean-field study suggest the exponents different from the present results, we discuss the origin of the discrepancy.
This paper is organized as follows: In Sec.II, we introduce the model. In Sec.III, the phase diagram is shown in the plane of the Hamiltonian parameters, which reveals the MQCP. In Sec. IV, the critical exponents of the MQCP are thoroughly estimated. In Sec. V, the estimated exponents are analyzed in terms of the scaling theory. Section VI is devoted to Discussions and Summary.
## II Model
For the purpose of clarifying the generic feature of the bandwidth-controlled Mott transition, as an example, we study the \(t\)-\(t_{\perp}\)-\(t^{\prime}\) Hubbard model at half filling defined by the following Hamiltonian :
\[\hat{H}= -t\sum_{\langle i,j\rangle_{x},\sigma}\hat{c}_{i\sigma}^{\dagger }\hat{c}_{j\sigma}-t_{\perp}\sum_{\langle i,j\rangle_{y},\sigma}\hat{c}_{i \sigma}^{\dagger}\hat{c}_{j\sigma}\] \[+t^{\prime}\sum_{\langle\langle i,j\rangle\rangle,\sigma}\hat{c}_ {i\sigma}^{\dagger}\hat{c}_{j\sigma}+U\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i \downarrow}, \tag{1}\]
where \(\hat{c}_{i\sigma}\) (\(\hat{c}_{i\sigma}^{\dagger}\)) annihilates (creates) a spin-\(\sigma\) electron at site \(i\) and \(\hat{n}_{i\sigma}\) is its number operator. Here, \(t\) (\(t_{\perp}\)) is the hopping between nearest-neighbor sites in the \(x\)- (\(y\)-) direction, \(t^{\prime}\) is that between next-nearest-neighbor sites, and \(U\) represents the on-site Coulomb repulsion. The lattice structure of the present model is depicted in the inset of Fig. 2, where the intra-chain transfer \(t\) and inter-chain transfer \(t_{\perp}\) constituting the square lattice are geometrically frustrated by the next-nearest-neighbor transfer \(t^{\prime}\). The on-site Coulomb interaction \(U\) monitors the correlation effects, and the control of \(U/t\) triggers the bandwidth-controlled Mott transition. In this model, by taking the nearest-neighbor transfer \(t\) along the chain direction as the energy unit, namely \(t=1\), the interchain hopping \(t_{\perp}\) acts as the parameter controlling the dimensionality between 1D (\(t_{\perp}=0\)) and 2D (\(t_{\perp}=t\)), which enables tuning the Mott transition temperature to zero and thus allows us to study the MQCP. Here we fix the ratio of the next-nearest-neighbor hopping \(t^{\prime}\) to \(t_{\perp}\) as \(t^{\prime}=t_{\perp}/2\).
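For orientation, the corresponding noninteracting dispersion can be sketched as follows (a minimal numpy illustration, independent of the mVMC code [14]); it reproduces the \(U=0\) Fermi surfaces discussed in SM A when the chemical potential is fixed at half filling.

```python
import numpy as np

def dispersion(kx, ky, t=1.0, t_perp=0.4):
    """Noninteracting dispersion of Eq. (1); note the +t' sign convention."""
    t_prime = 0.5 * t_perp                      # fixed ratio t' = t_perp / 2
    return (-2.0 * t * np.cos(kx) - 2.0 * t_perp * np.cos(ky)
            + 4.0 * t_prime * np.cos(kx) * np.cos(ky))

kx, ky = np.meshgrid(*(np.linspace(-np.pi, np.pi, 201),) * 2)
ek = dispersion(kx, ky)    # the Fermi surface is the contour ek = mu
```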
Although we employ a specific model, the notion of universality that characterizes the 2D MQCP renders the details of the model irrelevant. The MQCP essentially emerges between the metal and the Mott insulator, and it appears as the endpoint of both the first-order transition and the continuous quantum critical line, as sketched schematically in Fig. 1. In addition, the model does not retain the C\({}_{4}\) rotational symmetry, which is common to the experimental structures in the organic solids [2; 3], and it offers the possibility to capture the generic features of the 2D MQCP. Although the transfer terms introduce a slightly 1D-like anisotropy, we confirm below that spin and charge fluctuations show isotropic singular behavior, representing a typical 2D criticality. We obtain a comprehensive set of critical exponents that are consistent with each other in light of the scaling theory. In contrast to previous theoretical and experimental studies at finite temperatures \(T>0\) above the classical critical endpoint that infer a zero-temperature exponent [15; 16], we focus on the quantum case directly at \(T=0\). We show, in Supplementary Materials (SM) A, the Fermi surface for the noninteracting case. It changes from a 1D-like open Fermi surface for small \(t_{\perp}\) to a 2D-like closed one with increasing \(t_{\perp}\), separated by the Lifshitz transition at \(t_{\perp}\approx 0.62\). Similar models have been studied before [17; 18; 19]. Here we focus on the criticality of the Mott transition, for which we assume that the universality class does not depend on the details of the model.
## III Phase diagram
We first summarize the obtained ground-state phase diagram of the metal, insulator and magnetic phases separated by metal-insulator and antiferromagnetic transitions in the parameter space of \(U\) and \(t_{\perp}\) in Fig. 2. Hereafter, we mainly focus on the metal-insulator transition. (Although we do not discuss details here, the antiferromagnetic transition is discussed in Secs. F and I of SM.) For details of the method to determine the phase boundary, see Sec. VII D. The transition is of first order for large \(t_{\perp}\), with a jump in physical quantities, while it changes to a continuous one for smaller \(t_{\perp}\), detected only by the continuous opening/closing of the charge gap (see SM, Sec. B). The first-order and continuous transitions meet at the MQCP. For the first-order part, the transition temperature as well as the 2D Ising nature of the transition vanishes at the MQCP. We find the MQCP roughly around \(t_{\perp}=t_{\perp}^{\rm MQCP}\sim 0.4\) and \(U=U^{\rm MQCP}\sim 1.8\), which will be estimated more precisely in the later part of this article. For \(t_{\perp}>t_{\perp}^{\rm MQCP}\), the magnetic and metal-insulator transitions occur essentially simultaneously as a first-order transition. On the other hand, for \(t_{\perp}<t_{\perp}^{\rm MQCP}\), the two transitions become separated (see Sec. F of SM for the magnetic transition) and a nonmagnetic insulator (NMI) phase emerges, but we do not go into details of the NMI and leave it for studies elsewhere. We also do not study the universality of the quantum critical line depicted as the purple dotted line in Fig. 2. Although the metal-insulator and antiferromagnetic transitions look slightly separated even for \(0.2<t_{\perp}<t_{\perp}^{\rm MQCP}\), we do not exclude the possibility of a simultaneous transition within the numerical error bar. The overall phase structure obtained here is essentially similar to that obtained by the cluster dynamical mean-field theory (CDMFT) at low temperature [20]. A small kink-like structure of the phase boundary around \(t_{\perp}\sim 0.6\) is related to the Lifshitz transition in the corresponding noninteracting model (see SM, Sec. A for the Fermi surface of the case \(U=0\)).
Figure 1: Schematic phase diagram of the Mott metal-insulator transition. The MQCP (red cross) is the quantum critical point between the metal and the Mott insulator and simultaneously the endpoint of the first-order transition and of the quantum critical line (green line). The finite-temperature critical point (\(T=T_{\rm c}\)) (dark blue curve) appears as the endpoint of the first-order boundary (light blue shaded surface). Here \(a\) and \(b\) represent the control parameters and are given by a combination of \(t_{\perp}\) and \(U\) in the present case. In the bandwidth-control case in general, the electron filling is fixed at an odd integer in this whole \(T\)-\(a\)-\(b\) parameter space. For details see Ref. [10].
## IV Estimate of MQCP and its critical exponents
We now present our numerical results on the universality class at the MQCP. See Sec. VII C for definitions of the critical exponents, \(\alpha,\beta,\gamma,\delta,\nu,z\) and \(\eta\) analyzed below. Since we need to estimate the position of the MQCP first and the MQCP is defined by the point where the first-order transition disappears, we first estimate when the jump of physical quantities characteristic of the first-order transition vanishes. The conventional scaling analysis does not work accurately unless the MQCP point is precisely estimated.
The critical exponent \(\beta\) of the MQCP (Eq. (18)) is estimated from the jump of the double occupancy of electrons on the same site, \(\Delta D=D_{\rm metal}-D_{\rm ins}\), where the double occupancy on the metallic (insulating) side is \(D_{\rm metal}\) (\(D_{\rm ins}\)), along the first-order transition line in the region \(t_{\perp}>t_{\perp}^{\rm MQCP}\) (see Eq. (12) for the definition of the double occupancy). Fitting the VMC numerical data in the range \(0.4\leq t_{\perp}\leq 0.9\) plotted in Fig. 3**a** shows that the mean squared error of the fit to Eq. (18) is minimized when we place the MQCP at \(t_{\perp}^{\rm MQCP}\sim 0.38\pm 0.05\) with \(\beta=0.97\pm 0.05\), as shown in Methods C and D (Fig. 5**a**). The green curve in Fig. 3**a** is the resultant optimized fit. The error bar for \(t_{\perp}^{\rm MQCP}=0.38\), estimated by the bootstrap method (see Sec. VII E for details), is included in the error bar of \(\beta\). The estimated \(\beta\) is similar to \(\beta=1\) predicted for the filling-controlled transition in the literature [7].
We also simultaneously determine the critical value of \(U\) at the MQCP and the critical exponents \(\delta\) and \(\nu z\) by a combined analysis with Eq. (29) at \(t_{\perp}^{\rm MQCP}=0.38\), and obtain \(U^{\rm MQCP}=1.83\pm 0.03\), \(\nu z=1.13\pm 0.19\), \(\delta_{\rm I}=0.98\pm 0.03\) and \(\delta_{\rm M}=1.05\pm 0.04\) (see Figs. 3**b** and 4 as well as Methods C and D), where \(\delta\) is estimated separately in the insulating (\(\delta_{\rm I}\)) and metallic (\(\delta_{\rm M}\)) phases.
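Schematically, the fit behind these estimates is a least-squares regression of the jump to a power law, which we assume takes the form \(\Delta D\propto(t_{\perp}-t_{\perp}^{\rm MQCP})^{\beta}\) (the explicit form of Eq. (18) is given in Methods); the sketch below uses placeholder data, not the actual VMC results.

```python
import numpy as np
from scipy.optimize import curve_fit

tp = np.array([0.4, 0.5, 0.6, 0.7, 0.8, 0.9])   # hypothetical t_perp grid
dD = 0.05 * (tp - 0.38)                          # stand-in for the VMC jumps

def model(t, a, tc, beta):                       # Delta-D = a (t - tc)^beta
    return a * np.clip(t - tc, 0.0, None) ** beta

(a, tc, beta), _ = curve_fit(model, tp, dD, p0=[0.05, 0.35, 1.0])
```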
These results imply that the nonsingular linear term proportional to \(|U-U^{\rm MQCP}|\) makes a precise estimate of \(\delta\) difficult if \(\delta\leq 1\). However, we will clarify that \(\delta\sim 1.0\) is consistent with the other scaling analyses. The exponent again coincides with that of the filling-controlled MQCP, estimated as \(\delta=1\) in Refs. [10] and [7], within the statistical error.
## V Scaling analysis
In our calculation, we obtained \(\beta\sim 1.0\), \(\nu z\sim 1.1\) and \(\delta\sim 1.0\). We now analyze this result in the framework of scaling theory. Here, the singular part of the ground-state energy \(E\) around the MQCP satisfies the form
\[E\propto\xi^{-(d+z)}, \tag{2}\]
where \(\xi\) is the unique length scale that diverges at the MQCP, and \(d\) and \(z\) are the spatial dimension and the dynamical exponent, respectively. This scaling theory was examined in Ref. [10], where critical exponents satisfy the following scaling relations:
\[\gamma=\beta(\delta-1), \tag{3}\]
\[2-\eta=\gamma/\nu\quad\text{(Fisher's relation)}, \tag{4}\]
\[\alpha+2\beta+\gamma=2\quad\text{(Rushbrooke's relation)}, \tag{5}\]
\[2-\alpha=(d+z)\nu\quad\text{(Josephson's relation)}. \tag{6}\]
All the scaling laws here can be derived from Eq. (2).
Since the metal is characterized by a nonzero carrier density \(X\) as the natural order parameter in distinction from the insulator (\(X=0\)), the unique length scale \(\xi\) that diverges at the MQCP must be the mean carrier distance given by
\[\xi\propto X^{1/d}. \tag{7}\]
In this case, we obtain
\[\delta=z/d. \tag{8}\]
The relation holds for both the bandwidth- and filling-controlled transitions. In the bandwidth-controlled case, \(X\) in the metallic phase is the density of unbound doublons (doubly occupied sites) and holons (empty sites). The last available scaling relation is
\[\nu=\beta/d. \tag{9}\]
See Ref. [10] and Methods C for the derivation of the scaling laws.
By using these relations, if only \(\beta=q\) and \(\nu z=p\) are known, other exponents can be obtained for \(d=2\) as \(\alpha=2-(p+q),\gamma=p-q,\delta=p/q,\eta=4-2p/q,\nu=q/2\) and \(z=2p/q\). By using the values \(p\sim 1.13\pm 0.19\) and \(q\sim 0.97\pm 0.05\) obtained by our simulation, we find the exponents listed in Fig. 2**b**, which can be consistent with \(\alpha\sim 0,\gamma\sim 0,\delta\sim 1.0,\eta\sim 2.0,\nu\sim 1/2\) and \(z\sim 2\). In fact, our numerical result obtained independently from the scaling of \(D-D_{c}\) indicates \(\delta\sim 1\), which is consistent with this prediction. Furthermore, the spatial correlation of the double occupancy \(D\) can be used to estimate \(z+\eta\) independently from the above estimates, and though the estimate contains a large error bar, it suggests \(z+\eta\sim 3.3\pm 0.8\) (see Sec. VII C and Sec. G of SM), which is again consistent with 4.0 estimated from the scaling theory.
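For the reader's convenience, these relations can be evaluated directly from the fitted \(p=\nu z\) and \(q=\beta\) (a trivial numerical sketch; error bars would be propagated from those quoted above):

```python
p, q = 1.13, 0.97            # fitted nu*z and beta at the MQCP (d = 2)
exponents = {
    'alpha': 2 - (p + q),    # ~ -0.10
    'gamma': p - q,          # ~  0.16
    'delta': p / q,          # ~  1.16
    'eta':   4 - 2 * p / q,  # ~  1.67
    'nu':    q / 2,          # ~  0.49
    'z':     2 * p / q,      # ~  2.33
}
print(exponents)             # consistent with alpha ~ 0, gamma ~ 0, delta ~ 1, ...
```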
## VI Discussion and Summary
The quantum critical exponent \(\nu z\sim 0.6\)–\(0.9\) was indirectly estimated above the classical Ising-type critical temperature of the first-order Mott transition, aiming at estimating the quantum criticality by calculating the resistivity along the Widom line continued above the critical temperature by using the DMFT [15; 21]. It was compared with experimental measurements on organic solids, semiconductor moiré superlattices and transition metal dichalcogenides, because they all infer the \(T=0\) criticality again from the Widom line [22; 16; 23]. It was also argued that the exponent does not appreciably change with the character of the neighboring phases [16], implying a universal and robust criticality. Ambiguities in the definition of the Widom line and in estimates made at temperatures above a nonzero critical temperature have, however, yielded a variety of values for the exponent. Taking this ambiguity into account, together with the possible errors often recognized in exponents estimated from the collapse onto a single scaling plot employed by these works (see also the next paragraph), the considerable variation of their estimates does not necessarily contradict our estimate of \(\nu z\sim 1.13\pm 0.19\).
More importantly, the estimate by the DMFT [21; 15] is rigorous at infinite dimensions, and the exponents can be different from the present two-dimensional case. Another DMFT study [24] suggested that the \(\nu z\) estimated in Ref. [15] is related to the exponent of the instability line of the metastable insulating state at the boundary of the coexistence region. This instability line should vanish if the finite-temperature critical point is lowered to zero, as at the MQCP. Therefore, in this regard as well, \(\nu z\) estimated along the Widom line may not necessarily be connected to the MQCP exponent studied here. To estimate the MQCP exponents focused on in this article, it is desirable to suppress the critical temperature sufficiently in both theoretical and experimental studies. Our analysis has determined a more comprehensive and quantitative set of exponents \(\beta,\delta,\nu z\) and \(z+\eta\) from the scaling of four independent quantities including the double occupancy and the charge gap, by straightforward estimates directly at zero temperature, precisely at the MQCP. The four exponents are shown to be perfectly consistent with the scaling theory and determine all the exponents.
Though we obtained \(d+z\sim 4\), as if the system were at the upper critical dimension of a conventional symmetry-breaking magnetic transition, this does not necessarily mean that a simple mean-field treatment is justified, because the Mott transition is not primarily a symmetry-breaking transition. Indeed, the anomalous dimension drives a nonzero and fairly large exponent \(\eta\) (\(\sim 2\)), which can be analyzed as a Lifshitz-type topological transition at which a Fermi-surface pocket vanishes [25]. In fact, the exponents \(\gamma\sim 0\), \(z\sim 2\) and \(\delta\sim 1\) look similar to a case of the 2D Lifshitz transition described by the emergence of electron and hole pockets [25].
The exponents \(\alpha=\gamma\sim 0\) and \(\delta\sim 1\) indicate that the bandwidth-controlled MQCP does not drive divergent fluctuations in the charge channel, because the susceptibilities (the second derivatives of the energy with respect to \(t_{\perp}\) and \(U\)) are not divergent at the MQCP. This is also indicated by the nonsingular dependence of the energy as a function of the electron density at the MQCP, as shown in Fig. S5 of SM. This absence is in contrast with the filling-controlled MQCP, where divergent charge fluctuations and charge inhomogeneity are obtained as a common property [26; 4; 27]. The charge instability is also tightly linked with a strong effective attraction between the carriers [10; 5], which may be absent here. This is obviously a disadvantageous aspect for the promotion of superconductivity. Since the present simple model and its MQCP do not have any special aspect or unique symmetry, the universality class found here may be a standard one applicable widely to the 2D MQCP.
Figure 2: **a** Ground-state phase diagram obtained by the present VMC calculation. The purple solid and broken lines with open circles indicate the first-order and continuous metal-insulator transition (MIT) boundaries, respectively. The green solid curve with open squares is for the antiferromagnetic transitions (AFT) (see Sec. VII.4 for the method to determine the MIT and Section F of SM for the AFT). The large red circle depicts the MQCP. Error bars are determined by considering the errors of size extrapolations and statistical errors of Monte Carlo calculations for finite-size systems. Inset: Lattice structure used for the present study; the \(t\)-\(t_{\perp}\)-\(t^{\prime}\) Hubbard model with the nearest-neighbor intrachain (red bonds), interchain (blue bonds) and next-nearest-neighbor (broken black bonds) hoppings \(t\), \(t_{\perp}\) and \(t^{\prime}=t_{\perp}/2\), respectively. We take \(t\) as the energy unit. **b** Critical exponents of the MQCP estimated in this article.
On the other hand, the antiferromagnetic transition is consistent with mean-field-like divergent fluctuations with a divergent susceptibility, as clarified in Sec. F of SM. The antiferromagnetic transition seems to occur at slightly larger \(U\) (\(U^{\rm AF}\sim 1.85\)) than \(U^{\rm MQCP}\sim 1.83\), but it is not easy to pin down whether they really differ (see SM F). Nevertheless, the estimated \(\nu^{\rm AF}\sim 0.5\) and \(\eta^{\rm AF}+z^{\rm AF}\sim 2\) definitely indicate divergent fluctuations characterized by \(\gamma^{\rm AF}>0\) and \(\delta^{\rm AF}>1\) with the help of the scaling laws, independently of the Mott criticality. In any case, in the scaling properties, the metal-insulator transition at the MQCP and the antiferromagnetic transition are decoupled, as we show in Sec. I of SM. Therefore, the universality and critical exponents of the MQCP are not affected by either the antiferromagnetic or paramagnetic nature of the insulating phase, and the present system is expected to represent the general and universal bandwidth-controlled 2D Mott transition.
We also note that the spin and charge correlations are essentially 2D isotropic, as seen in Figs. S9 and S10, which manifests the 2D nature of the MQCP.
We summarize the significance of the present paper:
1. The comprehensive set of critical exponents \(\beta,\gamma,\delta,\eta,\nu\) and \(z\) is estimated consistently with the scaling theory. Our estimate provides a unified understanding of the universality class of the clean \(D=2\) MQCP for the bandwidth-controlled Mott transition. This is the situation that the experiments in the literature aimed to probe.
2. The exponents are estimated directly at \(T=0\) unlike most of the previous studies.
3. The employed numerical method is a state-of-the-art quantum many-body solver provided as the open-source software mVMC, which can treat spatial and temporal quantum fluctuations.
4. The present comprehensive clarification opens up a number of possible routes for experimental tests toward a complete understanding of the quantum Mott transition and the nearby strange metal, which is expected to serve future design of functionality.
## VII Methods
### Numerical Method
For the ground-state calculations, we employ a variational Monte Carlo (VMC) method [12; 13]. The optimization procedure of the VMC method to reach the ground state is equivalent to the imaginary-time (\(\tau\)) evolution represented by the repeated operation of \(\exp(-\tau H)\) for the Hamiltonian \(H\), or equivalently the natural gradient method [28; 29]. We choose the periodic-antiperiodic boundary condition, i.e., the \(x\)- (\(y\)-) direction is periodic (antiperiodic), because this boundary condition allows the closed-shell condition for \(L\times L=4n\times 4n\) lattices, which makes the optimization of the variational parameters easier and the statistical errors smaller due to the reduced degeneracy. It also simplifies the extrapolation to the thermodynamic limit in the later procedure. We use the trial wave function with correlation factors and the spin quantum-number projection as
\[|\psi\rangle=\mathcal{L}^{S}\mathcal{P}_{\mathrm{G}}\mathcal{P}_{\mathrm{J}} \mathcal{P}_{\mathrm{dh}}^{(4)}|\phi_{\mathrm{pair}}\rangle, \tag{10}\]
where \(\mathcal{P}_{\mathrm{G}},\ \mathcal{P}_{\mathrm{J}},\ \mathcal{P}_{\mathrm{dh}}^{(4)}\) are Gutzwiller, Jastrow and doublon-holon correlation factors and \(\mathcal{L}^{S}\) is the spin quantum-number projection. First, we give the pair-product wave function, defined as
\[|\phi_{\mathrm{pair}}\rangle=\left(\sum_{i,j=1}^{N_{s}}f_{ij}\hat{c}_{i\uparrow }^{\dagger}\hat{c}_{j\downarrow}^{\dagger}\right)^{N_{e}/2}|0\rangle, \tag{11}\]
where \(N_{s}\) is the number of sites and \(N_{e}\) is the number of electrons. This wave function has the same form as the Bardeen-Cooper-Schrieffer (BCS) wave function, in which the spins are always restricted to pairs of up and down spins representing a singlet. The pair-product function can also represent any form of Slater determinant and, in addition, any mean-field solution including magnetic, charge, and superconducting symmetry breaking.
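As a concrete illustration, the amplitude of \(|\phi_{\mathrm{pair}}\rangle\) on a real-space configuration with fixed up- and down-electron positions reduces, up to a configuration-independent combinatorial factor and a sign convention fixed by the ordering of the occupied sites, to a determinant of the pairing matrix \(f_{ij}\) restricted to the occupied sites. The following minimal Python sketch (illustrative names, not part of the mVMC code) makes this explicit:

```python
import numpy as np

def pair_product_amplitude(f, up_sites, dn_sites):
    # <x|phi_pair>, up to a configuration-independent factor:
    # determinant of f restricted to the occupied up/down sites.
    M = f[np.ix_(up_sites, dn_sites)]  # M[i, j] = f_{r_i^up, r_j^dn}
    return np.linalg.det(M)

# toy check: random pairing amplitudes on 16 sites, 4 up + 4 down electrons
rng = np.random.default_rng(0)
f = rng.normal(size=(16, 16))
print(pair_product_amplitude(f, up_sites=[0, 3, 5, 9], dn_sites=[1, 2, 8, 14]))
```

Optimizing the \(f_{ij}\) then amounts to optimizing this determinant together with the correlation factors introduced below.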
The averaged double occupancy
\[D=\sum_{i}\langle\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}\rangle/N_{s}, \tag{12}\]
where \(\hat{n}_{i\uparrow}\) (\(\hat{n}_{i\downarrow}\)) is the number operator of spin-up (spin-down) electrons and \(\langle\cdots\rangle=\langle\psi|\cdots|\psi\rangle/\langle\psi|\psi\rangle\) is the expectation value in the ground state, is a key quantity for understanding strong correlation effects, especially in the Hubbard model. In fact, the double occupancy is controlled by the Gutzwiller factor [30]
\[\mathcal{P}_{\mathrm{G}}=\exp\left(-g\sum_{i}\hat{n}_{i\uparrow}\hat{n}_{i \downarrow}\right) \tag{13}\]
to lower the energy where \(g\) is a variational parameter.
To take into account the long-ranged charge correlation, we also introduce the Jastrow factor [31]
\[\mathcal{P}_{\mathrm{J}}=\exp\left(-\frac{1}{2}\sum_{i\neq j}v_{ij}\hat{n}_{ i}\hat{n}_{j}\right), \tag{14}\]
where \(v_{ij}\) are variational parameters and \(\hat{n}_{i}\equiv\hat{n}_{i\uparrow}+\hat{n}_{i\downarrow}\) is the number operator of electrons.
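For a Fock configuration \(|x\rangle\) with occupation numbers \(n_{i\sigma}\in\{0,1\}\), both correlation factors act diagonally and simply multiply the amplitude by a scalar weight. A minimal sketch of this weight (illustrative code, assuming the occupations are given as 0/1 arrays) is:

```python
import numpy as np

def gutzwiller_jastrow_weight(n_up, n_dn, g, v):
    # <x|P_J P_G|x> for occupation arrays n_up, n_dn (0/1);
    # g is the Gutzwiller parameter, v the symmetric matrix v_ij.
    n_up, n_dn = np.asarray(n_up), np.asarray(n_dn)
    doublons = (n_up * n_dn).sum()            # number of doubly occupied sites
    n = n_up + n_dn                           # local density n_i
    jastrow = 0.5 * (n @ v @ n - np.sum(np.diag(v) * n * n))  # i != j only
    return np.exp(-g * doublons - jastrow)

# single doublon on site 0 and v = 0: pure Gutzwiller penalty exp(-g)
print(gutzwiller_jastrow_weight([1, 1, 0, 0], [1, 0, 1, 0], g=1.2,
                                v=np.zeros((4, 4))))
```

The doublon-holon factor \(\mathcal{P}_{\mathrm{dh}}^{(4)}\) introduced next acts in the same diagonal way.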
To express the correlation between doublon (site doubly occupied by the spin up and down electrons) and holon (empty site) in the strongly correlated regions, we introduce a four-site doublon-holon correlation factor
\[\mathcal{P}_{\mathrm{dh}}^{(4)}=\exp\left(-\sum_{m=0}^{4}\sum_{i=1}^{N_{s}} \left(\alpha_{m}^{\mathrm{d}}\xi_{im}^{\mathrm{d}}+\alpha_{m}^{\mathrm{h}} \xi_{im}^{\mathrm{h}}\right)\right), \tag{15}\]
where \(\xi_{im}^{\mathrm{d(h)}}\) denotes the number operator of doublons (holons) around the \(i\)-th site and \(\alpha_{m}^{\mathrm{d(h)}}\) are the variational parameters. We can express the operator \(\xi_{im}^{\mathrm{d(h)}}\), for example, as \(\xi_{i4}^{\mathrm{d}}\equiv\hat{D}_{i}\prod_{\ell}\hat{H}_{i+\ell}\) and \(\xi_{i0}^{\mathrm{h}}\equiv\hat{H}_{i}\prod_{\tau}(1-\hat{D}_{i+\tau})\), where \(i+\ell\) and \(i+\tau\) run over the nearest-neighbor sites around \(i\), and \(\hat{D}_{i}\) (\(\hat{H}_{i}\)) is the doublon (holon) operator defined as \(\hat{D}_{i}=\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}\) (\(\hat{H}_{i}=(1-\hat{n}_{i\uparrow})(1-\hat{n}_{i\downarrow})\)).
We set the \(2\times 2\)-sublattice structure for the pairing wave function \(|\phi_{\mathrm{pair}}\rangle\) to reduce the number of variational parameters.
We calculate several physical quantities to identify the ground state. To determine the magnetic order and to distinguish a metal from an insulator, we calculate the relevant physical quantities, i.e., the momentum distribution function \(n(\vec{k})\) and the spin structure factor \(S(\vec{q})\).
The momentum distribution function \(n(\vec{k})\) is given by
\[n(\vec{k})=\frac{1}{2N_{s}}\sum_{i,j,\sigma}\,\langle\;\hat{c}^{\dagger}_{i\sigma }\hat{c}_{j\sigma}\;\rangle\,e^{i\vec{k}\cdot(\vec{r}_{i}-\vec{r}_{j})}, \tag{16}\]
where \(\vec{r}_{i}\) is the vector representing the coordinate of the \(i\)-th site.
In the same way, the spin structure factor \(S(\vec{q})\) is calculated from
\[S(\vec{q})=\frac{1}{3N_{s}}\sum_{i,j}\,\langle\;\hat{\vec{S}}_{i}\cdot\hat{ \vec{S}}_{j}\;\rangle\,e^{i\vec{q}\cdot(\vec{r}_{i}-\vec{r}_{j})}. \tag{17}\]
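In practice, once the translationally averaged real-space correlations have been measured, \(S(\vec{q})\) follows from a fast Fourier transform. A minimal sketch (illustrative code, assuming the averaged correlation \(C(\vec{r})=N_{s}^{-1}\sum_{i}\langle\hat{\vec{S}}_{i}\cdot\hat{\vec{S}}_{i+\vec{r}}\rangle\) is available as an \(L\times L\) array):

```python
import numpy as np

def spin_structure_factor(C, L):
    # Eq. (17) for an L x L lattice: S(q) = (1/3) sum_r C(r) e^{-i q.r},
    # with C(r) = (1/N_s) sum_i <S_i . S_{i+r}> given as an (L, L) array.
    return np.fft.fft2(C).real / 3.0   # indexed by integers m, q = 2*pi*m/L

# toy input: staggered correlations C(r) = 0.25 * (-1)^(x+y)
L = 8
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
S = spin_structure_factor(0.25 * (-1.0) ** (x + y), L)
print(S[L // 2, L // 2])   # sharp peak at q = (pi, pi): 0.25 * L**2 / 3
```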
In the VMC calculations, we prepared several different initial states [such as the paramagnetic metal (PM) (free fermion) and antiferromagnetic insulator (AFI) states] and optimized them until the variational parameters reached convergence. The optimization does not necessarily preserve the character of the initial state, and the nature of the optimized state is identified only after calculating physical quantities. To investigate the metal-insulator and magnetic transitions in the thermodynamic limit, we perform calculations of the energy and other physical quantities on \(L\times L\)-site square lattices with the periodic-antiperiodic boundary condition for \(L=16,20,24\), and \(28\) for each initial state, and the size dependences are examined.
In this article, we perform the size extrapolations and scaling analyses to examine the magnetic order and metallicity in the thermodynamic limit.
This basic method is widely used and has been tested from various perspectives in a number of benchmarks [13; 32; 33], ranging from the 2D itinerant Hubbard model to frustrated quantum spin models, which have shown that it offers one of the best accuracies among available quantum many-body solvers, with wide applicability to quantum lattice systems. In the present case, the ground-state energy per site \(E/N\) obtained from precisely the same VMC method, using the form of the wave function Eq. (10) and the same Hamiltonian at the MQCP, \(t_{\perp}=0.38\) and \(U=1.83\), for a \(4\times 4\) lattice with the periodic-antiperiodic boundary condition is \(-0.8665\pm 0.0005\), while the value obtained from the exact diagonalization is \(-0.8700\). The error \(\sim 0.4\%\) is similar to that of the benchmark in Ref. [13]. For physical quantities, the double occupancy \(D=0.1869\pm 0.0003\) and the peak of the spin structure factor \(S(\vec{q})=0.4489\pm 0.0008\) at \((\pi,\pi)\) are compared with the exact values 0.1844 and 0.4301, respectively. This benchmark and those in the literature show that the accuracy is sufficient for the present analyses.
### Definition of critical exponents and derivation of scaling laws
Here, the double occupancy \(D\) is regarded as a natural order parameter of the metal-insulator transition. We calculate the critical exponents for the extrapolated double occupancy \(D\) by controlling \(t_{\perp}\) and \(U\), where the scheme for the extrapolation is given in Sec. H of SM. The exponent \(\beta\) is defined from the asymptotic scaling form relating the jump of \(D\) (namely, \(\Delta D\)) to \(t_{\perp}\) measured from the critical point, i.e.,
\[\Delta D(t_{\perp})=a|t_{\perp}-t_{\perp}^{\rm MQCP}|^{\beta} \tag{18}\]
near the MQCP at \(t_{\perp}^{\rm MQCP}\), where \(a\) is a constant.
The critical exponents \(\delta\) and \(\gamma\) are defined from
\[D-D^{\rm MQCP}|_{t_{\perp}=t_{\perp}^{\rm MQCP}} \propto |U-U^{\rm MQCP}|^{1/\delta}, \tag{19}\] \[\left.\frac{dD}{dU}\right|_{U=U_{c}} \propto |t_{\perp}-t_{\perp}^{\rm MQCP}|^{-\gamma}. \tag{20}\]
The definition of the exponent \(\alpha\) is given by
\[\frac{d^{2}E}{dt_{\perp}^{2}}\propto|t_{\perp}-t_{\perp}^{\rm MQCP}|^{-\alpha}. \tag{21}\]
for the ground-state energy \(E\).
Insulators are distinguished from metals by a nonzero charge gap \(\Delta_{\rm c}\), which is numerically defined by
\[\Delta_{\rm c}\equiv\frac{1}{2}(\mu(N_{\uparrow}+N_{\downarrow}+1)-\mu(N_{ \uparrow}+N_{\downarrow})), \tag{22}\]
where the chemical potential \(\mu\) is given as \(\mu(N_{\uparrow}+N_{\downarrow}+1)=(E(N_{\uparrow}+1,N_{\downarrow}+1)-E(N_{ \uparrow},N_{\downarrow}))/2\), and \(E(N_{\uparrow},N_{\downarrow})\) is the optimized ground-state energy for systems with the number of spin-up (spin-down) electrons \(N_{\uparrow}\) (\(N_{\downarrow}\)). The scaling of the charge gap around the MQCP at \(U=U^{\rm MQCP}\) is defined as
\[\Delta_{\rm c}(U)=a_{U}|U-U^{\rm MQCP}|^{\nu z}, \tag{23}\]
where \(\nu\) is the correlation-length exponent and \(z\) is the dynamical exponent. Here, \(a_{U}\) is a constant. This relation is a consequence of the scaling of the energy scale [15; 16], \(\Delta_{\rm c}\propto\xi^{-z}\), where \(\xi\) is the unique length scale which diverges at the MQCP. The dynamical exponent relates the length (momentum) scale to the time (energy) scale, and the correlation-length exponent \(\nu\) is defined from
\[\xi\propto|t_{\perp}-t_{\perp}^{\rm MQCP}|^{-\nu}. \tag{24}\]
Scaling relations Eqs. (8) and (9) are derived in the following way [10]: The scaling of the energy, Eq. (2), is rewritten as \(E\propto X^{(d+z)/d}\) by using Eq. (7). By adding the \(t_{\perp}\) and \(U\) dependences, \(E\) takes the form
\[E=-UX+B_{0}(t_{\perp}-t_{\perp}^{\rm MQCP})X^{\phi}+CX^{(d+z)/d}. \tag{25}\]
Minimizing \(E\) for \(t_{\perp}-t_{\perp}^{\rm MQCP}=0\) gives the scaling between \(X\) and \(U-U^{\rm MQCP}\), namely \(\delta\), leading to Eq. (8). Eqs. (8), (18), and (24) lead to Eq. (9).
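To spell out the minimization explicitly (a short sketch using only the stationarity condition of Eq. (25)), setting \(\partial E/\partial X=0\) at \(t_{\perp}=t_{\perp}^{\rm MQCP}\) gives

\[\frac{\partial E}{\partial X}=-U+C\,\frac{d+z}{d}\,X^{z/d}=0\quad\Longrightarrow\quad X=\left(\frac{d\,U}{(d+z)\,C}\right)^{d/z},\]

and comparing the resulting \(X(U)\), measured from the critical point, with the definition of \(\delta\) leads to Eq. (8).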
The correlation of double occupancy is determined by
\[Q(\vec{r})=\frac{1}{N_{s}}\sum_{\vec{r}^{\prime}}\langle(\hat{D}(\vec{r}+\vec{ r}^{\prime})-\langle\hat{D}\rangle)(\hat{D}(\vec{r}^{\prime})-\langle\hat{D} \rangle)\rangle \tag{26}\]
where \(\hat{D}(\vec{r})=\hat{n}_{\vec{r}\uparrow}\hat{n}_{\vec{r}\downarrow}\) is the double occupancy operator and \(\langle\hat{D}\rangle\) is the spatially averaged expectation value in the ground state. In the scaling hypothesis, this correlation is expected to follow
\[Q(\vec{r})\propto r^{-(d+z+\eta-2)} \tag{27}\]
at asymptotically long distance \(r=|\vec{r}|\).
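The exponent combination \(z+\eta\) can then be read off from a log-log fit of \(Q(\vec{r})\) against \(r\). A minimal sketch (illustrative code; \(r\) and \(Q\) stand for the measured long-distance data):

```python
import numpy as np

def fit_z_plus_eta(r, Q, d=2):
    # Eq. (27): Q(r) ~ r^{-(d+z+eta-2)}; the log-log slope gives z+eta.
    slope, _ = np.polyfit(np.log(r), np.log(np.abs(Q)), 1)
    return -slope - d + 2

r = np.array([4.0, 6.0, 8.0, 12.0])
Q = 0.1 * r ** -4.0            # synthetic data with z + eta = 4
print(fit_z_plus_eta(r, Q))    # -> 4.0
```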
### Methods for determination of metal-insulator transition and MQCP
In the region of first-order transitions, we see the energy level crossing between the PM and AFI states, which accompanies a jump of the double occupancy, \(\Delta D\). The first-order transition point is identified by this energy level crossing after the system-size extrapolation to the thermodynamic limit. The metal-insulator transition is corroborated by the opening of the charge gap and the qualitative change of the momentum distribution in Fig. S4 in SM, depicted for \(t_{\perp}=0.5,0.7\), and \(1.0\). In most of the first-order region, we have confirmed that the transition indeed represents simultaneous metal-insulator and antiferromagnetic transitions by examining several relevant physical quantities around the transition point. We have determined the continuous metal-insulator transition by the opening of the charge gap, as shown in Fig. S2 in SM (see Sec. C of SM).
The MQCP is first determined from the point where \(\Delta D\) vanishes, as plotted in Fig. 3**a**. To determine \(t_{\perp}^{\rm MQCP}\) and \(\beta\) simultaneously, we have performed a regression analysis to optimize the \(t_{\perp}\) and \(\beta\) dependences of \(\Delta D\) in the form of Eq. (18) by minimizing the following \(\chi^{2}\),
\[\chi^{2}=\sum_{i}^{N_{\rm sample}}(\Delta D_{i}-\Delta D_{\rm fit})^{2}/(N_{ \rm sample}-2) \tag{28}\]
for \(N_{\rm sample}\) data points, where \(\Delta D_{\rm fit}\) has the form (18) and \(\Delta D_{i}\) is the simulation data. The logarithmic difference is appropriate for estimating the error of a power-law function. In Fig. 5**b**, the \(t_{\perp}\) dependence of \(\chi^{2}\) is plotted for the optimized exponent \(\beta\). From the minimum of \(\chi^{2}\), \(t_{\perp}^{\rm MQCP}\) is determined as \(0.38\pm 0.05\), where \(\beta\) is \(0.97\pm 0.05\). The error bar is estimated from the bootstrap analysis explained in Methods D.
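A minimal sketch of this regression (illustrative code using the logarithmic differences mentioned above; the data arrays are placeholders) scans candidate critical points and keeps the \(\chi^{2}\) minimum:

```python
import numpy as np

def chi2_for_tc(t_vals, dD_vals, tc):
    # least-squares fit of log(dD) = log(a) + beta * log|t - tc|
    x, y = np.log(np.abs(t_vals - tc)), np.log(dD_vals)
    beta, log_a = np.polyfit(x, y, 1)
    chi2 = np.sum((y - (beta * x + log_a)) ** 2) / (len(t_vals) - 2)
    return chi2, beta

t_vals = np.array([0.5, 0.6, 0.7, 0.8, 1.0])
dD_vals = 0.08 * np.abs(t_vals - 0.38) ** 0.97   # synthetic jump data
tcs = np.linspace(0.30, 0.46, 33)
chi2s, betas = zip(*(chi2_for_tc(t_vals, dD_vals, tc) for tc in tcs))
best = int(np.argmin(chi2s))
print(tcs[best], betas[best])   # recovers ~0.38 and ~0.97
```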
Since the MQCP can be signaled by the criticality given by the exponents \(\beta,\delta\), and \(\nu z\), and by the opening of the charge gap, the value of \(U^{\rm MQCP}\) is estimated by the combined analysis of these three by employing \(t_{\perp}^{\rm MQCP}=0.38\), as analyzed in Fig. 5**a**, where the minimum of the \(\chi^{2}\) value, now defined as
\[\chi^{2} =\sum_{i}^{N_{\rm sample}}(\ln\Delta_{ci}-\ln\Delta_{\rm cfit})^{ 2}/(N_{\rm sample}-2)\] \[+\sum_{i}^{N_{\rm U1-sample}}\frac{(\ln|D_{Ii}-D^{\rm MQCP}|-\ln |D_{\rm fit}-D^{\rm MQCP}|)^{2}}{N_{\rm U1-sample}-2}\] \[+\sum_{i}^{N_{\rm UM-sample}}\frac{(\ln|D_{Mi}-D^{\rm MQCP}|-\ln |D_{\rm fit}-D^{\rm MQCP}|)^{2}}{N_{\rm UM-sample}-2} \tag{29}\]
suggests \(U^{\rm MQCP}=1.83\pm 0.03\), \(\nu z=1.13\pm 0.19\), \(\delta_{\rm I}=0.98\pm 0.03\), and \(\delta_{\rm M}=1.05\pm 0.04\). For the fittings to obtain these critical exponents, we assume Eqs. (19) and (23).
### Interpolation and bootstrap techniques
To estimate the metal-insulator transition points, we introduce interpolation techniques by fitting the computed data to an assumed form. For reliable estimates of the metal-insulator transition points, we interpolate the energy and double-occupancy data as functions of \(U\) by the cubic function
\[f(U)=a_{0}U^{3}+a_{1}U^{2}+a_{2}U+a_{3} \tag{30}\]
as the best fit of the \(U\) dependence of the quantities. The crossing point of the interpolated energies of the metallic and insulating states gives us a reliable estimate of the level-crossing point for the first-order transition.
In addition, we estimate the error bar of the level-crossing point by using the bootstrap method. The ground-state energy estimated by our Monte Carlo calculation, \(E_{\rm MC}\), contains statistical errors given by the standard deviation \(\sigma_{\rm MC}\). Namely, we assume that \(E_{\rm MC}\) obeys the Gaussian distribution \(P(E_{\rm MC},\sigma_{\rm MC}^{2})\) and perform the following procedure (a minimal code sketch is given after the list):
1. Generate a number of synthetic samples of the energy which follow the probability distribution \(P(E_{\rm MC},\sigma_{\rm MC}^{2})\) around the interpolated \(U\) dependence of the energy given by Eq. (30), for both the insulating and metallic states.
2. Calculate the crossing point between the insulating and metallic states for each synthetic data.
3. Calculate the variance of the crossing points of the synthetic data, which gives the estimate of the error bar.
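The sketch below (illustrative Python; the cubic fits implement Eq. (30), and \(\sigma\) plays the role of \(\sigma_{\rm MC}\)) combines the interpolation and the bootstrap steps 1-3:

```python
import numpy as np

def crossing_point(U, E_metal, E_ins):
    # cubic fits of Eq. (30) to each branch; the crossing is the real
    # root of the difference polynomial inside the sampled U window
    diff = np.polyfit(U, E_metal, 3) - np.polyfit(U, E_ins, 3)
    roots = np.roots(diff)
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[(real > U.min()) & (real < U.max())][0]

def bootstrap_crossing(U, E_metal, E_ins, sigma, n_boot=500, seed=1):
    # steps 1-3: Gaussian synthetic samples -> crossings -> their spread
    rng = np.random.default_rng(seed)
    xs = [crossing_point(U,
                         E_metal + rng.normal(0.0, sigma, U.size),
                         E_ins + rng.normal(0.0, sigma, U.size))
          for _ in range(n_boot)]
    return np.mean(xs), np.std(xs)

U = np.linspace(1.7, 2.0, 7)
E_m = -1.00 + 0.30 * (U - 1.7)          # synthetic metallic branch
E_i = -1.05 + 0.55 * (U - 1.7)          # synthetic insulating branch
print(bootstrap_crossing(U, E_m, E_i, sigma=2e-4))   # crossing near U = 1.9
```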
Furthermore, we also apply the bootstrap method for determining statistical errors for critical exponents and \(t_{\perp}^{\rm MQCP}\) and \(U^{\rm MQCP}\) in Methods D.
###### Acknowledgements.
The authors acknowledge Marcin Raczkowski for useful discussions. This work was supported in part by KAKENHI Grant Nos. 16H06345 and 22A202 from JSPS. This research was also supported by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Basic Science for Emergence and Functionality in Quantum Matter - Innovative Strongly Correlated Electron Science by Integration of Fugaku and Frontier Experiments -, JPMXP1020200104). We thank the Supercomputer Center, the Institute for Solid State Physics, The University of Tokyo for the use of the facilities. We also thank the computational resources of the supercomputer Fugaku provided by the RIKEN Center for Computational Science (Project IDs: hp210163, hp220166) and Oakbridge-CX in the Information Technology Center, The University of Tokyo. FFA thanks the DFG for funding via the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, project-id 390858490).
## References
* Imada _et al._ [1998]Masatoshi Imada, Atsushi Fujimori, and Yoshinori Tokura, "Metal-insulator transitions," Rev. Mod. Phys. **70**, 1039-1263 (Oct 1998), [http://link.aps.org/doi/10.1103/RevModPhys.70.1039](http://link.aps.org/doi/10.1103/RevModPhys.70.1039).
* Zhou _et al._ [2017]Yi Zhou, Kazushi Kanoda, and Tai-Kai Ng, "Quantum spin liquid states," Rev. Mod. Phys. **89**, 025003 (Apr 2017), [https://link.aps.org/doi/10.1103/RevModPhys.89.025003](https://link.aps.org/doi/10.1103/RevModPhys.89.025003).
* Kato [2014]Reizo Kato, "Development of \(\pi\)-electron systems based on [M(dmit)\({}_{2}\)] (M = Ni and Pd; dmit: 1,3-dithiole-2-thione-4,5-dithiolate) anion radicals," Bull. Chem. Soc. Jpn. **87**, 355 (2014)
* Misawa and Imada [2014]Takahiro Misawa and Masatoshi Imada, "Superconductivity and its mechanism in an ab initio model for electron-doped lafeaso," Nat. Commun. **5**, 5738 (2014)
* general concept and basic formalism based on gap physics," J. Phys. Soc. Jpn. **88**, 024701 (2019), [https://doi.org/10.7566/JPSJ.88.024701](https://doi.org/10.7566/JPSJ.88.024701).
* Imada [2021]Masatoshi Imada, "Charge order and superconductivity as competing brothers in cuprate high-\(T_{c}\) superconductors," J. Phys. Soc. Jpn. **90**, 111009 (2021), [https://doi.org/10.7566/JPSJ.90.111009](https://doi.org/10.7566/JPSJ.90.111009).
* Imada [2005]Masatoshi Imada, "Universality classes of metal-insulator transitions in strongly correlated electron systems and mechanism of high-temperature superconductivity," Phys. Rev. B **72**, 075113 (Aug 2005), [http://link.aps.org/doi/10.1103/PhysRevB.72.075113](http://link.aps.org/doi/10.1103/PhysRevB.72.075113).
* Castellani _et al._ [1979]C. Castellani, C. Di Castro, D. Feinberg, and J. Ranninger, "New model hamiltonian for the metal-insulator transition," Phys. Rev. Lett. **43**, 1957-1960 (Dec 1979), [http://link.aps.org/doi/10.1103/PhysRevLett.43.1957](http://link.aps.org/doi/10.1103/PhysRevLett.43.1957).
* Limelette _et al._ [2003]P. Limelette, A. Georges, D. Jerome, P. Wzietek, P. Metcalf, and J. M. Honig, "Universality and critical behavior at the Mott transition," Science **302**, 89-92 (2003), ISSN 0036-8075, [http://science.sciencemag.org/content/302/5642/89.full.pdf](http://science.sciencemag.org/content/302/5642/89.full.pdf), [http://science.sciencemag.org/content/302/5642/89](http://science.sciencemag.org/content/302/5642/89).
* Misawa and Imada [2007]Takahiro Misawa and Masatoshi Imada, "Quantum criticality around metal-insulator transitions of strongly correlated electron systems," Phys. Rev. B **75**, 115121 (Mar 2007), [http://link.aps.org/doi/10.1103/PhysRevB.75.115121](http://link.aps.org/doi/10.1103/PhysRevB.75.115121).
* [11] G. Kotliar, Sahana Murthy, and M. J. Rozenberg, "Compressibility divergence and the finite temperature mott transition," Phys. Rev. Lett. **89**, 046401 (Jul 2002), [https://link.aps.org/doi/10.1103/PhysRevLett.89.046401](https://link.aps.org/doi/10.1103/PhysRevLett.89.046401)
* [12] Daisuke Tahara and Masatoshi Imada, "Variational Monte Carlo method combined with quantum-number projection and multi-variable optimization," J. Phys. Soc. Jpn. **77**, 114701 (2008), [https://journals.jps.jp/doi/10.1143/JPSJ.77.114701](https://journals.jps.jp/doi/10.1143/JPSJ.77.114701)
* [13] Takahiro Misawa and Masatoshi Imada, "Origin of high-\(T_{c}\) superconductivity in doped Hubbard models and their extensions: Roles of uniform charge fluctuations," Phys. Rev. B **90**, 115137 (Sep 2014), [http://link.aps.org/doi/10.1103/PhysRevB.90.115137](http://link.aps.org/doi/10.1103/PhysRevB.90.115137)
* [14] Takahiro Misawa, Satoshi Morita, Kazuyoshi Yoshimi, Mitsuaki Kawamura, Yuichi Motoyama, Kota Ido, Takahiro Ohgoe, Masatoshi Imada, and Takeo Kato, "mVMC - open-source software for many-variable variational Monte Carlo method," Comput. Phys. Commun. **235**, 447 (2019)
* [15] H. Terletska, J. Vucicevic, D. Tanaskovic, and V. Dobrosavljevic, "Quantum critical transport near the Mott transition," Phys. Rev. Lett. **107**, 026401 (Jul 2011), [https://link.aps.org/doi/10.1103/PhysRevLett.107.026401](https://link.aps.org/doi/10.1103/PhysRevLett.107.026401)
* [16] Tetsuya Furukawa, Kazuya Miyagawa, Hiromi Taniguchi, Reizo Kato, and Kazushi Kanoda, "Quantum criticality of Mott transition in organic materials," Nat. Phys. **11**, 221 (March 2015), [https://doi.org/10.1038/nphys3235](https://doi.org/10.1038/nphys3235)
* [17] Fabian H. L. Essler and Alexei M. Tsvelik, "Weakly coupled one-dimensional Mott insulators," Phys. Rev. B **65**, 115117 (Mar 2002), [http://link.aps.org/doi/10.1103/PhysRevB.65.115117](http://link.aps.org/doi/10.1103/PhysRevB.65.115117)
* [18] P. Ribeiro, P. D. Sacramento, and K. Penc, "Finite-energy spectral function of an anisotropic two-dimensional system of coupled Hubbard chains," Phys. Rev. B **84**, 045112 (Jul 2011), [http://link.aps.org/doi/10.1103/PhysRevB.84.045112](http://link.aps.org/doi/10.1103/PhysRevB.84.045112)
* [19] Benjamin Lenz, Salvatore R. Manmana, Thomas Pruschke, Fakher F. Assaad, and Marcin Raczkowski, "Mott quantum criticality in the anisotropic 2d Hubbard model," Phys. Rev. Lett. **116**, 086403 (Feb 2016), [http://link.aps.org/doi/10.1103/PhysRevLett.116.086403](http://link.aps.org/doi/10.1103/PhysRevLett.116.086403)
* [20] Marcin Raczkowski, Fakher F. Assaad, and Masatoshi Imada, "Local moments versus itinerant antiferromagnetism: Magnetic phase diagram and spectral properties of the anisotropic square lattice Hubbard model," Phys. Rev. B **103**, 125137 (Mar 2021), [https://link.aps.org/doi/10.1103/PhysRevB.103.125137](https://link.aps.org/doi/10.1103/PhysRevB.103.125137)
* [21] J. Vucicevic, H. Terletska, D. Tanaskovic, and V. Dobrosavljevic, "Finite-temperature crossover and the quantum widom line near the mott transition," Phys. Rev. B **88**, 075143 (Aug 2013), [https://link.aps.org/doi/10.1103/PhysRevB.88.075143](https://link.aps.org/doi/10.1103/PhysRevB.88.075143)
* [22] B.H. Moon, "Metal-insulator transition in two-dimensional transition metal dichalcogenides," emergent mater. **4**, 989-998 (March 2021), [https://doi.org/10.1007/s42247-021-00202-9](https://doi.org/10.1007/s42247-021-00202-9)
* [23] T. Li, Jiang S., L. Li, and et al., "Continuous mott transition in semiconductor moire superlattices," Nature **597**, 350-354 (Sept. 2021), [https://doi.org/10.1038/s41586-021-03853-0](https://doi.org/10.1038/s41586-021-03853-0)
* [24] Heike Eisenlohr, Seung-Sup B. Lee, and Matthias Vojta, "Mott quantum criticality in the one-band hubbard model: Dynamical mean-field theory, power-law spectra, and scaling," Phys. Rev. B **100**, 155152 (Oct 2019), [https://link.aps.org/doi/10.1103/PhysRevB.100.155152](https://link.aps.org/doi/10.1103/PhysRevB.100.155152)
* [25] Youhei Yamaji, Takahiro Misawa, and Masatoshi Imada, "Quantum and topological criticalities of Lifshitz transition in two-dimensional correlated electron systems," J. Phys. Soc. Jpn. **75**, 094719 (2006), [https://doi.org/10.1143/JPSJ.75.094719](https://doi.org/10.1143/JPSJ.75.094719)
* [26] Kota Ido, Takahiro Ohgoe, and Masatoshi Imada, "Competition among various charge-inhomogeneous states and \(d\)-wave superconducting state in Hubbard models on square lattices," Phys. Rev. B **97**, 045138 (Jan 2018), [https://link.aps.org/doi/10.1103/PhysRevB.97.045138](https://link.aps.org/doi/10.1103/PhysRevB.97.045138)
* [27] Andrew S. Darmawan, Yusuke Nomura, Youhei Yamaji, and Masatoshi Imada, "Stripe and superconducting order competing in the Hubbard model on a square lattice studied by a combined variational Monte Carlo and tensor network method," Phys. Rev. B **98**, 205132 (Nov 2018), [https://link.aps.org/doi/10.1103/PhysRevB.98.205132](https://link.aps.org/doi/10.1103/PhysRevB.98.205132)
* [28] Sandro Sorella, "Generalized lanczos algorithm for variational quantum Monte Carlo," Phys. Rev. B **64**, 024512 (Jun 2001)
* [29] Kensaku Takai, Kota Ido, Takahiro Misawa, Youhei Yamaji, and Masatoshi Imada, "Finite-temperature variational Monte Carlo method for strongly correlated electron systems," J. Phys. Soc. Jpn. **85**, 034601 (2016), [https://doi.org/10.7566/JPSJ.85.034601](https://doi.org/10.7566/JPSJ.85.034601)
* [30] Martin C. Gutzwiller, "Effect of correlation on the ferromagnetism of transition metals," Phys. Rev. Lett. **10**, 159-162 (Mar 1963), [http://link.aps.org/doi/10.1103/PhysRevLett.10.159](http://link.aps.org/doi/10.1103/PhysRevLett.10.159)
* [31] Robert Jastrow, "Many-body problem with strong forces," Phys. Rev. **98**, 1479-1484 (Jun 1955), [http://link.aps.org/doi/10.1103/PhysRev.98.1479](http://link.aps.org/doi/10.1103/PhysRev.98.1479)
* [32] Luca F. Tocchio, Federico Becca, and Sandro Sorella, "Hidden mott transition and large-\(u\) superconductivity in the two-dimensional hubbard model," Phys. Rev. B **94**, 195126 (Nov 2016), [https://link.aps.org/doi/10.1103/PhysRevB.94.195126](https://link.aps.org/doi/10.1103/PhysRevB.94.195126)
* [33] Yusuke Nomura and Masatoshi Imada, "Dirac-type nodal spin liquid revealed by refined quantum many-body solver using neural-network wave function, correlation ratio, and level spectroscopy," Phys. Rev. X **11**, 031034 (Aug 2021), [https://link.aps.org/doi/10.1103/PhysRevX.11.031034](https://link.aps.org/doi/10.1103/PhysRevX.11.031034)
## Supplementary Material
### A. Shape of Fermi surface for noninteracting system
Figure S1 shows the Fermi surface for \(U=0\).
### B. Charge gaps in continuous transition region
The charge gap defined by Eq. (22) is obtained from the procedure in Methods C. Examples of the calculated results for \(28\times 28\) lattices in the cases of **a.** \(t_{\perp}=0.05\), **b.** 0.1, **c.** 0.2, and **d.** 0.3 are shown in Fig. S2. The phase boundary of the metal-insulator transitions in Fig. 1 is determined by analyzing these results. The magnitude of the finite-size-gap artifact \(\Delta_{0}\), estimated from the noninteracting case, is indicated by the horizontal dotted line.
### C. Method to estimate the charge gap in phase separation region
Figure S3**a** shows that the energy as a function of \(n\) exhibits phase separation for \(U\geq 1.9\) in the case of \(t_{\perp}=0.38\). The carrier density \(n\) is defined as \(n=N_{e}/N_{s}-1/2\). Figure S3**b** illustrates the procedure to estimate the charge gap when the phase separation takes place. A convex (concave) downward curve of the chemical potential in the electron (hole) doped region indicates the phase separation. By drawing the horizontal line such that the two regions surrounded by the horizontal line and the chemical potential curve have equal areas (the Maxwell construction), the phase separation region can be obtained, where the pinned chemical potential during the phase separation is given by the horizontal line. The difference in the pinned chemical potential between the hole- and electron-doped sides is the charge gap.
### D. Momentum distribution functions in first-order transition region
In most of the first-order transition region, the metal-insulator and antiferromagnetic transitions occur simultaneously. To confirm the metal-insulator transition, momentum distribution functions \(n(\vec{k})\) around the energy level crossing points are shown in Fig. S4, where the shape of \(n(\vec{k})\) is qualitatively different between the metal and the insulator.
### E. Absence of singularity in energy as function of control parameter
Figure S5 shows that the energy as a function of \(n\) (**a.**) and \(U\) (**b.**) looks nonsingular around the MQCP at \(U=1.83\) and \(t_{\perp}=0.38\).
### F. Determination of antiferromagnetic transition and its criticality
The universality of the magnetic transition may belong to a class different from that of the Mott transition. As with the metal-insulator transition, we see clear jumps of the staggered magnetization \(m_{s}\) in the region of first-order transitions. However, locating the border between the paramagnetic and antiferromagnetic phases is not straightforward in the region of continuous transitions. Here we first describe how \(U^{\rm AF}\) is estimated and then turn to the estimate of the critical exponents at the MQCP.
#### F.1 Antiferromagnetic transition determined by correlation ratio method
We determine the boundary of the antiferromagnetic phase by using the correlation-ratio method [1], where the correlation-ratio parameter \(S_{g}\) obtained from the spin structure factor \(S(q)\) is given by
\[S_{g}\equiv 1-\frac{S(\pi,\pi+\Delta q_{y})}{S(\pi,\pi)}.\] (S1)
Here, \(\pi+\Delta q_{y}\) is the nearest-neighbor \(\vec{k}\)-point to \((\pi,\pi)\). We plot this ratio for \(20\times 20\), \(24\times 24\), and \(28\times 28\) sites to determine the border between the paramagnetic and antiferromagnetic phases. In the nonmagnetic region, \(S_{g}\) converges to zero with increasing system size, because \(S(\vec{q})\) is finite and continuous in the thermodynamic limit. On the other hand, in the AF region, \(S_{g}\) converges to one with increasing system size. It is empirically observed that the curves for different sizes cross and that the crossing point does not sensitively depend on the system sizes, which serves as a good estimate of the transition point in the thermodynamic limit [1; 2].
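A minimal sketch of Eq. (S1) (illustrative code; the structure factor is assumed to be indexed by integer momenta \(m\) with \(q=2\pi m/L\)):

```python
import numpy as np

def correlation_ratio(S_q, L):
    # Eq. (S1): 1 - S(pi, pi + dq) / S(pi, pi) on an L x L lattice
    m = L // 2                              # integer index of q = pi
    return 1.0 - S_q[m, (m + 1) % L] / S_q[m, m]
```

For the toy staggered correlations used above, \(S(\vec{q})\) vanishes away from \((\pi,\pi)\) and the ratio equals one, as expected deep in the ordered phase.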
In the same way as the fittings of the energy and double occupancy, we interpolate the correlation-ratio parameter \(S_{g}\) as a function of \(U\) by assuming the rational function as
\[g(U)=\frac{a_{0}U^{2}+a_{1}U+a_{2}}{a_{3}U^{2}+a_{4}U+a_{5}}.\] (S2)
From this fitting we are able to estimate the correlation-ratio crossing point by interpolation. The phase boundary of the magnetic transition in Fig. 1 is thus determined from the crossing points of \(S_{g}\) for \(20\times 20\), \(24\times 24\), and \(28\times 28\) sites. We show the correlation-ratio plots for \(20\times 20\), \(24\times 24\), and \(28\times 28\) sites in the cases of \(t_{\perp}=0.05\) and \(0.1\) in Fig. S6, where the metal-insulator transition is clearly different and the quantum spin liquid phase (NMI) is found. For \(t_{\perp}=t_{\perp}^{\rm MQCP}=0.38\) the plot is shown in Fig. S7. The magnetic transition point is then consistently estimated as \(U^{\rm AF}\sim 1.85\pm 0.02\), which is close to \(U^{\rm MQCP}\sim 1.83\pm 0.03\), but seems to be slightly larger within the error bar.
#### F.2 Critical exponent at antiferromagnetic transition
We here estimate the critical exponents for the antiferromagnetic transition at the MQCP. For this purpose, we adopt the finite-size scaling relation for the spin structure factor \(S(q)\),
\[S(\pi,\pi)=L^{-z+2-\eta}f_{m}(uL^{1/\nu}),\] (S3)
where \(u=(U-U^{\rm AF})/U^{\rm AF}\) and \(z\) represents the dynamical exponent while \(f_{m}\) is a scaling function and \(\eta\) is the exponent associated with the anomalous dimension.
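A minimal numerical sketch of how the collapse implied by Eq. (S3) can be checked (illustrative code; a small value of the returned measure indicates that all rescaled points fall on a single curve):

```python
import numpy as np

def collapse_quality(L_list, U_list, S_peak, U_af, nu, z_plus_eta):
    # rescale each point (L, U, S(pi,pi)) according to Eq. (S3)
    x = np.array([(U - U_af) / U_af * L ** (1.0 / nu)
                  for L, U in zip(L_list, U_list)])
    y = np.array([S * L ** (z_plus_eta - 2.0)
                  for L, S in zip(L_list, S_peak)])
    y = y[np.argsort(x)]
    return np.sum(np.diff(y) ** 2)   # scatter of neighbors on the master curve
```

Minimizing this measure over \((\nu,\,z+\eta)\) is one standard way to extract the exponents quoted in Eq. (S4).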
As shown in Fig. S8, we obtain the exponents as
\[\nu=0.52\pm 0.02,\ {\rm and}\ \eta+z=1.9\pm 0.1\] (S4)
if the scaling form (S3) is used with \(U^{\rm AF}=1.85\) at \(t_{\perp}=0.38\). Moreover, if we assume the hyperscaling relation
\[\frac{\beta}{\nu}=\frac{z+d-2+\eta}{2},\] (S5)
we can estimate the critical exponent \(\beta_{\rm AF}=0.49\pm 0.03\), which turns out to be consistent with that of the mean-field theory (\(\beta=0.5\)). This is justified when \(z\sim 2\), so that \(d+z=4\) assures that the present system is located just at the upper critical dimension in the conventional framework of Ising or Hertz-Moriya criticality [3; 4], and the critical exponents are marginally given by the mean-field values for the symmetry-breaking transition. This is also consistent with \(z+\eta\sim 2\), resulting in \(\eta\sim 0\) and indicating the absence of the anomalous dimension. Then \(\gamma=1\) and \(\delta=3\), derived from the scaling relations, indicate divergent fluctuations, in contrast with the universality of the metal-insulator transition. A large \(z\) (\(\sim 2\)), instead of the normal value \(z=1\) expected for the antiferromagnetic spin-wave dispersion, could be a consequence of the proximity to the MQCP. Instead, it is conceivable that a non-negligible \(\eta>0\) makes \(d+z<4\), so that a deviation from the mean-field values exists, which may drive \(z\) to decrease from 2, though the presence of the diverging fluctuations characterized by \(\gamma>1\) and \(\delta>1\) would not change. These issues should be carefully examined in the future in the region close to the transition point if \(U^{\rm AF}\) is different from \(U^{\rm MQCP}\). Of course, the AF long-range order requires the multi-dimensionality \(d\geq 2\) of the system. Although a background broad peak reflects the moderate anisotropy of the Hamiltonian, the spin structure factor \(S(q)\) shown in Fig. S9 clearly demonstrates that the spin correlation shows 2D isotropic behavior, with a sharp critical peak at \((\pi,\pi)\), even at the MQCP.
### G. Double occupancy correlation
The double occupancy \(D\) is defined in Eq. (12). The spatial correlation of its fluctuation, defined by Eq. (26), is plotted in Fig. S10, where the fitting of \(Q(\vec{r})\) suggests \(z+\eta=3.3\pm 0.8\) from Eq. (27). The value is consistent with the present scaling theory, which requires \(z+\eta=4\).
### H. Size extrapolation of double occupancy
To analyze the criticality by using the double occupancy, we perform the size extrapolations of \(D\) by using the following formulae,
\[D(L)=D_{\infty}+\left\{\begin{array}{ll}b_{\rm M}/L^{2}&({\rm metal})\\ b_{\rm l}/L&({\rm insulator})\end{array}\right\}\] (S6)
where \(D_{\infty}\) is the double occupancy at the thermodynamic limit and \(b_{\rm M}\) (\(b_{\rm l}\)) is the fitting parameter in the metallic (insulating) phase. Examples of the fitting are shown in Fig. S11. The error bars in Fig. 2 of the main article are determined by the square root of the mean square error of the fitting.
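A minimal sketch of the extrapolation Eq. (S6) (illustrative code; it is a linear fit in \(1/L^{2}\) for the metal and \(1/L\) for the insulator):

```python
import numpy as np

def extrapolate_D(L_vals, D_vals, phase):
    # Eq. (S6): D(L) = D_inf + b / L**2 (metal) or b / L (insulator)
    x = 1.0 / np.asarray(L_vals, dtype=float) ** (2 if phase == "metal" else 1)
    b, D_inf = np.polyfit(x, D_vals, 1)
    return D_inf, b

L_vals = [16, 20, 24, 28]
D_vals = [0.19 + 0.8 / L for L in L_vals]                # synthetic insulator data
print(extrapolate_D(L_vals, D_vals, phase="insulator"))  # -> (~0.19, ~0.8)
```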
### I. Decoupling of metal-insulator and antiferromagnetic transitions
The metal-insulator transition (MIT) is often intertwined with magnetic fluctuations. A phenomenology that captures both the MIT and the spin degrees of freedom necessitates a scalar order parameter \(\Phi({\bf x},\tau)\) that captures the doublon occupancy, as well as a normalised vector order parameter \({\bf n}({\bf x},\tau)\) that captures the antiferromagnetic fluctuations. The field theory has to possess a \(Z_{2}\times O(3)\) global symmetry, \(\Phi({\bf x},\tau)\rightarrow-\Phi({\bf x},\tau)\) and \({\bf n}({\bf x},\tau)\to O{\bf n}({\bf x},\tau)\) with \(O\) an orthogonal matrix, and the effective Lagrangian reads:
\[L=L_{\Phi}+L_{n}+L_{int}.\] (S7)
It accounts for the dynamics of the scalar field and the vector field, as well as the interaction between the two. We refrain from writing down explicit forms for \(L_{\Phi}\) and \(L_{n}\), since the only information we need to assess whether \(L_{int}\) is relevant or not at criticality is the scaling dimensions of \(\mathbf{n}(\mathbf{x},\tau)\) and \(\Phi(\mathbf{x},\tau)\). Assuming a single singular spatio-temporal length scale \(\lambda\), and for a given dynamical exponent \(z\), we expect that the correlation function of the order parameter \(\Phi\) at criticality follows
\[\langle\Phi(\lambda\mathbf{x},\lambda^{z}\tau)\Phi(\lambda\mathbf{x}^{\prime},\lambda^{z}\tau^{\prime})\rangle\propto\frac{1}{\lambda^{2\Delta_{\Phi}}} \langle\Phi(\mathbf{x},\tau)\Phi(\mathbf{x}^{\prime},\tau^{\prime})\rangle,\] (S8)
where \(\Delta_{\Phi}\) is represented by the exponents of the MQCP as \(2\Delta_{\Phi}=d+z+\eta-2\). For readers who are not familiar with this critical scaling exponent \(\Delta_{\Phi}\), see below an example in the simple \(\phi^{4}\) model. At the MQCP our estimates are \(z\sim 2\) and \(\eta\sim 2\), and the equal-time doublon-doublon correlation functions are consistent with \(\Delta_{\Phi}\simeq 2\). A similar form holds for the O(3) order parameter \(\mathbf{n}\). We are now in a position to perturbatively understand whether the coupling between the spin and doublon degrees of freedom is irrelevant, marginal, or relevant. The most relevant symmetry-allowed interaction between the O(3) and \(Z_{2}\) fields reads:
\[L_{int}=g\int d^{2}\mathbf{x}d\tau\Phi(\mathbf{x},\tau)^{2}\left(\nabla_{ \mathbf{x}}\mathbf{n}(\mathbf{x},\tau)\right)^{2}+\cdots\] (S9)
We note that, due to the normalization of the O(3) order parameter, \(\Phi^{2}\mathbf{n}^{2}\) does not provide a spin-charge coupling. The ellipsis denotes higher-order terms. Under a scale transformation, the interaction term transforms as
\[g\to g\lambda^{z-2\Delta_{\Phi}-2\Delta_{\mathbf{n}}}\] (S10)
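For completeness, the power counting behind Eq. (S10) can be spelled out (a short sketch using only the scaling dimensions defined above): under \(\mathbf{x}\to\lambda\mathbf{x}\) and \(\tau\to\lambda^{z}\tau\),

\[[d^{2}\mathbf{x}\,d\tau]=\lambda^{d+z},\qquad[\Phi^{2}]=\lambda^{-2\Delta_{\Phi}},\qquad[(\nabla_{\mathbf{x}}\mathbf{n})^{2}]=\lambda^{-2(\Delta_{\mathbf{n}}+1)},\]

so that \(g\to g\,\lambda^{d+z-2\Delta_{\Phi}-2\Delta_{\mathbf{n}}-2}\), which reduces to Eq. (S10) for \(d=2\).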
As mentioned above, we know that for the MQCP \(\Delta_{\Phi}\simeq 2\) and that \(z\simeq 2\). As a result, and for any \(\Delta_{\mathbf{n}}>0\), \(g\) scales to zero under successive coarse graining scale transformations. The above provides a compelling argument supporting the notion that the charge and spin transitions are, in the RG sense, independent of each other at the MQCP.
Here, we supplement the relation of the scaling exponent of the correlation defined in Eq. (S8) to the general framework of the scaling theory with a simple example, the conventional \(\phi^{4}\) theory, for readers who are not familiar with the scaling theory of quantum systems. The non-dimensional \(\phi^{4}\) Hamiltonian \(H[\phi]\) is given by
\[H[\phi]=\int d^{d}r[\frac{1}{2}(\nabla\phi)^{2}+\frac{A}{2}\phi^{2}+\frac{B}{ 4}\phi^{4}]\] (S11)
with coefficients \(A\) and \(B\). From the assumption of a single length scale \(\lambda\), this classical \(\phi^{4}\) Hamiltonian requires the scaling of \(\phi\) from the first term as
\[[\phi]=\lambda^{1-\frac{d+\eta}{2}},\] (S12)
where \(\eta\) is the anomalous dimension to account for the relation of \(\lambda\) and the diverging correlation length \(\xi\). From the second and third terms, we obtain similarly \([A]=\lambda^{-2}\) and \([B]=\lambda^{d+\eta-4}\), respectively. When quantum dynamics is introduced, a mapping of a \(d\)-dimensional quantum system to \(d+z\) dimensional classical representation tells us that we need to replace \(d\) with \(d+z\). Here, \(z\) is the dynamical exponent to represent the scaling of time scale \([\tau]=[\lambda]^{z}\). From this we obtain the criticality of correlation function from the scaling of \(\phi\) (Eq.(S12)) as
\[\langle\phi(\lambda\mathbf{x},\lambda^{z}\tau)\phi(\lambda\mathbf{x}^{\prime},\lambda^{z}\tau^{\prime})\rangle\propto\frac{1}{\lambda^{2\Delta_{\phi}}} \langle\phi(\mathbf{x},\tau)\phi(\mathbf{x}^{\prime},\tau^{\prime})\rangle\] (S13)
with \(2\Delta_{\phi}=d+z+\eta-2\).
2309.15669 | On the Computational Entanglement of Distant Features in Adversarial Machine Learning

In this research, we introduce the concept of "computational entanglement," a phenomenon observed in overparameterized feedforward linear networks that enables the network to achieve zero loss by fitting random noise, even on previously unseen test samples. Analyzing this behavior through spacetime diagrams reveals its connection to length contraction, where both training and test samples converge toward a shared normalized point within a flat Riemannian manifold. Moreover, we present a novel application of computational entanglement in transforming worst-case adversarial examples - inputs that are highly non-robust and uninterpretable to human observers - into outputs that are both recognizable and robust. This provides new insights into the behavior of non-robust features in adversarial example generation, underscoring the critical role of computational entanglement in enhancing model robustness and advancing our understanding of neural networks in adversarial contexts.

YenLung Lai, Xingbo Dong, Zhe Jin | 2023-09-27T14:09:15Z | http://arxiv.org/abs/2309.15669v7

# On the Computational Entanglement of Distant Features in Adversarial Machine Learning
###### Abstract
Adversarial examples in machine learning have emerged as a focal point of research due to their remarkable ability to deceive models with seemingly inconspicuous input perturbations, potentially resulting in severe consequences. In this study, we embark on a comprehensive exploration of adversarial machine learning models, shedding light on their intrinsic complexity and interpretability. Our investigation reveals intriguing links between machine learning model complexity and Einstein's theory of special relativity, all through the lens of entanglement. While our work does not primarily center on quantum entanglement, we instead define the entanglement correlations we have discovered to be computational, and demonstrate that distant feature samples can be entangled, strongly resembling entanglement correlation in the quantum realm. This revelation offers fresh insights into the phenomenon of emergent adversarial examples in modern machine learning, potentially paving the way for more robust and interpretable models in this rapidly evolving field.
Computational Entanglement, Adversarial Machine Learning, Special Relativity, Information Reconciliation
## 1 Introduction
The rise of machine learning has increased reliance on automated systems for critical decision-making across various domains, particularly in pattern recognition, enabling the identification and classification of objects and individuals through biometric recognition. However, this integration of machine learning into pattern recognition brings both advantages and significant concerns.
Starting in 2004, Dalvi et al. [1], quickly followed by Lowd and Meek [2], exposed the susceptibility of linear classifiers to adversarial manipulations in spam filtering, marking the early instances of adversarial attacks. Barreno et al. [3] presented a taxonomy of attacks on machine learning systems, while Biggio et al. [4] demonstrated gradient-based attacks. Szegedy et al. [5] revealed the vulnerability of deep neural networks to gradient-based attacks, introducing _adversarial examples_ that closely resemble original data but can lead to incorrect model predictions. These findings highlight the risks inherent in machine learning systems, especially in pattern recognition tasks. Adversarial attacks can exploit model vulnerabilities, casting doubt on their trustworthiness and suitability for critical decision-making processes.
**Adversarial examples** exhibit a noteworthy attribute: the ability to deceive disparate machine learning models despite differences in their architectures or training datasets. Through meticulous manipulation of input images, attackers can create adversarial examples that remain imperceptibly altered to the human eye, thereby presenting considerable challenges in detection and raising awareness of their potentially adverse implications [6]. Furthermore, when these different models are deceived by an adversarial example, they tend to agree with each other on the incorrect class prediction. This transferability phenomenon challenges the conventional belief that models with varying characteristics would possess diverse vulnerabilities and make dissimilar errors [7]. It also implies the existence of common vulnerabilities or analogous decision boundaries across diverse models. Despite their differences, these models may be susceptible to similar input perturbations or exhibit similar decision-making mechanisms that are sensitive to adversarial manipulations. The transferability of adversarial examples raises concerns about the robustness and generalizability of machine learning models, whereby an attacker is able to train their own substitute model, generate adversarial examples against the substitute, and successfully exploit the vulnerabilities of a victim model with minimal knowledge about it.
In the rapidly evolving machine learning landscape, various techniques for crafting less human-perceptible adversarial examples have arisen, including spatial transformations proposed by Xiao et al. [8], the application of generative adversarial networks showcased by Baluja and Fischer [9], Xiao et al. [10], and Hossain et al. [11], and the recent diffusion model-based approach introduced by Chen et al. [12].
Papernot et al. [13] provide substantial evidence supporting the transferability of adversarial examples, even in black-box scenarios. Within these environments, an adversary does not possess detailed knowledge about the internal intricacies of the model, such as its architecture or specific parameter values. Interactions with the model are solely confined to supplying input data and observing the corresponding predicted labels. This paradigm sets forth a realistic, yet formidable, challenge, as adversaries often lack a comprehensive understanding of the model's intricate operations. Their study substantiates the potential of black-box attacks, showcasing their practical feasibility across an
extensive range of machine learning models. The models considered in their study span beyond deep neural networks (DNNs) and encompass logistic regression (LR), support vector machines (SVM), decision trees (DT), and k-nearest neighbors (kNN).
Indeed, extensive research has been conducted to investigate the transferability phenomenon, and numerous theories have emerged to explain its underlying mechanics. These theories commonly posit that adversarial examples occur due to complex and high-dimensional inconsistencies present in the training data [14, 15, 16]. Nevertheless, these theories have yet to provide a complete understanding that succinctly portrays how this phenomenon operates in practical settings.
Despite the incomplete understanding of adversarial transferability's root cause, the drive to maximize machine learning model accuracy has shown no signs of slowing down [17, 18]. This belief in accuracy improvement as a means to enhance adversarial robustness is unyielding. However, it's important to note that while accuracy improvement is a significant goal, it doesn't guarantee adversarial robustness. Ilyas et al. [19] emphasized this in their work, suggesting that human-understandable explanations closely tied to model training are essential. They introduced a fresh perspective, arguing that adversarial examples stem from a model's sensitivity to data features with strong generalization properties, not anomalies. They position adversarial vulnerability as a human-centric issue, asserting that in standard supervised learning, both non-robust and robust features play equally vital roles in shaping model predictions.
Research conducted by Demontis et al. [20] has revealed the relationship between the transferability of adversarial examples and the complexity of models. The study indicates that higher-complexity models are more susceptible to the transfer of adversarial examples. This susceptibility can be attributed to the greater gradients observed in more complex models, which represent the rate of change of the loss function. Loosely speaking, as the complexity of a model increases, the potential for adversarial examples to successfully transfer and deceive the model also tends to increase when appropriate regularization is applied.
While these findings may seem counter-intuitive, it's important to consider the trade-offs. As models increase in complexity, they can become less transparent and harder to intuitively understand. This is especially true for deep learning models with highly non-linear neural network architectures. This interpretability challenge makes it even harder for researchers to fully grasp the transferability of adversarial examples. Therefore, in the rapidly developing field of adversarial machine learning, there's a real need for practical research and solutions that address this issue.
**Summary of Results**: Our main focus is adversarial machine learning, specifically emphasizing interpretability, complexity, and their impact on model accuracy. We have developed a physical model to simulate adversaries' ability to independently construct machine learning models using available physical resources via parameter inferencing.
Our research deviates from traditional error correction approaches and is deeply rooted in physical laws, particularly the second law of thermodynamics, which posits an ever-increasing system entropy. To counteract this entropy surge, we introduce a new notion of computational entanglement: for deterministic computation of model parameters (ensuring model correctness) and for preserving system information through encoding (guaranteeing model completeness). Deeper analysis reveals connections between our model and Einstein's theory of special relativity. In particular, we demonstrate that computational entanglement is sufficient to establish a flat spacetime framework for the study of relative motion between objects. Our findings place new emphasis on the principle of increasing computational complexity while complementing the foundational ideas of special relativity through computational entanglement.
Intriguingly, our introduced emphasis on increasing complexity resonates with much recent work spanning various fields of scientific research beyond quantum physics [21], including functional evolution [22], information dynamics [23, 24], and even chemistry [25]. Our research, in particular, represents an extension of this emphasis, broadening its scope from the domain of computational physics into the exciting realm of machine learning.
Beyond theoretical implications, the computational manifestation of entanglement equips us with the means to efficiently stimulate entanglement between pairs of feature samples regardless of their spatial separation. We demonstrate that computational entanglement can emerge within arbitrary random input distributions, accompanied by relativistic effects like time dilation and length contraction, affecting sample pair-wise distances and angular differences, and potentially leading to the creation of adversarial examples that may result in misclassification.
Based on our findings, we hope to ignite fresh enthusiasm and contribute to evolving perspectives on entanglement in both the machine learning and physics communities. This exploration holds the promise of enhancing the robustness of machine learning models. Although our primary focus does not center on quantum entanglement, our research has yielded valuable insights into the intriguing parallels between computational entanglement and quantum entanglement, particularly in the realm of entangling distant features. These discoveries have the potential to spark exciting new avenues for further research and exploration.
## 2 Contemporary Challenges in Today's Machine Learning Models
In machine learning, it's often thought that the more complex the model, the better its accuracy. This belief has led to a push for advanced, 'black box' learning models to get the best prediction performance possible [26]. The research of Zhang et al. [27] further illuminates this concept by demonstrating that neural networks are capable of learning arbitrary relationships between images and labels. This remains true even when these models are subjected to stringent regularization techniques, and when original images are replaced with random, unstructured noise, contradicting conventional expectations. Furthermore, they establish through theoretical construction that even simple, two-layer neural networks can perfectly fit the data once the
number of parameters surpasses the number of data points. Therefore, it appears that model complexity is proportional to accuracy, given the direct correlation between the increase in the number of parameters and enhanced learning performance. This gives rise to an intriguing question: _Should machines delve into acquiring features that may not align with human perceptions of usefulness in pursuit of peak accuracy?_
In order to tackle this query, we first need to establish what signifies a "useful" feature from a human perspective within the realm of machine learning. This definition largely hinges on the specific context and the problem at hand. Broadly speaking, a feature is regarded as useful if it substantially improves the model's predictive capability and aligns with human interpretability [28]. Human interpretability typically involves features that align with our understanding of the problem domain. For example, in a predictive model for housing prices, variables like square footage, geographic location, and the number of bedrooms are considered meaningful and interpretable to humans due to our awareness of their impact on property values.
However, machine learning models, particularly deep learning ones, are known to identify non-robust features--those which humans find challenging to comprehend or regard as insignificant. While these features often contribute to high model generalization, they render the models susceptible to adversarial attacks. The study conducted by Ilyas et al. [19] suggests a nuanced understanding of non-robust features in models, arguing that they are not inherently "useless". Rather, their functionality resides in regions that might not align with human intuition or contemplation. Such features are essential when the ultimate goal is achieving the highest possible accuracy. They employed a standard training procedure for a deep convolutional neural network (DCNN) and then crafted adversarial examples with incorrect labels. These adversarial examples were utilized to train a second DCNN which exhibited the capability to correctly classify new, unseen images, even without prior exposure to properly labeled data.
Nakkiran [29] conducted an experiment using projected stochastic gradient descent (PGD) to generate adversarial examples that minimize the "disentangled loss." In simpler terms, they demonstrate that when creating adversarial examples, one can control how well these examples deceive different models by managing the information leakage within the adversarial example. Their experiments strongly suggest that some adversarial examples are not solely the result of non-robust features. How information is harnessed and how models are trained also significantly impact adversarial transferability. They further conducted another experiment to illustrate that adversarial examples can indeed also be "just bugs," emerging due to factors like limited data samples, overfitting, and labeling errors.
Although Wallace [30] challenged Ilyas' findings by attributing them to model distillation, where information from one model influenced another due to incorrect labels in adversarial examples derived from an initially correctly labeled dataset, Ilyas maintained that this doesn't undermine their central claim, as the only features that could have been transferred through distillation were the non-robust ones.
The debate on adversarial examples has given rise to a new perspective: they can be seen as artifacts, neither purely features nor bugs, arising from interactions between systems and the real world. Buckner [31] uses the Doppler effect as an example of how artifacts can be misleading, like changes in sound or light frequency due to the motion of objects. While exploring non-robust features may enhance predictions and control in science, there is a challenge: _can humans truly grasp these complex non-robust features intuitively?_ Striking a balance between exploring them and keeping scientific explanations comprehensible and interpretable to human understanding is crucial.
Rudin [32] asserts that while uninterpretable algorithms can still prove beneficial in high-stakes decisions as part of the knowledge discovery process (for instance, to establish baseline performance levels), they are generally not the final objective of knowledge discovery. The need for model interpretability becomes particularly crucial in high-stakes applications such as medicine, judicial decision-making, autonomous driving, among others--it is imperative not only for a model to be accurate but also to be interpretable. An interpretable model affords users an understanding of why it is rendering certain predictions, fostering trust in the model, and facilitating scrutiny and justification of its decisions.
Research has indicated that the differences in performance among various algorithms tend to be negligible if a standard knowledge discovery process is followed. This implies that high accuracy in machine learning models can be achieved without necessarily sacrificing interpretability, provided that appropriate data preparation steps, including feature engineering, are conducted effectively [33, 34].
While accuracy and interpretability are important, it's crucial not to underestimate the significance of a model's robustness against adversarial attacks. In this context, the perspective of an adversary is essentially about understanding the potential vulnerabilities and weak points of a model. By adopting this perspective, data scientists can design machine learning models that are less susceptible to such attacks and are more robust in general. Yet, there seems to be a deficit in the field concerning comprehensive adversary models that can illustrate all possible threats an adversary might pose to a machine learning model. Such models are critical as they give a holistic perspective on what an attacker might ultimately aim for, regardless of the type of attack model, be it under a black-box or a white-box setting. Therefore, a comprehensive understanding of potential adversarial threats is vital. Achieving a balance between model accuracy, complexity, and interpretability ensures our models are not only precise but also interpretable for continuous improvement. This balance also fortifies the models' resilience against adversarial intrusions, thereby enhancing their overall effectiveness and security.
## 3 Adversarial Learning Model
We begin with the definition of an adversarial learning model, which is characterized by parameter inference, a vital yet longstanding concept in machine learning, without making additional assumptions about the computational power of the adversary. More specifically, it follows Bayesian inference (i.e., Bayes' theorem), which combines prior beliefs over an arbitrary random distribution \(\mathcal{D}\) with observed data \(x\in\mathcal{D}\) to offer precise uncertainty measures for achieving the model objective.
**Adversary Model's Objective**: By means of parameter inference, the objective of the defined model is to identify a parameter value, denoted as \(\theta_{0}\), that maximizes the posterior probability \(P(\theta_{0}=\theta|\mathcal{D})\), given an arbitrary random distribution, denoted as \(\mathcal{D}\). Since parameters are fundamental in characterizing _almost all_ machine learning models, a precise inference of the true parameter provides an adversary with the tools necessary to launch an attack across various models. We here consider the true parameter, denoted as \(\theta_{0}\), to be unknown; if it were otherwise, there would be no problem to resolve. The primary objective is to maximize the likelihood of an inferred parameter, \(\theta\), being identical to the true parameter \(\theta_{0}\). Using Bayes' theorem, this can be formally described as:
\[P(\theta=\theta_{0}|\mathcal{D})=\frac{P(\mathcal{D}|\theta=\theta_{0})\cdot P (\theta=\theta_{0})}{P(\mathcal{D})} \tag{1}\]
In the above equation, \(P(\mathcal{D}|\theta=\theta_{0})\) represents the "likelihood" of the distribution \(\mathcal{D}\) given the inferred parameter \(\theta\). The term \(P(\theta=\theta_{0})\), known as the "prior", reflects our initial belief concerning the value of \(\theta\) prior to any data observation. The denominator \(P(\mathcal{D})\), or the "evidence", serves as a normalizing constant which ensures the probabilistic coherence of the equation. Disregarding the denominator could lead to the right-hand side of the equation failing to constitute a probability, thereby not necessarily falling between 0 and 1.
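To make Eq. 1 concrete, consider the following minimal sketch (our own illustration; the Bernoulli distribution, the grid of candidates, and all numbers are assumptions, not taken from the model above), which performs grid-based posterior inference of a parameter from observed samples:

```
import numpy as np

# Hypothetical setup: infer a Bernoulli parameter theta from N samples
# drawn with true parameter theta_0, over a grid of candidate values.
rng = np.random.default_rng(0)
theta_0 = 0.3
data = rng.random(100) < theta_0                  # observed x in D

thetas = np.linspace(0.01, 0.99, 99)              # candidate theta values
prior = np.full(thetas.shape, 1 / thetas.size)    # uniform prior P(theta)

k, N = data.sum(), data.size
log_lik = k * np.log(thetas) + (N - k) * np.log(1 - thetas)  # log P(D|theta)

posterior = prior * np.exp(log_lik)
posterior /= posterior.sum()                      # divide by the evidence P(D)
print("MAP estimate:", thetas[posterior.argmax()])  # lands near theta_0
```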
In the following paragraph, we demonstrate that the adversary model's objective can be achieved through maximizing the likelihood, which ultimately reduces to the entropy of the true density under \(\theta_{0}\).
**Likelihood Maximization**: Given any training sample \(x\in\mathcal{D}\), we can identify an explicit function \(f\) within \(\mathcal{F}\), which maps the sample statistic to \(\theta\), that is \(\theta=f(x_{1},\ldots,x_{N})\) holds true for all \(x\) within \(\mathcal{D}\). However, if this is not achievable, we should resort to numerical optimization. In this case, we choose the parameter value of \(\theta\) that maximizes the likelihood of the data in order to fulfil our objective. In a conventional machine learning model, the process of maximization can be represented as follows:
\[\hat{\theta}=\arg\max_{\theta}\mathcal{L}(\theta;\mathcal{D}),\]
where \(\hat{\theta}\) is our inferred parameter, and the likelihood, typically defined as the joint density of the samples \(x\in\mathcal{D}\) viewed as a function of \(\theta\), is expressed as:
\[\mathcal{L}(\theta;\mathcal{D}=x_{1},\ldots,\mathcal{D}=x_{N})\] \[=f(\mathcal{D}=x_{1},\ldots,\mathcal{D}=x_{N};\theta_{1},\ldots, \theta_{N})\] \[=\prod_{i=1}^{N}f(\mathcal{D}=x_{i};\theta_{i})=\prod_{i=1}^{N}f (\mathcal{D}=x_{i};\theta). \tag{2}\]
The last line follows a common assumption used in machine learning that all training samples \((x_{1},\ldots,x_{N})\in\mathcal{D}\) are independent and identically distributed (i.i.d).
Notably, the maximization of the likelihood can be done by minimizing the "negative log likelihood", that is:
\[-\frac{1}{N}\log\mathcal{L}(\theta;\mathcal{D})=-\frac{1}{N}\sum_{i=1}^{N}\log f(\mathcal{D}=x_{i};\theta), \tag{3}\]
where we take the average by dividing the log likelihood by \(N\).
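As a minimal sketch of the numerical-optimization route mentioned above (the Bernoulli likelihood and all constants are illustrative assumptions), minimizing the average negative log likelihood of Eq. 3 recovers the true parameter:

```
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
theta_0 = 0.25
x = rng.random(10_000) < theta_0      # i.i.d. Bernoulli(theta_0) samples

def avg_nll(theta):
    # Eq. 3: average negative log likelihood of the data under theta.
    return -np.mean(np.where(x, np.log(theta), np.log(1 - theta)))

res = minimize_scalar(avg_nll, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x)                          # numerically close to theta_0 = 0.25
```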
It should be noted that directly solving the minimization problem as stated above would not provide a meaningful result, because it does not give us any information about the true distribution of \(\mathcal{D}\); the true distribution corresponds to the true parameter \(\theta_{0}\). Therefore, we employ the standard trick of adding zero to the above equation, i.e., adding and subtracting the log-likelihood term \(\log f(\mathcal{D}=x_{i};\theta_{0})\). Bringing the negative sign inside, we obtain the following expression:
\[\frac{1}{N}\bigg[\sum_{i=1}^{N}\log\left(\frac{f(\mathcal{D}=x_{i};\theta_{0})}{f(\mathcal{D}=x_{i};\theta)}\right)-\sum_{i=1}^{N}\log f(\mathcal{D}=x_{i};\theta_{0})\bigg] \tag{4}\]
For all \(x\in\mathcal{D}\), with sufficiently large \(N\), Eq. 4 converges to its expected value by the asymptotic equipartition property, resulting in:
\[D_{KL}(f(x;\theta_{0})||f(x;\theta))+\mathrm{H}(f(x;\theta_{0})) \tag{5}\]
where
\[\mathrm{H}(f(x;\theta_{0}))=-\mathbb{E}[\log\left(f(x;\theta_{0}) \right)]\] \[=-\int_{\mathbb{R}}f(x;\theta_{0})\log(f(x;\theta_{0}))dx\]
is the differential entropy of the true distribution \(f(x;\theta_{0})\) and
\[D_{KL}(f(x;\theta_{0})||f(x;\theta))=\int_{\mathbb{R}}f(x;\theta_{0})\log \left(\frac{f(x;\theta_{0})}{f(x;\theta)}\right)dx\]
is the Kullback-Leibler (KL) divergence, i.e., the relative entropy between the true density \(f(x;\theta_{0})\) and the parameter-inferred density \(f(x;\theta)\), which is a non-negative term and equal to zero only if \(\theta=\theta_{0}\).
The optimal goal for the adversary is to attain \(\theta=\theta_{0}\), which leads us to the minimum average negative log likelihood
\[-\frac{1}{N}\log\mathcal{L}(\theta;\mathcal{D})=\mathrm{H}(f(x;\theta_{0})). \tag{6}\]
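The decomposition of Eqs. 5-6 can be checked numerically; in the sketch below a Bernoulli density stands in for the generic \(f\) (purely an illustrative assumption), confirming that the average negative log likelihood splits into the KL divergence plus the entropy of the true distribution:

```
import numpy as np

theta_0, theta = 0.25, 0.4
rng = np.random.default_rng(2)
x = rng.random(1_000_000) < theta_0   # samples from the true distribution

# Left-hand side: average negative log likelihood under the inferred theta.
nll = -np.mean(np.where(x, np.log(theta), np.log(1 - theta)))

# Right-hand side: KL(f(theta_0) || f(theta)) + H(f(theta_0)).
H = -(theta_0 * np.log(theta_0) + (1 - theta_0) * np.log(1 - theta_0))
KL = (theta_0 * np.log(theta_0 / theta)
      + (1 - theta_0) * np.log((1 - theta_0) / (1 - theta)))
print(nll, KL + H)                    # the two values agree up to sampling noise
```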
**Challenges with Non-Determinism in Standard Approaches**: Note that the adversary learning model defined above is information-theoretic and, making no assumption on computational power, applicable to any machine learning model that aims to minimize a loss function or maximize a likelihood. Yet, achieving \(\theta=\theta_{0}\) in practice can be challenging due to factors like model complexity and noise, resulting in non-zero Kullback-Leibler divergence. Most importantly, the entropy of the likelihood function described above is non-deterministic and cannot be further reduced without precise knowledge of the density function \(f(\cdot)\) itself, that is, by setting its derivative to zero. For this reason, we discern a theoretical constraint imposed by conventional optimization strategies in minimizing the negative log likelihood function. The non-deterministic nature of this scenario introduces an element of uncertainty, prompting us to consider an alternative approach: employing a coding approach [35] that transforms feature samples into higher-dimensional codewords, with the aim of ensuring the model's correctness and completeness in the inference of arbitrary parameters \(\theta_{1},\theta_{2},\ldots,\theta_{t}\).
## 4 Model's Correctness and Completeness
We begin by formalizing the model's "correctness," which refers to the model's confidence in predicting outcomes correctly or generating "good" adversarial examples. It provides insight into the likelihood of accurately inferring a parameter, ensuring that \(\theta=\theta_{0}\) holds across various random inputs. The rationale underlying this notion is that, should an adversary possess the capability to precisely determine the model parameters, the adversary's objective shifts from data-driven to model-driven, potentially allowing them to devise attacks with a higher likelihood of success.
As we will demonstrate in the forthcoming sections, the computational efficiency of our model serves as a testament to its practical security implications for real-world machine learning models, even when confronted with stringent deterministic requirement of \(\theta=\theta_{0}\).
### _Model's Correctness_
To formalize correctness within our adversarial models, we must explore non-robust features, as defined by Ilyas et al. [19], which are believed to serve as the primary reason for the transferability of adversarial examples across different learning models.
In our context, we redefine non-robust features as those with low real-world occurrence probabilities. This definition is intuitive because rare or infrequent features can be challenging for humans to recognize across various domains.
However, it's crucial to note that a feature's robustness isn't solely based on its occurrence probability. Instead, it relates more to how significantly the feature influences the model's predictive performance. An infrequent feature may have a substantial impact on predictions when present, making it robust in terms of model accuracy. Conversely, a common feature may have little influence on outcomes, rendering it non-robust despite its high occurrence probability.
Nevertheless, our reformulation emphasizes that a feature's robustness depends on the model's capability to detect its presence and understand its impact. This consideration, particularly relevant for adversarial examples, is vital for improving a model's ability to effectively learn from and predict such features. Hence, it's crucial to reiterate that our central objective is to explore the _potential existence_ of these non-robust features in arbitrary adversarial learning models, rather than primarily focusing on their implications for robustness.
To account for these low probability (non-robust) features in our adversarial model, we focus on the distribution of distances between features. When we adopt a coding methodology, we traditionally view a pair of feature samples, denoted as \(w\in\mathbb{R}^{k}\) (and \(w^{\prime}\in\mathbb{R}^{k}\)), as analogous to a corresponding pair of 'messages', each with a dimension of \(k\). This 'message' pair undergoes a transformation into its respective pair of codewords (typically inhabiting a larger dimension \(n>k\)), represented as \(c\in\mathbb{R}^{n}\) (and \(c^{\prime}\in\mathbb{R}^{n}\)). A smaller distance between codewords suggests a stronger correlation (i.e., intraclass) between their original features, while a larger distance implies a weaker correlation (i.e., interclass) between their original features.
To align with the discrete notion of conventional coding approaches [35], we denote by \(x=d_{H}(c,c^{\prime})\), for \(c,c^{\prime}\in\mathcal{C}\subseteq\{-1,1\}^{n}\), the Hamming distance between the codewords post quantization. The term 'quantization' refers to quantizing the codeword values by their sign into the set \(\{-1,1\}\), creating \(n\)-dimensional binary vectors in \(\mathcal{C}\), where
\[\Pr[x=k]=\theta^{k}(1-\theta)^{n-k}. \tag{7}\]
#### 4.1.1 A Physical Model Designation Based on the Second Law of Thermodynamics and Computational Entanglement
The annotation in Eq. 7 narrows our attention to those distributions that can be described with the inferred parameter \(\theta\), with codeword distance equal to \(k\). This annotation is crucial, as it describes an arbitrary codeword distance \(x\in\mathcal{D}\) as an i.i.d. Binomial variable characterized by \(\theta\).
While it is common for standard machine learning models to assume that training data is i.i.d. without additional constraint, this assumption can be quite idealistic and often does not hold in practical applications: users might introduce manipulated data that violates the i.i.d. assumption [36], potentially allowing the transferability of adversarial examples across different models [37].
In the context of the aforementioned, we can describe the density \(f(x;\theta)\) corresponding to the inferred parameter \(\theta\) as a function of \((x,k,\theta)\) as follows:
\[f(x;\theta)\Rightarrow f(x,k;\theta)=\Pr[x=k]=\theta^{k}(1-\theta)^{n-k}.\]
Given the above description, it becomes logical to define the true density, which corresponds to \(\theta_{0}\), as conforming to a Binomial distribution:
\[f(x;\theta_{0})\Rightarrow f(x,k;\theta_{0})=\binom{n}{k}\theta_{0}^{k}(1- \theta_{0})^{n-k}.\]
The reason for this formulation is to establish a rigorous relationship between the measured density and the true density function. This relationship can be written as follows:
\[\frac{f(x,k;\theta_{0})}{f(x,k;\theta)}=\binom{n}{k} \tag{8}\]
_if and only if \(\theta=\theta_{0}\)._
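As a quick numerical check of Eq. 8 with small values: for \(n=4\) and \(k=2\), setting \(\theta=\theta_{0}\) gives

\[\frac{f(x,2;\theta_{0})}{f(x,2;\theta)}\bigg|_{\theta=\theta_{0}}=\frac{\binom{4}{2}\theta_{0}^{2}(1-\theta_{0})^{2}}{\theta_{0}^{2}(1-\theta_{0})^{2}}=\binom{4}{2}=6,\]

regardless of the particular value of \(\theta_{0}\); for any \(\theta\neq\theta_{0}\), the \(\theta\)-dependent factors no longer cancel and the ratio deviates from \(\binom{n}{k}\).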
It's important to note that the equation above describes the relationship between a single true parameter, \(\theta_{0}\), and a single inferred parameter, \(\theta\). However, for the sake of model completeness (elaborated in Section 4.2), we consider the possibility of using a finite number of parameters to describe the true densities of the system, i.e., \(\theta_{1},\theta_{2},\ldots,\theta_{t}\). This approach resembles the chain rule in calculus used to compute derivatives of composite functions. In our context, the end result is not a derivative but a derivable constant with respect to \(t\). This constant arises from a sequence of fractions representing the true densities of the system, linked together in a manner similar to function compositions in the chain rule, expressed as:
\[\frac{f_{1}(x,k;\theta_{0})}{f_{1}(x,k;\theta_{1})}\,\frac{f_{2}(x,k;\theta_{1})}{f_{2}(x,k;\theta_{2})}\cdots\frac{f_{t-1}(x,k;\theta_{t-2})}{f_{t-1}(x,k;\theta_{t-1})}\,\frac{f_{t}(x,k;\theta_{t-1})}{f_{t}(x,k;\theta_{t})}=\binom{n}{k}^{t}, \tag{9}\]

where, by Eq. 8, each factor equals \(\binom{n}{k}\) precisely when its two parameters coincide, so that the chain attains the value \(\binom{n}{k}^{t}\) if and only if \(\theta_{0}=\theta_{1}=\ldots=\theta_{t}\). Taking the negative logarithm of any single factor then recovers the minimum negative log likelihood of Eq. 6 in binary form,

\[-\frac{1}{N}\log\mathcal{L}(\theta;\mathcal{D})=\log\binom{n}{k}\approx nH_{2}(k/n), \tag{10}\]

where \(H_{2}(\cdot)\) denotes the binary entropy function.

**Encoding via a Random Generator Function:** To realize such a chain of densities, each feature sample \(w\in\mathbb{R}^{k}\) is encoded by a generator function \(G\), a collection of \(n\) randomly drawn hash functions \(h_{i}(w)=\operatorname{sgn}(v_{i}^{T}\cdot w)\) with \(v_{i}\in\mathbb{R}^{k}\); the resulting
output vector is referred to as an \(n\)-dimensional codeword. This codeword incorporates all \(h_{i}\in G\):
\[c=G(w)=(h_{1}(w),\ldots,h_{n}(w))\in\mathcal{C},\] \[c^{\prime}=G(w^{\prime})=(h_{1}(w^{\prime}),\ldots,h_{n}(w^{\prime }))\in\mathcal{C},\]
The Hamming distance, \(x=d_{H}(c,c^{\prime})\), between two codewords \(c\) and \(c^{\prime}\) is found to conform to a Binomial distribution, denoted as \(\text{Bin}(n,\theta_{0})\). This relationship can be formulated as:
\[\Pr[x=k]=\binom{n}{k}(\theta_{0})^{k}(1-\theta_{0})^{n-k}. \tag{11}\]
Owing to the Central Limit Theorem (CLT), when \(n\) is large, the Hamming distance \(x=d_{H}(c,c^{\prime})\) can be normalized, taking the form \((d_{H}(c,c^{\prime})-n\theta_{0})/\sqrt{n\theta_{0}(1-\theta_{0})}\), which converges towards a standard normal distribution, \(\mathcal{N}(0,1)\). This observation implies that \(d_{H}(c,c^{\prime})\) aligns closely with a normal distribution, precisely \(\mathcal{N}(n\theta_{0},n\theta_{0}(1-\theta_{0}))\), when a large enough \(n\) is used. Given the _principle of maximum entropy_, the normal distribution therefore naturally serves as an optimal candidate for maximizing the entropy of the densities under examination, supporting the model's interpretability in harmony with the second law of thermodynamics, as outlined in Eq. 9.
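The Binomial behaviour of Eq. 11 and its CLT approximation can be reproduced with a short simulation; in the sketch below we assume the hash functions take the random-hyperplane form \(h_{i}(w)=\operatorname{sgn}(v_{i}^{T}\cdot w)\), consistent with the unquantized \(\hat{h}_{i}\) defined below, and all dimensions are illustrative choices:

```
import numpy as np

rng = np.random.default_rng(3)
kdim, n, trials = 16, 200, 5_000

# Fix a pair of unit feature samples with a known angle between them.
w = rng.standard_normal(kdim);  w /= np.linalg.norm(w)
w2 = rng.standard_normal(kdim); w2 /= np.linalg.norm(w2)
theta_0 = np.arccos(np.clip(w @ w2, -1, 1)) / np.pi

dists = np.empty(trials)
for i in range(trials):
    V = rng.standard_normal((n, kdim))       # n random hyperplanes v_i
    c, c2 = np.sign(V @ w), np.sign(V @ w2)  # sign-quantized codewords
    dists[i] = np.sum(c != c2)               # Hamming distance d_H(c, c')

print(dists.mean(), n * theta_0)                 # matches the Binomial mean
print(dists.var(), n * theta_0 * (1 - theta_0))  # matches the Binomial variance
```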
While adopting a normal distribution, which is inherently continuous, we introduce a derived set of measurements, termed "reduced codewords" denoted as follows:
\[\hat{c}=\hat{G}(w)=(\hat{h}_{1}(w),\ldots,\hat{h}_{k}(w)),\ \hat{c}\in \mathcal{C}^{+}\] \[\hat{c}^{\prime}=\hat{G}(w^{\prime})=(\hat{h}_{1}(w^{\prime}), \ldots,\hat{h}_{k}(w^{\prime})),\ \hat{c}^{\prime}\in\mathcal{C}^{+}\]
Although it is not mandatory, it is beneficial to define each reduced codeword to be non-discrete (\(\hat{c}\in\mathbb{R}^{k}\) and \(\hat{c}^{\prime}\in\mathbb{R}^{k}\)) within a reduced dimensionality \(k<n\) that aligns with their original samples \((w,w^{\prime})\) in \(\mathbb{R}^{k}\). These reduced codewords are derived from a subset, namely the "reduced matrix" \(\hat{G}\subseteq G\), consisting of \(k\) hash functions drawn from the generator function \(G\) without any quantization. More specifically,
\[\hat{h}_{i}(w)=(v_{i}^{T}\cdot w)\in\mathbb{R}\]
for \(i=1,\ldots,k\). The generated reduced codewords reside within a combinatorial set--\(\mathcal{C}^{+}\)--situated in the dimensionally reduced \(\mathbb{R}^{k}\) space.
To satisfy the necessary constraint established in Eq. 8, \(f(x,k;\theta)\) should be expressed as a fraction, specifically \(1/\binom{n}{k}\), of the true density \(f(x,k;\theta_{0})\). This implies that \(f(x,k;\theta)\) represents a specified case, preferably the worst-case scenario of interest, among the \(\binom{n}{k}\) possible outcomes of the true density. This aligns with our objective of configuring our model to effectively identify features with a low probability of occurrence that are located furthest from the origin of the standard normal distribution. In this context, our primary objective is to maximize the "relative distance" between the reduced codeword \(\hat{c}\) and \(\hat{c}^{\prime}\).
Let us assume \(w^{\prime}=0^{k}\); it follows that \(c^{\prime}=0^{n}\) represents the null vector in the higher-dimensional space. Since \(n>k\), we can establish an \(\mathbb{R}^{k+1}\)-dimensional subspace. This subspace consists of \(k\)-dimensional hyperplanes with \(\hat{c}^{\prime}=0^{k}\) serving as their origin. The reduced codeword \(\hat{c}\) that is furthest from \(\hat{c}^{\prime}=0^{k}\) can be identified by selecting the \(k\) largest absolute outputs from the original codeword \(c\), i.e., such that the Euclidean distance \(d(\hat{c},0)\) is maximal. These outputs, in conjunction with their corresponding hash functions, form the reduced matrix \(\hat{G}\).
**Decoding via Hamming Distance Measurement:** Leveraging this methodology, we are equipped to compute the worst-case Hamming distance \(\hat{x}=d_{H}(\hat{c},\hat{c}^{\prime})=k\) between an arbitrary random \(\hat{c}\) and any non-zero codeword \(\hat{c}^{\prime}\). This calculation is made feasible after applying the quantization process to the reduced codewords with the \(\operatorname{sgn}(\cdot)\) function. If a worst-case Hamming distance \(\hat{x}=d_{H}(\hat{c},\hat{c}^{\prime})=k\) exists, it signifies a unique instance among the \(\binom{n}{k}\) possible outcomes of a Hamming distance solution between the original \(n\)-dimensional codewords, i.e., \(x=d_{H}(c,c^{\prime})=k\), that aligns with the expression given by Eq. 11.
Under these circumstances, a solution must exist for the angular difference \(\theta\pi\) between the feature sample \(w\) and \(w^{\prime}\). This can be represented as:
\[\theta=\frac{1}{n}\sum_{i=1}^{k}\mathbb{1}\left[\operatorname{sgn}(\hat{h}_{i}(w))\neq\operatorname{sgn}(\hat{h}_{i}(w^{\prime}))\right]=\frac{\arccos(w\cdot w^{\prime})}{\pi}=k/n.\]
The above derivation establishes the conclusion that \(\theta=\theta_{0}=k/n\) with respect to the mean of the standard normal distribution, in accordance with the CLT:
\[d_{H}(\hat{c},\hat{c}^{\prime})-n\theta=d_{H}(c,c^{\prime})-n \theta_{0}=0\] \[\Rightarrow\theta_{0}=\theta=k/n. \tag{12}\]
Such a computation yields non-trivial null solutions (the zero vector) that satisfy our previous assumption, specifically manifesting as \(w^{\prime}=0^{k}\), \(\hat{c}^{\prime}=0^{k}\) and \(c^{\prime}=0^{n}\). These null solutions are observed within the Euclidean space spanning multiple dimensions if and only if \(\theta_{0}=\theta=k/n\). Consequently, the densities corresponding to \(\theta_{0}\) and \(\theta\) can have their ratio comply with Eq. 8. This alignment provides us with the minimum solution for the negative log likelihood, equivalent to the binary entropy function \(nH_{2}(k/n)\) as described in Eq. 10.
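The construction above can be summarized in a short sketch; the selection rule (keeping the \(k\) hash functions with the largest absolute responses) and all shapes are our reading of the text, so treat this as an assumption-laden illustration rather than the definitive procedure:

```
import numpy as np

rng = np.random.default_rng(4)
kdim, n = 8, 64

def reduce_encode(w, V, k):
    # Keep the k hash functions with the largest |v_i^T w| to form the
    # reduced matrix G_hat; the (unquantized) reduced codeword is G_hat w.
    idx = np.argsort(np.abs(V @ w))[-k:]
    G_hat = V[idx]
    return G_hat, G_hat @ w

w = rng.standard_normal(kdim)            # sample used for encoding
wq = rng.standard_normal(kdim)           # query sample used for decoding
V = rng.standard_normal((n, kdim))       # full generator matrix G

G_hat, c_hat = reduce_encode(w, V, k=kdim)
cq_hat = G_hat @ wq                      # decode the query with the same G_hat
x_hat = np.sum(np.sign(c_hat) != np.sign(cq_hat))  # Hamming distance
print(x_hat, "of", kdim)                 # x_hat = k is the worst case
```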
It's worth noting that the null (zero) vector, i.e., \(0^{k}\) and \(0^{n}\), presents a unique special case in the above derivation. The angular difference between the null vector (\(w^{\prime}=0^{k}\)) and any non-zero \(w\) can be measured as \(\cos(\theta\pi)=w\cdot w^{\prime}/(\|w\|\|w^{\prime}\|)\), which is undefined due to the division by the zero magnitude of the null vector. Despite this, we often assign a value of \(\theta=k/n=1/2\) by convention, which signifies that a null vector is maximally dissimilar from any non-zero \(w\), implying a scenario of maximum entropy (\(nH_{2}(1/2)=n\)). Conversely, for any pair of non-zero feature samples \(w\) and \(w^{\prime}\), if they are orthogonal (the angle between them is 90 degrees), their entropy is also maximal, but for a different reason--their dissimilarity stems from having entirely different directions.
**A 3-D Spacetime Interpretation:** To better visualize and understand this process, we adopt a geometrical interpretation, represented in Figure 1. This illustration demonstrates the influence of three successive encoding stages: from \(t=1\) to \(t=3\). Although our model can easily extend to higher dimensions, including a \(\mathbb{R}^{k+1}\) subspace for arbitrary large \(n\) and \(k<n\), for ease of visualization, we only focus on scenarios where \(k=2\), considering a 2-D space. In this
context, the \(\mathbb{R}^{k+1}\) subspace contains a maximum of \(t=3\) unique hyperplanes. Each of these corresponds to a new solution of \(w^{\prime}=0^{k}\) describing its largest distance from \(w\) along the additional dimension--the temporal axis. Each new encoding introduces a fresh hyperplane, collectively giving rise to a 3-D spacetime representation.
**Model's Efficiency**: Note that despite the exponential growth in entanglement complexity delineated in Eq. 9, the intrinsic encoding and decoding process remains computationally efficient. Specifically, the resultant reduced codewords (\(\hat{c}\) and \(\hat{c}^{\prime}\)) manifest quadratic-time computational complexity, denoted as \(O(n^{2}t)\) (see Algorithms 1 and 2 for asymptotically large \(n\)). This insight provides a crucial perspective on the design criteria for algorithms: any algorithm intended to facilitate the realization of entanglement must be _efficient_ enough to accommodate computational demands that increase on an exponential scale with entanglement complexity. Figure 2 illustrates a toy example of how Algorithm 1 operates at \(t=1,2,3\) to produce reduced matrices \(\{\hat{G}_{1},\hat{G}_{2},\hat{G}_{3}\}\) and their corresponding reduced codewords \(\{\hat{c}_{1},\hat{c}_{2},\hat{c}_{3}\}\).
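To fix ideas before the completeness discussion, here is a minimal Python sketch of one plausible realization of the recursive encoding loop (function names and the top-\(k\) selection rule are our assumptions, not the paper's listing of Algorithm 1): each iteration draws a fresh generator matrix and feeds the reduced codeword forward as the next input.

```
import numpy as np

def encode_recursive(w, n, k, t, rng):
    # Sketch of t encoding iterations: draw a fresh random generator, keep
    # the k largest-magnitude responses as the reduced matrix G_hat, and
    # feed the reduced codeword forward as the next input sample.
    G_hats, c_hats = [], []
    for _ in range(t):
        V = rng.standard_normal((n, w.size))
        idx = np.argsort(np.abs(V @ w))[-k:]
        G_hat = V[idx]
        w = G_hat @ w                    # reduced codeword becomes new input
        G_hats.append(G_hat); c_hats.append(w)
    return G_hats, c_hats

rng = np.random.default_rng(5)
G_hats, c_hats = encode_recursive(rng.standard_normal(2), n=100, k=2, t=3, rng=rng)
```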
### _Model's Completeness_
Our analysis to this point has effectively confirmed the model's correctness guarantee in inferring \(\theta_{0}\). Building on our earlier findings, it's natural to broaden our exploration to a multi-parameter context (\(\theta_{1},\ldots,\theta_{t}\)), described in Eq. 9.
The move to a multi-parameter setting is vital: for large values of \(n\), the worst-case angular difference between \(w\) and \(w^{\prime}\) that can be computed is small, suggesting that the computable distance between them on a single unique hyperplane must also be small. This means that in order to compute larger distances between \(w\) and \(w^{\prime}\), we need to introduce additional iterations of encoding. As a result, computational complexity grows over time, correlating with the progression of entanglement. In view of this, the move to multi-parameter setting is necessary to include all the information needed to predict the possible computational distance solution between \(w\) and \(w^{\prime}\), which can be formally defined as the model's completeness.
When the encoding is applied recursively, i.e., using a new random generator function \(G(\cdot)\) and taking the reduced codeword as the input feature sample, it produces a new entanglement solution for the feature sample pair \((w,w^{\prime})\) corresponding to a new parameter \(\theta_{i}\) (for \(i=1,\ldots,t\)). These parameters should collectively adhere to the same conclusion presented in Eq. 12, owing to their independent and identical characteristics, generally expressed as:
\[\theta_{0}=\theta_{1}=\ldots=\theta_{t}=k/n.\]
**Spacetime Expansion for the Preservation of Inertial Motion Frames**: To gain a deeper grasp of our model's completeness requirement, it is intuitive to view \((\hat{c},\hat{c}^{\prime})\) and \((w,w^{\prime})\) as _physical entities_, specifically denoting their positions within a coordinate system (over the x-y plane) at an arbitrary moment of time \(s_{t}\), in seconds, characterized by the maximum encoding time step (\(t\)).
At each encoding phase, the introduction of randomness through a new random reduced matrix \((\hat{G_{t}})\) may result in an inconsistent increase in the distance between \((\hat{c}_{t},\hat{c}^{\prime}_{t})\). Each encoding operation at an arbitrary maximum encoding time step \((t)\) corresponds to a specific time interval (in seconds) \(s_{t}\). The non-linear increase in distance over time \((s_{t})\), starting from a common origin, implies that \(w\) is seemingly accelerating away from \(w^{\prime}\) at arbitrary random rates. To consistently apply classical physics laws in describing the relative motion between \(w\) and \(w^{\prime}\) at an arbitrary maximum encoding time step, particularly within an inertial (non-accelerating) frame of reference, it is imperative to uphold a constant angular difference between \((w,w^{\prime})\) across distinct hyperplanes, as specified by our formulation \(\theta_{0}=\theta_{1}=\ldots=\theta_{t}=k/n\). The sole effective method for
Fig. 1: The encoding process visually demonstrates the progressive divergence of \(w\) and \(w^{\prime}\), and their reduced codeword counterparts \((\hat{c},\hat{c}^{\prime})\), with increasing encoding time step. We employ subscript notation to indicate that the reduced matrix \(\hat{G}_{t}\) corresponds to arbitrary maximum encoding time steps \(t=1\), \(t=2\) and \(t=3\). For non-trivial encoding, \(w\) should start from a non-zero value.
Fig. 2: Toy example (\(n=4,k=2\)) demonstrating the encoding process with \(t=1\), \(t=2\), and \(t=3\). This process can be repeated for arbitrarily large \(t\). Alternatively, one can view \(\hat{c}_{t}\) as the coefficients of an evolving function, represented by \(f_{(t=3)}(x,y)=\hat{G}_{3}\hat{G}_{2}\hat{G}_{1}(x;y)\), treating \(x=0.72\) and \(y=-0.06\) as initial conditions.
achieving this outcome is by adjusting the time intervals at each maximum encoding time step. This adjustment precipitates a displacement of the spacetime origin away from hyperplanes characterized by greater distances between \((w,w^{\prime})\), thereby giving rise to what we term "spacetime expansion."
The motivation for considering spacetime expansion is to ensure that we accurately capture the dynamics of a system within the framework of Newton's laws of motion. When an object experiences acceleration, whether due to a strong or weak force, we can counteract this acceleration by introducing a fictitious force. This adjustment enables us to seamlessly apply Newton's laws of motion.
For example, we can use the concept of a fictitious force to elucidate why a pendulum swings backward when it is suspended from the ceiling of an accelerating car. In our specific context, the cumulative effect of these forces results in spacetime expansion. This concept becomes even more apparent when we draw an analogy to the well-documented expansion of our universe, as explained by Hubble [39] and further tested by Guzzo et al. [40]. We emulate this cosmic expansion to establish the notion of computational entanglement, making it an intuitive way to explain our model's completeness for parameter inference.
While there may be queries regarding the necessity of introducing spacetime expansion, we posit that it is an indispensable component in our case, particularly when confronting the requirement of \(\theta_{0}=\theta_{1}=\ldots=\theta_{t}=k/n\). In such instances, the imperative arises to interconnect all hyperplanes while referencing a common origin, as vividly depicted in Figure 4. This approach allows us to construct a spacetime framework that is inherently connected to Einstein's theory of special relativity.
Specifically, drawing a parallel to Einstein's theory of special relativity [41], where all physical laws remain the same in every inertial reference frame, the aforementioned approach also ensures a similar constancy: provided that the angular difference between \((w,w^{\prime})\) remains consistent at an arbitrary maximum encoding time step \(t>0\), it follows that the ratio of their Euclidean distance, represented as \(d_{t}(w,w^{\prime})\), to the corresponding time interval in seconds (\(s_{t}\)), denoted as \(d_{t}(w,w^{\prime})/s_{t}\), must remain invariant.
The above observation also underscores another _principle of equivalence_ in Einstein's general theory of relativity, offering a foundational aspect to our model in characterizing the overall motion of \(w\) as it accelerates away from \(w^{\prime}\). This is made possible by employing distinct local inertial frames of reference, each associated with a different \(t\) through an entanglement process. As shown in Figure 4, at arbitrary \(t>0\), the angular difference between \((w,w^{\prime})\), represented as \(\theta_{1}\pi=\theta_{2}\pi=\theta_{3}\pi\), remains constant. This constancy is concurrent with the expansion of spacetime, as the origin progressively shifts away from the event hyperplane over time.
**Connection to Minkowski Spacetime:** One can readily observe that the previously described geometric representation (see Figures 1 and 4) bears a strong resemblance to Minkowski's spacetime diagram (see Figure 3). More specifically, when we treat \((w,w^{\prime})\) as distinct objects in a physical context, our model posits that ascertaining the existence of a solution in which \(w\) deviates from \(w^{\prime}\) with a maximum speed of \(\frac{d_{t}(w,w^{\prime})}{s_{t}}\leq v_{t}\) should not only be computationally feasible but also adhere to a specific computational complexity. This inequality holds because \(d_{t}(w,w^{\prime})\) corresponds to the maximum distance at which the reduced codeword counterpart \(\hat{c}\) is furthest away from \(\hat{c^{\prime}}=0^{k}\). This gives us a solution in which the spacetime interval between two events is always less than or equal to zero, as in \(d_{t}(w,w^{\prime})^{2}-(v_{t}s_{t})^{2}\leq 0\). This scenario holds significance in Minkowski's spacetime diagram. In particular, when we set \(v_{t}\) equal to the speed of light, events fall within the "light cone" in Minkowski's spacetime diagram, known as the timelike region. This cone-shaped boundary defines the limits of causality, where information and physical effects can propagate.
Conversely, when \(d_{t}(w,w^{\prime})^{2}-(v_{t}s_{t})^{2}>0\) is the case, marking the region beyond the light cone, it is denoted as the spacelike area. In this spacelike region, events find themselves entirely devoid of any conceivable causal connection to the origin. Thus, an event occurring in the spacelike region is causally disconnected from the origin. No signal, even one moving at the speed of light, could reach from the origin to the spacelike event (or vice versa) without surpassing the speed of light.
The discovery above provides profound insights: despite our operation within a global non-inertial frame of reference wherein \(w\) undergoes acceleration away from \(w^{\prime}\), it becomes evident that every local inertial frame of reference, corresponding to an arbitrary maximum encoding time step (\(t\)), exists as an independent entity and is computationally derivable. These frames of reference harmoniously adhere to the principles of Minkowski spacetime. The fundamental tenet of the invariant speed of light within each local inertial frame, i.e., "locally Minkowskian," thus remains steadfast and rigorously validated.
**Put succinctly**, while we witness an increase in computational complexity throughout the continuous encoding, we acquire valuable insights regarding the maximum spatial separation of \((w,w^{\prime})\) while still preserving a causal connection between them, thus upholding the _principle of locality_. This brings us to contemplate an expansion of Einstein's postulate of special relativity, emphasizing the increasing computational complexity as an additional facet, which can coexist harmoniously with special relativity. Notably, this
Fig. 3: Minkowski's spacetime diagram. \(v\) is the speed of light and \(s\) is the unit of time (seconds).
complexity surge stems from the second law of thermodynamics, which dictates that entropy increases over time. To uphold information completeness and order within systems, the integration of computational entanglement emerges as an imperative necessity.
#### 4.2.1 Introducing Computational Relativity: Time Dilation and Length Contraction
Having firmly established a profound connection between our model and the Minkowski spacetime, it is indeed valuable to delve deeper into Einstein's special theory of relativity. This theoretical framework finds its most illustrative representation through the utilization of a Minkowski spacetime diagram, thoughtfully integrated with Lorentz transformations. This integrated approach provides invaluable insights into the effects of time dilation and length contraction within the realm of relativity.
**Time Dilation**: Even without a preliminary examination of the Lorentz transformation, it is straightforward to see, as elucidated in Figure 4, that the continuous progression of entanglement over an increasing maximum encoding time step engenders a consequential expansion of spacetime. This expansion gives rise to increases in the temporal duration within every single local inertial frame of reference at relatively lower computational complexity, as exemplified by the increase in time interval from \(s_{t=1}\) to \(s_{t^{\prime}=1}\), and from \(s_{t=2}\) to \(s_{t^{\prime}=2}\), within the green and red cones in the figure, respectively. This phenomenon plays a crucial role in ensuring the preservation of the invariance of the spacetime interval across all local inertial frames of reference at various computational complexities, as follows:
\[d_{t}(w,w^{\prime})^{2}-(v_{t}s_{t})^{2}=d_{t^{\prime}}(w,w^{\prime})^{2}-(v_{ t^{\prime}}s_{t^{\prime}})^{2}, \tag{13}\]
for \(t\geq t^{\prime}>0\).
Here, we refer to the scenario where \(t=t^{\prime}\), and \(v_{t}\) represents the maximum speed of light at an arbitrary maximum encoding time step \(t\), corresponding to the frame of reference at the lower complexity level (see Figure 4 for better visualization). In this context, the frame of reference at the greater complexity level has \(d_{t^{\prime}}(w,w^{\prime})=d_{t}(w,w^{\prime})\) (since \(t=t^{\prime}\)). This frame of reference will undergo time dilation and exhibit a noticeable decrease in the angular difference between \((w,w^{\prime})\) due to spacetime expansion. Consequently, the measured speed \(v_{t^{\prime}}=d_{t^{\prime}}(w,w^{\prime})/s_{t^{\prime}}\) at the greater complexity level must decrease when compared with the maximum speed of light \(v_{t}\), i.e., \(v_{t^{\prime}}<v_{t}\). Then, by describing \(d_{t^{\prime}}(w,w^{\prime})^{2}=(x_{t^{\prime}})^{2}+(y_{t^{\prime}})^{2}\) with respect to \(w^{\prime}=0^{k}\) as the origin for \(k=2\) dimensions of space (representing a hyperplane in the 3-D spacetime), the most common form of the Lorentz transformation, parametrized by \(v_{t^{\prime}}\) confined to the \(x\)-direction, can be expressed as:
\[s_{t} =\gamma(s_{t^{\prime}}-\frac{v_{t^{\prime}}x_{t^{\prime}}}{v_{t}^ {2}}),\] \[x_{t} =\gamma(x_{t^{\prime}}-v_{t^{\prime}}s_{t^{\prime}}),\] \[y_{t} =y_{t^{\prime}},\]
with \(\gamma=1/\sqrt{1-\frac{(v_{t^{\prime}})^{2}}{v_{t}^{2}}}\) representing the Lorentz factor. Utilizing the Lorentz transformation equations, the change in time interval \((\Delta s_{t},\Delta s_{t^{\prime}})\) measured within local inertial frames of reference at the same maximum encoding time step \(t=t^{\prime}\) but different complexity levels follows the time dilation formula:
\[\Delta s_{t^{\prime}}=\frac{\Delta s_{t}}{\sqrt{1-\frac{(v_{t^{\prime}})^{2}} {v_{t}^{2}}}}.\]
As a result, time appears to tick more slowly in the higher-complexity frame of reference.
**Length Contraction**: Given the aforementioned time dilation effect, length contraction can be readily derived in a similar way in the context where \(t>t^{\prime}>0\). The length contraction formula is described as:
\[\Delta x_{t^{\prime}}=\Delta x_{t}\sqrt{1-\frac{(v_{t^{\prime}})^{2}}{v_{t}^ {2}}}.\]
This equation illustrates that the change in length \(\Delta x_{t^{\prime}}\) along the \(x\)-direction, measured in the reference frame of the lower complexity level relative to the reference frame of the higher complexity level (\(\Delta x_{t}\)), appears contracted. The concept of length contraction becomes evident when we analyze Figure 4 by looking at the inner cone and extending it to a specific maximum encoding time step \(t>t^{\prime}\). For example, take \(t=2\) and \(t^{\prime}=1\), then project the distance \(d_{t^{\prime}}(w,w^{\prime})\) onto \(t=2\) while maintaining the speed at \(v_{t^{\prime}}<v_{t}\). This projected length is shorter than \(d_{t}(w,w^{\prime})\), which corresponds to the maximum speed of light \(v_{t}\).
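For a concrete sense of scale (the speed ratio is an illustrative assumption): if the measured speed in the higher-complexity frame is \(v_{t^{\prime}}=0.8\,v_{t}\), then \(\gamma=1/\sqrt{1-0.8^{2}}=5/3\), and hence

\[\Delta s_{t^{\prime}}=\tfrac{5}{3}\,\Delta s_{t}\qquad\text{and}\qquad\Delta x_{t^{\prime}}=\tfrac{3}{5}\,\Delta x_{t},\]

so time intervals dilate by exactly the factor by which lengths contract.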
### _Comprehensive Role of the Model's Completeness and Correctness in Achieving Robustness Against Adversaries_
In the context of adversarial robustness, when confronted with an arbitrary random query feature sample \(w^{\prime}\), a machine learning model needs, at the very least, the ability to recognize the existence of adversarial examples \(w\in W\) within an arbitrary random feature sample distribution \(W\subseteq\mathbb{R}^{k}\). In this scenario, the encoding and decoding procedures developed with a completeness guarantee over the set of parameters \(\theta_{1},\ldots,\theta_{t}\) become crucial for showcasing the possible existence of adversarial examples. These examples could exhibit identical angular differences despite the presence of large absolute distances or angular differences within our formalized spacetime geometry \((\mathbb{R}^{k+1})\).
As entanglement continues, corresponding to more encoding iterations, these adversarial examples display effects akin to time dilation and length contraction, with reductions in distances and angular differences relative to the reference frame of higher complexity. This makes it increasingly difficult for models positioned at a higher complexity to detect and differentiate their small differences. Consequently, this explains the adversarial conundrum: _Overparameterized machine learning models, as mentioned by Zhang et al. [27], and those with heightened complexity [20], exhibit increased vulnerability to adversarial attacks. The enhanced complexity during training may effectively reduce the distance and angle between distinct features, such as 'cat' and 'aircraft', through entanglement processes. This can lead to misclassification and erroneous predictions._
We must underscore that the assertion presented above may yet require meticulous scrutiny and empirical validation. Nevertheless, as expounded upon in Section 6.2, we have exhibited compelling evidence suggesting that this
proposition indeed holds true, particularly as parameters and model complexity surpass those of a simple linear model.
Specifically, our model posits that entanglement intrinsically corresponds to varying degrees of computational complexity distributed across distinct hyperplanes within spacetime. This underscores the idea that the complexity inherent in the entanglement process is indicative of the model's robustness in the face of adversarial examples. Without such a guarantee of model completeness that takes into account the spacetime interval with respect to complexity differences, the model would fail to ascertain the existence of adversarial examples, let alone formulate a strategy to combat them.
Expanding on the above rationale, we establish a significant relationship between the _existence_ of a capable encoding/decoding scheme and entanglement, applicable to the study of adversarial learning models. This relationship acts as a critical benchmark for those wishing to construct an optimal learning model with the goal of achieving the best performance. Without proper consideration of the model's construction and the relativistic effect of continuous entanglement, the model will lack the completeness necessary to encode and decode the codeword correctly and produce reliable predictive output.
This conclusion holds true even for those aiming to create a model with a high degree of resilience against adversarial examples. When examining these adversarial examples from the vantage point of time dilation and length contraction, their intricate characteristics come to the forefront. This perspective implies that these examples are not mere features or bugs but instead emerge from situations where spacetime undergoes expansion, resulting in notable relativistic effects. These findings align with the work of Buckner, as described in [31], which highlights relativistic effects such as the Doppler effect as examples of 'artifacts' that can result from interactions between our instruments and the surrounding environment.
Nonetheless, it is valid to question the appropriateness of using the term 'artifacts' to describe these universally recognized relativistic phenomena. We, however, prefer to consider them integral and foundational elements of physical reality, shaping the very essence of our universe. From this perspective, we are, at any rate, convinced that the contemporary machine learning model is undeniably bound to the governing principles that permeate this physical reality.
In the words of Deutsch, as quoted in [42], _"Computers are physical objects, and computations are physical processes. What computers can or cannot compute is determined solely by the laws of physics, not by pure mathematics."_ This perspective highlights the inseparable connection between computation and the physical world, underscoring the essential role of the laws of physics in defining the limits of what can be computed.
## 5 Experimental and Simulation Results
In this section, we undertake a comprehensive series of experiments and simulations designed to rigorously assess the intricacies of our model and validate its correctness and completeness. Algorithm 1 depicts the encoding procedure: each encoding iteration produces a reduced codeword \(\hat{c}\) from the input sample \(w\) and its corresponding reduced matrix \(\hat{G}\), which depends on the previous state.
We embark on our investigative journey with feature samples collected from human biometrics. Our focus is centered on three crucial feature types: fingerprints, faces, and irises. These features, characterized by their remarkable distinctiveness both between different classes and in terms of their inherent entropy, are well-suited for our
Fig. 4: The model's behavior during continuous encoding is illustrated as follows: Each colored arrow pointing to \(w\) represents its direction, with an angle of \(\theta_{t}\pi\) between it and the axis of the encoding time step. When these vectors share a common origin (e.g., at point \(\mathbf{0}\)), the angle between them in various local frames of reference (represented by green, red, and blue cones) increases over time, signifying the acceleration of \(w\) away from \(w^{\prime}\) over time. Spacetime expansion counteracts this acceleration, causing each frame of reference at an arbitrary maximum encoding time step \(t>t^{\prime}\) to move inertially at a constant speed of light, denoted as \(v_{t}=d_{t}(w,w^{\prime})/s_{t}\).
model analysis and serve as input samples \(w\) for encoding and \(w^{\prime}\) for decoding.
**For fingerprints,** we make use of a 99-dimensional feature, which is derived from fingerprint minutiae as proposed by Jin et al. [43]. The procedure involves generating the minutia cylinder-code using the MCC SDK [44], followed by a transformation through kernel principal component analysis (KPCA). For our data sources, we engage the FVC 2002 (subset DB1) and FVC 2004 (subset DB1) [45] datasets. Combined, these datasets offer 800 fingerprints (distributed as 100 sets with 8 fingerprints each). Following the FVC protocol [46], our study yielded 2,800 intra-class distance (genuine) scores and 4,950 inter-class distance (imposter) scores.
**For face,** we adopted the MagFace model [47], a pre-established CNN model. This model facilitates the creation of universal face feature embeddings with an input measurement (\(w,w^{\prime}\)) that possesses a dimensionality of \(512\). The datasets employed for this assessment include LFW [48] and CMU-PIE [49]. Specifically, with LFW, we strictly adhere to the 1:1 verification protocol which produces 3,000 genuine and an equivalent number of imposter test scores. In contrast, our utilization of CMU-PIE, in compliance with the FVC protocol, has yielded 18,768 intra-class and 2,278 inter-class distance scores.
**For iris,** we adopted the Iriscode technique, enhanced with a bloom filter [50]. This method is notable for its versatility: the Iriscode's distribution (specifically, the count of zeros and ones) can be adjusted by altering parameters \(L\) and \(W\), which modulate the size of the bloom filter. Such adaptability offers a pathway to probe our model across varied distributions. Our primary dataset for this category is the CASIA database v3-interval [51]. It contains 2,639 iris images sourced from 396 individual subjects. The intra-class comparisons, where each iris template is cross-referenced with different samples from the same subject, amount to 4,406 intra-class distance scores. On the other hand, inter-class comparisons pair each template with the primary iris sample template from varying subjects, resulting in 199,110 inter-class distance scores.
### _Result 1: Entropy Reduction and Pronounced Time Dilation with Increasing Complexity_
Algorithm 2 illustrates the decoding algorithm employed in our initial experiment, which decodes by measuring the normalized Hamming distance between the reduced codeword pair (\(d_{H}(\hat{c}_{t},\hat{c}^{\prime}_{t})/k\)) at an arbitrary maximum encoding time step \(t\). The distances are measured for both inter-class and intra-class comparisons and are illustrated in Figures 5-10, encompassing various configurations of \((n,k)\) under the same group of feature samples we used for the face biometric.
Given sufficiently large \(t\), both the inter-class and intra-class distance distributions deviate toward the edges of the graph. Such behaviour provides compelling evidence of ongoing entanglement, which effectively reduces the angular difference between every feature sample pair \((w,w^{\prime})\).
As the measured distance between all reduced codeword pairs approaches the graph's edge at \(d_{H}(\hat{c}_{t},\hat{c}^{\prime}_{t})/k=1\), we obtain a solution where \(\theta_{0}=\ldots=\theta_{t}=k/n\) and the entropy of the system is minimized to zero. The conclusion holds even when \(d_{H}(\hat{c}_{t},\hat{c}^{\prime}_{t})/k=0\), because changing the sign of each bit in the reduced codeword \(\hat{c}_{t}\) (e.g., transitioning from \(-1\) to \(1\) or vice versa) simply leads to the opposite outcome.
As we work with abstract representations of \(w\) and \(w^{\prime}\) rather than tangible physical entities, it is more appropriate to interpret the time dilation effect in terms of the reduction in the angular difference between these numerical samples. This effect becomes more conspicuous for pairs of samples that are in closer proximity to each other, specifically, those with smaller intra-class distances. This elucidates why the intra-class distance distribution tends to deviate towards the edges of the graph more rapidly (at smaller values of \(t\)). These distributions correspond to a lower complexity frame of reference at smaller encoding time steps.
It's essential to emphasize that these observed phenomena aren't limited to specific input samples \((w,w^{\prime})\) and their default distributions (see Figures 11-16). This effect is expected to be _universal_, entangling all feature sample pairs \((w,w^{\prime})\) given sufficiently many encoding iterations, regardless of their initial distance, feature type, or dataset.
```
1: function Decoding(\(\{\hat{c}_{1},\ldots,\hat{c}_{t}\}\), \(\{\hat{G}_{1},\ldots,\hat{G}_{t}\}\), \(w^{\prime}\))
2: for \(i=1:t\) do
3:  Compute \(\hat{c}^{\prime}_{i}=\hat{G}_{i}w^{\prime}\)
4:  Set \(w^{\prime}=\hat{c}^{\prime}_{i}\)
5: end for
6: Output \(d_{H}(\hat{c}_{t},\hat{c}^{\prime}_{t})\)
7: end function
```
**Algorithm 2**
### _Result 2: Information Entangled in a Quantized Context_
In this section, we present experimental results demonstrating how information can exhibit entanglement in a quantized context.
While our focus isn't on quantum entanglement, our definition of entanglement is classical and computational, specifically targeting feature-sample pairs. It's worth highlighting that in quantum mechanics [52], the entanglement of two spin-\(1/2\) electrons is described probabilistically using the singlet state wave function:
\[|\Psi^{-}\rangle=\frac{1}{\sqrt{2}}(|-1\;1\rangle-|1\;-1\rangle),\]
Fig. 9: \(n=25,k=24\). Fig. 10: \(n=50,k=49\).

Fig. 12: \(n=500,k=250\), CMU-PIE (Face). Fig. 13: \(n=500,k=250\), FVC2002 DB1 (Fingerprint). Fig. 15: \(n=500,k=250\), with Bloom filter (\(L=7\), \(W=32\)), CASIAv3-Interval (Iris). Fig. 16: \(n=500,k=250\), with Bloom filter (\(L=9\), \(W=32\)), CASIAv3-Interval (Iris).
with \(-1\) and \(1\) denoting spin orientations. In this state, each electron exists in a superposition of both spin orientations until measured. When measured, the wave function collapses, leading to either the state \(|-1~{}1\rangle\) or \(|1~{}-1\rangle\). This results in opposite spins, ensuring a total spin of zero due to their intrinsic correlation.
Conversely, in our context of computational entanglement, individual bits of the reduced codewords \(\hat{c}\) or \(\hat{c}^{\prime}\) exhibit entanglement analogous to electron spin. With a sufficiently large maximum encoding time step (\(t\)), the Hamming distance measurement \(d_{H}(\hat{c},\hat{c}^{\prime})=k\) corresponds to opposite states. For example, when \(\hat{c}=(-1~{}1)\), it necessitates that \(\hat{c}^{\prime}=(1~{}-1)\). This illustrates a strong correlation between the features, akin to entanglement in the quantum realm, with a total "spin" (represented in bits) summing to zero. Alternatively, a null solution can be achieved, resulting in entangled codewords always aligning in the same direction. This phenomenon was demonstrated in our second experiment, which employed Algorithm 3 to monitor each individual bit value following the quantization of the reduced codeword pair over successive encoding iterations.
In our experiment, we selected parameters \(n=100\) and \(t=50\) with dimensions \(k=2\) (2-D) and \(k=3\) (3-D). These choices facilitated a more lucid visualization of the value shifts within the 2-D and 3-D Euclidean spaces. Figure 17 illustrates the full spectrum of potential values, illuminating the trajectories through which the reduced codewords traverse the Euclidean domain. As anticipated, since the codewords undergo quantization--resulting in discrete outcomes of either -1 or 1 for each reduced codeword of dimension \(k\)--the findings presented in Figure 17 reinforce the notion that the quantized information naturally corresponds to its classical analogue within the Euclidean Space, visualized as a hypercube of dimension \(k\).
### _Result 3: Causality Behind Computational Entanglement_
In this section, we aim to shed light on the local aspects of computational entanglement within spacetime and delve into the underlying causality behind its formation. It's crucial to clarify our specific use of the term "locality" in this context to avoid any confusion. We are primarily concerned with the local realization of the processes that lead to the creation of entangled states. The concept of locality implies that any entanglement event takes place within the light cone region, where causal connections can be established between any sampled feature pair \((w,w^{\prime})\), using a specific set of reduced metrics, \(\{\hat{G}_{1},\ldots,\hat{G}_{t}\}\).
```
1: function TracePath(\(\{\hat{c}_{1},\ldots,\hat{c}_{t}\}\), \(\{\hat{G}_{1},\ldots,\hat{G}_{t}\}\), \(w^{\prime}\))
2: for \(i=1:t\) do
3:  Compute \(\hat{c}^{\prime}_{i}=\hat{G}_{i}w^{\prime}\)
4:  Set \(w^{\prime}=\hat{c}^{\prime}_{i}/\|\hat{c}^{\prime}_{i}\|\)\(\triangleright\) normalization takes place here
5:  Set \(w=\hat{c}_{i}/\|\hat{c}_{i}\|\)\(\triangleright\) normalization takes place here
6:  Record their position as \(p_{i}=(w,w^{\prime})\)
7: end for
8: Plot and trace the path for \(p_{1},\ldots,p_{t}\)
9: end function
```
**Algorithm 4**
Building upon our foundational analysis, we leveraged Algorithm 4 for our third experiment. Our goal was to meticulously trace the trajectory of the reduced codeword pair \((\hat{c},\hat{c}^{\prime})\) within each encoding iteration. For the sake of information completeness, we bypass the quantization of the reduced codewords, retaining their continuous values within the Euclidean space. It then becomes essential to normalize these values at each encoding iteration. Such normalization safeguards computational stability, in particular preventing outcomes from diverging unboundedly. This safeguard is particularly crucial when envisioning scenarios where infinite encoding iterations could, in principle, cause \(w\) to deviate indefinitely from \(w^{\prime}\).
Our preliminary observations revealed something profound. At lower encoding iterations (i.e., smaller \(t\)), such as \(t=2\) or \(t=4\), the pathways of the reduced codeword pair appeared to lack discernible order, behaving ostensibly at random. However, a fascinating transition became evident
Fig. 17: After numerous encoding iterations, \(w\) and \(w^{\prime}\), represented by the entangled reduced codeword counterpart \((\hat{c},\hat{c}^{\prime})\), have a _higher tendency_ (around 80%) of exhibiting post-quantization positions within Euclidean space with a Hamming distance of \(k\) (given the angle between \(w\) and \(w^{\prime}\) is greater than \(\pi/2\)) or 0 (given the angle between \(w\) and \(w^{\prime}\) is less than \(\pi/2\)).
as we incrementally increased the encoding iterations. The trajectories began to conform to distinct patterns, reminiscent of the properties of entanglement observed in the discrete domain. This was particularly astonishing given that we were working within a continuous Euclidean spacetime. This emergence of structured behaviour, even in the absence of quantization, signified that the reduced codewords were not merely drifting at random but were adhering to the principles of entanglement seen in the quantized context. Notably, _they are entangled, displaying either opposite signs that sum to zero_ **or** _aligned signs and magnitudes that converge at a singular endpoint_, given sufficiently large computational complexity.
The above observations align with our interpretation and strongly hint that the very act of entanglement can be locally described, guided, and modulated by the specified reduced matrices \(\{\hat{G}_{1},...,\hat{G}_{t}\}\). What is initially perceived as random behaviour gives way to discernible patterns as the encoding progresses, with an increase in computational complexity.
**Remarkably**, it is imperative to emphasize that our assertion does not equate computational entanglement to quantum entanglement. Rather, we encourage recognition of the parallels within our model. In our work, we efficiently establish robust correlations between features through algorithmic computation, both in quantized and non-quantized contexts. These correlations bear a striking resemblance to quantum entanglement in their ability to connect distant features. Dismissing this resemblance would mean missing the opportunity to explore the profound implications of computational entanglement.
### _Result 4: Pronounced Length Contraction with Increment of Complexity_
In this subsection, we present a simulation illustrating the phenomenon of length contraction across reference frames, characterized by increasing complexity.
We employed Algorithm 4 with a modification to Step 6. Instead of recording the positions of \(w\) and \(w^{\prime}\) at various stages, we recorded their Euclidean distance; specifically, we set \(p_{i}=\min\{d_{i}(w,w^{\prime}),d_{i}(-w,w^{\prime})\}\), for \(i=1,\ldots,t\), as we progressed through \(t=2\), \(t=3\), \(t=5\), and \(t=10\). As demonstrated in Result 3, two types of entangled correlations exist: \(w\) and \(w^{\prime}\) either display opposite signs that sum to zero or aligned signs and magnitudes, converging at a singular endpoint. Therefore, we selected the minimum of \(d_{i}(w,w^{\prime})\) and \(d_{i}(-w,w^{\prime})\) to ensure the obtained distance always converges. Throughout this simulation, we considered a scenario with \(n=100\), using \(k=2\) for a 2-D space and \(k=3\) for a 3-D space.
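The modified recording step can be sketched as follows (illustrative only; \(w\) and \(w^{\prime}\) are the normalized codewords produced in iteration \(i\) of Algorithm 4):

```python
import numpy as np

def record_distance(w, w_prime):
    """p_i = min{d(w, w'), d(-w, w')}: a sign-invariant distance, so both kinds
    of entangled correlation (opposite or aligned) drive the record toward 0."""
    return min(np.linalg.norm(w - w_prime), np.linalg.norm(-w - w_prime))
```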
Figures 22-23 showcase the effects of increasing \(t\), which indicates frames of reference with higher complexities. We used distinct colors to represent different initializations of \(w\) and \(w^{\prime}\). As observed in the figures, with increasing \(t\) and computational complexity, the trajectories of \(w\) and \(w^{\prime}\) converge: their distance in Euclidean space approaches zero, indicating length contraction.
## 6 Model Application
In this section, we delve into one potential application of computational entanglement, which facilitates reliable information transmission through information reconciliation [53, 54, 55]. Information reconciliation is a cryptographic method that ensures Alice and Bob share a common secret after exchanging data over an unreliable channel, rectifying any errors and inconsistencies.
Common approaches to information reconciliation involve the use of error-correcting codes, such as linear codes [35]. These codes are mathematically well-defined and can correct errors efficiently for most practical purposes. However, the problem arises when considering the concept of _optimal decoding_[56].
In the context of error-correcting codes, optimal decoding involves determining the codeword from the codebook that is closest to the received data in some sense, typically measured using a suitable metric (e.g., Hamming distance for binary codes). However, in a worst-case scenario, all possible combinations of codewords and error patterns need to be considered, leading to an exponential growth in the search space.
To tackle the computational challenges associated with error correction, researchers have developed efficient algorithms, such as the Viterbi algorithm [57] for convolutional codes and belief propagation for LDPC codes [58]. These algorithms leverage mathematical techniques to streamline the search process and identify the most probable codeword without the need to exhaustively explore all possible options.
However, it's important to note that the complexity of the Viterbi algorithm increases exponentially with the constraint length, which is essentially the "memory" of the code. This is done to accommodate a higher degree of error correction. Similarly, belief propagation may demand more iterations for dependable decoding, and this can be influenced by various factors, including the structural characteristics of the code and the signal-to-noise ratio.
Furthermore, it's worth highlighting that finding the optimal decoding solution for arbitrary error correction codes presents a significant challenge. This problem falls under the category of NP (non-deterministic polynomial time)-hard problems [59]. In practical terms, this means that as the code's dimension and the number of errors to be corrected increase, the computational resources, both in terms of time and memory, required to achieve an exact solution grow exponentially.
### _Information Reconciliation via Computational Entanglement_
Our proposed entanglement approach provides a _deterministic_ and computationally _efficient_ solution that circumvents the inherent NP-hardness associated with the maximum likelihood decoding problem. More precisely, it enables information reconciliation through computational entanglement, with a firm guarantee (as long as \(t\) is sufficiently large) of optimizing the likelihood of the adversary's objective, as outlined in Eq. 10.
Figure 24 illustrates an example of how entanglement can be applied to perform information reconciliation. In this context, we considered Alice and Bob, each independently sampling random features denoted as \(w\) and \(w^{\prime}\), respectively, following specific probability distributions, such as the standard normal distribution.
Fig. 21: \(n=100,k=3\), original angle between \(w,w^{\prime}\) less than \(\pi/2\) (3-D)
Figure 24: Information reconciliation via computational entanglement
Figure 25: In our toy example of information reconciliation, as \(t\) increases, entropy decreases due to ongoing entanglement. Time dilation and length contraction facilitate recovery of the message. When all bits of the reduced codeword pair \((\hat{c},\hat{c}^{\prime})\) are fully entangled, perfect message recovery is achieved, and the system's entropy reaches a minimum of zero.
Figure 26: Increasing the value of \(n\) results in faster convergence of the reduced codeword pair \((\hat{c},\hat{c}^{\prime})\) to a fully entangled state, enabling perfect recovery of the message at lower \(t\).
In the realm of computational entanglement, an intriguing possibility emerges: even if \(w\) and \(w^{\prime}\) start as orthogonal entities, given the freedom to utilize an encoder associated with an expansive collection of reduction matrices, denoted as \(\{\hat{G}_{1},\dots,\hat{G}_{t}\}\), and a sufficiently large \(t\), Alice and Bob can collaboratively generate reduced codewords, \(\hat{c}\) and \(\hat{c}^{\prime}\), that possess entangled characteristics. These entangled codewords can subsequently serve as a means for encoding and decoding a message denoted as \(m\), thus facilitating information transmission between the parties involved.
Upon successful generation of the entangled pair, we observe that the Hamming distance \(d_{H}(\hat{c},\hat{c}^{\prime})\) is either \(k\) or \(0\). Here, \(k\) signifies the number of bits in the entangled codeword pair. This outcome implies that any encoded message, denoted as
\[y=\mathsf{sgn}(\hat{c})+m,\]
is recoverable through the decoding process as follows:
\[y-\mathsf{sgn}(\hat{c}^{\prime})=(\mathsf{sgn}(\hat{c})-\mathsf{sgn}(\hat{c}^{ \prime}))+m,\]
which leads the recovered output to be either \(m\) or \(1^{k}-m\).
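A toy sketch of this encode/decode step is given below. It assumes a binary message \(m\in\{0,1\}^{k}\) and a fully entangled codeword pair; the residual \(\mathsf{sgn}(\hat{c})-\mathsf{sgn}(\hat{c}^{\prime})\) vanishes bitwise in the aligned case and contributes a constant \(\pm 2\) offset per bit in the anti-aligned case, matching the two recovery outcomes stated above:

```python
import numpy as np

def encode(c_hat, m):
    """y = sgn(c) + m for a binary message m."""
    return np.sign(c_hat) + m

def decode(y, c_hat_prime):
    """y - sgn(c') = (sgn(c) - sgn(c')) + m.

    With aligned signs (d_H = 0) the residual is exactly m; with opposite
    signs (d_H = k) each bit carries a known +/-2 offset that the receiver
    can strip off, recovering the message up to the complement ambiguity
    described in the text."""
    return y - np.sign(c_hat_prime)
```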
**Achieving Computational Security through Causal Entanglement**: In the context of secure communication, it is crucial to acknowledge that the encoded message is susceptible to eavesdropping or malicious tampering by an active adversary. Therefore, the selection of the maximum encoding time step \(t\) should align with the minimum requirement for Bob to decode the message effectively using the set of reduced matrices \(\{\hat{G}_{1},\dots,\hat{G}_{t}\}\).
More specifically, as highlighted in Result 1, a scenario emerges where, without imposing stringent assumptions on the adversary's capabilities or computational power, the adversary can, with a sufficiently large value of \(t\), sample an arbitrary random \(w^{*}\in\mathbb{R}^{\ell}\). This results in the creation of reduced codewords \(\hat{c}^{*}\) that become entangled with \(\hat{c}\) (provided that knowledge of the reduced matrices \(\{\hat{G}_{1},\dots,\hat{G}_{t}\}\) is available). Subsequently, these entangled codewords are capable of accurately decoding the message, hence compromising the system's security.
In light of the above, the establishment of a secure communication channel, while acknowledging the potential of computational entanglement, essentially relies on the assumption that Alice and Bob have the capacity to establish an entanglement relationship over the sample pair \((w,w^{\prime})\) in an efficient manner. This capability sets them apart from potential adversaries, granting them a significant advantage. In a more appropriate context, Alice and Bob need to possess prior (pre-shared) knowledge of \(\{\hat{G}_{1},\dots,\hat{G}_{t}\}\) to generate an entangled reduced codeword pair \((\hat{c},\hat{c}^{\prime})\). Without access to \(\{\hat{G}_{1},\dots,\hat{G}_{t}\}\), any potential adversary would have no guarantee of establishing a causal relationship with \(\hat{c}\) through computational entanglement and would lack the ability to efficiently decode the message from \(y\) using an arbitrary random sample \(w^{*}\).
**Toy Example**: Figure 25 illustrates the outcomes of our practical application of computational entanglement for encoding arbitrary messages, represented as \(50\times 50\) binary images (equivalently, a vector representation with \(k=2500\)) encompassing various objects like dogs, laptops, airplanes, crosses, and fish. The process involves the generation of reduced codeword pairs \((\hat{c},\hat{c}^{\prime})\) from feature samples \((w,w^{\prime})\), which are randomly drawn from a standard normal distribution. The encoded message initially appears random and noisy.
As \(t\) increases, the entanglement process unfolds, leading to the emergence of relativistic effects such as time dilation and length contraction. The system entropy then continues to decrease. These effects ultimately entangle every bit of the reduced codeword pair \((\hat{c},\hat{c}^{\prime})\), enabling the successful decoding of the message using \(\hat{c}^{\prime}\) through the operation \(y-\hat{c}^{\prime}\), resulting in a perfect recovery of the original message.
As shown in Eq. 10, our model incurs an average minimum entropy loss of \(nH_{2}(k/n)\) through the computational entanglement process. This entropy loss signifies the error tolerance afforded by using \(\{\hat{G}_{1},\dots,\hat{G}_{t}\}\) and \(w^{\prime}\) in decoding \(y\) to recover \(m\) perfectly. Increasing the parameter \(n\) accelerates the convergence of the reduced codeword pair \((\hat{c},\hat{c}^{\prime})\) toward a completely entangled state, i.e., one in which every bit of the pair is entangled. This outcome is evident in Figure 26, where an increase in \(n\) enables perfect message recovery at lower values of \(t\). This underscores the redundancy principle in error-correction code design: greater redundancy (\(n>k\)) allows for the correction of more errors but results in higher entropy loss.
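For reference, the entropy-loss term \(nH_{2}(k/n)\) uses the standard binary entropy function and can be evaluated directly (a plain restatement of the formula, not code from our experiments):

```python
from math import log2

def binary_entropy(p: float) -> float:
    """H2(p) = -p*log2(p) - (1 - p)*log2(1 - p), with H2(0) = H2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

n, k = 100, 3
print(n * binary_entropy(k / n))  # average minimum entropy loss n * H2(k/n)
```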
### _Adversary Example Generation Through the Lens of Information Reconciliation_
In continuation of our earlier discussion regarding the application of information reconciliation, we now embark on a deeper examination, shedding light on how this process can be harnessed to facilitate the generation of adversarial examples, thereby offering new insight for adversarial machine learning studies. Within this exploration, we revisit the previous scenario encompassing Alice and Bob's information reconciliation process. However, we approach it from a fresh perspective, one that focuses on an adversarial context.
In this pursuit, we delve into a different scenario, depicted in Figure 27. Here, we grant an adversary the capability to sample arbitrary random noise, denoted as \(w\), which follows a standard normal distribution. In this intriguing context, the adversary is capable of creating a reduced codeword \(\hat{c}\) from \(w\) through encoding. This reduced codeword resembles random Gaussian noise and can exhibit arbitrary variance parametrized by \(\alpha\). The adversary can generate an adversarial example, denoted as \(y\), by injecting additive noise into the "Panda" sample \((m)\), specifically by adding \(\alpha\hat{c}\) to it, i.e., \(y=\alpha\hat{c}+m\).
A noteworthy insight from our model strongly suggests that it is indeed possible to derive a corresponding reduced codeword, \(\hat{c}^{\prime}\), from an arbitrary random sample, which we henceforth refer to as the "Gibbon" \((w^{\prime})\). Consequently, \(\hat{c}^{\prime}\) can be generated in a manner that is inherently entangled with \(\hat{c}\). This entangled relationship serves as the foundation for the effective reconciliation of the "Panda" information from the adversarial example \(y\), even in the presence of what might initially appear to be an unrelated or uncorrelated "Gibbon" sample.
As part of our assessment, we employed images of "Gibbon" and "Panda," each sized at \(50\times 50\), and chose
\(n=k+1000\) to expedite the attainment of a fully entangled state. In Figure 28, we observe the impact of increasing \(\alpha\) on the original sample, resulting in the generation of the adversarial example \(y\). It becomes evident that higher levels of noise necessitate a greater number of iterations to successfully reconcile the original "Panda" sample from \(y\) utilizing the "Gibbon" sample. What is crucial to emphasize here is that our results demonstrate a significant insight: _human-imperceptible adversarial examples might essentially be a specialized instance of information reconciliation, even when the noise level is extremely low._
**In summary**, viewing adversarial examples through the lens of information reconciliation, we gain deeper insights into their emergence via the intricate process of entanglement. This viewpoint highlights the crucial role that the entanglement of different data elements can play in machine learning models. Recognizing this importance is essential not only for comprehending vulnerabilities but also for effectively strengthening model defenses and improving their resilience against adversarial threats.
## 7 Conclusive Statement
Within this discourse, we have put forth a computational interpretation of the emergence of adversarial examples in machine learning by delving into the concept of entanglement and relativity. These insights carry significant implications, especially when we contemplate feature samples that may be subjected to relativistic effects, such as time dilation and length contraction. Moreover, they raise the intriguing possibility of entanglement even when feature samples are widely separated. Hence, it is of paramount importance to bolster our research efforts aimed at achieving a more profound comprehension of how established machine learning models align with the fundamental principles of physics.
In light of our revelation regarding computational entanglement, it is crucial for the machine learning community to
Fig. 28: Figure demonstrates that as \(\alpha\) increases, it leads to greater perturbation on the original “Panda” sample. Consequently, a higher number of encoding iterations is required to successfully reconcile the original sample from the effects of the increasing noise level.
Fig. 27: Examining adversary example generation through the lens of information reconciliation.
exercise caution when prioritizing model accuracy through the adoption of increasingly complex and less interpretable models. Our research unequivocally showcases the existence of computational entanglement properties, strongly suggesting that even highly complex models, including deep neural networks with intricate layer connections, may not be impervious to such entanglement effects. Remarkably, our model demonstrates entanglement with a parameter count of 60, and even fewer when analyzed within the context of the maximum encoding time step \(t\) and high noise levels \(\alpha=300\) (refer to Figure 28). The existence of simpler models with profound consequences should not be overlooked or underestimated.
Through our research, a captivating revelation emerges, one that extends beyond the confines of machine learning: the flourishing interplay between the domains of computation and physics. As we delve into the intriguing possibility that the fundamental laws of physics may be profoundly entwined with a computational framework, particularly within the domain of computational entanglement, we find ourselves inadvertently embarking on a transformative journey of thought. This journey leads us to contemplate the notion that the Turing machine, celebrated for its computational universality, might possess the capacity to simulate, or at the very least closely approximate, these complex physical phenomena. Such an assertion holds great significance in the context of the "It from bit" notion, as originally postulated by John Archibald Wheeler [60].
In synthesizing these ideas, we position ourselves at a point where the boundaries between computation and the laws of physics blur, hinting at the idea that information and computation may play a central role in our understanding of the cosmos, although the full extent of this influence remains an open question awaiting further exploration. Nonetheless, we should not be surprised if the pursuit of truth requires us to abstain from distinguishing between these realms and instead unify them. This perspective resonates with the words of physicist Anton Zeilinger, drawing inspiration from Einstein's extraordinary unification of space and time into spacetime.
|
2309.13218 | AI-Copilot for Business Optimisation: A Framework and A Case Study in
Production Scheduling | Business optimisation refers to the process of finding and implementing
efficient and cost-effective means of operation to bring a competitive
advantage for businesses. Synthesizing problem formulations is an integral part
of business optimisation, which relies on human expertise to construct problem
formulations using optimisation languages. Interestingly, with advancements in
Large Language Models (LLMs), the human expertise needed in problem formulation
can be minimized. However, developing an LLM for problem formulation is
challenging, due to training data, token limitations, and lack of appropriate
performance metrics. For the requirement of training data, recent attention has
been directed towards fine-tuning pre-trained LLMs for downstream tasks rather
than training an LLM from scratch for a specific task. In this paper, we adopt
an LLM fine-tuning approach and propose an AI-Copilot for business optimisation
problem formulation. For token limitations, we introduce modularization and
prompt engineering techniques to synthesize complex problem formulations as
modules that fit into the token limits of LLMs. Additionally, we design
performance evaluation metrics that are better suited for assessing the
accuracy and quality of problem formulations. The experiment results
demonstrate that with this approach we can synthesize complex and large problem
formulations for a typical business optimisation problem in production
scheduling. | Pivithuru Thejan Amarasinghe, Su Nguyen, Yuan Sun, Damminda Alahakoon | 2023-09-22T23:45:21Z | http://arxiv.org/abs/2309.13218v3 | # AI-Copilot for Business Optimisation: A Framework and A Case Study in Production Scheduling
###### Abstract
Business optimisation refers to the process of finding and implementing efficient and cost-effective means of operation to bring a competitive advantage for businesses. Synthesizing problem formulations is an integral part of business optimisation, which relies on human expertise to construct problem formulations using optimisation languages. Interestingly, with advancements in Large Language Models (LLMs), the human expertise needed in problem formulation can be minimized. However, developing an LLM for problem formulation is challenging, due to training data, token limitations, and lack of appropriate performance metrics. For the requirement of training data, recent attention has been directed towards fine-tuning pre-trained LLMs for downstream tasks rather than training an LLM from scratch for a specific task. In this paper, we adopt an LLM fine-tuning approach and propose an AI-Copilot for business optimisation problem formulation. For token limitations, we introduce modularization and prompt engineering techniques to synthesize complex problem formulations as modules that fit into the token limits of LLMs. Additionally, we design performance evaluation metrics that are better suited for assessing the accuracy and quality of problem formulations. The experiment results demonstrate that with this approach we can synthesize complex and large problem formulations for a typical business optimisation problem in production scheduling.
Copilot, Large Language Model (LLM), Artificial Intelligence (AI), Business optimisation, Problem Formulation, Production Scheduling
## 1 Introduction
Business optimisation is an important process to help businesses gain competitive advantages by reducing operational costs, improving customer satisfaction, and mitigating risks. Advances in digital technologies, such as Internet-of-Things and cloud technologies, have enabled new business models with complex operations. Optimising key business decisions (operational, tactical, and strategic) in complex and dynamic systems is challenging and requires the involvement of different stakeholders. Handling business rules and various practical constraints is also not a trivial task. Although modern optimisation technologies have offered businesses different ways to formulate and solve their problems, successfully adopting these technologies still requires significant domain knowledge and optimisation expertise.
While solving business optimisation problems, businesses and optimisation experts engage at different stages. Usually, businesses commence the process by providing a problem description to an optimisation expert. Subsequently, the optimisation expert formulates the problem description into a mathematical model (Antoniou and Lu, 2007) and translates the mathematical model into an executable problem formulation to be solved using a solver (Boyd and Vandenberghe, 2004). Later, the optimisation expert interprets the results and suggests the best actions for the business. As the final step, a software engineer integrates the mathematical models developed by the optimisation expert into the business's systems and applications. Although solving optimisation problems can be handled efficiently by many advanced solvers such as Gurobi Optimization, LLC (2023), Google (2023), Cplex (2009), and meta-heuristics, transforming a problem description into an executable and accurate problem formulation is time-consuming and requires expert knowledge. Poor problem formulations can lead to infeasible solutions (e.g., failure to address constraints and optimise the objective of interest) and significantly slow down the solving process.
LLMs have become increasingly popular due to their broad applications. Initiated by the transformer (Vaswani et al., 2017) for machine translation, LLMs have quickly been adopted within different software and business functions such as analysing business data (Cheng et al., 2023), creating marketing content (Rivas and Zhao, 2023), generating code for visualisations (OpenAI, 2023), supporting programmers with auto-completion (Nguyen and Nadi, 2022), and working as optimisers for simple continuous and discrete optimisation problems (Yang et al., 2023). With respect to supporting technical users, Salesforce uses code-generating LLMs in its development teams (Le et al., 2022). Meanwhile, GitHub Copilot (Nguyen and Nadi, 2022) enables code suggestions and code completions for programmers to improve their coding efficiency. Furthermore, Amazon CodeWhisperer (Yetigtiren et al., 2023) helps developers to code efficiently as well as write code related to AWS resources. Meanwhile, ServiceNow has worked with the open-source community to introduce StarCoder (Li et al., 2023) as a free AI code generator. Going beyond supporting technical users, LLMs support non-technical users in implementing technical tasks. For instance, code-generating LLMs now enable non-technical users to generate a simple website or create a simple query to retrieve data from a database without technical support. In fact, the motivation behind this research is to leverage code-generating LLMs to support non-expert users in successfully carrying out business optimisation without having to consult experts, significantly reducing the traditionally required effort.
Given the nature of problem formulation as a language-to-language translation, code-generating LLMs can be a powerful tool to transform problem descriptions into problem formulations. Furthermore, the recent considerable attention towards using LLMs to automate code generation tasks paves the way for fine-tuning a pre-trained code-generating LLM for problem formulation. Additionally, the introduction of unlabelled data to train LLMs for code generation (Chen et al., 2021) has eliminated most of the limitations of early-stage code-generating LLMs that were trained using labelled datasets (Mastropaolo et al., 2021). Recently, fine-tuning pre-trained models for downstream tasks has enabled training an LLM for a specific set of tasks with just a hundred or two hundred data points (Solaiman and Dennison, 2021).
However, in general, LLM-based applications in complex decision-making scenarios are still limited. Since existing code-generating LLMs are trained on generic programming problems, problem formulation is a non-trivial task for those models due to complex constraints, different optimisation requirements, and the need to select the most suitable optimisation technique. Additionally, due to token limitations, code-generating LLMs cannot generate large problem formulations, and the large computational and memory requirements of some code-generating LLMs limit their practical use. Also, the existing performance evaluation metrics for code-generating LLMs are not suitable for problem formulation, since both the result and the optimisation technique need to be considered.
Although machine translation LLMs have been recently fine-tuned for auto-formulation of optimisation models, such models are restricted to mathematical modeling with linear programming [Ramamonjison et al., 2022] or conceptual models that are still in the experimental stage [Tsouros et al., 2023]. Moreover, datasets used in these LLMs contain comparatively smaller problem formulations with limited constraints and variables. As a result, applications of such LLMs are significantly limited in practice since real-world problems often have large numbers of variables and constraints while part of the variables are integers.
Accordingly, we introduce our AI-Copilot as a step towards automating problem formulation for complex real-world optimisation problems. To do so, we select production scheduling as a case study, as it has been comprehensively researched in the past and contains complex constraints and different optimisation objectives [Xiong et al., 2022]. We fine-tune a code-generating LLM, which requires limited memory and computational resources, using a dataset created by us that comprises 100 pairs of problem descriptions and formulations. As a result, this fine-tuning step minimizes the requirement for large training data, without training an LLM from scratch for problem formulation. In addition, we apply modularization and prompt engineering techniques in our AI-Copilot to cater to token limitations when formulating complex problem formulations. Furthermore, we use loss- and execution-based performance evaluation metrics to assess the accuracy and quality of problem formulations, in contrast to existing evaluation metrics.
In contrast to existing machine translation LLM based auto-formulation models such as Ramamonjison et al. [2022], our method performs text-to-code translation and formulates constraint programming problems. Moreover, our AI-Copilot can formulate complex problem formulations compared to existing machine translation LLM based auto-formulation models [Ramamonjison et al., 2022, Tsouros et al., 2023]. Therefore the contributions of this paper toward automating problem formulations could be highlighted as:
* Constructing an open-source dataset with production scheduling to fine-tune a code-generating LLM for problem formulation.
* Fine-tuning a code-generating LLM for problem formulation that consumes limited computing resources.
* Developing a modularization and prompt engineering technique to manage large problem formulations.
* Designing suitable performance metrics for assessing the accuracy and quality of problem formulation.
## 2 Literature Review
### Business Optimisation and Optimisation Technologies
Business optimisation has been described as the "Philosophy of Continuous Improvement" [Singh and Singh, 2009], where businesses attempt to make their operations as perfect as possible. Generally, business optimisation covers all processes and efforts to improve productivity, efficiency, performance, etc. However, this research considers business optimisation from a computational and mathematical perspective, where one tries to minimize or maximize an important characteristic of a process by an appropriate choice of decisions [Kallrath and Wilson, 1997]. For example, problem descriptions are formulated as mathematical models and solved using solvers to provide suggestions for business decisions. Traditionally, combinatorial optimisation is the class of optimisation used for the mathematical and computational requirements of business optimisation [Yu, 2013]. Despite the benefits of business optimisation to businesses, challenges remain, such as intensive computational and human-expertise requirements and an inadequate support structure for business optimisation. While past studies have predominantly focused on algorithmic improvements for business optimisation, our proposed AI-Copilot focuses on improving the support structure and reducing intensive human-expertise requirements.
In general, combinatorial optimisation is the process of finding the minimum or maximum of an objective function over a large discrete domain. Such a process is needed in real-world scenarios such as the vehicle routing problem of selecting the optimal set of routes to serve a given set of customers (Toth and Vigo, 2002), the bin-packing problem for multiprocessor scheduling (Coffman Jr et al., 1978), the cut-waste reduction problem in furniture manufacturing (Klosowski et al., 2018), the production scheduling problem (Pochet and Wolsey, 2006), and many more. The solution space for such scenarios is too broad for a pure brute-force approach. Therefore, algorithmic techniques such as dynamic programming, branch and bound, random-restart hill climbing, simulated annealing, genetic algorithms, and tabu search have been developed. From a computer science perspective, the above algorithmic techniques reduce the solution space or accelerate the solution search using mathematical methods.
Recently considerable attention has been directed towards applying machine learning techniques for combinatorial optimisation rather than traditional mathematical improvements to combinatorial optimisation algorithms. Although the latest research focuses on improving combinatorial optimisation techniques using machine learning, our AI-Copilot focuses on generating problem formulations related to combinatorial optimisation.
### Problem Formulation and Solvers
An optimisation problem can be formulated by selecting one or more optimisation variables, an objective function, and constraints. Such optimisation problems can be categorised into unconstrained optimisations, simple bound constraints, and constrained optimisations. While the name suggests that unconstrained optimisations have no constraints, simple bound constraints have boundaries for design parameters, but no constraints on the solution. However, constrained optimisation is the most complex category of optimisation problems, where the solution must satisfy a set of linear or nonlinear constraints and bounds to design parameters.
Unquestionably, solvers embed powerful algorithms to solve problem formulations. Nevertheless, solvers differ from one another in aspects such as computational efficiency and solution strategies. While Gurobi Optimization, LLC (2023) is the state-of-the-art commercial solver for mathematical programming, solving a wide range of problems including linear programming and mixed integer programming, SCIP (Bestuzheva et al., 2021) is the fastest non-commercial solver currently available for mixed integer programming and mixed integer non-linear programming. In addition, Gurobi Optimization, LLC (2023) is more competitive than SCIP in solving complex problem formulations (Avella et al., 2023).
More importantly, optimisation languages like MiniZinc (Nethercote et al., 2007), GAMS (Soroudi, 2017), and AMPL (Fourer et al., 1990) supply problem-formulation-specific syntax for users to represent a mathematical model of a problem description. However, due to the differences among optimisation languages, users require significant training to master one. Such differences arise mainly in solving capability, licensing approach, expressiveness of syntax, and documentation. Furthermore, because transforming a problem description from a natural language into an optimisation language is time-consuming, optimisation languages limit the application of business optimisation to a wide range of problems. Our proposed AI-Copilot goes beyond existing technologies to bridge this gap and reduce the requirement of mastering optimisation languages.
### Large Language Models and Code Generation
Considerable attention has been recently directed towards LLMs, since LLMs can perform a wide range of tasks such as writing academic literature (Lund et al., 2023), question answering (Wang et al., 2019), language translation (OpenAI, 2023), code generation (Le et al., 2022), among others. Zan et al. (2022) report a comprehensive study on twenty-seven such code-generating LLMs. Unquestionably transformer-based machine translation (Vaswani et al., 2017) has paved the way for code generation using LLMs, and the first code-generating LLMs were trained using labelled datasets (Mastropaolo et al., 2021). These models had limitations since such techniques had practicality issues of requiring labelled data to even fine-tune an LLM for code generation. However, promising results could be seen with
the introduction of unlabelled data to train LLMs for code generation by Chen et al. (2021). Since Chen et al. (2021) is a code-generating LLM fine-tuned using a large corpus of GitHub Python code, it can cover a broader spectrum.
Meanwhile, Le et al. (2022) have improved the quality of generated code by considering the results of test execution with reinforcement learning techniques. Accordingly, they use an actor-critic network to supply rewards for generated code and use these rewards to further improve its code generation capability. Furthermore, Zhang et al. (2023) introduce a model-agnostic planning process based on Markov decision processes, as planning capabilities for code-generating LLMs can generate more accurate code for problem descriptions by considering future scenarios. They use beam search for the evaluation process, since the tree-search-based planning process they developed is inspired by Monte-Carlo tree search. As such techniques can incur the computational efficiency issues that come with the Monte-Carlo tree search algorithm, they suggest caching the beam search used in the code-generating LLM.
Benchmarks and metrics play a key role in finding progressive improvements in the code-generating capabilities of LLMs. The APPS (Hendrycks et al., 2021) benchmark includes \(10,000\) programming problems and their solutions in Python. A significant feature of this benchmark is that its problem descriptions are close to natural language. Furthermore, it includes simple problems as well as complex algorithms. Meanwhile, the HumanEval (Chen et al., 2021) benchmark includes \(164\) hand-written programming problems, with a focus on measuring the functional correctness of generated code. Each programming problem in HumanEval has a function signature, docstring, body, and several unit tests. Nonetheless, the major difference between APPS and HumanEval is that the problems included in HumanEval are new and do not have solutions on GitHub. Since a sizeable part of the APPS benchmark problems have solutions on GitHub, HumanEval is more precise than APPS for code-generating LLMs trained on GitHub data. Conversely, the MBPP (Austin et al., 2021) benchmark includes \(974\) Python programming problems, which suit entry-level programmers. Meanwhile, a specialised metric named \(pass@k\) has been proposed by Kulal et al. (2019) to evaluate generated code, where for a particular problem description \(k\) solution codes are generated. The generated codes are run against a test case related to the programming problem, and the problem is considered solved if at least one solution code out of \(k\) passes the test case.
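As a reference point, the unbiased estimator of \(pass@k\) popularized by Chen et al. (2021), with \(n\) samples generated per problem of which \(c\) pass all test cases, can be computed as follows; this is a standard restatement rather than code from the cited papers:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:  # fewer than k failing samples: every k-subset contains a pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=3, k=5))
```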
The existing problem formulation research uses NLP techniques (Ramamonjison et al., 2022) in contrast to the code-generating LLM based problem formulation automation approach introduced by our AI-Copilot. In addition, recent problem formulation research is based on the NL4Opt (Ramamonjison et al., 2023) dataset and it contains simpler problem descriptions and linear programming problem formulations related to them. Furthermore, according to the statistics, the NL4Opt dataset contains on average \(2.08\) variables and \(2.83\) constraints per problem formulation. Recently, Tsouros et al. (2023) introduced a conceptual framework for text to code transformation for problem formulation using prompt engineering techniques with generic LLMs. However, experiment results of the framework have not been released as yet.
### Machine Learning in Combinatorial Optimisation
Recent attempts have been made by the machine learning and operational research communities to leverage machine learning for combinatorial optimisation. Modern combinatorial optimisation algorithms use handcrafted heuristics to make decisions that are computationally expensive or mathematically complex (Bengio et al., 2021). Since patterns for efficient heuristics can be discovered by observing data from the optimisation process (e.g., search history, optimal solutions), machine learning is naturally a good candidate to enhance decision making in optimisation methods. Therefore, some past studies have applied reinforcement learning to make low-level optimisation decisions based on the dynamic states of an optimisation process. Furthermore, approximations can be learned through imitation learning from demonstrations by an expert of how a model should behave.
Recently, Baltean-Lugojan et al. (2018) introduce a neural network-based model to estimate the objective of a time-consuming semidefinite programming problem to decide the most promising cutting planes. With regard to branching policies in branch-and-bound trees of mixed integer linear programming, Gasse et al. (2019) introduce a neural network
to learn strong branching (Cook et al., 2011) approach. For container pre-marshalling problems, Hottung et al. (2020) introduce convolutional neural networks for learning branching policy and estimating the value of partial solutions. Going further beyond, for the traveling salesman problem, Khalil et al. (2017) leverage graph neural networks to learn selection criteria for the next node.
### Summary
Recently, shifting boundaries using AI to enhance workplaces (Jarrahi et al., 2023) has become a popular topic. In fact, it focuses on introducing virtual assistants to help employees in an organization. In contrast to other domains, the business optimisation domain lacks such tools to ease the burden on the people involved. Going further, the application of chatbots builds a more convenient workplace for employees (Wang et al., 2023). Even though LLMs can be treated as perfect candidates for supporting such requirements, the application of code-generating LLMs for problem formulation introduces several challenges such as training data, token limitations, evaluation metrics, and expensive computational resources required for LLM execution. Therefore research in this area is timely and critical.
## 3 Case Study
Job Shop Scheduling (JSS) is one class of combinatorial optimisation problems that is common in manufacturing (Pinedo, 2005). Due to its practical constraints and complex nature, JSS is one of the most popular optimisation problems investigated by researchers in operational research and computer science. Furthermore, JSS is one of the renowned NP-hard problems in the literature. The goal of JSS is to schedule a number of jobs over a number of machines, where each job consists of a set of operations that need to be executed in a given order on the allocated machines. In addition, each machine is allowed to process one operation at a time, and different objectives such as makespan and weighted tardiness can be minimized. It should be noted that methods such as integer programming (Ku and Beck, 2016), metaheuristics (Kreipl, 2000), and constraint programming (Beck et al., 2011; Watson and Beck, 2008) can be used to solve JSS.
For a static JSS problem instance, the shop, i.e., the working or manufacturing environment, includes a set of \(M\) machines and \(N\) jobs that need to be scheduled. Each job \(j\) has its own pre-determined route through a sequence of machines to follow and its own processing time at each machine it visits. The following notation is used to define the mathematical model for the JSS (Nguyen et al., 2021).
Parameters:
* \(J=\{1,....,j,....,N\}\): the set of all jobs
* \(n_{j}\): the number of operations of job \(j\)
* \(route_{j}=(m_{j1},....,m_{jn_{j}})\): the sequence of machines that job \(j\) will visit, where \(m_{ji}\) is the machine that processes the \(i^{th}\) operation of job \(j\)
* \(time_{j}=(p_{j1},....,p_{jn_{j}})\): the processing times of all operations of job \(j\), where \(p_{ji}\) is the processing time of the \(i^{th}\) operation of job \(j\)
* \(r_{j}\): the release time of job \(j\)
* \(d_{j}\): the due date of job \(j\)
* \(w_{j}\): the weight of job \(j\)
Variables:
* \(s_{ji}\): the starting time of the \(i^{th}\) operation of job \(j\)
* \(e_{ji}\): the ending time of the \(i^{th}\) operation of job \(j\)
* \(C_{j}\): the completion time of job \(j\)
* \(T_{j}\): the tardiness of job \(j\) calculated by \(T_{j}=\max(C_{j}-d_{j},0)\)
The constraint programming formulation for the JSS is defined as follows.
\[\forall j\in J:s_{j1}>r_{j} \tag{1}\] \[\forall j\in J,i\in\{1,...,n_{j}\}:e_{ji}=s_{ji}+p_{ji} \tag{2}\] \[\forall j\in J:C_{j}=e_{jn_{j}} \tag{3}\] \[\forall j\in J:T_{j}=\max(C_{j}-d_{j},0) \tag{4}\]
Where (1): starting time of the first operation of the job should be greater than the release time of the job, (2): ending time of an operation equals to the sum of starting time and processing time of an operation, (3): completion time of a job equals to the ending time of the last operation of the job, (4): tardiness of a job equals to the difference between the job completion time and the due date of the job if it is positive or zero otherwise.
To ensure no overlap between operations (or disjunctive constraints) on the same machine:
\[\forall j,k\in J,u\in\{1,...,n_{j}\},v\in\{1,...,n_{k}\},\\ m\in route_{j},o\in route_{k}:m_{ju}=o_{kv}\Rightarrow s_{ju} \geq e_{kv}\lor s_{kv}\geq e_{ju} \tag{5}\]
That is if operations \(u\) and \(v\) from different jobs are to execute on the same machine \(m_{ju}=o_{kv}\), the start time of one of these jobs must be greater than the end time of the other job.
There are a number of precedence constraints between the operations of a job:
\[\forall j\in J,i\in\{1,...,n_{j}-1\}:s_{j,i+1}\geq e_{ji} \tag{6}\]
The objective functions are defined as follows:
* Makespan: Define a variable \(C_{\max}\) which represents the latest completion time of any job. The objective is to minimise \(C_{\max}\), subject also to constraint (8): \[\min C_{\max} \tag{7}\] \[\forall j\in J:C_{\max}\geq e_{jn_{j}} \tag{8}\]
* Maximum tardiness: Define a variable \(T_{\max}\) which represents the maximum tardiness of any job. The objective is to minimise \(T_{\max}\), subject to constraint (10): \[\min T_{\max} \tag{9}\] \[\forall j\in J:T_{\max}\geq T_{j} \tag{10}\]
* Total Weighted Tardiness (TWT): The objective is to minimise the cumulative weighted tardiness across all jobs (a solver-level sketch of the formulation follows this list): \[\min\sum_{j\in J}w_{j}T_{j} \tag{11}\]
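The following is a minimal, illustrative CPMpy encoding of constraints (1)-(2), (5)-(6), and the makespan objective (7)-(8), under simplifying assumptions (every job visits all \(M\) machines exactly once, with random routes, durations, and release times); it is a sketch in the spirit of the dataset described in Section 4, not a formulation taken verbatim from it:

```python
import random
import cpmpy as cp

random.seed(1)
N, M, MAX_DUR = 3, 3, 20                                   # jobs, machines, max duration
routes = [random.sample(range(M), M) for _ in range(N)]    # machine order per job
durs = [[random.randint(1, MAX_DUR) for _ in range(M)] for _ in range(N)]
rel = [random.randint(0, 50) for _ in range(N)]            # release times r_j
horizon = max(rel) + sum(sum(d) for d in durs)

s = cp.intvar(0, horizon, shape=(N, M), name="s")          # s_ji: start of i-th op of job j
e = cp.intvar(0, horizon, shape=(N, M), name="e")          # e_ji: end of i-th op of job j
Cmax = cp.intvar(0, horizon, name="Cmax")

m = cp.Model()
for j in range(N):
    m += s[j, 0] >= rel[j]                                 # (1) respect release time
    for i in range(M):
        m += e[j, i] == s[j, i] + durs[j][i]               # (2) end = start + duration
        if i > 0:
            m += s[j, i] >= e[j, i - 1]                    # (6) precedence within a job
    m += Cmax >= e[j, M - 1]                               # (8) makespan bounds last op

for j in range(N):                                         # (5) disjunctive constraints:
    for k in range(j + 1, N):                              # operations sharing a machine
        for u in range(M):                                 # must not overlap
            for v in range(M):
                if routes[j][u] == routes[k][v]:
                    m += (s[j, u] >= e[k, v]) | (s[k, v] >= e[j, u])

m.minimize(Cmax)                                           # (7) minimise makespan
assert m.solve()
print("makespan:", Cmax.value())
```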
## 4 Proposed Method
### Overview
Our approach is conceptually represented in Figure 1, which comprises five main components:

* the _problem description_, which aims to capture the business optimisation scenario that is going to be formulated;
* the _problem formulation synthesis_, which synthesizes a problem formulation from a problem description;
* the _problem formulation_, which is the generated problem formulation for a given problem description and can be solved using a solver;
* the _solution_, which is the final result obtained after solving a problem formulation using the solver;
* the _interpretation_, which interprets a solution and suggests the best actions for a business optimisation scenario.
The proposed AI-Copilot is designed to facilitate the first four components of the conceptual framework. Apart from this, if we need to study the mathematical properties of a problem description, a mathematical formulation can also be generated using a similar approach, so that an optimisation expert can verify the mathematical formulation. Although verification by an optimisation expert introduces a human dependency, our AI-Copilot reduces human effort through automation. The remaining sections of this article focus on how our AI-Copilot is developed based on this conceptual model.
### Pre-Trained Model
The pre-trained model acts as the base model for developing a code-generating LLM for problem formulation. In fact, using a pre-trained model reduces training time and training resource requirements. More importantly, a suitable pre-trained model should have excellent code-generating capabilities with minimum resource requirements. Accordingly, we use CodeRL as the pre-trained model due to its relatively low resource requirements, availability as a free model, and excellent code-generating capabilities.
The underlying unified encoder-decoder architecture of CodeRL sums up to a size of \(60\)M\(\sim\)\(770\)M parameters [Le et al., 2022]. Moreover, CodeRL is an advanced version of CodeT5 [Wang et al., 2021] trained on GitHub data, which holds
Figure 1: Solution overview
\(48\) layers, \(16\) attention heads, and \(1024\) hidden states. Though CodeRL is significantly smaller than Codex (Chen et al., 2021) and GPT-3 (Brown et al., 2020) introduced by OpenAI, CodeRL has been able to perform well (Le et al., 2022). The reason for such performance from CodeRL is that it considers the test-case execution status of generated code as a fine-tuning approach with an actor-critic reinforcement learning technique. Since CodeRL performs the actor role, a separate critic network assesses solutions generated by CodeRL against test cases related to problem descriptions and supplies a critic score for reinforcement learning.
Despite CodeRL showing promising results with code generation, CodeRL is not capable of problem formulation, and the same holds for GPT-4 (OpenAI, 2023) (Figure 2). Furthermore, factors such as CodeRL's GitHub-based dataset not holding a substantial amount of problem-formulation-related samples, together with its six-hundred-token limit, make CodeRL incompetent at problem formulation. Moreover, such generic code-generating LLMs might produce problem formulations for problem descriptions, but we are not satisfied with the generated formulations, because generic code-generating LLMs are not specifically trained on problem formulation. In addition, by using the in-context learning available in generic code-generating LLMs, users might be able to generate problem formulations, but due to the complexity of problem descriptions and the effort required from users, generic code-generating LLMs may not be ideal for problem formulation. Such data and token limitations may be common for code-generating LLMs, but for problem formulation, those limitations must be managed. In the following sections, we show how our AI-Copilot is capable of synthesizing complex problem formulations by fine-tuning CodeRL with prompt engineering techniques.
(a) CodeRL does not create a complete problem formulation due to the token limitation.

(b) GPT-4 does not properly formulate the constraint "The second task of Job two has to come before all the second tasks of other jobs", and solving its output raises a KeyError.

Figure 2: Problem Formulation with Generic Code-Generating LLMs: For the same problem description, Figure 2(a) shows the (truncated) problem formulation generated by CodeRL, and Figure 2(b) shows the error produced when solving the problem formulation generated by GPT-4.
### Dataset Development
Since it is rare to find publicly available problem formulation data, we manually created a hundred production scheduling problem descriptions with their problem formulations (Figure 3).1 To create problem formulations, we selected CPMpy (Guns, 2019) as the solver library and Python as the programming language. Furthermore, we use different styles to incorporate the individuality of different business stakeholders into problem descriptions; therefore, we use ChatGPT (OpenAI, 2023) with prompt engineering to transfer problem descriptions into different styles (Reif et al., 2021). Because problem formulations related to business optimisation are lengthier than typical programming code and are written in optimisation languages supported by solvers, our dataset contributes towards filling the gap that exists in datasets for business optimisation problem formulation. Additionally, the methodology of our AI-Copilot can be used by businesses to develop their own AI-Copilot using their data. Such applications will allow businesses to dynamically adapt optimisations based on business environment changes.
Footnote 1: Dataset has been made publicly available on GitHub at: [https://github.com/pivithuruthejanamarasinghe/AI-Copilot-Data](https://github.com/pivithuruthejanamarasinghe/AI-Copilot-Data).
The problem descriptions in our dataset have fewer than six hundred tokens, and the respective problem formulations have between \(1200\) and \(1800\) tokens. Additionally, our dataset holds different scenarios related to production scheduling to capture the different requirements encountered in practice (Table 1). However, large problem formulations included in our dataset exceed the token limits of code-generating LLMs. To address this, we use prompt engineering techniques that generate problem formulations as modules that fit into the token limits of code-generating LLMs. In contrast to existing datasets such as NL4Opt, our dataset focuses on constraint programming
Figure 3: Dataset Development: First we create different problem descriptions with their problem formulations based on different objectives and constraints. Syntax in problem formulations depends on the solver and programming language. Next, we regenerate problem descriptions with different styles using ChatGPT to add the individuality factor. Finally, we reorganise and break down the code into modules. So we can use them when fine-tuning the code-generating LLM.
problem formulations that involve a significantly larger number of constraints and variables. A sample problem description can be,
* Job shop scheduling model with \(5\) jobs and \(5\) machines. All jobs have random routes and their operations have random durations. The objective function is makespan. Maximum duration is \(20\). After solving the problem, solutions will be printed and visualised. Note: The second task of Job two has to come before all the second tasks of other jobs.
Due to the fact that there are scenarios where problem formulations must generate random dummy data to make them solvable via a solver, particular statements have been added to problem descriptions to keep the generated random dummy data consistent. In addition, the dataset holds scenarios to capture metric conversion while generating problem formulations (Table 2). Finally, to get consistent outputs from problem formulations, the dataset configures a random seed (random.seed(1)) for all random generations.
### Fine-Tuning
In order to fine-tune the pre-trained model CodeRL for problem formulation, we use a trainer [Hugging Face, 2023], which follows a loss-based approach. Even though the loss-based approach preserves qualitative aspects of generated
\begin{table}
\begin{tabular}{l l} \hline \hline Scenario Type & Example Scenario \\ \hline Task completion precedence & The second task of Job two has to come before all the second tasks of the other jobs. \\ \hline Introduction of release times & The release time of a job is a random value from \(0\) to \(50\). The jobs cannot start before their release time. \\ \hline Minimize makespan & The objective function is makespan. \\ \hline Minimize maximum tardiness & The due dates are calculated based on the total processing time of each job multiplied by a due date allowance of \(1.3\). The objective function is maximum tardiness. \\ \hline Minimize weighted tardiness & The due dates are calculated based on the total processing time of each job multiplied by a due date allowance of \(1.3\). The release time of a job is a random value from \(0\) to \(50\). Jobs cannot start before their release times. Each job has a weight following a random distribution in which \(20\%\) will have a weight of \(1\), \(60\%\) will have a weight of \(2\), and \(20\%\) will have a weight of \(4\). The objective function is total weighted tardiness. \\ \hline Minimize total flow time & The objective function is total flow time (completion time \(-\) release time). \\ \hline Minimize total weighted flow time & The objective function is the total weighted flowtime. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Problem Formulation Scenarios
\begin{table}
\begin{tabular}{l l} \hline \hline Statement Type & Example Scenario \\ \hline Maximum duration & The maximum duration is \(20\). \\ \hline Release time range & The release time of a job is a random value from \(0\) to \(50\). \\ \hline Metric types & Jobs two and four will have task durations in minutes. The other jobs will have task durations in seconds. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Data Consistency Statements
problem formulations, we additionally compare the final solution with the actual solution to measure accuracy. The training configurations are listed in Table 3, and we fine-tune the code-generating LLM under these configurations, varying the parameters defined in Table 4. We choose batch size and epoch count as the primary hyper-parameters to tune, since they affect the learning frequency of the code-generating LLM. As we show in the next section, this parameter setting already produces promising results, so we do not further tune parameters such as the learning rate.
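As a rough sketch of this setup, the following code shows how the Table 3 configurations map onto the Hugging Face trainer. Here `train_ds` and `val_ds` stand for tokenized (problem description, problem formulation) pairs prepared elsewhere, and the batch size and epoch count shown are two of the values swept during tuning.

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

model_name = "Salesforce/codet5-large-ntp-py"   # pre-trained model and tokenizer (Table 3)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

args = Seq2SeqTrainingArguments(
    output_dir="ai-copilot",
    learning_rate=5e-5,
    gradient_checkpointing=True,
    evaluation_strategy="steps",
    eval_steps=10,
    logging_steps=10,
    per_device_train_batch_size=2,   # swept over {1, 2, 4}
    num_train_epochs=4,              # swept over {1, 2, 4, 8}
)
# train_ds / val_ds: tokenized (description, formulation) pairs, prepared elsewhere
trainer = Seq2SeqTrainer(model=model, args=args,
                         train_dataset=train_ds, eval_dataset=val_ds)
trainer.train()
```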
To satisfy scalability constraints such as the token limits of code-generating LLMs, we modularize problem formulations using instructions. As instructions (Figure 4), we use nine prompts that allow the code-generating LLM to create a problem formulation part by part; at the end, we combine all problem formulation modules into the final problem formulation for a particular problem description. In the modularization process, we amend the first dataset by attaching each instruction as a suffix to each problem description (a sketch of this expansion is given after the sample below). Consequently, each problem description becomes nine different problem descriptions, yielding nine hundred problem descriptions in total. Likewise, we split each problem formulation into nine sub problem formulations aligned with the instructions, so the original hundred problem formulations become nine hundred sub problem formulations. Below is a sample modularized problem description; part of its modularized problem formulation can be seen in Figure 4.
Create a job shop scheduling model with \(6\) jobs and \(6\) machines. All jobs have random routes and their operations have random durations. The due dates are calculated based on the total processing time of each job multiplied by a due date allowance of \(1.3\). The release time of a job is a random value from \(0\) to \(50\). Jobs cannot start before their release times. Each job has a weight following a random distribution in which \(20\%\) will have a weight of \(1\), \(60\%\) will have a weight of \(2\), and \(20\%\) will have a weight of \(4\). The objective function is the total weighted flowtime. Maximum duration is \(20\). After
\begin{table}
\begin{tabular}{l l} \hline \hline Training Configuration & Value \\ \hline GPU type & NVIDIA Tesla V100 SXM2 \(32\) GB \\ \hline Pre-trained model & Salesforce/codet5-large-ntp-py \\ \hline Tokenizer & Salesforce/codet5-large-ntp-py \\ \hline Learning rate & \(5e-05\) \\ \hline Gradient checkpointing & True \\ \hline Evaluation strategy & steps \\ \hline Evaluation steps & \(10\) \\ \hline Logging steps & \(10\) \\ \hline Do Evaluation & True \\ \hline \hline \end{tabular}
\end{table}
Table 3: Training Configurations
solving the problem, solutions will be printed and visualised. Note: Jobs two and four will have task durations in minutes. Other jobs will have task durations in seconds. [DEFINE_CONTRAINTS]
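The expansion itself is mechanical; a minimal sketch follows, in which the list of instruction tags is hypothetical except for [DEFINE_CONTRAINTS], which appears verbatim in the sample above.

```python
# Hypothetical instruction tags: only [DEFINE_CONTRAINTS] is shown verbatim in
# the sample above; the other eight names are illustrative stand-ins.
INSTRUCTIONS = [
    "[IMPORTS]", "[DEFINE_DATA]", "[DEFINE_MODEL]", "[DEFINE_VARIABLES]",
    "[DEFINE_CONTRAINTS]", "[DEFINE_OBJECTIVE]", "[SOLVE]",
    "[PRINT_SOLUTION]", "[VISUALISE]",
]

def modularize(description):
    """Expand one problem description into nine instruction-suffixed prompts."""
    return [description + instruction for instruction in INSTRUCTIONS]

prompts = modularize("Create a job shop scheduling model with 6 jobs ... ")
# Each prompt is sent to the LLM separately; the nine generated modules are then
# concatenated to form the final problem formulation.
```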
### Performance Metrics
We use training loss, training time, and problem formulation execution status to evaluate the performance of our AI-Copilot. The training loss is calculated by the trainer by comparing the generated output with the target output for a particular problem description, using the cross-entropy loss:
\[l(x,y)=\frac{\sum\limits_{n=1}^{N}l_{n}}{N}, \tag{12}\]
\[l_{n}=-\log\frac{\exp(x_{n,y_{n}})}{\sum\limits_{c=1}^{C}\exp(x_{n,c})}\cdot\mathbb{1}\{y_{n}\neq ignore\_index\}, \tag{13}\]
where \(x\) denotes the logits produced by the code-generating LLM for a given problem description (the generated problem formulation), \(y\) the target token ids (the target problem formulation), \(ignore\_index=-100\), \(C\) the number of classes, and \(N\) the mini-batch dimension.
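In PyTorch, which the trainer builds on, this masked cross-entropy can be computed directly. The toy shapes below are illustrative; note that with the default `reduction='mean'`, PyTorch averages over the non-ignored positions.

```python
import torch
import torch.nn.functional as F

def formulation_loss(x, y, ignore_index=-100):
    # x: (N, C) logits, y: (N,) target ids; positions with y == ignore_index are
    # masked out as in Eq. 13 and the remaining terms are averaged (cf. Eq. 12).
    return F.cross_entropy(x, y, ignore_index=ignore_index)

x = torch.randn(8, 32000)              # toy logits over a 32k-token vocabulary
y = torch.randint(0, 32000, (8,))
y[0] = -100                            # e.g. a padded position, excluded from the loss
print(formulation_loss(x, y))
```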
Since the cross-entropy loss alone cannot guarantee the executability and correctness of generated problem formulations, we also solve each problem formulation using a solver and compare the final solution with the actual solution. Each execution of a problem formulation has three possible outcomes: correct output, incorrect output, or failure due to syntax errors. Accordingly, we introduce the success rate, failure rate, and exception rate to evaluate the performance of code-generating LLMs at generating problem formulations. Together, these metrics allow us to pick the code-generating LLM best suited to problem formulation.
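A minimal sketch of this outcome classification is given below; the timeout value and the comparison against a stored expected output are assumptions about the evaluation harness, not details fixed by the text.

```python
import subprocess

def classify_run(formulation_path, expected_output, timeout=300):
    """Classify one generated problem formulation as success/failure/exception."""
    try:
        result = subprocess.run(["python", formulation_path],
                                capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "exception"
    if result.returncode != 0:                         # syntax or runtime error
        return "exception"
    if result.stdout.strip() == expected_output.strip():
        return "success"                               # solved, correct output
    return "failure"                                   # solved, incorrect output
```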
Figure 4: Instructions-based code modules
## 5 Experiments
### Overview
We randomly allocate \(70\%\) of the dataset as training data, \(10\%\) as validation data, and \(20\%\) as test data. The training and validation data are used in the fine-tuning process to guard against overfitting and underfitting, while the test data is used to evaluate the performance of the fine-tuned code-generating LLM. The parameters and metrics used in the training, validation, and testing stages are defined in Tables 4 and 5.
### Training and Testing Performance
The training results are shown in Table 6. Reviewing them, we make several key observations. In general, loss values are low for all batch sizes, but failure and exception rates are high at low epoch counts, while success rates improve significantly as the epoch count increases. At low epoch counts, minor errors in the generated problem formulations cause their execution to fail. As expected, loss increases slightly and the failure and exception rates increase significantly with batch size; however, as the epoch count grows, success rates improve even for the larger batch sizes. From these observations we infer that batch size and epoch count are the key contributors to problem formulation capability. The testing results (Table 7) show the same patterns: as in training, the success rate increases with the epoch count, and the failure and exception rates
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}} \hline \hline Parameter & Definition \\ \hline batch size & number of data points in a batch \\ \hline epoch & number of training iterations \\ \hline \hline \end{tabular}
\end{table}
Table 4: Parameter Definitions
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}} \hline \hline Metric & Definition \\ \hline loss & computed by comparing a target problem formulation with a generated problem formulation \\ \hline time & number of seconds to complete training \\ \hline success & rate of problem formulations that solve successfully and give correct output \\ \hline failure & rate of problem formulations that solve successfully but give incorrect output \\ \hline exception & rate of invalid problem formulations \\ \hline \hline \end{tabular}
\end{table}
Table 5: Metric Definitions
decrease with the epoch count. The similarity between the training and testing behaviour suggests that the fine-tuning process has not caused the code-generating LLM to overfit.
### Convergence
We use the validation error in each parameter setting to investigate the loss convergence of the code-generating LLM. In all parameter settings, the training and validation curves overlap at some point (Figure 5); for batch size one, this overlap occurs at an early stage, whereas for batch sizes two and four it does not. The learning frequency of the code-generating LLM decreases as batch size increases, which may explain the initial gap between the training and validation curves. Even for the larger batch sizes, however, the curves overlap once the epoch count provides enough learning iterations. Relating the success rate to this behaviour, we find that perfectly overlapping training and validation curves correspond to higher success rates. Because we generate problem formulations as modules for a particular problem description and combine them at the end, even minor mistakes can cause incorrect results or exceptions; perfectly overlapping curves indicate thorough training, which helps the code-generating LLM avoid such mistakes.
\begin{table}
\begin{tabular}{c c c c c c c} \hline batch size & epoch & loss & time(sec) & success & failure & exception \\ \hline \(1\) & \(1\) & \(0.0056\) & \(1083.78\) & \(0.16\) & \(0.23\) & \(0.61\) \\ \(1\) & \(2\) & \(0.0068\) & \(2105.51\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(1\) & \(4\) & \(0.0038\) & \(4216.54\) & \(0.00\) & \(0.01\) & \(0.99\) \\ \(1\) & \(8\) & \(0.0008\) & \(8561.05\) & \(0.96\) & \(0.00\) & \(0.04\) \\ \hline \(2\) & \(1\) & \(0.0363\) & \(675.19\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(2\) & \(2\) & \(0.0027\) & \(1256.11\) & \(0.44\) & \(0.20\) & \(0.36\) \\ \(2\) & \(4\) & \(0.0014\) & \(2554.93\) & **1.00** & \(0.00\) & \(0.00\) \\ \(2\) & \(8\) & \(0.0008\) & \(5153.64\) & **1.00** & \(0.00\) & \(0.00\) \\ \hline \(4\) & \(1\) & \(1.4474\) & \(429.63\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(4\) & \(2\) & \(0.0043\) & \(849.15\) & \(0.24\) & \(0.00\) & \(0.76\) \\ \(4\) & \(4\) & \(0.0017\) & \(1733.71\) & \(0.97\) & \(0.00\) & \(0.03\) \\ \(4\) & \(8\) & \(0.0010\) & \(3446.56\) & \(0.97\) & \(0.00\) & \(0.03\) \\ \hline \end{tabular}
\end{table}
Table 6: Training results of metrics based on batch size and epoch count
\begin{table}
\begin{tabular}{c c c c c} \hline batch size & epoch & success & failure & exception \\ \hline \(1\) & \(1\) & \(0.25\) & \(0.30\) & \(0.45\) \\ \(1\) & \(2\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(1\) & \(4\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(1\) & \(8\) & \(0.95\) & \(0.00\) & \(0.05\) \\ \hline \(2\) & \(1\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(2\) & \(2\) & \(0.45\) & \(0.15\) & \(0.40\) \\ \(2\) & \(4\) & **1.00** & \(0.00\) & \(0.00\) \\ \(2\) & \(8\) & **1.00** & \(0.00\) & \(0.00\) \\ \hline \(4\) & \(1\) & \(0.00\) & \(0.00\) & \(1.00\) \\ \(4\) & \(2\) & \(0.15\) & \(0.00\) & \(0.85\) \\ \(4\) & \(4\) & \(0.90\) & \(0.00\) & \(0.10\) \\ \(4\) & \(8\) & \(0.95\) & \(0.00\) & \(0.05\) \\ \hline \end{tabular}
\end{table}
Table 7: Testing results of metrics based on batch size and epoch count
Figure 5: The behavior of the training and validation loss for different settings.
### Loss Analysis
While the simpler problem formulation modules have negligible loss, the more complex modules, namely those defining the model, constraints, objective function, and solution, contribute most of the loss (Figure 6(a)). Additionally, to analyse the loss distribution across problem formulation modules, we apply GSOM (Alahakoon et al., 2000) to the target problem formulations of each module, in order to identify modules with more variety than others. The modules that define the solution, constraints, and objective function have a higher number of GSOM clusters (Figure 6(b)). The model-definition module has few GSOM clusters but a large deviation in the number of problem formulations per cluster (Figure 6(b)). In contrast, the simpler modules have few clusters and minor deviations in the number of problem formulations per cluster. Interestingly, this observation aligns with the loss distribution shown in Figure 6(a), which indicates that defining the constraints, objective function, model, and solution are the most challenging problem formulation modules for the code-generating LLM.
Additionally, we use Principal Component Analysis (PCA) to investigate how the code-generating LLM responds to different problem descriptions. Accordingly, we apply PCA to the vector embeddings of the code-generating LLM, as shown in Figure 7. Since the code-generating LLM is an encoder-decoder transformer model (Vaswani et al., 2017), the encoder embeddings show how it captures different problem descriptions and the decoder embeddings show how it generates problem formulations. Interestingly, we do not observe instruction-wise clusters in the encoder embeddings (Figure 7(a)). Instruction-wise problem descriptions differ from one another only by a suffix attached to the original problem description, and since PCA places problem descriptions with similar characteristics close together in the 2D space, overlapping clusters for instruction-wise problem descriptions are to be expected. Furthermore, as shown in Figure
Figure 6: The Loss Analysis: As shown in Figure 6(a), there are four major contributors that determine the final status of a problem formulation. As shown in Figure 6(b), while the other instructions are restricted to one region, these four instructions span two further regions of the diagram, owing to the complexity of the problem formulations for those instructions.
7(b), we observe some clusters in the decoder embeddings. For instance, since imports and visualisations require similar problem formulation modules across all problem descriptions, they form single clusters. In contrast, the modules for the solution, constraints, and utility functions form multiple clusters or isolated points, since they cover different scenarios depending on the problem description. We also see overlapping clusters for different problem formulation modules, since they share common variables.
### Examples
A sample of the generated problem formulations is shown in Figure 8(a). Note that due to space limitations we present only part of the generated code; the complete code is available in our GitHub repository 2. Figure 8(b) shows the output of the generated problem formulation once it is executed. Reviewing the generated problem formulations and outputs, we observe that the code-generating LLM is able to model the complex scenarios described in the problem descriptions and to generate correct problem formulations.
Footnote 2: The generated problem formulations have been made publicly available on GitHub at: [https://github.com/pivithuruthejanmarasinghe/AI-Copilot-Artifacts](https://github.com/pivithuruthejanmarasinghe/AI-Copilot-Artifacts).
Create a job shop scheduling model with \(6\) jobs and \(5\) machines. All jobs have random routes and their operations have random durations. The objective function is makespan. Maximum duration is \(20\). After solving the problem, solutions will be printed and visualised. Note: The first task related to each job should be completed before the completion of any job.
Figure 7: The vector embeddings of the fine-tuned code-generating LLM: the embeddings shown in Figure 7(a) depend on the problem descriptions, whereas the embeddings shown in Figure 7(b) depend on the generated problem formulations
Figure 8: Generated problem formulations
## 6 Conclusion
We introduce an AI-Copilot, a code-generating-LLM-based problem formulation framework for business optimisation, demonstrated through a case study in production scheduling. The proposed AI-Copilot can generate large and complex problem formulations while requiring only a small training dataset. We also propose new performance evaluation metrics to assess the quality of generated problem formulations, and the prompt-engineering-based modularization technique introduced in our AI-Copilot overcomes the token limitations of code-generating LLMs for problem formulation. Although the case study is based on artificial production data and hypothetical problem descriptions, the framework is general and can be used in practical scenarios where data is available; and although this research is limited to production scheduling, the framework can be adapted to other types of business optimisation problems.
As further improvements, we will develop the remaining components of the framework while supporting additional problem formulation types such as routing and assignment. Generating problem formulations for optimisation technologies other than constraint programming is also an interesting direction to explore; as a next step, we will focus on mixed-integer programming, column generation, and lazy-constraint problem formulations in combinatorial optimisation, which cover a broad range of business optimisation case studies. Given the importance of the business decisions produced by optimisation, it is also important to incorporate explanation techniques and responsible AI principles to help users validate the correctness of generated problem formulations and prevent negative consequences from the proposed AI-Copilot. |
2309.13387 | YOLORe-IDNet: An Efficient Multi-Camera System for Person-Tracking | The growing need for video surveillance in public spaces has created a demand
for systems that can track individuals across multiple cameras feeds in
real-time. While existing tracking systems have achieved impressive performance
using deep learning models, they often rely on pre-existing images of suspects
or historical data. However, this is not always feasible in cases where
suspicious individuals are identified in real-time and without prior knowledge.
We propose a person-tracking system that combines correlation filters and
Intersection Over Union (IOU) constraints for robust tracking, along with a
deep learning model for cross-camera person re-identification (Re-ID) on top of
YOLOv5. The proposed system quickly identifies and tracks suspect in real-time
across multiple cameras and recovers well after full or partial occlusion,
making it suitable for security and surveillance applications. It is
computationally efficient and achieves a high F1-Score of 79% and an IOU of 59%
comparable to existing state-of-the-art algorithms, as demonstrated in our
evaluation on a publicly available OTB-100 dataset. The proposed system offers
a robust and efficient solution for the real-time tracking of individuals
across multiple camera feeds. Its ability to track targets without prior
knowledge or historical data is a significant improvement over existing
systems, making it well-suited for public safety and surveillance applications. | Vipin Gautam, Shitala Prasad, Sharad Sinha | 2023-09-23T14:11:13Z | http://arxiv.org/abs/2309.13387v1 | # YOLORe-IDNet: An Efficient Multi-Camera System for Person-Tracking
###### Abstract
The growing need for video surveillance in public spaces has created a demand for systems that can track individuals across multiple cameras feeds in real-time. While existing tracking systems have achieved impressive performance using deep learning models, they often rely on pre-existing images of suspects or historical data. However, this is not always feasible in cases where suspicious individuals are identified in real-time and without prior knowledge. We propose a person-tracking system that combines correlation filters and Intersection Over Union (IOU) constraints for robust tracking, along with a deep learning model for cross-camera person re-identification (Re-ID) on top of YOLOv5. The proposed system quickly identifies and tracks suspect in real-time across multiple cameras and recovers well after full or partial occlusion, making it suitable for security and surveillance applications. It is computationally efficient and achieves a high F1-Score of 79% and an IOU of 59% comparable to existing state-of-the-art algorithms, as demonstrated in our evaluation on a publicly available OTB-100 dataset. The proposed system offers a robust and efficient solution for the real-time tracking of individuals across multiple camera feeds. Its ability to track targets without prior knowledge or historical data is a significant improvement over existing systems, making it well-suited for public safety and surveillance applications.
Keywords:Realtime Systems Person Tracking Person Re-identification Multi-camera tracking Correlation filter tracking.
## 1 Introduction
Visual security systems have become increasingly important in today's world from a surveillance and law-and-order point of view. Advancements in technology, such as high-definition imaging and artificial intelligence (AI), can be leveraged to create robust person and object identification and tracking systems. Traditional security approaches rely on monitoring by human personnel, which is challenging for large areas such as malls, smart cities, and universities. Surveillance cameras are commonly used in public places, but comparing different individuals manually is difficult. AI-powered surveillance systems [1] have been
developed to monitor suspects in real-time across multiple cameras, addressing these challenges.
Technically, intelligent surveillance systems [18] can be divided into two parts: intra-camera tracking and inter-camera tracking, the latter of which employs recognition strategies. Both face challenges such as viewpoint variation, occlusion, differing aspect ratios and spatial sizes, lighting changes, and cluttered backgrounds. In intra-camera tracking, the main difficulty is occlusion, which typically occurs when people overlap each other or are overlapped by other objects, leading a tracking algorithm to track the wrong target. In inter-camera tracking, the difficulty lies in high intra-class variance caused by variations in lighting conditions, viewing angles, resolutions, and other factors. The task becomes even more challenging when there are many people to identify and many non-overlapping cameras with high intra-class variance, as shown in Fig. 1.
In this work, we present YOLORe-IDNet. The YOLORe-IDNet combines Kernelized Correlation Filters (KCF) [7] with an IOU constraint and You Only Look Once (YOLOv5) [9] to form a more robust and efficient tracker for intra-camera tracking tasks. In order to reduce tracking errors and improve accuracy, an IOU-based occlusion assessment approach is introduced into our framework. For recognizing a person across multiple camera views, a deep learning based person Re-ID model has been used along with camera network information to reduce the latency. We make the following novel contributions:
* Creation of an open dataset using cameras placed at 9 different locations on the university campus (see Fig. 6); the dataset consists of 81,000 frames.
* User-defined real-time identification and selection of a person of interest in a camera feed with a simple mouse gesture.
* An algorithm for intra-camera tracking that uses a novel IOU-based occlusion assessment approach to effectively address occlusion and minimize tracking failures.
Figure 1: Person tracking in a real-world scenario, showcasing the challenges of high intra-class variability: (a) MCPT Dataset and (b) EPFL dataset.
The rest of the paper is organized as follows: section 2 presents related work on visual object tracking for intra-camera and inter-camera tracking, and section 3 elaborates the methodology presented in this paper. Section 4 discusses the experimental setup and results on our dataset, including a comparison with state-of-the-art tracking algorithms. Finally, in section 5, we conclude the paper and discuss potential directions for future work.
## 2 Related Work
The field of computer vision (CV) has played a significant role in developing intelligent surveillance systems. Person tracking, in particular, is a critical component of these systems. In this section, we review the existing work on person-tracking methods and their strengths and weaknesses. Compared to conventional computer vision approaches, modern deep learning models have achieved significant progress in person tracking due to their ability to learn features and patterns in a data-driven manner. However, the existing models often require a large amount of labeled data for training and can be computationally expensive.
### Intra-camera tracking
Intra-camera object tracking refers to the process of tracking objects within a single video feed. Correlation filters [5] have been used for this task; they work by correlating a template with the target object in each frame of a video. One of the earliest approaches for single-camera tracking used the MOSSE tracker [4]. However, such trackers are susceptible to distractors and limited in their ability to handle occlusion and scale adaptation. Intersection-over-union (IOU) constraint-based tracking [3] is another approach used to track individuals in real-time scenarios. It computes the IOU between detections in the current and previous frames to predict the target's location. However, IOU-based tracking is limited in complex scenarios with multiple objects.
The combination of a CNN based object detector and correlation filter can significantly enhance the robustness and tracking accuracy [17]. The detectors can quickly identify potential targets in a given scene, despite occlusions and scale variations. On the other hand, correlation filters are effective for tracking objects with stable appearances and minimal scale change but struggle when objects undergo significant changes in appearance or shape. By integrating the strengths of both detectors and correlation filters, it is possible to leverage their complementary strengths and achieve improved tracking performance in challenging scenarios [17]. Our proposed approach incorporates YOLOv5 and correlation filter techniques, along with an occlusion detection module, to effectively mitigate tracking drifts. The selection of YOLOv5 and correlation filter, and the design of the occlusion detection module are done keeping in view real-time processing requirements.
### Inter-camera tracking
Inter-camera tracking tracks objects across multiple cameras, which is a challenging problem due to differences in camera viewpoints, image quality, illumination conditions, and other factors. In recent years, significant progress has been made in developing inter-camera tracking methods that are robust to these challenges. Several studies have investigated the use of camera networks for human tracking, including the review [8] of various tracking techniques and challenges. One common approach to inter-camera tracking is to use appearance-based methods such as person Re-ID [15]. Re-ID is the process of matching people across different cameras using appearance features such as colour, texture, and shape. These models typically learn discriminative feature representations of people that are invariant to changes in pose, illumination, and camera viewpoint. Therefore, we base our inter-camera tracking module upon AlignedReID++ [14], a person Re-ID model. It uses DMLI, a dynamically matching local information method, for person Re-ID and is effective at resolving pose misalignments and other challenging samples.
Another important aspect of inter-camera tracking is the use of camera network information. This includes the spatial relationships, the field of view (FOV), and the calibration parameters of each camera. By incorporating this information, inter-camera tracking methods can better estimate the trajectories of people across different cameras.
## 3 YOLORe-IDNet: Proposed Methodology
In this section we first present the algorithms in our methodology and then the implementation of the algorithms for real-time processing.
### Proposed Algorithm
YOLORe-IDNet reads the Real Time Streaming Protocol (RTSP) URLs or paths of the video sources from a "txt" file. Upon identifying the target, the user of the system inputs the camera number in the ROI selection field and draws a bounding box around the person to be tracked. The system then encodes the target data and sends it to the cloud server via a POST request, along with the number of the camera where the target was first seen. The cloud server processes this request and determines whether the inter-camera tracking module needs to be activated, as illustrated in Fig. 2.
Inside the intra-camera tracking module, we first check whether this is the first request made to the server. If so, the system performs feature extraction using the YOLO detections, computes similarity scores, and initializes the tracker with the coordinates of the person whose similarity score with the target is highest. This ensures that the target's current location is obtained: in a real-time system, owing to network latency and processing delays, the target may have moved from its original location by the time it is found.
Similarly, when the person exits the camera's FOV, the cloud server sends a trigger to the client, and from then on inter-camera tracking begins. In this way, the system continues to track the target and finally saves a trajectory map of the locations the target visited, as shown in Fig. 6.
#### Intra-camera tracking
The intra-camera tracking module consists of two stages. The first stage, the base case, is activated when the module is called for the first time or when occlusion is detected in the current stream, as in the proposed Algorithm 1 (line 2). In the second stage (line 9), bounding box detections are obtained using YOLOv5 [9]. The correlation filter updates the target's location, and an IOU constraint then selects the detected bounding box with the highest IOU, provided it exceeds a threshold of 30%, under the assumption that the target's movement between frames is limited. The system then applies the proposed Occlusion Detection Algorithm (Algorithm 2) to assess whether the target has been occluded; this reduces tracking errors that occur when the target is fully or partially occluded by other objects or people. If occlusion is detected, the base case is activated: the features of the target are extracted along with those of all detected persons in the frame, Re-ID is applied, and the tracker is updated with the coordinates of the person with the highest similarity. Otherwise, the system continues to track the target.
Figure 2: Flow diagram with intra-camera tracking on left and inter-camera tracking on right.
Finally, the tracking module returns the target's location to the client, and the target's current location, along with the camera ID, is stored to update the target's trajectory. The resulting trajectory map is stored as output, as shown in Fig. 6.
```
Input: Image_frame, target
Output: yolo_bbox
1   base_case ← True;
2   if base_case is True then
3       bboxes ← run_yolo_detector(Image_frame); base_case ← False;
4       objects_features ← extract_features(bboxes, Image_frame);
5       target_features ← extract_features(target);
6       target_new_coords ← perform_reid(objects_features, target_features);
7       initialize_tracker(target_new_coords, Image_frame);
8       return target_new_coords;
9   else
10      bboxes ← run_yolo_detector(Image_frame);
11      target_new_coords ← update_tracker(Image_frame);
12      IOU_vector, yolo_bbox, has_exit ← apply_iou_constraint(bboxes, target_new_coords);
13      if not has_exit then
14          if max_of(IOU_vector) ≥ Threshold then
15              occlusion_flag ← detect_occlusion(IOU_vector);
16              if occlusion_flag is True then
17                  base_case ← True;
18              else
19                  initialize_tracker(yolo_bbox, Image_frame);
20                  return yolo_bbox;
21      else
22          return has_exit;
```
**Algorithm 1** Intra-camera tracking algorithm
```
Input: IOU_vector
Output: Boolean value indicating occlusion
1   count ← 0; max_iou ← max(IOU_vector);
2   for iou_score in IOU_vector do
3       if iou_score > 0.1 and iou_score ≠ max_iou then
4           count ← count + 1;
5   if count > 1 then
6       return True;
7   else
8       return False;
```
**Algorithm 2** Occlusion Detection Algorithm
To establish whether the target has exited the current stream, the IOU vector over the last three frames is examined: if it is zero in all three, the target is marked as exited, triggering the inter-camera tracking module. The system output shown in Fig. 3(a) demonstrates the performance of our occlusion detection algorithm on both our MCPT dataset and the test sequence of the EPFL dataset [6]. Specifically, in subfigures (A) and (B), the targets were occluded in the second frame, but our algorithm detected this and successfully recovered tracking from the third frame onwards.
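For reference, a direct Python transcription of Algorithm 2 and of this exit rule might look as follows; `iou_history` is assumed to hold the IOU vectors of recent frames.

```python
def detect_occlusion(iou_vector, threshold=0.1):
    """Algorithm 2: occlusion is flagged when more than one non-maximal
    detection overlaps the tracked box by more than the threshold."""
    max_iou = max(iou_vector)
    count = sum(1 for iou in iou_vector
                if iou > threshold and iou != max_iou)
    return count > 1

def has_exited(iou_history):
    """Exit rule: the IOU vector was all-zero in each of the last three frames."""
    return len(iou_history) >= 3 and all(
        max(v, default=0) == 0 for v in iou_history[-3:])
```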
#### Inter-camera tracking
In the proposed work, we use AlignedReID++ [14] for person Re-ID. The network uses the DMLI method, which aligns horizontal stripes of person images and learns both global and local features. The local branch aligns local parts using the shortest-path distance, which helps the global branch learn more discriminative features for robust Re-ID.
The main goal of the inter-camera tracking module is to search for the target and, after a successful search, return the camera ID and bounding box coordinates of the target. As shown in Fig. 4, the module first extracts target features using the Re-ID model and applies the YOLO detector to a batch of frames; the resulting bounding boxes are then filtered to obtain person detections, which serve as candidates for Re-ID. Post-processing then crops the images of all persons
Figure 3: System output on various video sequences.
in the frames; another batch is formed from these crops and sent for Re-ID, where similarity matching is performed against the original target features. On success, the camera ID and location are returned to the client; otherwise, the system continues the search with the recommended cameras in the next iteration.
### System Architecture and Implementation
#### Multithreading for Enhancing Execution Speed
To improve the client's performance during the I/O-blocking operation of reading from streaming cameras, we employ a multi-threading approach. Instead of relying on a single thread that grabs frames sequentially, risking delays, we spawn an additional thread to grab frames in parallel. This allows frames to be read continuously on the I/O thread while the root thread processes the current frame.
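A minimal version of this pattern, assuming OpenCV for capture, is sketched below; the class and method names are ours and are not part of the deployed system.

```python
import threading
import cv2  # OpenCV

class ThreadedStream:
    """Grab frames on a background thread so the root thread never blocks on I/O."""
    def __init__(self, source):
        self.cap = cv2.VideoCapture(source)   # RTSP URL or file path
        self.ok, self.frame = self.cap.read()
        self.lock = threading.Lock()
        threading.Thread(target=self._grab, daemon=True).start()

    def _grab(self):
        while self.ok:
            ok, frame = self.cap.read()
            with self.lock:
                self.ok, self.frame = ok, frame

    def read(self):
        """Return the most recent frame without waiting on the camera."""
        with self.lock:
            return self.ok, self.frame
```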
#### REST APIs for Inference Data
A REST API (Representational State Transfer API) follows an architectural style for building web services; REST APIs communicate over HTTP requests.
The complete architecture of the deployed system is shown in Fig. 5. The system is based on a Flask server that receives video frames through HTTP requests. These frames are processed using the object detection and tracking algorithms explained earlier. Once the cloud server generates inference data, it prepares a JSON response including the bounding boxes and stream ID for the suspect. This response is then relayed back to the client as an HTTP
Figure 4: Inter camera tracking flow diagram.
Figure 5: System architecture for cloud-based application
response. Finally, the inference data is visualized on the client, allowing the generation of a target trajectory, as demonstrated in Fig. 3(b) and Fig. 6.
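As an illustration of the server side, a minimal Flask endpoint of this kind is sketched below; the payload field names, the base64 frame encoding, and the `run_tracking` stub are assumptions standing in for the actual modules described above.

```python
import base64
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_tracking(frame, camera_id):
    # placeholder for the intra-/inter-camera tracking modules described above
    return [0, 0, 0, 0], camera_id

@app.route("/track", methods=["POST"])
def track():
    payload = request.get_json()
    buf = base64.b64decode(payload["frame"])                    # JPEG bytes
    frame = cv2.imdecode(np.frombuffer(buf, np.uint8), cv2.IMREAD_COLOR)
    bbox, stream_id = run_tracking(frame, payload["camera_id"])
    return jsonify({"bbox": bbox, "stream_id": stream_id})      # inference data
```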
#### Multi-Camera Person Tracking (MCPT) Dataset
Person-tracking systems require datasets of target trajectories for development and evaluation. However, there is a shortage of such datasets covering multiple cameras, which makes it difficult to evaluate tracking algorithms against obstacles such as occlusion, scale and angle variations, and the movement of individuals between zones.
To address this challenge, we created the MCPT dataset to evaluate person-tracking systems in real-world scenarios. The dataset consists of nine videos captured from non-overlapping cameras situated across the university campus, each approximately 5 minutes long, resulting in a total of approximately 81,000 frames. Each camera was set to a resolution of 720 x 1280 and recorded at 30 frames per second (FPS). To generate suspicious trajectories representative of real-world scenarios, a target was recorded in each stream for a maximum of 30 seconds. The dataset captures the target under various intra-class variations, such as changes in angle, scale, and lighting, and includes instances of full and partial occlusion, providing a realistic representation of real-world conditions. The dataset is provided in a standard format: each video is stored as a sequence of frames in JPEG format, and each frame is labeled with the corresponding ground truth data in a separate "txt" file.
## 4 Experimental Results
#### Datasets and Evaluation Protocol
We evaluated the YOLORe-IDNet on MCPT dataset and Object Tracking Benchmark-100 (OTB-100) dataset. The OTB-100 [16] dataset is a widely utilized benchmark dataset that is primarily used to evaluate the performance of visual object tracking algorithms. This dataset is composed of 100 video sequences, which present a wide range of visual challenges, including occlusion, motion blur, changes in lighting, scale variation, and background clutter. Moreover, this dataset features a diverse set of object classes, including humans, animals, vehicles, and other miscellaneous objects.
Figure 6: Trajectory map saved by system as output
Given that the primary objective of the YOLORe-IDNet is to track individuals, we deliberately selected 11 of the most challenging person sequences from this dataset for our evaluation.
Precision, Recall, F1Score, Intersection over union (IOU), and Overall precision error (OPE) are used as evaluation metrics.
\[Precision=\frac{TP}{TP+FP},\;Recall=\frac{TP}{TP+FN},\;F1Score=2*\frac{Precision* Recall}{Precision+Recall} \tag{1}\]
\[OPE=\frac{1}{N}\sum_{i=1}^{N}d_{i},\;IOU=\frac{|GT\cap PD|}{|GT\cup PD|} \tag{2}\]
where \(N\) is the total number of frames with ground-truth annotations, \(d_{i}\) is the Euclidean distance between the centre of the predicted bounding box (\(PD\)) and the centre of the ground-truth bounding box (\(GT\)) in the \(i^{th}\) frame, and \(IOU\) is the overlap between the \(PD\) and \(GT\) boxes in a given frame. True positives (\(TP\)) are correctly identified \(GT\) matches, false positives (\(FP\)) are predicted matches that do not match any \(GT\), and false negatives (\(FN\)) are \(GT\) matches that are not identified by the tracker.
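For completeness, Eq. 2 translates directly into code; a short sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) format, is given below.

```python
import numpy as np

def iou(gt, pd):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(gt[0], pd[0]), max(gt[1], pd[1])
    x2, y2 = min(gt[2], pd[2]), min(gt[3], pd[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(gt) + area(pd) - inter)

def ope(gt_centres, pd_centres):
    """Mean Euclidean distance between predicted and ground-truth box centres."""
    d = np.linalg.norm(np.asarray(gt_centres) - np.asarray(pd_centres), axis=1)
    return float(d.mean())
```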
**Evaluation on MCPT dataset:** As shown in Fig. 3(b), the system accurately tracks the targets in real time. To assess the performance of our method, we computed Precision, Recall, and mean IOU for each camera, as reported in Table 1. During the experiments, the proposed system maintained an operational speed of 18 FPS, showing that it can handle real-time feeds efficiently. Moreover, our method achieved high mean Precision and IOU values of 100% and 91%, respectively, indicating accurate object localization across different camera views.
**Ablation study:** The ablation study (Table 2) evaluates the effectiveness of the occlusion assessment module. Including the module increased the F1Score by 6% and the Recall by 5%, and decreased the OPE by 33.73 units, compared with the system without it. The reason is that, when the target is occluded, a tracker without occlusion assessment has no self-correcting mechanism and continues to track the wrong target.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c} \hline & **Cam0** & **Cam1** & **Cam2** & **Cam3** & **Cam4** & **Cam5** & **Cam6** & **Cam7** & **Cam8** & **Mean** \\ \hline \hline
**Precision\(\uparrow\)** & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\
**Recall\(\uparrow\)** & 0.82 & 0.81 & 0.78 & 0.70 & 0.75 & 0.91 & 0.73 & 0.93 & 0.85 & 0.81 \\
**IOU\(\uparrow\)** & 0.91 & 0.90 & 0.89 & 0.93 & 0.92 & 0.92 & 0.91 & 0.92 & 0.90 & 0.91 \\ \hline \end{tabular} Note: FPS for all camera feeds = 18.
\end{table}
Table 1: Results on MCPT dataset
**Comparison with State-of-the-art Methods:** We selected several popular tracking algorithms for comparison. On the OTB-100 dataset (Table 3), our method achieved a high F1Score of 79%, second only to CSRT's 83%, while outperforming CSRT in terms of OPE (10.83 vs. 15.51). Moreover, our algorithm achieved a relatively high mean IOU of 59%. We also compared these tracking algorithms on the MCPT dataset, where our approach outperformed the others by a large margin, owing to its ability to recover effectively from occlusions.
## 5 Conclusion
In this study, we developed a person-tracking system that uses correlation filters and IOU constraints for reliable tracking, together with a deep learning model for cross-camera person Re-ID. Our experiments show that the proposed system is effective at detecting and tracking targets in multi-camera surveillance scenarios, achieving an F1Score of 89%, Recall of 81%, and a mean IOU of 91%, with a low OPE of 6.25. We also evaluated our system on the OTB-100 dataset and achieved
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline \multicolumn{3}{c|}{**With occlusion assessment**} & \multicolumn{3}{c}{**Without occlusion assessment**} \\ \multicolumn{3}{c}{**F1Score\(\uparrow\) Recall\(\uparrow\)**} & **OPE\(\downarrow\)** & **F1Score\(\uparrow\)** & **Recall\(\uparrow\)** & **OPE\(\downarrow\)** \\ \hline \hline Cam0 & 0.90 & 0.82 & 6.14 & 0.77 & 0.78 & 103.39 \\ Cam1 & 0.89 & 0.81 & 7.75 & 0.90 & 0.82 & 7.22 \\ Cam2 & 0.87 & 0.78 & 6.58 & 0.88 & 0.78 & 5.87 \\ Cam3 & 0.82 & 0.70 & 5.68 & 0.85 & 0.74 & 5.98 \\ Cam4 & 0.86 & 0.75 & 6.25 & 0.70 & 0.63 & 115.4 \\ Cam5 & 0.95 & 0.91 & 6.21 & 0.77 & 0.72 & 104.8 \\ Cam6 & 0.85 & 0.73 & 6.12 & 0.87 & 0.78 & 5.84 \\ Cam7 & 0.96 & 0.93 & 5.94 & 0.87 & 0.78 & 5.55 \\ Cam8 & 0.92 & 0.85 & 5.55 & 0.88 & 0.78 & 5.75 \\ \hline
**Mean** & 0.89 & 0.81 & 6.25 & 0.83 & 0.76 & 39.98 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study on Occlusion assessment module.
\begin{table}
\begin{tabular}{l|c c c|c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**OTB-100 Dataset**} & \multicolumn{3}{c}{**MCPT dataset**} \\ & **F1Score\(\uparrow\) Recall\(\uparrow\)** & **OPE\(\downarrow\) IOU\(\uparrow\)** & **F1Score\(\uparrow\) Recall\(\uparrow\)** & **OPE\(\downarrow\) IOU\(\uparrow\)** \\ \hline \hline Boosting[12] & 0.50 & 0.40 & 53.05 & 0.29 & 0.50 & 0.37 & 198.76 & 0.30 \\ CSRT[13] & **0.83** & **0.82** & 15.51 & **0.61** & 0.33 & 0.27 & 363.89 & 0.23 \\ KCF[7] & 0.39 & 0.29 & 12.38 & 0.22 & 0.23 & 0.16 & 541.20 & 0.12 \\ MFLOW[10] & 0.35 & 0.26 & 65.13 & 0.21 & 0.13 & 0.07 & 319.87 & 0.11 \\ MIL[2] & 0.49 & 0.37 & 51.09 & 0.28 & 0.55 & 0.43 & 146.48 & 0.33 \\ MOSSE[4] & 0.23 & 0.19 & 22.81 & 0.24 & 0.40 & 0.28 & 327.36 & 0.20 \\ TLD[11] & 0.64 & 0.54 & 37.19 & 0.38 & 0.08 & 0.04 & 429.15 & 0.04 \\ \hline
**OURS** & 0.79 & 0.69 & **10.83** & 0.59 & **0.89** & **0.81** & **6.25** & **0.91** \\ \hline \hline \end{tabular}
* **Bold**: best; Underlined: second-best results.
\end{table}
Table 3: Comparison with state-of-the-art methods on OTB-100 & MCPT dataset
competitive results, with the lowest OPE of 10.83 and the second-highest F1Score of 79%, Recall of 69%, and mean IOU of 59% among the compared state-of-the-art tracking algorithms. In future work, we aim to explore additional techniques, such as apparel-invariant Re-ID, to further enhance the accuracy and generalization of our system in real-world scenarios.
|
2309.03194 | Signatures of Bayesian inference emerge from energy efficient synapses | Biological synaptic transmission is unreliable, and this unreliability likely
degrades neural circuit performance. While there are biophysical mechanisms
that can increase reliability, for instance by increasing vesicle release
probability, these mechanisms cost energy. We examined four such mechanisms
along with the associated scaling of the energetic costs. We then embedded
these energetic costs for reliability in artificial neural networks (ANN) with
trainable stochastic synapses, and trained these networks on standard image
classification tasks. The resulting networks revealed a tradeoff between
circuit performance and the energetic cost of synaptic reliability.
Additionally, the optimised networks exhibited two testable predictions
consistent with pre-existing experimental data. Specifically, synapses with
lower variability tended to have 1) higher input firing rates and 2) lower
learning rates. Surprisingly, these predictions also arise when synapse
statistics are inferred through Bayesian inference. Indeed, we were able to
find a formal, theoretical link between the performance-reliability cost
tradeoff and Bayesian inference. This connection suggests two incompatible
possibilities: evolution may have chanced upon a scheme for implementing
Bayesian inference by optimising energy efficiency, or alternatively, energy
efficient synapses may display signatures of Bayesian inference without
actually using Bayes to reason about uncertainty. | James Malkin, Cian O'Donnell, Conor Houghton, Laurence Aitchison | 2023-09-06T17:57:07Z | http://arxiv.org/abs/2309.03194v4 | # Signatures of Bayesian inference emerge from energy efficient synapses
###### Abstract
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANN) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have 1) higher input firing rates and 2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
## Introduction
The synapse is the major site of inter-cellular communication in the brain. The amplitudes of postsynaptic potentials (PSPs) are usually highly variable, or stochastic. This variability arises primarily presynaptically: the release of neurotransmitter from presynaptically-housed vesicles into the synaptic cleft has variable release probabilities and variable quantal sizes (_Lisman and Harris, 1993; Branco and Staras, 2009; Brock et al., 2020_). Unreliable synaptic transmission seems puzzling, especially in light of evidence for low-noise, almost failure-free transmission at some synapses (_Paulsen and Heggelund, 1994, 1996; Bellingham et al., 1998_). Moreover, the degree to which a synapse is unreliable does not just vary from one synapse type to another; there is also an heterogeneity of precision amongst synapses of the same type (_Murthy et al., 1997; Dobrunz and Stevens, 1997_). Given that there is capacity for more precise transmission, why is this capacity not used in more synapses?
Unreliable transmission degrades accuracy. However, _Laughlin et al._ (1998) showed that the synaptic connection from a photoreceptor to a retinal large monopolar cell can increase its precision by increasing the number of synapses, averaging the noise away, at the cost of extra energy per bit of information transmitted. Moreover, _Levy and Baxter_ (2002) demonstrated that there is a level of precision which optimises the energy cost of information transmission. In this paper, we explore this notion of a performance-energy tradeoff.
However, it is important to consider precision and energy cost in the context of neuronal computation; the brain does not
simply transfer information from neuron to neuron, it performs computation through the interaction between neurons. However, models outlining a synaptic energy-performance tradeoff, [15, 14, 13, 12, 11, 17], predominantly consider information transmission between just two neurons and the corresponding information-theoretic view treats the synapse as an isolated conduit of information [12]. In contrast, in reality, a single synapse is just one unit of the computational machinery of the brain. As such, the performance of an individual synapse needs to be considered in the context of circuit performance. To perform computation in an energy-efficient way the circuit as a whole needs to allocate resources across different synapses to optimise the overall energy cost of computation [13, 12].
Here, we consider the consequences of a tradeoff between network performance and energetic reliability costs that depend explicitly upon synapse precision. We estimate the energy costs associated with precision by considering the biological mechanisms underpinning synaptic transmission. By including these costs in a neural network designed to perform a classification task, we observe a heterogeneity in synaptic precision and find that this "allocation" of precision is related to signatures of synapse "importance", which can be understood formally on the grounds of Bayesian inference.
## Results
We proposed energetic costs for reliable synaptic transmission and then measured their consequences in an artificial neural network.
### Biophysical costs
Here, we seek to understand the biophysical energetic costs of synaptic transmission, and how those costs relate to the reliability of transmission (Fig. 1a). We start by considering the underlying mechanisms of synaptic transmission. In particular, synaptic transmission begins with the arrival of a spike at the axon terminal. This triggers a large influx of calcium ions into the axon terminal. The increase in calcium concentration causes the release of neurotransmitter-filled vesicles docked at axonal release sites. The neurotransmitter diffuses across the synaptic cleft to the postsynaptic dendritic membrane. There, the neurotransmitter binds with ligand-gated ion channels causing a change in voltage, i.e. a postsynaptic potential. This process is often quantified using the _Katz and Miledi_[13] quantal model of neurotransmitter release. Under this model, for each connection between two cells, there are \(n\) docked, readily releasable vesicles. When the presynaptic cell spikes, each docked vesicle releases with probability \(p\) and each released vesicle causes a postsynaptic potential of size \(q\). Thus, the mean, \(\mu\), and variance, \(\sigma^{2}\), of the PSP can be written (see Fig. 1b),
\[\mu =npq\] \[\sigma^{2} =np(1-p)q^{2}. \tag{1}\]
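Eq. 1 is easy to verify numerically; the short simulation below, with arbitrary example values of \(n\), \(p\) and \(q\), samples PSPs from the quantal model and recovers the stated mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def psp_samples(n, p, q, trials=100_000):
    """Sample PSP amplitudes under the quantal model of vesicle release."""
    released = rng.binomial(n, p, size=trials)   # vesicles released per spike
    return released * q                          # each adds a quantum of size q

n, p, q = 10, 0.3, 0.5                           # arbitrary example values
psps = psp_samples(n, p, q)
print(psps.mean(), n * p * q)                    # both close to 1.5   (Eq. 1, mean)
print(psps.var(), n * p * (1 - p) * q ** 2)      # both close to 0.525 (Eq. 1, variance)
```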
We considered four biophysical costs associated with improving the reliability of synaptic transmission, while keeping the mean fixed, and derived the associated scaling of the energetic cost with PSP variance.
**Calcium efflux**. Reliability is higher when the probability of vesicle release, \(p\), is higher. As vesicle release is triggered by an increase in intracellular calcium, greater calcium concentration implies higher release probability. However, increased calcium concentration implies higher energetic costs. In particular, calcium that enters the synaptic bouton will subsequently need to be pumped out. We take the cost of pumping out calcium ions to be proportional to the calcium concentration, and take the relationship between release probability and calcium concentration to be governed by a Hill Equation, following _Sakaba and Neher_[18]. The resulting relationship between energetic costs and reliability is cost \(\propto\sigma^{-1/2}\) (Fig. 1c (I); see Appendix - Reliability costs for further details).
**Vesicle membrane surface area.** There may also be energetic costs associated with producing and maintaining a large amount of vesicle membrane. _Purdon et al._[20] argues that phospholipid metabolism may take a considerable proportion of the brain's energy budget. Additionally, costs associated with membrane surface area may arise because of leakage of hydrogen ions across vesicles [19]. Importantly, a cost for vesicle surface area is implicitly a cost on reliability. In particular, we could obtain highly reliable synaptic release by releasing many small vesicles, such that stochasticity in individual vesicle release events averages out. However, the resulting many small vesicles have a far larger surface area than a single large vesicle, with the same mean PSP. Thus, a cost on surface area implies a relationship
between energetic costs and reliability; in particular cost \(\propto\sigma^{-2/3}\) (Fig. 1c (II); see Appendix - Reliability costs for further details).
**Actin.** Another cost for small but numerous vesicles arises from a demand for structural organisation of the vesicles pool by filaments such as actin (_Cingolani and Goda, 2008; Gentile et al., 2022_). Critically, there are physical limits to the number of vesicles that can be attached to an actin filament of a given length. In particular, if vesicles are smaller we can attach more vesicles to a given length of actin, but at the same time, the total vesicle volume (and hence the total quantity of neurotransmitter) will be smaller (Fig. 1c (III)). A fixed cost per unit length of actin thus implies a relationship between energetic costs and reliability of, cost \(\propto\sigma^{-4/3}\) (see Appendix - Reliability costs).
**Trafficking.** A final class of costs is proportional to the number of vesicles (_Laughlin et al., 1998_). One potential biophysical mechanism by which such a cost might emerge is from active transport of vesicles along actin filaments or microtubules to release sites (_Chenouard et al., 2020_). In particular, vesicles are transported by ATP-dependent myosin-V motors (_Bridgman, 1999_), so more vesicles require a greater energetic cost for trafficking. Any such cost proportional to the number of vesicles gives rise to a relationship between energetic cost and PSP variance of the form, cost \(\propto\sigma^{-2}\) (Fig. 1c (IV); see Appendix - Reliability costs).
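As a sanity check on this last scaling, one can derive it directly from Eq. 1: holding the PSP mean \(\mu=npq\) fixed while varying the number of vesicles requires \(q=\mu/(np)\), so

\[\sigma^{2}=np(1-p)q^{2}=\frac{(1-p)\mu^{2}}{np}\propto\frac{1}{n},\qquad\text{and hence}\qquad n\propto\sigma^{-2}.\]

Any cost proportional to the number of vesicles therefore scales as cost \(\propto\sigma^{-2}\); the vesicle-membrane and actin exponents above follow from analogous substitutions.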
**Costs related to PSP mean/magnitude** While costs on precision are the central focus of this paper, it is certainly the case that other costs relating to the mean PSP magnitude constitute a major cost of synaptic transmission. For example, high amplitude PSPs require a large quantity of neurotransmitter, high probability of vesicle release, and a large number of post-synaptic receptors (_Attwell and Laughlin, 2001_). These can be formalised as costs on the PSP mean, \(\mu\), and can additionally be related to L1 weight decay in a machine learning context (_Rosset and Zhu, 2006; Sacramento et al., 2015_).
### Reliability costs in artificial neural networks
Next, we sought to understand how these biophysical energetic costs of reliability might give rise to patterns of variability in a trained neural network. Specifically, we trained artificial neural networks (ANNs) using an objective that embodied a tradeoff between performance and reliability costs,
\[\text{overall cost}=\text{performance cost + magnitude cost + reliability cost}. \tag{2}\]
The "performance cost" term measures the network's performance on the task, for instance in our classification tasks we used the usual cross-entropy cost. The "magnitude cost" term captures costs that depend on the PSP mean, while the "reliability cost" term captures costs that depend on the PSP precision. In particular,
\[\text{magnitude cost} =\lambda\sum_{i}|\mu_{i}|, \tag{3}\] \[\text{reliability cost} =c\sum_{i}\sigma_{i}^{-\rho}. \tag{4}\]
Here, \(i\) indexes synapses, and recall that \(\sigma_{i}\) is the standard deviation of the \(i\)th synapse. The multiplier \(c\) in the reliability cost determines the strength of the reliability cost relative to the performance cost. Small values for \(c\) imply that the reliability cost term is less important, permitting precise transmission and higher performance. Large values for \(c\) give greater importance to the reliability cost, encouraging energy efficiency by allowing higher levels of synaptic noise at the expense of performance (see Fig. 2).
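As a concrete illustration, here is a minimal sketch of the objective in Eq. 2-4 (the function names and NumPy usage are ours; the actual implementation uses PyTorch, see Materials and methods, and the default values of \(c\) and \(\rho\) below are placeholders):

```python
import numpy as np

def magnitude_cost(mu, lam=1e-4):
    # Eq. 3: L1 penalty on the PSP means, lambda * sum_i |mu_i|
    return lam * np.sum(np.abs(mu))

def reliability_cost(sigma, c=0.1, rho=0.5):
    # Eq. 4: power-law penalty on PSP precision, c * sum_i sigma_i^(-rho);
    # smaller sigma (a more reliable synapse) incurs a larger cost
    return c * np.sum(sigma ** (-rho))

def overall_cost(performance_cost, mu, sigma, lam=1e-4, c=0.1, rho=0.5):
    # Eq. 2: performance + magnitude + reliability
    return performance_cost + magnitude_cost(mu, lam) + reliability_cost(sigma, c, rho)
```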
We trained fully-connected, rate-based neural networks to classify MNIST digits. Stochastic synaptic PSPs were sampled from a Normal distribution,

\[w_{i}\sim\text{Normal}(\mu_{i},\sigma_{i}),\tag{5}\]
where, recall, \(\mu_{i}\) is the PSP mean and \(\sigma_{i}^{2}\) is the PSP variance for the \(i\)th synapse. The output firing rate was given by,
\[\text{firing rate}=f\left(\sum_{i}w_{i}x_{i}-w_{0}\right). \tag{6}\]
Here, \(\sum_{i}w_{i}x_{i}-w_{0}\) can be understood as the somatic membrane potential, and \(f\) represents the relationship between somatic membrane potential and firing rate; we used ReLU (_Fukushima, 1975_). We optimised network parameters \(\mu_{i}\) and \(\sigma_{i}\) using Adam (_Kingma and Ba, 2014_) (see Methods for details on architecture and hyperparameters).
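Concretely, Eq. 5-6 can be combined into a single stochastic layer. Below is a minimal PyTorch sketch using the reparameterisation trick and the softplus parameterisation of \(\sigma\) described later in Materials and methods; the class name, the shapes, and the initial value of \(\phi\) are our illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisySynapseLayer(nn.Module):
    """Linear layer whose weights are resampled on every forward pass (Eq. 5)."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.empty(n_out, n_in).uniform_(-0.1, 0.1))
        # sigma = softplus(phi) keeps standard deviations positive;
        # phi ~ -9 gives sigma ~ 1e-4, roughly matching the initialisation in Methods
        self.phi = nn.Parameter(torch.full((n_out, n_in), -9.0))
        self.w0 = nn.Parameter(torch.zeros(n_out))  # threshold term in Eq. 6

    def sigma(self):
        return F.softplus(self.phi)

    def forward(self, x):
        # Reparameterisation trick: w = mu + sigma * xi with xi ~ Normal(0, 1),
        # so gradients flow to both mu and phi
        w = self.mu + self.sigma() * torch.randn_like(self.mu)
        return F.relu(F.linear(x, w) - self.w0)  # Eq. 6 with f = ReLU
```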
Figure 1: **Physiological reliability costs - a. Physiological processes that influence PSP precision. b. A binomial model of vesicle release. For fixed PSP mean, increasing \(p\) or \(n\) decreases PSP variance. We have substituted \(q\propto r^{3}\) to reflect that quantal size scales with vesicle volume (_Karunanithi et al., 2002_). c. Four different biophysical costs of reliable synaptic transmission. I) Calcium pumps reverse the calcium influx that triggers vesicle release. A high probability of vesicle release requires a large influx of calcium, and extruding this calcium is costly. II) An equivalent volume of neurotransmitter can be stored in few large vesicles or shared between many smaller vesicles. Sharing a fixed volume of neurotransmitter among many small vesicles reduces PSP variability but increases vesicle surface area, creating greater demand for phospholipid metabolism and hence greater energetic costs. III) Actin filaments support the structure of vesicle clusters at the terminal. Many and large vesicles require more actin and higher rates of ATP-dependent actin turnover. IV) There are biophysical costs that scale with the number of vesicles (_Laughlin et al., 1998_; _Attwell and Laughlin, 2001_) e.g. vesicle trafficking driven by myosin-V active transport along actin filaments.**
### The tradeoff between accuracy and reliability costs in trained networks
Next we sought to understand how the tradeoff between accuracy and reliability cost manifests in trained networks. Perhaps the critical parameter in the objective (Eq. 2 and Eq. 4) was \(c\), which controlled the importance of the reliability cost relative to the performance cost. We trained networks with a variety of different values of \(c\), and with four values for \(\rho\) motivated by the biophysical costs (the different columns). As expected, we found that as \(c\) increased, performance fell (Fig. 2a) and the average synaptic standard deviation increased (Fig. 2b). Importantly, we considered two different settings. First, we considered a homogeneous noise setting, where \(\sigma_{i}\) is optimised but kept the same across all synapses (grey lines). Second, we considered a heterogeneous noise setting, where \(\sigma_{i}\) is allowed to vary across synapses, and is optimised on a per-synapse basis. We found that heterogeneous noise (i.e. allowing the noise to vary on a per-synapse basis) improved accuracy considerably for a fixed value of \(c\), but only reduced the average noise slightly.
The findings in Fig. 2 imply a tradeoff between accuracy and average noise level, \(\sigma\), as we change \(c\). If we explicitly plot the accuracy against the noise level using the data from Fig. 2, we see that as the synaptic noise level increases, the accuracy decreases (Fig. 3a). Further, the synaptic noise level is associated with a reliability cost (Fig. 3b), and this relationship changes in the different columns as they use different values of \(\rho\) associated with different biological mechanisms that might give rise to the dominant biophysical reliability cost. Thus, there is also a relationship between accuracy and reliability costs (Fig. 3c), with accuracy increasing as we allow the system to invest more energy in becoming more reliable, which implies a higher reliability cost. Again, we plotted both the homogeneous (grey lines) and heterogeneous noise cases (green lines). We found that heterogeneous noise allowed for considerably improved accuracy at a given average noise standard deviation or a given reliability cost.
### Energy-efficient patterns of synapse variability
We found that the heterogeneous noise setting, where we individually optimise synaptic noise on a per-synapse basis, performed considerably better than the homogeneous noise setting (Fig. 3). This raised an important question: how does the network achieve such large improvements by optimising the noise levels on a per-synapse basis? We hypothesised that the system invests a lot of energy in improving the reliability for "important" synapses, i.e. synapses whose weights have a large impact on predictions and accuracy (Fig. 4a). Conversely, the system allows unimportant synapses to have high variability, which reduces reliability costs (Fig. 4b). To get further intuition, we compared both \(w_{1}\) and \(w_{2}\) on the same
Figure 2: **Accuracy and PSP variance as we change the tradeoff between reliability and performance costs.** We changed the tradeoff by modifying \(c\) in Eq. 4, which multiplies the reliability cost. **a.** As the reliability cost multiplier, \(c\), increases, the accuracy decreases considerably. The green lines show the heterogeneous noise setting where the noise level is optimised on a per-synapse basis, while the grey lines show the homogeneous noise setting, where the noise is optimised, but forced to be the same for all synapses. **b.** When the reliability cost multiplier, \(c\), increases, the synaptic noise level (specifically, the average standard deviation, \(\sigma\)) increases.
plot (Fig. 4c). Specifically, we put the important synapse, \(w_{1}\) from Fig. 4a, on the horizontal axis, and the unimportant synapse, \(w_{2}\) from Fig. 4b, on the vertical axis. In Fig. 4c, the relative importance of the synapse is now depicted by how the cost increases as we move away from the optimal value of the weight. Specifically, the cost increases rapidly as we move away from the optimal value of \(w_{1}\), but increases much more slowly as we move away from the optimal value of \(w_{2}\). Now, consider deviations in the synaptic weight driven by homogeneous synaptic variability (Fig. 4c left, grey points). Many of these points have poor performance (i.e. a high performance cost), due to relatively high noise on the important synapse (i.e. \(w_{1}\)). Next, consider deviations in the synaptic weight driven by heterogeneous, optimised variability (Fig. 4c left, green points). Critically, optimising synaptic noise reduces variability for the important synapse, and that reduces the average performance cost by eliminating large deviations on the important synapse. Thus, for the same overall reliability cost, heterogeneous, optimised variability can achieve much lower performance costs, and hence much lower overall costs than homogeneous variability (Fig. 4d).
To investigate experimental predictions arising from optimised, heterogeneous variability, we needed a way to formally assess the "importance" of synapses. We used the "curvature" of the performance cost: namely the degree to which small deviations in the weights from their optimal values will degrade performance. If the curvature is large (Fig. 4a), then small deviations in the weights, e.g. those caused by noise, can drastically reduce performance. In contrast, if the curvature is smaller (Fig. 4b), then small deviations in the weights cause a much smaller reduction in performance. As a formal measure of the curvature of the objective, we used the Hessian matrix, \(\mathbf{H}\). This describes the shape of the objective as a function of the synaptic weights, the \(w_{i}\)s: specifically, it is the matrix of second derivatives of the objective with respect to the weights, and measures the local curvature of the objective. We were interested in the diagonal elements, \(H_{ii}\); the second
Figure 3: **The performance-reliability cost tradeoff in ANN simulations** - **a**. Accuracy decreases as the average PSP standard deviation, \(\sigma\), increases. The grey lines are for the homogeneous noise setting where the PSP variance is optimised but isotropic (i.e. the same across all synapses), while the green lines are for the heterogeneous noise setting, where the PSP variances are optimised individually on a per-synapse basis. **b**. Increasing reliability by reducing \(\sigma^{2}\) leads to greater reliability costs, and this relationship is different for different biophysical mechanisms and hence values for \(\rho\) (columns). **c**. Higher accuracy therefore implies larger reliability cost.
derivatives of the objective with respect to \(w_{i}\).
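A sketch of how such a diagonal curvature estimate can be computed in practice, following the empirical Fisher approximation \(\mathbf{H}\approx g^{2}\) used later in Materials and methods (the function name and loop structure are ours):

```python
import torch

def empirical_fisher_diag(model, loss_fn, data_loader):
    """Estimate the diagonal of the Hessian by averaging squared gradients
    over minibatches (empirical Fisher approximation, H_ii ~ <g_i^2>)."""
    sq_grads = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                sq_grads[name] += p.grad.detach() ** 2
        n_batches += 1
    return {name: g / n_batches for name, g in sq_grads.items()}
```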
We began by looking at how the optimised synaptic noise varied with synapse importance, as measured by the curvature or, more formally, the Hessian (Fig. 5a). We found that as the importance of the synapse increased, the optimised noise level decreased. These patterns of synapse variability make sense because noise is more detrimental at important synapses and so it is worth investing energy to reduce the noise in those synapses.
However, this relationship (Fig. 5a) between the importance of a synapse and the synaptic variability is not experimentally testable, as we are not able to directly measure synapse importance. That said, we are able to obtain two testable predictions. First, the input rate in our simulations was negatively correlated with optimised synaptic variability (Fig. 5c). Second, the optimised synaptic variability was larger for synapses with larger learning rates (Fig. 5b). Critically, both of these patterns have been observed in experimental data. The relationship between input firing rate and synaptic variability was first observed by _Aitchison et al._ (2021) using data from _Ko et al._ (2013) (Fig. 6b). The relationship between learning rate and synaptic variability was first observed by _Schug et al._ (2021), using data from _Sjöström et al._ (2003) as processed by _Costa et al._ (2017) (Fig. 6a).
To understand why these patterns of variability emerge in our simulations and in data, we need to understand the connection between synapse importance, synaptic inputs (Fig. 5c, Fig. 6b) and synaptic learning (Fig. 5b, Fig. 6a). Perhaps the easiest connection is between the synapse importance and the input firing rate. If the input cell never fires, then the synaptic weight cannot affect the network output, and the synapse has zero importance (and also zero Hessian; see Appendix - High input rates and high precision at important synapses). This would suggest a tendency for synapses with higher input firing rates to be more important, and hence to have lower variability. This pattern is indeed borne out in our simulations (Fig. 5c; also see Supplementary - Appendix 6 - Fig. 1), though of course there is a considerable amount of noise: there are a few important synapses with low input rates, and vice versa.
Figure 4: **Schematic depiction of the impact of synaptic noise on synapses with different importance.** **a.** First, we considered an important synapse for which small deviations in the weight, \(w_{1}\), e.g. driven by noise, imply a large increase in the performance cost. This can be understood as a high curvature of the performance cost as a function of \(w_{1}\). **b.** Next we considered an unimportant synapse, for which deviations in the weights cause far less increase in performance cost. **c.** A comparison of the impacts of homogeneous and optimised heterogeneous variability for synapses \(w_{1}\) and \(w_{2}\) from **a** and **b.** The performance cost is depicted using the purple contours, and realisations of the PSPs driven by synaptic variability are depicted in the grey/green points. The grey points (left) depict homogeneous noise while the green points (right) depict optimised, heterogeneous noise. **d.** The noise distributions in panel c are chosen to keep the same reliability cost (diagonally hatched area); but the homogeneous noise setting has a far higher performance cost, primarily driven by larger noise in the important synapse, \(w_{1}\).
Next, we consider the connection between learning rate and synapse importance. To understand this connection, we need to choose a specific scheme for modulating the learning rate as a function of the inputs. Modern, state-of-the-art update rules for artificial neural networks often use adaptive learning rates, \(\eta_{i}\) (including the most common, such as Adam and its variants), which are normalised so that they decrease in response to large incoming gradients,
\[\eta_{i}=\frac{\eta_{\text{base}}}{\sqrt{\langle g_{i}^{2}\rangle}}. \tag{7}\]
Specifically, the local learning rate for the \(i\)th synapse, \(\eta_{i}\), is usually a base learning rate, \(\eta_{\text{base}}\), divided by the root-mean-square gradient at this synapse, \(\sqrt{\langle g_{i}^{2}\rangle}\). Critically, the root-mean-square gradient turns out to be strongly related to synapse importance. Intuitively, important synapses with greater impact on network predictions will have larger gradients (see Appendix - Synapse importance and gradient magnitudes).
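A sketch of Eq. 7 as it would appear inside an Adam-style update, with an exponential moving average serving as the running estimate of \(\langle g_{i}^{2}\rangle\) (the decay constant \(\beta\) and \(\epsilon\) are our illustrative choices; \(\eta_{\text{base}}=5\times10^{-4}\) follows Materials and methods):

```python
import numpy as np

def adaptive_lr(g, v, eta_base=5e-4, beta=0.999, eps=1e-8):
    """One update of the per-synapse learning rates eta_i = eta_base / sqrt(<g_i^2>).

    g: current gradient at each synapse; v: running mean of squared gradients.
    """
    v = beta * v + (1.0 - beta) * g ** 2   # running estimate of <g_i^2>
    eta = eta_base / (np.sqrt(v) + eps)    # Eq. 7: important synapses get small eta
    return eta, v
```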
In-vivo performance requires selective formation, stabilisation and elimination of long-term potentiation (LTP) (_Yang et al., 2009_), raising the question of which biological mechanisms are able to provide this selectivity. Reducing updates at historically important synapses is one potential approach to determining which synapses should have their strengths adjusted and which should be stabilised. Adjusting learning rates based on synapse importance enables fast, stable learning (_LeCun et al., 2002_; _Kingma and Ba, 2014_; _Khan et al., 2018_; _Aitchison, 2020_; _Martens, 2020_).
For our purposes, the crucial point is that when training using an adaptive learning rate such as Eq. 7, important synapses have higher root-mean-squared gradients, and hence lower learning rates. Here we use a specific set of update rules which
Figure 5: **The heterogeneous patterns of synapse variability in ANNs optimised by the tradeoff –** We present relationships, on logarithmic axes, between signatures of synapse importance and variability for 10,000 synapses (100 neuron units, each with 100 synapses) that connect two hidden layers in our ANN. **a.** Synapses whose corresponding diagonal entry in the Hessian is large have smaller variance. **b.** Synapses with high variance have faster learning rates. **c.** As input firing rate increases, synapse variance decreases.
uses this adaptive learning rate (i.e. Adam (_Kingma and Ba, 2014_; _Yang and Li, 2022_)). Thus, we can use learning rate as a proxy for importance, allowing us to obtain the prediction tested in Fig. 5b, which matches the patterns in Fig. 5a and Fig. 5c.
### The connection to Bayesian inference
Surprisingly, our experimental predictions obtained for optimised, heterogeneous synaptic variability (Fig. 5,6) match those arising from Bayesian synapses (i.e. synapses that use Bayes to infer their weights (_Aitchison et al., 2021_)). Our first prediction was that lower variability implies a lower learning rate. The same prediction also arises if we consider Bayesian synapses. In particular, if variability and hence uncertainty is low, then a Bayesian synapse is very certain that it is close to the optimal value. In that case, new information should have less impact on the synaptic weight, and the learning rate should be lower. Our second prediction was that higher presynaptic firing rates imply less variability. Again, this arises in Bayesian synapses: Bayesian synapses should become more certain and less variable if the presynaptic cell fires more frequently. Every time the presynaptic cell fires, the synapse gets a feedback signal which gives a small amount of information about the right value for that synaptic weight. So the more times the presynaptic cell fires, the more information the synapse receives, and the more certain it becomes.
This match between observations for our energy-efficient synapses and previous work on Bayesian synapses led us to investigate potential connections between energy efficiency and Bayesian inference. Intuitively, there turns out to be a strong connection between synapse importance and uncertainty. Specifically, if a synapse is very important, then the performance cost changes dramatically when there are errors in that synaptic weight. That synapse therefore receives large gradients, and hence strong information about the correct value, rapidly reducing uncertainty.
To assess the connection between Bayesian posteriors and energy-efficient variability in more depth, we plotted the posterior variance against the optimised synaptic variability (Fig. 7a). We considered our four different biophysical mechanisms (values for \(\rho\); Fig. 7a, columns), and values for \(c\) (Fig. 7a, rows). In all cases, there was a clear correlation between the posterior and the optimised variability: directions with larger posterior variance also had large optimised variability. To more formally assess this connection, we derived the relationship between the optimised noise, \(\sigma_{i}\), and the posterior variability, \(\sigma_{\text{post}}\), as a function of \(\rho\) (Fig. 7b) and as a function of \(c\) (Fig. 7c). Again, these plots show a clear correlation between optimised variability and posterior variance; though the relationship is far from perfect. For a perfect relationship, we would expect the lines in Fig. 7b,c to all lie along the diagonal. In contrast, these lines actually have a slope smaller than one, indicating that optimised variability is less heterogeneous than posterior variance (Fig. 7b,c).
Finally, in the Appendix, we derive a formal connection between our overall cost and Bayesian inference; specifically, to variational inference, a well-known procedure for performing (approximate) Bayesian inference in NNs (_Hinton and van Camp, 1993_; _Graves, 2011_; _Blundell et al., 2015_). Variational inference optimises the "evidence lower bound objective" (ELBO) (_Barber and Bishop, 1998_; _Jordan et al., 1999_; _Blei et al., 2017_), which surprisingly turns out to resemble our overall cost. Specifically, the ELBO includes a term which encourages the entropy of the approximating posterior distribution (which could be interpreted as our noise distribution) to be larger. This resembles a reliability cost, as our reliability costs also encourage the noise distribution to be larger. Critically, the biological power-law reliability cost has a different form from the ideal, entropic reliability cost. However, we are able to derive a formal relationship: the biological power-law reliability cost bounds the ideal entropic reliability cost. Remarkably, this implies that our overall cost (Eq. 2) bounds the ELBO, so reducing our cost (Eq. 2) tightens the ELBO bound and gives an improved guarantee on the quality of Bayesian inference.
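Schematically, and as a sketch of the relationship spelled out in the Appendix (signs and additive constants depend on the conventions used there), variational inference maximises

\[\text{ELBO}(q)=\mathbb{E}_{q}\big[\log p(\mathcal{D}\mid w)\big]-\operatorname{KL}\big(q(w)\,\|\,p(w)\big)\leq\log p(\mathcal{D}),\]

where the approximating posterior \(q\) plays the role of our noise distribution over weights \(w\) and \(\mathcal{D}\) denotes the data. Expanding the KL term produces an entropy term \(+\mathbb{H}[q]\) that rewards broad (noisy) weight distributions; the power-law reliability cost of Eq. 4 bounds this ideal entropic term, which is what allows the overall cost (Eq. 2) to bound the ELBO.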
## Discussion
Comparing the brain's computational roles with associated energetic costs provides a useful means for deducing properties of efficient neurophysiology. Here, we applied this approach to PSP variability. We began by looking at the biophysical mechanisms of synaptic transmission, and how the energy costs for transmission might vary with synaptic reliability. We modified a standard ANN to incorporate unreliable synapses and trained this on a classification task using an objective that combined classification accuracy and an energetic cost on reliability. This led to a performance-reliability cost tradeoff and heterogeneous patterns of synapse variability that correlated with input rate and learning rate. We noted that these patterns of variability have been previously observed in data (see Fig. 6). Remarkably, these are also the patterns of variability predicted by Bayesian synapses (_Aitchison et al., 2021_) (i.e. when distributions over synaptic weights correspond with the
Bayesian posterior). Finally, we showed empirical and formal connections between the synaptic variability implied by Bayesian synapses and our performance-reliability cost tradeoff.
The reliability cost in terms of the synaptic variability (Eq. 4) is a critical component of the numerical experiments we present here. While the precise form of the cost is inevitably uncertain, we attempted to mitigate the uncertainty by considering a wide range of functional forms for the reliability cost. In particular, we considered four biophysical mechanisms, corresponding to four power-law exponents (\(\rho=\frac{1}{2},\frac{2}{3},\frac{4}{3},2\)). Moreover, these different power-law costs already cover a reasonably wide range of potential penalties, and we would expect the results to hold for many other forms of reliability cost, as the intuition behind the results ultimately relies merely on there being _some_ penalty for increasing reliability.
The biophysical cost also includes a multiplicative factor, \(c\), which sets the magnitude of the reliability cost. In fact, the patterns of variability exhibited in Fig. 5 are preserved as \(c\) is changed: this was demonstrated for values of \(c\) which are ten times larger and ten times smaller (Supplementary - Appendix 6 - Fig. 2). This multiplicative factor should be understood as being determined by the properties of the physics and chemistry underpinning synaptic dynamics; for example, it could represent the quantity of ATP required by the metabolic costs of synaptic transmission (although this factor could vary, e.g. in different cell types).
Our artificial neural networks used backpropagation to optimise the mean and variance of synaptic weights. While there are a number of schemes by which biological circuits might implement backpropagation (_Whittington and Bogacz, 2017_; _Sacramento et al., 2018_; _Richards and Lillicrap, 2019_), it is not yet clear whether backpropagation is implemented by the brain (see _Lillicrap et al. (2020)_ for a review on the plausibility of backpropagation in the brain). Regardless, backpropagation is merely the route we used in our ANN setting to reach an energy-efficient configuration. The patterns we have observed are characteristic of an energy-efficient network and therefore should not depend on the learning rule that the brain uses to achieve energy efficiency.
Our results in ANNs used MNIST classification as an example of a task; this may appear somewhat artificial, but all brain areas ultimately do have a task: to maximise fitness (or reward as a proxy for fitness). Moreover, our results all ultimately arise from trading off biophysical reliability costs against the fact that if a synapse is important to performing a task, then variability in that synapse substantially impairs performance. Of course, performance in different brain areas might mean reward, fitness or some other measure. In contrast, if a synapse is unimportant, variability in that synapse impairs performance less. In all tasks there will be some synapses that are more important and some that are less important, and our task, while relatively straightforward, captures this property.
Figure 6: **Experimental results are consistent with predictions arising from optimising a tradeoff between performance and reliability costs** - The patterns of synapse variability in biological synapses compared with signatures of synapse importance. **a)** As the normalised PSP variability increases, the absolute percentage change in the weight during a plasticity protocol also increases with \(slope=1.47\) (data from (_Sjöström et al., 2003_) as processed by (_Costa et al., 2017_); this pattern has previously been observed using this data by (_Schug et al., 2021_)). **b)** As input firing rate is increased, normalised EPSP variability decreases with \(slope=-0.71\) (data from (_Ko et al., 2013_), and this pattern was previously observed from this data by (_Aitchison et al., 2021_)).
Our results have important implications for the understanding of Bayesian inference in synapses. In particular, we show that energy efficiency considerations give rise to two phenomena that are consistent with predictions outlined in previous work on Bayesian synapses (_Aitchison et al., 2021_). First, that normalised variability decreases for synapses with higher presynaptic firing rates. Second, that synaptic plasticity is higher for synapses with higher variability.
Specifically, these findings suggest that synapses connect their uncertainty in the value of the optimal synaptic weight (see _Aitchison et al._, 2021, for details) to variability. This is in essence a synaptic variant of the "sampling hypothesis".
Figure 7: **A comparison of optimised synaptic variability and posterior variance.** - **a.** Posterior variance (grey-dashed ellipses) plotted alongside optimised synaptic variability (green ellipses) for different values of \(\rho\) (columns) and \(c\) (rows). Note that using fixed values of \(c\) for different \(\rho\)'s dramatically changed the scale of the ellipses. Instead, we chose \(c\) as a function of \(\rho\) to ensure that the scale of the optimised noise variance was roughly equal across different \(\rho\). This allowed us to highlight the key pattern: that smaller values for \(\rho\) give optimised variance closer to the true posterior variances, while higher values for \(\rho\) tended to make the optimised synaptic variability more isotropic. **b.** To understand this pattern more formally, we plotted the synaptic variability as a function of the posterior variance for different values of \(\rho\). Note that we set \(c\) to \(c=\frac{10^{-2.0\rho}}{\rho}\) to avoid large additive offsets (see Appendix - Connecting the entropy and the biological reliability cost, Eq. 51, for details). **c.** The synaptic variability as a function of the posterior variance for different values of \(c\): \([0.112,0.2,0.356]\) (3 d.p.). As \(c\) increases (lighter blues) we penalise reliability more, and hence the optimised synaptic noise variability increases. (Here we fixed \(\rho=1/2\) across different settings for \(c\).)
Under the sampling hypothesis, neural activity is believed to represent a potential state of the world, and variability is believed to represent uncertainty (_Hoyer and Hyvarinen, 2002_; _Knill and Pouget, 2004_; _Ma et al., 2006_; _Fiser et al., 2010_; _Berkes et al., 2011_; _Orban et al., 2016_; _Aitchison and Lengyel, 2016_; _Haefner et al., 2016_; _Lange and Haefner, 2017_; _Shivkumar et al., 2018_; _Bondy et al., 2018_; _Echeveste et al., 2020_; _Festa et al., 2021_; _Lange et al., 2021_; _Lange and Haefner, 2022_). This variability in neural activity, representing uncertainty in the state of the world, can then be read out by downstream circuits to inform behaviour. Here, we showed that a connection between synaptic uncertainty and variability can emerge simply as a consequence of maximising energy efficiency. This suggests that Bayesian synapses may emerge without any necessity for specific synaptic biophysical implementations of Bayesian inference.
Importantly though, while the brain might use synaptic noise for Bayesian computation, these results are also consistent with an alternative interpretation: that the brain is not Bayesian, it just looks Bayesian because it is energy efficient. To distinguish between these two interpretations, we ultimately need to know whether downstream brain areas exploit or ignore information about uncertainty that arises from synaptic variability.
### Materials and methods
The ANN simulations were run in PyTorch with feedforward, fully-connected neural networks with two hidden layers of width 100. The input dimension of 784 corresponded to the number of pixels in the greyscale MNIST images of handwritten digits, while the output dimension of ten corresponded to the number of classes. We used the reparameterisation trick to backpropagate with respect to the mean and variance of the weights; in particular, we set \(w_{i}=\mu_{i}+\sigma_{i}\xi\) where \(\xi\sim\text{Normal}(0,1)\) (_Kingma et al., 2015_). MNIST classification was learned through optimisation of Gaussian parameters with respect to a cross-entropy loss in addition to reliability costs using minibatch gradient descent under Adam optimisation with a minibatch size of 20. To prevent negative values for the \(\sigma\)s, they were re-parameterised using a softplus function with argument \(\phi_{i}\), with \(\sigma_{i}=\text{softplus}(\phi_{i})\). The base learning rate in Eq. 7 is \(\eta_{\text{base}}=5\times 10^{-4}\). The \(\mu_{i}\)s were initialised homogeneously across the network from \(\text{Uniform}(-0.1,0.1)\) and the \(\sigma_{i}\)s were initialised homogeneously across the network at \(10^{-4}\). Hyperparameters were chosen via grid search on the validation dataset to enable smooth learning, high performance and rapid convergence. In the objective \(\mathcal{L}_{\text{BI}}\) used to train our simulations, we also add an L1 regularisation term over synaptic weights, \(\lambda|\mu|_{1}\), where \(\lambda=10^{-4}\).
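Putting these pieces together, a condensed sketch of one training step as described in this paragraph (dataset plumbing is omitted; `NoisySynapseLayer` refers to the illustrative layer sketched earlier, and the values of \(c\) and \(\rho\) are placeholders):

```python
import torch
import torch.nn.functional as F

def training_step(model, optimiser, x, y, lam=1e-4, c=0.1, rho=0.5):
    """One minibatch step: cross-entropy + L1 magnitude cost + reliability cost."""
    optimiser.zero_grad()
    logits = model(x)                  # forward pass resamples w ~ Normal(mu, sigma)
    loss = F.cross_entropy(logits, y)  # performance cost
    for layer in model.modules():
        if hasattr(layer, "mu"):       # stochastic layers only
            loss = loss + lam * layer.mu.abs().sum()           # magnitude cost, Eq. 3
            loss = loss + c * (layer.sigma() ** (-rho)).sum()  # reliability cost, Eq. 4
    loss.backward()   # reparameterisation trick gives gradients for mu and phi
    optimiser.step()  # Adam, as in Methods
    return loss.item()
```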
Plots in Fig. 2 present mappings from the hyperparameter \(c\) to accuracy and \(\sigma\). A different neural network was trained for each \(c\); after 50 training epochs the average \(\sigma\) across synapses was computed, and accuracy was evaluated on the test dataset. Plots in Fig. 3 present mappings of this \(\sigma\) against accuracy and reliability cost. The reliability cost was computed using fixed \(s=1\) (see Appendix, Eq. 51). To compute the Hessian in Fig. 5 and elsewhere, we used the empirical Fisher information approximation (_Fisher, 1922_), \(\mathbf{H}\approx g^{2}\). This was evaluated by taking the average \(g^{2}\) at \(w^{*}=\mu\) over ten epochs after full training for 50 epochs. The average learning rate \(\gamma|g|^{-1}\) and the average input rate \(|x|\) were also evaluated over ten epochs following training. The data presented illustrate these variables with regard to the weights of the second hidden layer. We set hyperparameter \(s=0.001\) (see Eq. 51) in these simulations.
The experimental data in Fig. 6a were originally collected in plasticity experiments conducted by _Sjöström et al._ (2003). _Costa et al._ (2017) use this data to derive quantal parameters of a binomial model. Using the data processed by _Costa et al._ (2017) we calculated the normalised PSP variance before and after the experimental protocol, following the approach taken by _Schug et al._ (2021).
For geometric comparisons between the distribution over synapses and the Bayesian posterior presented in Fig. 7 we used the analytic results in Appendix - Analytic predictions for \(\sigma_{i}\).
Source code used for simulations is available at github.com/JamesMalkin/EfficientBayes
### Acknowledgments
We are grateful to Dr Stewart whose philanthropy supported GPU compute used in this project. JM was funded by the Engineering and Physical Sciences Research Council (2482786). COD was funded by the Leverhulme Trust (RPG-2019-229) and Biotechnology and Biological Sciences Research Council (BB/W001845/1). CH is supported by the Leverhulme Trust (RF-2021-533). |
2306.17764 | On the existence of free sublattices of bounded index and arithmetic
applications | Let $\mathcal{O}$ be a Dedekind domain whose field of fractions $K$ is a
global field. Let $A$ be a finite-dimensional separable $K$-algebra and let
$\Lambda$ be an $\mathcal{O}$-order in $A$. Let $n$ be a positive integer and
suppose that $X$ is a $\Lambda$-lattice such that $K \otimes_{\mathcal{O}} X$
is free of rank $n$ over $A$. Then $X$ contains a (non-unique) free
$\Lambda$-sublattice of rank $n$. The main result of the present article is to
show there exists such a sublattice $Y$ such that the generalised module index
$[X : Y]_{\mathcal{O}}$ has explicit upper bounds with respect to division that
are independent of $X$ and can be chosen to satisfy certain conditions. We give
examples of applications to the approximation of normal integral bases and
strong Minkowski units, and to the Galois module structure of rational points
over abelian varieties. | Henri Johnston, Alex Torzewski | 2023-06-30T16:14:30Z | http://arxiv.org/abs/2306.17764v2 | # On the existence of free sublattices of bounded index and arithmetic applications
###### Abstract.
Let \(\mathcal{O}\) be a Dedekind domain whose field of fractions \(K\) is a global field. Let \(A\) be a finite-dimensional separable \(K\)-algebra and let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\). Let \(n\) be a positive integer and suppose that \(X\) is a \(\Lambda\)-lattice such that \(K\otimes_{\mathcal{O}}X\) is free of rank \(n\) over \(A\). Then \(X\) contains a (non-unique) free \(\Lambda\)-sublattice of rank \(n\). The main result of the present article is to show there exists such a sublattice \(Y\) such that the generalised module index \([X:Y]_{\mathcal{O}}\) has an explicit bound that is independent of \(X\) and can be chosen to satisfy certain conditions. We give examples of applications to the approximation of normal integral bases and strong Minkowski units, and to the Galois module structure of rational points over abelian varieties.
Key words and phrases: Lattices, orders, normal integral bases, Minkowski units, abelian varieties.

2020 Mathematics Subject Classification: 16H20, 11R33, 11R27, 11G05, 11G10.
## 1. Introduction
Let \(A\) be a finite-dimensional semisimple \(\mathbb{Q}\)-algebra and let \(\Lambda\) be an order in \(A\). For example, if \(G\) is a finite group then the group ring \(\mathbb{Z}[G]\) is an order in the group algebra \(\mathbb{Q}[G]\). A \(\Lambda\)-lattice is a (left) \(\Lambda\)-module that is finitely generated and torsion-free over \(\mathbb{Z}\). A special case of the Jordan-Zassenhaus theorem says that for each positive integer \(t\), there are only finitely many isomorphism classes of \(\Lambda\)-lattices of \(\mathbb{Z}\)-rank at most \(t\).
Now fix a positive integer \(n\). Then there exists a positive integer \(m\) with the following property: given any \(\Lambda\)-lattice \(X\) such that \(\mathbb{Q}\otimes_{\mathbb{Z}}X\) is free of rank \(n\) as an \(A\)-module, there exists a free \(\Lambda\)-sublattice \(Y\) of \(X\) such that the index \([X:Y]\) is finite and divides \(m\). To see this, first note that by clearing denominators of a free basis of \(\mathbb{Q}\otimes_{\mathbb{Z}}X\) over \(A\), any such \(X\) must contain a (non-unique) free \(\Lambda\)-sublattice of rank \(n\), necessarily of finite index \(m_{X}\) in \(X\). Then note that the Jordan-Zassenhaus theorem implies that there are only finitely many choices for \(X\) up to isomorphism and take \(m\) to be any common multiple of the \(m_{X}\) as \(X\) ranges over all such choices.
The main goal of the present article is to make \(m\) explicit and to give arithmetic applications. In fact, the setting generalises to the case in which \(\Lambda\) is an \(\mathcal{O}\)-order where \(\mathcal{O}\) is a Dedekind domain whose field of fractions \(K\) is a global field and \(A\) is a finite-dimensional separable \(K\)-algebra. In this situation, the group index \([X:Y]\) is replaced by the generalised module index \([X:Y]_{\mathcal{O}}\) and the main theorem is Theorem 4.5.
We now give examples of our algebraic results and arithmetic applications. The following result is a weaker version of Theorem 5.3 obtained via specialisation and Remark 7.2.
**Theorem 1.1**.: _Let \(G\) be a finite group. Let \(k\) and \(n\) be positive integers. Then there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any \(\mathbb{Z}[G]\)-lattice \(X\) such that \(\mathbb{Q}\otimes_{\mathbb{Z}}X\) is free of rank \(n\) over \(\mathbb{Q}[G]\), there exists a free \(\mathbb{Z}[G]\)-sublattice \(Z\) of \(X\) such that the index \([X:Z]\) divides \(i\cdot|G|^{\lceil 3|G|/2\rceil n}\)._
Note that when \(G\) satisfies certain conditions we can take \(i=1\). Moreover, \(|G|^{\lceil 3|G|/2\rceil n}\) is a crude but neat upper bound for a more precise term that will be made explicit. The following result is Theorem 5.15, which is just one example of the stronger results that can be obtained in special cases.
**Theorem 1.2**.: _Let \(G\) be a finite group and suppose that there exist positive integers \(n_{1},\ldots,n_{t}\) such that \(\mathbb{Q}[G]\cong\prod_{i=1}^{t}\operatorname{Mat}_{n_{i}}(\mathbb{Q})\). Let \(n\) be a positive integer. If \(X\) is a \(\mathbb{Z}[G]\)-lattice such that \(\mathbb{Q}\otimes_{\mathbb{Z}}X\) is free of rank \(n\) over \(\mathbb{Q}[G]\) then there exists a free \(\mathbb{Z}[G]\)-sublattice \(Z\) of \(X\) such that \([X:Z]\) divides_
\[\left(|G|^{|G|}{\prod_{i=1}^{t}}n_{i}^{-n_{i}^{2}}\right)^{\frac{3n}{2}}.\]
Before sketching the ideas used in the proof of our main result, we discuss how a variant of Theorem 1.1 can be applied in the following arithmetic situation. Let \(L/K\) be a finite Galois extension such that \(K\) is equal to either \(\mathbb{Q}\) or an imaginary quadratic field. Let \(G=\operatorname{Gal}(L/K)\) and let \(\mu_{L}\) denote the roots of unity of \(L\). In this setting, \(\mathcal{O}_{L}^{\times}/\mu_{L}\) is a \(\mathbb{Z}[G]\)-lattice and one can show that \(L/K\) has a so-called _Minkowski unit_, that is, an element \(\varepsilon\in\mathcal{O}_{L}^{\times}/\mu_{L}\) such that \(\mathbb{Q}\otimes_{\mathbb{Z}}(\mathcal{O}_{L}^{\times}/\mu_{L})=\mathbb{Q}[G ]\cdot\varepsilon\). Such an \(\varepsilon\) is said to be a _strong Minkowski unit_ if \(\mathcal{O}_{L}^{\times}/\mu_{L}=\mathbb{Z}[G]\cdot\varepsilon\). The existence of strong Minkowski units (which some authors refer to as Minkowski units) has been studied in numerous articles; see Remark 8.3. In SS8, we give several results on the approximation of strong Minkowski units. The following result is a weakening of Theorem 8.5 obtained via Remark 8.6.
**Theorem 1.3**.: _Let \(G\) be a finite group and let \(k\) be a positive integer. Then there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any finite Galois extension \(L/K\) with \(\operatorname{Gal}(L/K)\cong G\) and \(K\) equal to either \(\mathbb{Q}\) or an imaginary quadratic field, there exists a Minkowski unit \(\varepsilon\in\mathcal{O}_{L}^{\times}/\mu_{L}\) such that the index \([\mathcal{O}_{L}^{\times}/\mu_{L}:\mathbb{Z}[\operatorname{Gal}(L/K)]\cdot \varepsilon]\) divides \(i\cdot|G|^{\lceil 3|G|/2\rceil-2}\)._
Again, stronger results can be obtained in special cases. Analogous applications to the approximation of normal integral bases are given in SS7 and to the Galois module structure of rational points on abelian varieties are given in SS9.
We now outline the ideas used in the proof of the main result Theorem 4.5. Let \(\mathcal{O}\) be a Dedekind domain whose field of fractions \(K\) is a global field and assume that \(\mathcal{O}\neq K\). Let \(A\) be a finite-dimensional separable \(K\)-algebra and let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\). Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(A\) containing \(\Lambda\) and let \(X\) be a \(\Lambda\)-lattice such that \(K\otimes_{\mathcal{O}}X\) is free of rank \(1\) over \(A\) (the higher rank case is similar). We consider the unique \(\mathcal{M}\)-lattice \({}^{\mathcal{M}}\!X\) contained in \(X\) that is maximal with respect to inclusion. Then \({}^{\mathcal{M}}\!X\) is locally free over \(\mathcal{M}\), and as explained in Corollary 4.2, \({}^{\mathcal{M}}\!X\) contains a free \(\mathcal{M}\)-sublattice \(\mathcal{M}\cdot\varepsilon\) with an index that can be controlled (the key ingredients here are the Jordan-Zassenhaus theorem and Roiter's lemma). Hypotheses on \(\mathcal{M}\) can also be given to ensure that this index is trivial (see Lemma 2.2). We then obtain a bound on the index \([X:\Lambda\cdot\varepsilon]_{\mathcal{O}}\) by taking the product of bounds on the indices corresponding to each of the three inclusions
\[\Lambda\cdot\varepsilon\subseteq\mathcal{M}\cdot\varepsilon\subseteq{}^{ \mathcal{M}}\!X\subseteq X.\]
Note that \([\mathcal{M}\cdot\varepsilon:\Lambda\cdot\varepsilon]_{\mathcal{O}}=[ \mathcal{M}:\Lambda]_{\mathcal{O}}\), which can be computed by localisation. Moreover, \([X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}\) divides \([\mathcal{M}X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}\), where \(\mathcal{M}X\) is the unique \(\mathcal{M}\)-lattice containing \(X\) that is minimal with respect to inclusion. In Corollary 3.3, we show that \([\mathcal{M}X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}\) divides \([\mathcal{M}:J]_{\mathcal{O}}\) where \(J\) is any full two-sided ideal of \(\mathcal{M}\) contained in \(\Lambda\). Again, \([\mathcal{M}:J]_{\mathcal{O}}\) can be computed by localisation. Crucially, the product of bounds of indices obtained is independent of the choice of \(\Lambda\)-lattice \(X\).
If \(G\) is a finite group such that \(|G|\) is invertible in \(K\) and \(\Lambda=\mathcal{O}[G]\), then \(J\) can be taken to be the (left) conductor of \(\mathcal{M}\) into \(\Lambda\) (the left and right conductors are equal in this case) and \([\mathcal{M}:J]_{\mathcal{O}}\) can be computed explicitly using Jacobinski's conductor formula [10]. We also obtain an explicit formula for \([\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}\), which may be of independent interest. Note that in the setting of Theorem 1.1 with \(n=1\), the term \(|G|^{\lceil 3|G|/2\rceil}\) is a crude but neat upper bound for \([\mathcal{M}:\mathbb{Z}[G]]\cdot[\mathcal{M}:J]=[\mathcal{M}:\mathbb{Z}[G]]^{3}\) and the term \(i\) is the upper bound for \([\mathcal{M}X:\mathcal{M}\cdot\varepsilon]\) given by Corollary 4.2. Moreover, we can take \(i=1\) when \(G\) satisfies the hypotheses of Proposition 5.11 or Corollary 5.13.
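To make these quantities concrete, consider the toy case \(G=C_{2}\) and \(\mathcal{O}=\mathbb{Z}\) (an elementary computation included here for illustration). Under \(\mathbb{Q}[G]\cong\mathbb{Q}\times\mathbb{Q}\), the group ring \(\mathbb{Z}[G]\) is identified with \(\{(x,y)\in\mathbb{Z}\times\mathbb{Z}:x\equiv y\bmod 2\}\) inside the maximal order \(\mathcal{M}=\mathbb{Z}\times\mathbb{Z}\), so

\[[\mathcal{M}:\mathbb{Z}[G]]=2,\qquad J=2\mathbb{Z}\times 2\mathbb{Z},\qquad[\mathcal{M}:J]=4,\]

and \([\mathcal{M}:\mathbb{Z}[G]]\cdot[\mathcal{M}:J]=8=|G|^{\lceil 3|G|/2\rceil}\), in agreement with the crude bound of Theorem 1.1 (with \(n=1\)).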
### Acknowledgements
The authors are grateful to Werner Bley, Nigel Byott, Frank Calegari, Herbert Gangl, Tommy Hofmann, Donghyeok Lim, Daniel Macias Castillo, Alexandre Maksoud and John Nicholson for helpful comments and discussions. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any author accepted manuscript version arising.
## 2. Preliminaries on lattices and orders
For further background, we refer the reader to [11, 12, 13]. Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\). To avoid trivialities, we assume that \(\mathcal{O}\neq K\).
### Lattices over Dedekind domains
An \(\mathcal{O}\)-lattice \(M\) is a finitely generated torsion-free \(\mathcal{O}\)-module, or equivalently, a finitely generated projective \(\mathcal{O}\)-module. Using the former definition and the fact that \(\mathcal{O}\) is noetherian, we see that any \(\mathcal{O}\)-submodule of an \(\mathcal{O}\)-lattice is again an \(\mathcal{O}\)-lattice.
For any finite-dimensional \(K\)-vector space \(V\), an \(\mathcal{O}\)-lattice in \(V\) is a finitely generated \(\mathcal{O}\)-submodule \(M\) in \(V\). We define a \(K\)-vector subspace of \(V\) by
\[KM:=\{\alpha_{1}m_{1}+\alpha_{2}m_{2}+\cdots+\alpha_{r}m_{r}\mid r\in\mathbb{Z} _{\geq 0},\alpha_{i}\in K,m_{i}\in M\}\]
and say that \(M\) is a full \(\mathcal{O}\)-lattice in \(V\) if \(KM=V\). Each \(\mathcal{O}\)-lattice \(M\) may be viewed as an \(\mathcal{O}\)-lattice in the finite-dimensional \(K\)-vector space \(K\otimes_{\mathcal{O}}M\) by identifying \(M\) with its image \(1\otimes M\). We may identify \(K\otimes_{\mathcal{O}}M\) with \(KM\).
Let \(M\) and \(N\) be a pair of full \(\mathcal{O}\)-lattices in a finite-dimensional \(K\)-vector space \(V\). Since \(N\) contains a \(K\)-basis for \(V\), for each \(m\in M\) there is a nonzero \(r\in\mathcal{O}\) such that \(rm\in N\). Therefore there exists a nonzero \(r\in\mathcal{O}\) such that \(rM\subseteq N\) since \(M\) is finitely generated over \(\mathcal{O}\).
For a maximal ideal \(\mathfrak{p}\) of \(\mathcal{O}\), let \(\mathcal{O}_{\mathfrak{p}}\) denote the localisation of \(\mathcal{O}\) at \(\mathfrak{p}\). Let \(\widehat{\mathcal{O}}_{\mathfrak{p}}\) denote the completion of \(\mathcal{O}\) at \(\mathfrak{p}\) and let \(\widehat{K}_{\mathfrak{p}}\) denote its field of fractions. For an \(\mathcal{O}\)-lattice \(M\), we define the localisation of \(M\) at \(\mathfrak{p}\) to be the \(\mathcal{O}_{\mathfrak{p}}\)-lattice \(M_{\mathfrak{p}}:=\mathcal{O}_{\mathfrak{p}}\otimes_{\mathcal{O}}M\) and the completion of \(M\) at \(\mathfrak{p}\) to be the \(\widehat{\mathcal{O}}_{\mathfrak{p}}\)-lattice \(\widehat{M}_{\mathfrak{p}}:=\widehat{\mathcal{O}}_{\mathfrak{p}}\otimes_{ \mathcal{O}}M\). By identifying \(M\) with its image \(1\otimes M\), we may view \(M\) as embedded in \(M_{\mathfrak{p}}\). Viewing \(M\) and each \(M_{\mathfrak{p}}\) as embedded in \(KM\), we have \(M=\bigcap_{\mathfrak{p}}M_{\mathfrak{p}}\), where \(\mathfrak{p}\) ranges over all maximal ideals of \(\mathcal{O}\) (see [12, (4.21)]).
### Generalised module indices
Let \(M,N\) be full \(\mathcal{O}\)-lattices in a finite-dimensional \(K\)-vector space \(V\). First consider the case in which \(\mathcal{O}\) is a discrete valuation ring. Then \(M\) and \(N\) are both free and of equal rank over \(\mathcal{O}\), and so there exists an \(\alpha\in\operatorname{Aut}_{K}(V)\) with \(\alpha(M)=N\). Moreover, \(\alpha\) is unique modulo \(\operatorname{Aut}_{\mathcal{O}}(N)\); hence its determinant is unique modulo \(\mathcal{O}^{\times}\), and so the ideal \([M:N]_{\mathcal{O}}:=\mathcal{O}\det(\alpha)\) is a uniquely defined fractional ideal of \(\mathcal{O}\). Now consider the case in which \(\mathcal{O}\) is an arbitrary Dedekind domain. For almost all maximal ideals \(\mathfrak{p}\) of \(\mathcal{O}\) we have \(M_{\mathfrak{p}}=N_{\mathfrak{p}}\) and hence \([M_{\mathfrak{p}}:N_{\mathfrak{p}}]_{\mathcal{O}_{\mathfrak{p}}}=\mathcal{O}_{ \mathfrak{p}}\) (see [11, Exercise 4.7]). Therefore there is a unique fractional ideal \([M:N]_{\mathcal{O}}\) of \(\mathcal{O}\) such that
\(([M:N]_{\mathcal{O}})_{\mathfrak{p}}=[M_{\mathfrak{p}}:N_{\mathfrak{p}}]_{ \mathcal{O}_{\mathfrak{p}}}\) for all \(\mathfrak{p}\). Note that if \(M_{1},M_{2},M_{3}\) are full \(\mathcal{O}\)-lattices in \(V\) then \([M_{1}:M_{3}]_{\mathcal{O}}=[M_{1}:M_{2}]_{\mathcal{O}}\cdot[M_{2}:M_{3}]_{ \mathcal{O}}\). Moreover, if \(\mathcal{O}^{\prime}\) is a Dedekind domain containing \(\mathcal{O}\) then \(\mathcal{O}^{\prime}\otimes_{\mathcal{O}}[M:N]_{\mathcal{O}}=[\mathcal{O}^{ \prime}\otimes_{\mathcal{O}}M:\mathcal{O}^{\prime}\otimes_{\mathcal{O}}N]_{ \mathcal{O}^{\prime}}\). If \(M\subseteq N\) are \(\mathbb{Z}\)-lattices of equal rank then we abbreviate \([N:M]_{\mathbb{Z}}\) to \([N:M]\), which is consistent with the fact that \([N:M]_{\mathbb{Z}}\) is the \(\mathbb{Z}\)-ideal generated by the usual group index of \(M\) in \(N\).
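As a quick sanity check of the definition (elementary, for illustration): take \(\mathcal{O}=\mathbb{Z}\), \(V=\mathbb{Q}^{2}\), \(M=\mathbb{Z}^{2}\) and \(N=\mathbb{Z}(2,0)+\mathbb{Z}(1,3)\). The map \(\alpha\) with matrix \(\left(\begin{smallmatrix}2&1\\ 0&3\end{smallmatrix}\right)\) sends \(M\) onto \(N\), so

\[[M:N]_{\mathbb{Z}}=\det(\alpha)\,\mathbb{Z}=6\mathbb{Z},\]

which is indeed the \(\mathbb{Z}\)-ideal generated by the usual group index \([M:N]=6\).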
**Lemma 2.1**.: _Suppose we have a diagram of \(\mathcal{O}\)-lattices with exact rows_

\[0\to N_{1}\to N_{2}\to N_{3}\to 0\qquad\text{and}\qquad 0\to M_{1}\to M_{2}\to M_{3}\to 0\]

_such that \(KN_{i}=KM_{i}\) for \(i=1,2,3\). Then \([M_{2}:N_{2}]_{\mathcal{O}}=[M_{1}:N_{1}]_{\mathcal{O}}\cdot[M_{3}:N_{3}]_{ \mathcal{O}}\)._
Proof.: For \(i=1,3\), fix \(K\)-linear maps \(\alpha_{i}\colon KM_{i}\to KM_{i}\) such that \(\alpha_{i}(M_{i})=N_{i}\). Let \(\pi\colon N_{2}\to N_{3}\) denote the map in the above diagram. Since \(N_{3}\) is \(\mathcal{O}\)-projective, there exists an \(\mathcal{O}\)-section \(s\colon N_{3}\to N_{2}\) of \(\pi\). Then define \(\tilde{\alpha}_{3}\colon KM_{3}\to KM_{2}\) by \(\tilde{\alpha}_{3}=(K\otimes_{\mathcal{O}}s)\circ\alpha_{3}\). Fixing an \(\mathcal{O}\)-linear splitting \(M_{2}\cong M_{1}\oplus M_{3}\) (which exists since \(M_{3}\) is \(\mathcal{O}\)-projective) and thus a \(K\)-linear splitting \(KM_{2}\cong KM_{1}\oplus KM_{3}\), we then obtain a \(K\)-linear map \(\alpha_{2}:=(\alpha_{1}+\tilde{\alpha}_{3})\colon KM_{2}\to KM_{2}\) such that \(\alpha_{2}(M_{2})=N_{2}\) and \(\alpha_{2}(M_{1})=N_{1}\). Hence, with respect to a \(K\)-basis of \(KM_{2}\) extending a \(K\)-basis of \(KM_{1}\), the matrix representing \(\alpha_{2}\) is block upper triangular. Consequently, \(\det(\alpha_{2})=\det(\alpha_{1})\det(\alpha_{3})\), and thus we obtain the desired result.
### Duals of lattices
Let \(M\) be an \(\mathcal{O}\)-lattice. The linear dual \(M^{\vee}:=\operatorname{Hom}_{\mathcal{O}}(M,\mathcal{O})\) is also an \(\mathcal{O}\)-lattice and there is a canonical identification \((M^{\vee})^{\vee}=M\). Moreover, \((-)^{\vee}\) is inclusion-reversing. For a maximal ideal \(\mathfrak{p}\) of \(\mathcal{O}\), we have \((M_{\mathfrak{p}})^{\vee}=(M^{\vee})_{\mathfrak{p}}\). Together with the fact that determinants are invariant under transposes, this implies that if \(M\) and \(N\) are full \(\mathcal{O}\)-lattices in a finite-dimensional \(K\)-vector space \(V\) then \([M:N]_{\mathcal{O}}=[N^{\vee}:M^{\vee}]_{\mathcal{O}}\).
### Lattices over orders
Let \(A\) be a finite-dimensional \(K\)-algebra and let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\), that is, a subring of \(A\) that is also a full \(\mathcal{O}\)-lattice in \(A\). Note that \(\Lambda\) is both left and right noetherian since \(\Lambda\) is finitely generated over \(\mathcal{O}\). A left \(\Lambda\)-lattice \(X\) is a left \(\Lambda\)-module that when considered as an \(\mathcal{O}\)-module is also an \(\mathcal{O}\)-lattice; in this case, \(KX\) may be viewed as a left \(A\)-module.
Henceforth all modules (resp. lattices) shall be assumed to be left modules (resp. lattices) unless otherwise stated. Two \(\Lambda\)-lattices are said to be isomorphic if they are isomorphic as \(\Lambda\)-modules.
For a maximal ideal \(\mathfrak{p}\) of \(\mathcal{O}\), the localisation \(\Lambda_{\mathfrak{p}}\) is an \(\mathcal{O}_{\mathfrak{p}}\)-order in \(A\), and the completion \(\widehat{\Lambda}_{\mathfrak{p}}\) is a \(\widehat{\mathcal{O}}_{\mathfrak{p}}\)-order in \(\widehat{K}_{\mathfrak{p}}\otimes_{K}A\). Localising a \(\Lambda\)-lattice \(X\) at \(\mathfrak{p}\) yields a \(\Lambda_{\mathfrak{p}}\)-lattice \(X_{\mathfrak{p}}\), and completing \(X\) at \(\mathfrak{p}\) yields a \(\widehat{\Lambda}_{\mathfrak{p}}\)-lattice \(\widehat{X}_{\mathfrak{p}}\). Given \(\Lambda\)-lattices \(X\) and \(Y\), we have that \(X_{\mathfrak{p}}\cong Y_{\mathfrak{p}}\) as \(\Lambda_{\mathfrak{p}}\)-lattices if and only if \(\widehat{X}_{\mathfrak{p}}\cong\widehat{Y}_{\mathfrak{p}}\) as \(\widehat{\Lambda}_{\mathfrak{p}}\)-lattices (see [10, (18.2)]). For a positive integer \(n\), a \(\Lambda\)-lattice \(X\) is said to be locally free of rank \(n\), if for each maximal ideal \(\mathfrak{p}\) of \(\mathcal{O}\), the \(\Lambda_{\mathfrak{p}}\)-lattice \(X_{\mathfrak{p}}\) is free of rank \(n\), or equivalently, the \(\widehat{\Lambda}_{\mathfrak{p}}\)-lattice \(\widehat{X}_{\mathfrak{p}}\) is free of rank \(n\). Note that every locally free \(\Lambda\)-lattice is projective by [11, (8.19)].
### Maximal orders
Suppose that \(A\) is a separable finite-dimensional \(K\)-algebra (see [10, SS7c]). A _maximal_\(\mathcal{O}\)-order in \(A\) is an \(\mathcal{O}\)-order that is not properly contained in any other \(\mathcal{O}\)-order in \(A\). For any \(\mathcal{O}\)-order \(\Lambda\) in \(A\), there exists a (not necessarily unique) maximal \(\mathcal{O}\)-order \(\mathcal{M}\) in \(A\) containing \(\Lambda\) by [10, (10.4)]. If \(\mathcal{M}\) is a maximal \(\mathcal{O}\)-order,
\(X\) is an \(\mathcal{M}\)-lattice, and \(n\) is a positive integer, then by [11, (31.2)(iii)] we have that \(KX\) is free of rank \(n\) over \(A\) if and only if \(X\) is locally free of rank \(n\).
### Locally free class groups and cancellation properties
Suppose that \(K\) is a global field and that \(A\) is a separable finite-dimensional \(K\)-algebra. Let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\). Let \(P(\Lambda)\) be the free abelian group generated by symbols \([X]\), one for each isomorphism class of locally free \(\Lambda\)-lattices \(X\), modulo relations \([X]=[X_{1}]+[X_{2}]\) whenever \(X\cong X_{1}\oplus X_{2}\). We define the locally free class group \(\operatorname{Cl}(\Lambda)\) of \(\Lambda\) to be the subgroup of \(P(\Lambda)\) consisting of all expressions \([X]-[Y]\), with \(X,Y\) locally free and \(KX\cong KY\).
We remark that \([X]-[Y]=0\) in \(\operatorname{Cl}(\Lambda)\) if and only if \(X\) is stably isomorphic to \(Y\), that is, \(X\oplus\Lambda^{(k)}\cong Y\oplus\Lambda^{(k)}\) for some positive integer \(k\) (here \(\Lambda^{(k)}\) denotes the direct sum of \(k\) copies of \(\Lambda\)). The order \(\Lambda\) is said to have the locally free cancellation property if given any pair of locally free \(\Lambda\)-lattices \(X\) and \(Y\),
\[X\oplus\Lambda^{(k)}\cong Y\oplus\Lambda^{(k)}\text{ for some }k\in\mathbb{Z}_{ \geq 0}\implies X\cong Y.\]
Moreover, \(\Lambda\) is said to have the stably free cancellation property if this holds in the special case that \(Y\) is free. If \(A\) satisfies the so-called Eichler condition relative to \(\mathcal{O}\) then \(\Lambda\) has the locally free cancellation property; this condition is satisfied if \(A\) is commutative (see [11, SS51] for further details).
If \(n\) is a positive integer and \(Y\) is any locally free \(\Lambda\)-lattice of rank \(n\), then by [11, (31.14)] there exists a locally free \(\Lambda\)-lattice \(X\) in \(A\) such that \(Y\cong\Lambda^{(n-1)}\oplus X\). Hence every element of \(\operatorname{Cl}(\Lambda)\) is expressible in the form \([X_{1}]-[X_{2}]\), where \(X_{1}\) and \(X_{2}\) are locally free \(\Lambda\)-lattices in \(A\). Moreover, for each such pair \(X_{1},X_{2}\), there exists another such lattice \(X_{3}\) such that \(X_{2}\oplus X_{3}\cong\Lambda\oplus X_{1}\) by [11, (31.7)]. Therefore every element of \(\operatorname{Cl}(\Lambda)\) is in fact expressible in the form \([X]-[\Lambda]\) for some locally free \(\Lambda\)-lattice \(X\) in \(A\). This together with the Jordan-Zassenhaus theorem [12, (26.4)] implies that \(\operatorname{Cl}(\Lambda)\) is finite.
The following result is easily deduced from the above discussion.
**Lemma 2.2**.: _The following statements are equivalent:_
1. _every locally free_ \(\Lambda\)_-lattice is in fact free;_
2. _every locally free_ \(\Lambda\)_-lattice of rank_ \(1\) _is in fact free;_
3. \(\operatorname{Cl}(\Lambda)\) _is trivial and_ \(\Lambda\) _has the stably free cancellation property;_
4. \(\operatorname{Cl}(\Lambda)\) _is trivial and_ \(\Lambda\) _has the locally free cancellation property._
## 3. Overlattices and sublattices for overorders
Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\) and assume that \(\mathcal{O}\neq K\).
### Setup and definitions
Let \(A\) be a finite-dimensional \(K\)-algebra. Let \(\Lambda\subseteq\Gamma\) be \(\mathcal{O}\)-orders in \(A\) and let \(X\) be a \(\Lambda\)-lattice. Define
\[\Gamma X:=\{\gamma_{1}m_{1}+\gamma_{2}m_{2}+\cdots+\gamma_{r}m_{r}\mid r\in \mathbb{Z}_{\geq 0},m_{i}\in X,\gamma_{i}\in\Gamma\}\subseteq KX.\]
This is the unique \(\Gamma\)-lattice in \(KX\) containing \(X\) that is minimal with respect to inclusion.
There exists a nonzero \(r\in\mathcal{O}\) such that \(r\Gamma\subseteq\Lambda\) (see SS2.1) and so \(r\Gamma X\) is a \(\Gamma\)-lattice contained in \(X\) of finite index. Since the sum of any two \(\Gamma\)-lattices contained in \(X\) is also a \(\Gamma\)-lattice contained in \(X\), we see that there exists a unique \(\Gamma\)-lattice contained in \(X\) that is maximal with respect to inclusion, which we shall denote by \({}^{\Gamma}X\). For a right \(\Lambda\)-lattice \(X\), we define \(X^{\Gamma}\) similarly. Note that \({}^{\Gamma}\Lambda\) (resp. \(\Lambda^{\Gamma}\)) coincides with the right (resp. left) conductor of \(\Gamma\) into \(\Lambda\) (see [11, (27.2)]).
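For example (our illustration, not from the text): take \(\Gamma=\mathbb{Z}\times\mathbb{Z}\), \(\Lambda=\{(x,y)\in\Gamma:x\equiv y\bmod 2\}\cong\mathbb{Z}[C_{2}]\) and \(X=\Lambda\). Then \(\Gamma X=\Gamma\) (since \(1\in X\)), while

\[{}^{\Gamma}\!X=\{x\in\Gamma:\Gamma x\subseteq\Lambda\}=2\mathbb{Z}\times 2\mathbb{Z},\qquad[\Gamma X:{}^{\Gamma}\!X]_{\mathbb{Z}}=4\mathbb{Z},\]

which attains the bound \([\Gamma:J]_{\mathcal{O}}^{n}\) of Proposition 3.1 below with \(n=1\) and \(J=2\Gamma\).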
### Bounds on indices
The following result gives a bound on
\[[\Gamma X:{}^{\Gamma}\!X]_{\mathcal{O}}=[\Gamma X:X]_{\mathcal{O}}\cdot[X:{}^{ \Gamma}\!X]_{\mathcal{O}}\]
that only depends on \(\Gamma\) and \(\Lambda\), and not on the particular choice of lattice \(X\).
**Proposition 3.1**.: _Let \(A\) be a finite-dimensional \(K\)-algebra and let \(\Lambda\subseteq\Gamma\) be \(\mathcal{O}\)-orders in \(A\). Let \(J\) be any full two-sided ideal of \(\Gamma\) contained in \(\Lambda\). Let \(n\) be a positive integer and let \(X\) be a \(\Lambda\)-lattice such that \(\Gamma X\) is locally free of rank \(n\) over \(\Gamma\). Then \([\Gamma X:{}^{\Gamma}\!X]_{\mathcal{O}}\) divides \([\Gamma:J]_{\mathcal{O}}^{n}\)._
_Remark 3.2_.: There are many possible choices of \(J\), and the best choice will be context specific. For example, a weak but general choice is \(J=[\Gamma:\Lambda]_{\mathcal{O}}\cdot\Gamma\). Moreover, \(J\) can always be taken to be the two-sided ideal of \(\Gamma\) generated by the central conductor of \(\Gamma\) into \(\Lambda\), that is, by \(\{x\in C\mid x\Gamma\subseteq\Lambda\}\), where \(C\) denotes the centre of \(A\).
Proof of Proposition 3.1.: Since \(J\) is a left \(\Gamma\)-lattice contained in \(\Lambda\), we have that \(JX\) is a left \(\Gamma\)-lattice contained in \(X\). Hence \(JX\) is contained in \({}^{\Gamma}\!X\). The chain of containments
\[JX\subseteq{}^{\Gamma}\!X\subseteq X\subseteq\Gamma X\]
implies that \([\Gamma X:{}^{\Gamma}\!X]_{\mathcal{O}}\) divides \([\Gamma X:JX]_{\mathcal{O}}\). Thus it remains to show that \([\Gamma X:JX]_{\mathcal{O}}=[\Gamma:J]_{\mathcal{O}}^{n}\). Since indices are defined locally and \(([\Gamma X:JX]_{\mathcal{O}})_{\mathfrak{p}}=[\Gamma_{\mathfrak{p}}X_{\mathfrak{p}}:J_{\mathfrak{p}}X_{\mathfrak{p}}]_{\mathcal{O}_{\mathfrak{p}}}\) for every maximal ideal \(\mathfrak{p}\) of \(\mathcal{O}\), we can and do assume without loss of generality that \(\mathcal{O}\) is a discrete valuation ring. Then \(\Gamma X\) is free of rank \(n\) over \(\Gamma\) (a locally free lattice over a discrete valuation ring is free), so there exist \(\varepsilon_{1},\ldots,\varepsilon_{n}\in\Gamma X\) such that \(\Gamma X=\Gamma\varepsilon_{1}\oplus\cdots\oplus\Gamma\varepsilon_{n}\). Since \(J\) is a right \(\Gamma\)-module, we have
\[JX=J\Gamma X=J(\Gamma\varepsilon_{1}\oplus\cdots\oplus\Gamma\varepsilon_{n}) =J\varepsilon_{1}\oplus\cdots\oplus J\varepsilon_{n}.\]
Therefore \([\Gamma X:JX]_{\mathcal{O}}=[\Gamma\varepsilon_{1}\oplus\cdots\oplus\Gamma \varepsilon_{n}:J\varepsilon_{1}\oplus\cdots\oplus J\varepsilon_{n}]_{ \mathcal{O}}=[\Gamma:J]_{\mathcal{O}}^{n}\).
**Corollary 3.3**.: _Let \(A\) be a separable finite-dimensional \(K\)-algebra and let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\). Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(A\) containing \(\Lambda\) and let \(J\) be any full two-sided ideal of \(\mathcal{M}\) contained in \(\Lambda\). Let \(n\) be a positive integer and let \(X\) be a \(\Lambda\)-lattice such that \(KX\) is free of rank \(n\) over \(A\). Then \([\mathcal{M}X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}\) divides \([\mathcal{M}:J]_{\mathcal{O}}^{n}\)._
Proof.: Since \(KX\) is free of rank \(n\) over \(A\), we have that \(\mathcal{M}X\) is locally free of rank \(n\) over \(\mathcal{M}\) (see §2.5), and so the desired result follows directly from Proposition 3.1.
_Remark 3.4_.: In Proposition 3.1 and Corollary 3.3, the order \(\Lambda\) can be replaced by the so-called associated order \(\mathcal{A}(X)=\{\alpha\in A\mid\alpha X\subseteq X\}\). Thus if the containment \(\Lambda\subseteq\mathcal{A}(X)\) is strict, then it may be possible to make a better choice of \(J\). For example, if \(\mathcal{M}\) is a maximal order containing \(\Lambda\) and we take \(X=\mathcal{M}\), then \(\mathcal{A}(X)=\mathcal{M}\) and so we can take \(J=\mathcal{M}\), which is consistent with the fact that \(\mathcal{M}X=X={}^{\mathcal{M}}\!X\) in this situation. Of course, the disadvantage of this approach is that \(\mathcal{A}(X)\) depends on \(X\).
### Duals and overorders
For an \(\mathcal{O}\)-order \(\Lambda\) in a finite-dimensional \(K\)-algebra and any left (resp. right) \(\Lambda\)-lattice \(X\), the dual \(X^{\vee}=\operatorname{Hom}_{\mathcal{O}}(X,\mathcal{O})\) has the structure of a right (resp. left) \(\Lambda\)-lattice, and there is a canonical identification \((X^{\vee})^{\vee}=X\).
**Lemma 3.5**.: _Let \(\Lambda\subseteq\Gamma\) be \(\mathcal{O}\)-orders in a finite-dimensional \(K\)-algebra._
* _If_ \(X\) _is a left_ \(\Lambda\)_-lattice then we have an equality of right_ \(\Gamma\)_-lattices_ \((\Gamma X)^{\vee}=(X^{\vee})^{\Gamma}\) _and an equality of indices_ \([\Gamma X:X]_{\mathcal{O}}=[X^{\vee}:(X^{\vee})^{\Gamma}]_{\mathcal{O}}\)_._
* _If_ \(X\) _is a right_ \(\Lambda\)_-lattice then we have an equality of left_ \(\Gamma\)_-lattices_ \((X\Gamma)^{\vee}={}^{\Gamma}(X^{\vee})\) _and an equality of indices_ \([X\Gamma:X]_{\mathcal{O}}=[X^{\vee}:{}^{\Gamma}(X^{\vee})]_{\mathcal{O}}\)_._
Proof.: We only prove part (i). Since \((-)^{\vee}\) reverses inclusions, \((\Gamma X)^{\vee}\) is a right \(\Gamma\)-lattice contained in \(X^{\vee}\). Hence \((\Gamma X)^{\vee}\) is contained in \((X^{\vee})^{\Gamma}\) by definition of the latter. Dualising, we also have that
\[\Gamma X=((\Gamma X)^{\vee})^{\vee}\supseteq((X^{\vee})^{\Gamma})^{\vee}. \tag{3.1}\]
Since \(((X^{\vee})^{\Gamma})^{\vee}\) is itself a left \(\Gamma\)-lattice containing \(X\), this forces equality in (3.1) and hence \((\Gamma X)^{\vee}=(X^{\vee})^{\Gamma}\) as desired. Finally, since \((-)^{\vee}\) preserves indices (see §2.3) we have that \([\Gamma X:X]_{\mathcal{O}}=[X^{\vee}:(\Gamma X)^{\vee}]_{\mathcal{O}}=[X^{\vee}:(X^{\vee})^{\Gamma}]_{\mathcal{O}}\).
### The commutative separable setting
In the setting of commutative separable algebras, the following result of Fröhlich is a refinement of Corollary 3.3.
**Theorem 3.6** (Fröhlich).: _Let \(A\) be a commutative separable finite-dimensional \(K\)-algebra and let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\). Let \(\mathcal{M}\) be the unique maximal \(\mathcal{O}\)-order in \(A\). Let \(n\) be a positive integer and let \(X\) be a \(\Lambda\)-lattice such that \(KX\) is free of rank \(n\) over \(A\). Then both \([\mathcal{M}X:X]_{\mathcal{O}}\) and \([X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}\) divide \([\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\)._
Proof.: The claim that \([\mathcal{M}X:X]_{\mathcal{O}}\) divides \([\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\) is contained in [10, Theorem 4].
It remains to show that \([X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}\) divides \([\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\). Since \(A\) is separable there is an isomorphism of (right) \(A\)-modules \(A\cong\operatorname{Hom}_{K}(A,K)\) induced by the pairing of [1, (7.41)]. Thus there are \(A\)-module isomorphisms
\[K(X^{\vee})\cong\operatorname{Hom}_{K}(KX,K)\cong\operatorname{Hom}_{K}(A^{( n)},K)\cong\operatorname{Hom}_{K}(A,K)^{(n)}\cong A^{(n)}.\]
Lemma 3.5(ii) implies that \([X:{}^{\mathcal{M}}\!X]_{\mathcal{O}}=[X^{\vee}\mathcal{M}:X^{\vee}]_{ \mathcal{O}}=[\mathcal{M}X^{\vee}:X^{\vee}]_{\mathcal{O}}\), where in the last equality, we consider \(X^{\vee}\) as a left \(\mathcal{M}\)-lattice, as we may since \(\mathcal{M}\) is commutative. Moreover, since \(K(X^{\vee})\) is free of rank \(n\) over \(A\), the first claim and the appropriate substitution imply that \([\mathcal{M}X^{\vee}:X^{\vee}]_{\mathcal{O}}\) divides \([\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\).
_Remark 3.7_.: [1, §7, Example 1] shows that the result analogous to Theorem 3.6 does not always hold in the noncommutative separable setting.
## 4. Free sublattices of bounded index
Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\). Assume that \(K\) is a global field and that \(\mathcal{O}\neq K\). Let \(A\) be a separable finite-dimensional \(K\)-algebra.
### Free sublattices of locally free lattices
The following result gives a bound on the index of a free sublattice in a locally free lattice.
**Proposition 4.1**.: _Let \(\Gamma\) be an \(\mathcal{O}\)-order in \(A\) and let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}\). Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: for every locally free \(\Gamma\)-lattice \(X\), there exists a free \(\Gamma\)-sublattice \(Y\) of \(X\) such that \([X:Y]_{\mathcal{O}}\) divides \(\mathcal{I}\)._
Proof.: By [1, (31.14)], for a positive integer \(n\) and a locally free \(\Gamma\)-lattice \(X\) of rank \(n\), there exists a locally free \(\Gamma\)-lattice \(W\) of rank \(1\) such that \(X\cong\Gamma^{(n-1)}\oplus W\). Thus the problem reduces to the case of locally free \(\Gamma\)-lattices \(W\) of rank \(1\). The number of isomorphism classes of such lattices is finite by the Jordan-Zassenhaus theorem [14, (26.4)]. For each such class, choose a representative \(W\) and note that by Roiter's lemma [1, (31.6)] there exists an inclusion \(\iota_{W}:W\hookrightarrow\Gamma\) such that \([\Gamma:\iota_{W}(W)]_{\mathcal{O}}\) is coprime to \(\mathcal{K}\). Now take \(\mathcal{I}\) to be any common multiple of the (finite number of) ideals \([\Gamma:\iota_{W}(W)]_{\mathcal{O}}\) as \(W\) varies.
**Corollary 4.2**.: _Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(A\) and let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}\). Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any \(\mathcal{M}\)-lattice \(X\) such that \(KX\) is free as an \(A\)-module, there exists a free \(\mathcal{M}\)-sublattice \(Y\) of \(X\) such that \([X:Y]_{\mathcal{O}}\) divides \(\mathcal{I}\)._
Proof.: Let \(X\) be an \(\mathcal{M}\)-lattice. Then \(KX\) is free as an \(A\)-module if and only if \(\mathcal{M}X\) is locally free over \(\mathcal{M}\) (see §2.5). Hence the result follows from Proposition 4.1.
_Remark 4.3_.: If \(\Gamma\) (resp. \(\mathcal{M}\)) satisfies the equivalent conditions of Lemma 2.2 then it is clear that we can take \(\mathcal{I}=\mathcal{O}\) in Proposition 4.1 (resp. Corollary 4.2).
Given a finite set \(S\) of maximal ideals of \(\mathcal{O}\), let \(\mathcal{O}_{S}=\bigcap_{\mathfrak{p}\notin S}\mathcal{O}_{\mathfrak{p}}\), where \(\mathfrak{p}\) ranges over all maximal ideals of \(\mathcal{O}\) not in \(S\). We include the following neat result for general interest.
**Corollary 4.4**.: _Let \(\Gamma\) be an \(\mathcal{O}\)-order in \(A\) and let \(T\) be a finite set of maximal ideals of \(\mathcal{O}\). Then there exists a finite set \(S\) of maximal ideals of \(\mathcal{O}\) such that \(S\cap T=\emptyset\) and \(\mathcal{O}_{S}\otimes_{\mathcal{O}}\Gamma\) satisfies the equivalent conditions of Lemma 2.2._
Proof.: If \(\mathcal{I}\) is the ideal given by Proposition 4.1 and \(\mathcal{K}\) the product of the maximal ideals in \(T\), then we can take \(S\) to be the set of maximal ideals dividing \(\mathcal{I}\).
### The main theorem
The main theorem of the present article is as follows.
**Theorem 4.5**.: _Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\). Assume that \(K\) is a global field and that \(\mathcal{O}\neq K\). Let \(A\) be a separable finite-dimensional \(K\)-algebra and let \(\Lambda\) be an \(\mathcal{O}\)-order in \(A\). Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(A\) containing \(\Lambda\) and let \(J\) be any full two-sided ideal of \(\mathcal{M}\) contained in \(\Lambda\). Let \(n\) be a positive integer and let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}\). Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any \(\Lambda\)-lattice \(X\) such that \(KX\) is free of rank \(n\) over \(A\), there exists a free \(\Lambda\)-sublattice \(Z\) of \(X\) such that \([X:Z]_{\mathcal{O}}\) divides \(\mathcal{I}\cdot[\mathcal{M}:\Lambda]_{\mathcal{O}}^{2n}\) if \(A\) is commutative, and \(\mathcal{I}\cdot[\mathcal{M}:J]_{\mathcal{O}}^{n}\cdot[\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\) otherwise. Moreover, if \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 then we can take \(\mathcal{I}=\mathcal{O}\)._
Proof.: Let \(\mathcal{I}\) be the ideal of \(\mathcal{O}\) given by Corollary 4.2. If \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 then we can take \(\mathcal{I}=\mathcal{O}\) by Remark 4.3. Then there exists a free \(\mathcal{M}\)-sublattice \(Y\) of \({}^{\mathcal{M}}X\) such that \([{}^{\mathcal{M}}X:Y]_{\mathcal{O}}\) divides \(\mathcal{I}\). Let \(\varepsilon_{1},\ldots,\varepsilon_{n}\) be a free \(\mathcal{M}\)-basis of \(Y\), so that \(Y=\mathcal{M}\varepsilon_{1}\oplus\cdots\oplus\mathcal{M}\varepsilon_{n}\), and let \(Z=\Lambda\varepsilon_{1}\oplus\cdots\oplus\Lambda\varepsilon_{n}\). Then \(Z\subseteq Y\subseteq{}^{\mathcal{M}}X\subseteq X\) and \([X:Z]_{\mathcal{O}}=[X:{}^{\mathcal{M}}X]_{\mathcal{O}}\cdot[{}^{\mathcal{M }}X:Y]_{\mathcal{O}}\cdot[Y:Z]_{\mathcal{O}}\). Note that \([Y:Z]_{\mathcal{O}}=[\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\). Moreover, Corollary 3.3 implies that \([X:{}^{\mathcal{M}}X]_{\mathcal{O}}\) divides \([\mathcal{M}:J]_{\mathcal{O}}^{n}\), and under the assumption that \(A\) is commutative, Theorem 3.6 implies that in fact \([X:{}^{\mathcal{M}}X]_{\mathcal{O}}\) divides \([\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\). Therefore we obtain the desired result.
_Remark 4.6_.: The statement of Theorem 4.5 extends to \(\Lambda\)-lattices \(X\) admitting a surjection \(A^{(n)}\twoheadrightarrow KX\) of \(A\)-modules. More specifically, the ideal \(\mathcal{I}\) has the following property: given any \(\Lambda\)-lattice \(X\) admitting a surjection \(A^{(n)}\twoheadrightarrow KX\) of \(A\)-modules, there exists a \(\Lambda\)-sublattice \(Z\) of \(X\) generated by at most \(n\) elements such that \([X:Z]_{\mathcal{O}}\) divides \(\mathcal{I}\cdot[\mathcal{M}:J]_{\mathcal{O}}^{n}\cdot[\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\). This can be seen as follows. There exists an \(A\)-module \(B\) such that \(KX\oplus B\cong A^{(n)}\). Thus given any full \(\Lambda\)-lattice \(W\) in \(B\), the \(\Lambda\)-lattice \(X\oplus W\) satisfies the conditions of Theorem 4.5 and so admits a free \(\Lambda\)-sublattice \(Z^{\prime}\) of index dividing \(\mathcal{I}\cdot[\mathcal{M}:J]_{\mathcal{O}}^{n}\cdot[\mathcal{M}:\Lambda]_{\mathcal{O}}^{n}\), and the image of \(Z^{\prime}\) under the projection \(X\oplus W\twoheadrightarrow X\) is the desired sublattice \(Z\). Note that one should expect stronger bounds if one specifies the isomorphism class of \(KX\); one such situation is considered in §6.
## 5. Group rings
### Conductors of group rings
The extra structure of group rings is exploited in the following result, which will allow us to make an optimal choice of the two-sided ideal \(J\) that appears in the statement of Theorem 4.5.
**Proposition 5.1**.: _Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\neq\mathcal{O}\). Let \(G\) be a finite group and let \(\Gamma\) be an \(\mathcal{O}\)-order in \(K[G]\) containing \(\mathcal{O}[G]\). Then \(\mathcal{O}[G]^{\Gamma}={}^{\Gamma}\mathcal{O}[G]\) and we have_
\[[\Gamma:\mathcal{O}[G]]_{\mathcal{O}}=[\mathcal{O}[G]:{}^{\Gamma}\mathcal{O} [G]]_{\mathcal{O}}=[\Gamma:{}^{\Gamma}\mathcal{O}[G]]_{\mathcal{O}}^{\frac{1} {2}}. \tag{5.1}\]
_Moreover, if \(|G|\) is invertible in \(K\) and \(\Gamma=\mathcal{M}\) is a maximal \(\mathcal{O}\)-order, this index is independent of the choice of \(\mathcal{M}\)._
_Remark 5.2_.: In the case that \(|G|\) is invertible in \(K\) and \(\Gamma=\mathcal{M}\) is a maximal \(\mathcal{O}\)-order, Jacobinski has given an explicit description of \(\mathcal{O}[G]^{\Gamma}={}^{\Gamma}\mathcal{O}[G]\) (see Theorem 5.5) and this leads to an explicit formula for the index of (5.1) (see Corollary 5.6).
Proof of Proposition 5.1.: Given an \(\mathcal{O}\)-order \(\Lambda\) in \(K[G]\), let \(\Lambda^{\mathrm{op}}\) denote the \(\mathcal{O}\)-order defined by the image of \(\Lambda\) under the involution on \(K[G]\) induced by \(g\mapsto g^{-1}\). Any left (resp. right) \(\Lambda\)-lattice carries a canonical structure of a right (resp. left) \(\Lambda^{\mathrm{op}}\)-lattice with \(g^{-1}\) acting as \(g\) did previously. Given a left (resp. right) \(\Lambda\)-lattice \(X\), we denote by \(X^{*}\) the dual lattice \(X^{\vee}=\mathrm{Hom}_{\mathcal{O}}(X,\mathcal{O})\) considered as a left (resp. right) \(\Lambda^{\mathrm{op}}\)-lattice. Note that for a left \(\Lambda\)-lattice \(X\), we have \([(X^{\vee})\Lambda:X^{\vee}]_{\mathcal{O}}=[\Lambda X^{*}:X^{*}]_{\mathcal{O}}\), etc.
Now observe that \(\Gamma^{\mathrm{op}}\) is an \(\mathcal{O}\)-order containing \(\mathcal{O}[G]=\mathcal{O}[G]^{\mathrm{op}}\). Hence \(\Gamma^{\mathrm{op}}\mathcal{O}[G]=\Gamma^{\mathrm{op}}=\mathcal{O}[G]\Gamma ^{\mathrm{op}}\). We also have that
\[(\Gamma^{\mathrm{op}}\mathcal{O}[G])^{\vee} =(\mathcal{O}[G]^{\vee})^{\Gamma^{\mathrm{op}}}={}^{\Gamma}( \mathcal{O}[G]^{*}),\] \[(\mathcal{O}[G]\Gamma^{\mathrm{op}})^{\vee} ={}^{\Gamma^{\mathrm{op}}}(\mathcal{O}[G]^{\vee})=(\mathcal{O}[G] ^{*})^{\Gamma},\]
where in each case the first equality follows from Lemma 3.5 and the second equality follows from the definition of \((-)^{*}\). Therefore \({}^{\Gamma}(\mathcal{O}[G]^{*})=(\mathcal{O}[G]^{*})^{\Gamma}\). Furthermore, there is an \(\mathcal{O}[G]=\mathcal{O}[G]^{\mathrm{op}}\)-isomorphism \(\mathcal{O}[G]^{*}\stackrel{{\sim}}{{\rightarrow}}\mathcal{O}[G]\) given by \(\mathbb{1}_{g}\mapsto g\), where \(\mathbb{1}_{g}\) denotes the element of \(\mathrm{Hom}_{\mathcal{O}}(\mathcal{O}[G],\mathcal{O})\) defined by \(h\mapsto 0\) for \(h\neq g\) and \(g\mapsto 1\). Hence we conclude that \({}^{\Gamma}\mathcal{O}[G]=\mathcal{O}[G]^{\Gamma}\).
We have that \([\Gamma:\mathcal{O}[G]]_{\mathcal{O}}=[\Gamma^{\mathrm{op}}:\mathcal{O}[G]^{ \mathrm{op}}]_{\mathcal{O}}\) since \((-)^{\mathrm{op}}\) is an \(\mathcal{O}\)-linear isomorphism. As \(\mathcal{O}[G]=\mathcal{O}[G]^{\mathrm{op}}\), we then have
\[[\Gamma:\mathcal{O}[G]]_{\mathcal{O}} =[\Gamma^{\mathrm{op}}:\mathcal{O}[G]]_{\mathcal{O}}\] \[=[(\mathcal{O}[G])^{\vee}:((\mathcal{O}[G])^{\vee})^{\Gamma^{ \mathrm{op}}}]_{\mathcal{O}}\] \[=[\mathcal{O}[G]^{*}:{}^{\Gamma}((\mathcal{O}[G])^{*})]_{\mathcal{O}}\] \[=[\mathcal{O}[G]:{}^{\Gamma}\mathcal{O}[G]]_{\mathcal{O}},\]
where the second equality follows from Lemma 3.5(i). Since
\[[\Gamma:{}^{\Gamma}\mathcal{O}[G]]_{\mathcal{O}}=[\Gamma:\mathcal{O}[G]]_{ \mathcal{O}}\cdot[\mathcal{O}[G]:{}^{\Gamma}\mathcal{O}[G]]_{\mathcal{O}},\]
the second equality of (5.1) follows.
For the last statement, note that the hypotheses ensure that \(K[G]\) is separable and hence maximal orders exist (see SS2.5). For any \(\mathcal{O}\)-order \(\Lambda\) in \(K[G]\), let \(\mathrm{Disc}(\Lambda)\) denote the discriminant of \(\Lambda\) with respect to the reduced trace map \(\mathrm{tr}:K[G]\to K\). Then \(\mathrm{Disc}(\mathcal{M})\) is independent of the choice of maximal \(\mathcal{O}\)-order \(\mathcal{M}\) of \(K[G]\) by [11, (25.3)]. Moreover,
by [1, (26.3)(iii)] we have \(\operatorname{Disc}(\mathcal{O}[G])=[\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}^{2} \cdot\operatorname{Disc}(\mathcal{M})\), and so \([\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}\) is independent of the choice of \(\mathcal{M}\).
### The main theorem for group rings
We now obtain a more precise version of Theorem 4.5 for lattices over group rings.
**Theorem 5.3**.: _Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\). Assume that \(K\) is a global field and that \(\mathcal{O}\neq K\). Let \(G\) be a finite group such that \(|G|\) is invertible in \(K\). Set \(s=2\) if \(G\) is abelian and \(s=3\) otherwise. Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(K[G]\) containing \(\mathcal{O}[G]\). Let \(n\) be a positive integer and let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}\). Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any \(\mathcal{O}[G]\)-lattice \(X\) such that \(KX\) is free of rank \(n\) over \(K[G]\), there exists a free \(\mathcal{O}[G]\)-sublattice \(Z\) of \(X\) such that \([X:Z]_{\mathcal{O}}\) divides \(\mathcal{I}\cdot[\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}^{sn}\). Moreover, if \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 then we can take \(\mathcal{I}=\mathcal{O}\)._
_Remark 5.4_.: In the case \(\mathcal{O}=\mathbb{Z}\), explicit conditions on \(G\) under which \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 are given in Proposition 5.11 and Corollary 5.13.
Proof of Theorem 5.3.: We apply Theorem 4.5 with \(\Lambda=\mathcal{O}[G]\). If \(G\) is abelian then the desired result follows directly. Otherwise, by Proposition 5.1 we can and do take \(J=\mathcal{O}[G]^{\mathcal{M}}={}^{\mathcal{M}}\mathcal{O}[G]\), and we have \([\mathcal{M}:J]_{\mathcal{O}}^{n}\cdot[\mathcal{M}:\Lambda]_{\mathcal{O}}^{n} =[\mathcal{M}:\Lambda]_{\mathcal{O}}^{3n}\).
### Jacobinski's formula and the index of a group ring in a maximal order
For further details on the following setup and notation, we refer the reader to [1, §27] and the references therein.
Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\neq\mathcal{O}\). Let \(G\) be a finite group such that \(|G|\) is invertible in \(K\). Then \(K[G]\) is a separable finite-dimensional \(K\)-algebra. We may write \(K[G]=A_{1}\oplus\cdots\oplus A_{t}\), where each \(A_{i}\) is a simple \(K\)-algebra. For each \(i\), let \(K_{i}\) denote the centre of \(A_{i}\). Then each \(K_{i}\) is a finite separable field extension of \(K\), and there exist integers \(n_{1},\ldots,n_{t}\) such that \(\dim_{K_{i}}A_{i}=n_{i}^{2}\) for each \(i\). Let \(\operatorname{tr}_{i}\) denote the reduced trace from \(A_{i}\) to \(K\) (see [1, §7D]). Then \(\operatorname{tr}_{i}=T_{K_{i}/K}\circ\operatorname{tr}_{A_{i}/K_{i}}\), where \(T_{K_{i}/K}\) is the ordinary trace from \(K_{i}\) to \(K\), and \(\operatorname{tr}_{A_{i}/K_{i}}\) is the reduced trace from \(A_{i}\) to \(K_{i}\).
Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order such that \(\mathcal{O}[G]\subseteq\mathcal{M}\subseteq K[G]\). For each \(i\), let \(\mathcal{M}_{i}=\mathcal{M}\cap A_{i}\), let \(\mathcal{O}_{i}\) denote the integral closure of \(\mathcal{O}\) in \(K_{i}\), and define the _inverse different_ of \(\mathcal{M}_{i}\) with respect to \(\operatorname{tr}_{i}\) to be \(\mathcal{D}_{i}^{-1}=\{x\in A_{i}:\operatorname{tr}_{i}(x\mathcal{M}_{i}) \subseteq\mathcal{O}\}\). Then \(\mathcal{M}=\mathcal{M}_{1}\oplus\cdots\oplus\mathcal{M}_{t}\) and each \(\mathcal{D}_{i}^{-1}\) is a two-sided \(\mathcal{M}_{i}\)-lattice containing \(\mathcal{M}_{i}\).
**Theorem 5.5** (Jacobinski [11]).: _In the notation above, we have_
\[{}^{\mathcal{M}}\mathcal{O}[G]=\mathcal{O}[G]^{\mathcal{M}}=\bigoplus_{i=1}^{t }|G|n_{i}^{-1}\mathcal{D}_{i}^{-1}.\]
**Corollary 5.6**.: _In the notation above, we have_
\[[\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}=[\mathcal{O}[G]:{}^{\mathcal{M}} \mathcal{O}[G]]_{\mathcal{O}}=\left(|G|^{|G|}\prod_{i=1}^{t}\left(n_{i}^{[K_{i} :K]n_{i}^{2}}[\mathcal{D}_{i}^{-1}:\mathcal{M}_{i}]_{\mathcal{O}}\right)^{-1} \right)^{\frac{1}{2}},\]
_and this index is independent of the choice of \(\mathcal{M}\)._
Proof.: By Theorem 5.5 we have
\[[\mathcal{M}:{}^{\mathcal{M}}\mathcal{O}[G]]_{\mathcal{O}} =\prod_{i=1}^{t}[\mathcal{M}_{i}:(|G|n_{i}^{-1}\mathcal{D}_{i}^{-1} )]_{\mathcal{O}}\] \[=\prod_{i=1}^{t}(|G|n_{i}^{-1})^{\dim_{K}A_{i}}[\mathcal{M}_{i}: \mathcal{D}_{i}^{-1}]_{\mathcal{O}}\] \[=|G|^{|G|}\prod_{i=1}^{t}(n_{i}^{[K_{i}:K]n_{i}^{2}}[\mathcal{D}_ {i}^{-1}:\mathcal{M}_{i}]_{\mathcal{O}})^{-1},\]
where in the last equality we have used that \(\dim_{K}A_{i}=[K_{i}:K]n_{i}^{2}\) for each \(i\) and that \(\sum_{i=1}^{t}\dim_{K}A_{i}=|G|\). The desired result now follows from Proposition 5.1.
**Corollary 5.7**.: _In the notation above, if \(A_{i}\cong\operatorname{Mat}_{n_{i}}(K)\) for \(i=1,\dots,t\), then_
\[[\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}=[\mathcal{O}[G]:{}^{\mathcal{M}} \mathcal{O}[G]]_{\mathcal{O}}=\left(|G|^{|G|}\prod_{i=1}^{t}n_{i}^{-n_{i}^{2}} \right)^{\frac{1}{2}}.\]
Proof.: The hypotheses imply that \(K_{i}=K\) and \(\mathcal{D}_{i}^{-1}=\mathcal{M}_{i}\) for \(i=1,\dots,t\). Thus the desired result follows from Corollary 5.6.
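For a concrete illustration, note that every complex irreducible representation of a symmetric group is realisable over \(\mathbb{Q}\), so that \(\mathbb{Q}[S_{3}]\cong\mathbb{Q}\times\mathbb{Q}\times\operatorname{Mat}_{2}(\mathbb{Q})\) and Corollary 5.7 gives \([\mathcal{M}:\mathbb{Z}[S_{3}]]=(6^{6}\cdot 2^{-4})^{\frac{1}{2}}=54\). The following is a minimal numerical sketch of this evaluation; the helper name `index_split` and the test case are illustrative choices of ours, not notation from the text.

```python
# A minimal sketch of the index formula of Corollary 5.7, applicable
# when Q[G] is a product of matrix algebras Mat_{n_i}(Q); the function
# name and the S_3 test case are illustrative assumptions.
from fractions import Fraction
from math import isqrt

def index_split(order, degrees):
    """Evaluate (|G|^{|G|} * prod_i n_i^{-n_i^2})^{1/2} exactly."""
    prod = Fraction(order) ** order
    for n in degrees:
        prod *= Fraction(n) ** (-(n * n))
    assert prod.denominator == 1   # the index is an integral ideal of Z
    root = isqrt(prod.numerator)
    assert root * root == prod.numerator
    return root

# G = S_3 has matrix factors of degrees 1, 1, 2, so the index is
# (6^6 / 2^4)^(1/2) = 2916^(1/2) = 54.
assert index_split(6, [1, 1, 2]) == 54
```

In particular, for \(G=S_{3}\) and \(n=1\), Theorem 5.15 below bounds the index of a free \(\mathbb{Z}[S_{3}]\)-sublattice by \(54^{3}=157464\).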
**Corollary 5.8**.: _In the notation above, if \(G\) is abelian then_
\[[\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}=[\mathcal{O}[G]:{}^{\mathcal{M}} \mathcal{O}[G]]_{\mathcal{O}}=\left(|G|^{|G|}\prod_{i=1}^{t}(\Delta_{K_{i}/K})^ {-1}\right)^{\frac{1}{2}},\]
_where \(\Delta_{K_{i}/K}\) denotes the discriminant of \(\mathcal{O}_{i}\) with respect to \(\mathcal{O}\)._
Proof.: Since \(K[G]\) is commutative, for every \(i\) we have \(n_{i}=1\), \(A_{i}=K_{i}\), and \(\mathcal{M}_{i}=\mathcal{O}_{i}\). Thus the reduced trace \(\operatorname{tr}_{i}\) coincides with the ordinary trace \(T_{K_{i}/K}\) and so
\[\mathcal{D}_{i}^{-1}=\{x\in K_{i}:T_{K_{i}/K}(x\mathcal{O}_{i})\subseteq \mathcal{O}\}\]
is the usual inverse different of \(\mathcal{O}_{i}\) with respect to \(\mathcal{O}\). Moreover,
\[[\mathcal{D}_{i}^{-1}:\mathcal{M}_{i}]_{\mathcal{O}}=[\mathcal{D}_{i}^{-1}: \mathcal{O}_{i}]_{\mathcal{O}}=[\mathcal{O}_{i}:\mathcal{D}_{i}]_{\mathcal{O} }=\operatorname{Norm}_{K_{i}/K}(\mathcal{D}_{i})=\Delta_{K_{i}/K},\]
where for the third equality, it suffices to first localise and then consider the determinant of the \(K\)-linear endomorphism of \(K_{i}\) given by multiplication by a generator of \(\mathcal{D}_{i}\). Therefore the desired result now follows from Corollary 5.6.
We now make the last result completely explicit in the case \(K=\mathbb{Q}\).
**Proposition 5.9**.: _Let \(G\) be a finite abelian group and let \(e\) denote its exponent. Let \(\mathcal{M}\) be the unique maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\). Then_
\[[\mathcal{M}:\mathbb{Z}[G]]=\bigg{(}|G|^{|G|}\prod_{d|e}\Big{(}d^{-\phi(d)} \prod_{p|d}p^{\frac{\phi(d)}{p-1}}\Big{)}^{t_{d}}\bigg{)}^{\frac{1}{2}},\]
_where \(t_{d}\) denotes the number of cyclic subgroups of \(G\) of order \(d\) and \(\phi(-)\) denotes the Euler totient function._
Proof.: By [1, Theorem 2], we have \(\mathbb{Q}[G]\cong\prod_{d|e}\mathbb{Q}(\zeta_{d})^{(t_{d})}\), where \(\mathbb{Q}(\zeta_{d})^{(t_{d})}\) denotes the direct product of \(t_{d}\) copies of \(\mathbb{Q}(\zeta_{d})\) (see also [10]). Moreover,
\[\Delta_{\mathbb{Q}(\zeta_{d})/\mathbb{Q}}^{-1}=d^{-\phi(d)}\prod_{p|d}p^{\frac {\phi(d)}{p-1}}\]
by [22, Proposition 2.7]. Therefore the desired result now follows from a straightforward calculation and Corollary 5.8 in the case \(K=\mathbb{Q}\).
The following special case is equivalent to [23, Lemma 5.2], which was proven using different methods.
**Corollary 5.10**.: _Let \(p\) be a prime, let \(k\) be a positive integer, and let \(G\) be the cyclic group of order \(p^{k}\). Let \(\mathcal{M}\) be the unique maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\). Then_
\[[\mathcal{M}:\mathbb{Z}[G]]=p^{1+p+\cdots+p^{k-1}}.\]
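The abelian formula is easy to evaluate exactly. The following minimal Python sketch implements Proposition 5.9 in the cyclic case \(G=C_{m}\) (where \(t_{d}=1\) for every \(d\mid m\)) and checks it against Corollary 5.10 for small prime powers; the helper names and test values are illustrative assumptions of ours.

```python
# A minimal sketch of the index formula of Proposition 5.9 for cyclic
# groups G = C_m, where t_d = 1 for every divisor d of m.
from fractions import Fraction
from math import isqrt

def prime_divisors(n):
    """Prime divisors of n, by trial division."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def phi(n):
    """Euler totient via the product formula."""
    out = n
    for p in prime_divisors(n):
        out = out // p * (p - 1)
    return out

def index_cyclic(m):
    """[M : Z[C_m]] as in Proposition 5.9, specialised to t_d = 1."""
    prod = Fraction(m) ** m
    for d in range(1, m + 1):
        if m % d:
            continue
        term = Fraction(d) ** (-phi(d))
        for p in prime_divisors(d):
            term *= Fraction(p) ** (phi(d) // (p - 1))
        prod *= term
    assert prod.denominator == 1   # the index is an integer
    root = isqrt(prod.numerator)
    assert root * root == prod.numerator
    return root

# Check against the closed form of Corollary 5.10: for G = C_{p^k} the
# index is p^(1 + p + ... + p^(k-1)).
for p, k in [(2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (5, 1), (7, 1)]:
    assert index_cyclic(p ** k) == p ** sum(p ** j for j in range(k))
```

Note that for \(G\) cyclic of prime order \(p\) the index is \(p\); combined with Lemma 6.1 below, this recovers the fact that \(\mathbb{Z}[G]/(\mathrm{Tr}_{G})\cong\mathbb{Z}[\zeta_{p}]\) is already a maximal order, which is used in the proofs of Theorems 8.10 and 9.4.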
### The case of group rings over the integers
For a finite group \(G\), let \(\operatorname{Irr}_{\mathbb{C}}(G)\) denote the set of complex irreducible characters of \(G\). For each \(\chi\in\operatorname{Irr}_{\mathbb{C}}(G)\), let \(\mathbb{Q}(\chi)\) denote the field generated by the values of \(\chi\), and let \(C(\chi)\) be the ideal class group of its ring of integers.
The following result is well-known to experts, but the authors were unable to locate it in this precise form in the literature.
**Proposition 5.11**.: _Let \(G\) be a finite group and let \(\mathcal{M}\) be a maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\). Suppose that no factor of \(\mathbb{R}[G]\) is isomorphic to the quaternions \(\mathbb{H}\). Then \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 if and only if \(C(\chi)\) is trivial for each \(\chi\in\operatorname{Irr}_{\mathbb{C}}(G)\)._
Proof.: The hypothesis on \(\mathbb{R}[G]\) ensures that \(\mathbb{Q}[G]\) satisfies the Eichler condition relative to \(\mathbb{Z}\) (see [11, §51A]). Hence Jacobinski's cancellation theorem [11, (51.24)] implies that every \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\), in particular \(\mathcal{M}\), has the locally free cancellation property. Now write \(\mathbb{Q}[G]=A_{1}\oplus\cdots\oplus A_{t}\), where each \(A_{i}\) is a simple \(\mathbb{Q}\)-algebra. For each \(i\), let \(K_{i}\) denote the centre of \(A_{i}\) and let \(\mathcal{M}_{i}=A_{i}\cap\mathcal{M}\). Then \(\mathcal{M}=\mathcal{M}_{1}\oplus\cdots\oplus\mathcal{M}_{t}\) and \(\operatorname{Cl}(\mathcal{M})\cong\operatorname{Cl}(\mathcal{M}_{1})\oplus\cdots\oplus\operatorname{Cl}(\mathcal{M}_{t})\). Each \(K_{i}\) is isomorphic to \(\mathbb{Q}(\chi)\) for some \(\chi\in\operatorname{Irr}_{\mathbb{C}}(G)\) and by [11, (49.32)] \(\operatorname{Cl}(\mathcal{M}_{i})\) is isomorphic to \(C(\chi)\). Therefore we obtain the 'if' direction of the desired equivalence. The 'only if' direction now follows from the fact that \(\{K_{i}:1\leq i\leq t\}=\{\mathbb{Q}(\chi):\chi\in\operatorname{Irr}_{\mathbb{C}}(G)\}\).
_Remark 5.12_.: The hypothesis in Proposition 5.11 that no factor of \(\mathbb{R}[G]\) is isomorphic to the quaternions \(\mathbb{H}\) can be weakened to the requirement that \(\mathcal{M}\) has the stably free cancellation property, once \(C(\chi)\) is taken to be the narrow class group in the case that \(\chi\) is an irreducible symplectic character of \(G\). Maximal \(\mathbb{Z}\)-orders in finite-dimensional semisimple \(\mathbb{Q}\)-algebras with the stably free cancellation property have been classified in [23]; see also [22, Theorem II] for the case of maximal \(\mathbb{Z}\)-orders in \(\mathbb{Q}[G]\) for \(G\) a binary polyhedral group. For example, the unique maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[Q_{8}]\) satisfies the equivalent conditions of Lemma 2.2, where \(Q_{8}\) denotes the quaternion group of order \(8\).
**Corollary 5.13**.: _Let \(G\) be a finite abelian group and let \(e\) denote its exponent. Define_
\[\Sigma=\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,24,25,\\ 26,27,28,30,32,33,34,35,36,38,40,42,44,45,48,50,54,60,66,70,84,90\}\]
_and let \(\mathcal{M}\) be the unique maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\). Then \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 if and only if \(e\in\Sigma\)._
Proof.: First write \(G\cong C_{n_{1}}\times C_{n_{2}}\times\cdots\times C_{n_{k}}\) for positive integers \(k,n_{1},\ldots,n_{k}\) such that \(n_{i}\mid n_{i+1}\) for \(1\leq i\leq k-1\). Then \(e=n_{k}\) and \(\{\mathbb{Q}(\chi):\chi\in\operatorname{Irr}_{\mathbb{C}}(G)\}=\{\mathbb{Q}(\zeta_{d}):d\mid e\}\). Since \(\mathbb{Z}[\zeta_{d}]\) is the ring of integers of \(\mathbb{Q}(\zeta_{d})\), this implies that \(\{C(\chi):\chi\in\operatorname{Irr}_{\mathbb{C}}(G)\}=\{\operatorname{Cl}(\mathbb{Z}[\zeta_{d}]):d\mid e\}\). The set \(\Sigma\) is precisely the set of positive integers \(n\) for which \(\operatorname{Cl}(\mathbb{Z}[\zeta_{n}])=0\) by [23, Theorem 11.1] (note that we include choices of \(n\) such that \(n\equiv 2\bmod 4\)). It can easily be checked that \(\Sigma\) is also precisely the set of choices of \(e\) for which \(\operatorname{Cl}(\mathbb{Z}[\zeta_{d}])=0\) for all \(d\mid e\). Since \(G\) is abelian, no factor of \(\mathbb{R}[G]\) is isomorphic to the quaternions \(\mathbb{H}\), and so the desired result now follows from Proposition 5.11.
_Remark 5.14_.: The hypotheses of Proposition 5.11 and Corollary 5.13 are much weaker than \(\operatorname{Cl}(\mathbb{Z}[G])\) itself being trivial. If \(G\) is a finite abelian group, then a result of Ph. Cassou-Noguès [10] shows that \(\operatorname{Cl}(\mathbb{Z}[G])\) is trivial if and only if either \(G\cong C_{2}\times C_{2}\) or \(G\cong C_{n}\) where \(1\leq n\leq 11\) or \(n\in\{13,14,17,19\}\). If \(G\) is a finite non-abelian non-dihedral group then a result of Endo and Hironaka [1] shows that \(\operatorname{Cl}(\mathbb{Z}[G])\) is trivial if and only if \(G\cong A_{4}\), \(S_{4}\) or \(A_{5}\) (the 'if' direction was already shown by Reiner and Ullom [11]). For partial results in the dihedral case, see [1].
Many specialisations of Theorem 5.3 can now be obtained by applying the results of §5.3, Proposition 5.11 and/or Corollary 5.13; the following is just one such example.
**Theorem 5.15**.: _Let \(G\) be a finite group and suppose that there exist positive integers \(n_{1},\dots,n_{t}\) such that \(\mathbb{Q}[G]\cong\prod_{i=1}^{t}\operatorname{Mat}_{n_{i}}(\mathbb{Q})\). Let \(n\) be a positive integer. If \(X\) is a \(\mathbb{Z}[G]\)-lattice such that \(\mathbb{Q}X\) is free of rank \(n\) over \(\mathbb{Q}[G]\) then there exists a free \(\mathbb{Z}[G]\)-sublattice \(Z\) of \(X\) such that \([X:Z]\) divides_
\[\left(|G|^{|G|}{\prod_{i=1}^{t}}n_{i}^{-n_{i}^{2}}\right)^{\frac{3n}{2}}.\]
Proof.: Proposition 5.11 implies that any maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\) satisfies the equivalent conditions of Lemma 2.2. Thus the result follows from Theorem 5.3 and Corollary 5.7.
_Remark 5.16_.: The collection of finite groups \(G\) satisfying the hypothesis of Theorem 5.15 is closed under direct products and includes symmetric groups and hyperoctahedral groups (e.g. the dihedral group of order \(8\)). See [11] and [12] for more on this topic.
## 6. Group rings modulo trace
Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\neq\mathcal{O}\). Let \(G\) be a finite group such that \(|G|\) is invertible in \(K\) and let \(\operatorname{Tr}_{G}=\sum_{g\in G}g\). Then both \(K[G]\) and its quotient \(K[G]/(\operatorname{Tr}_{G})\) are separable finite-dimensional \(K\)-algebras. The purpose of this section is to consider lattices over the \(\mathcal{O}\)-order \(\mathcal{O}[G]/(\operatorname{Tr}_{G})\).
Let \(e=1-|G|^{-1}\mathrm{Tr}_{G}\), which is a central idempotent of \(K[G]\). Let \(\pi_{e}:K[G]\to eK[G]\) be the projection map associated to \(e\). Given a subset \(X\subseteq K[G]\), let \(X_{e}=\pi_{e}(X)\) and let \(X^{1-e}=X\cap\ker(\pi_{e})\). In particular, \(K[G]_{e}=eK[G]\cong K[G]/(\mathrm{Tr}_{G})\) and \(K[G]^{1-e}=\mathrm{Tr}_{G}K\).
Let \(\Lambda=\mathcal{O}[G]\) and let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(K[G]\) containing \(\Lambda\). Then \(\mathcal{M}_{e}=e\mathcal{M}\) is a maximal \(\mathcal{O}\)-order of \(K[G]_{e}\) containing \(\Lambda_{e}\). By Proposition 5.1, \({}^{\mathcal{M}}\Lambda\) is a two-sided ideal of \(\mathcal{M}\) contained in \(\Lambda\). Hence \(({}^{\mathcal{M}}\Lambda)_{e}\) is a choice of two-sided ideal of \(\mathcal{M}_{e}\) contained in \(\Lambda_{e}\). This is not necessarily the largest such choice, but its form allows us to make use of our previous computations.
**Lemma 6.1**.: _We have \([\mathcal{M}_{e}:\Lambda_{e}]_{\mathcal{O}}=|G|^{-1}[\mathcal{M}:\Lambda]_{ \mathcal{O}}\) and \([\Lambda_{e}:({}^{\mathcal{M}}\Lambda)_{e}]_{\mathcal{O}}=[\mathcal{M}:\Lambda ]_{\mathcal{O}}\)._
Proof.: Consider the following diagram of \(\mathcal{O}\)-lattices with exact rows
\[\begin{CD}0@>{}>{}>\Lambda^{1-e}@>{}>{}>\Lambda@>{}>{}>\Lambda_{e}@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\mathcal{M}^{1-e}@>{}>{}>\mathcal{M}@>{}>{}>\mathcal{M}_{e}@>{}>{}>0.\end{CD}\]
Then by Lemma 2.1 we have \([\mathcal{M}:\Lambda]_{\mathcal{O}}=[\mathcal{M}^{1-e}:\Lambda^{1-e}]_{\mathcal{O}} \cdot[\mathcal{M}_{e}:\Lambda_{e}]_{\mathcal{O}}\). Note that \(\mathcal{M}^{1-e}=(|G|^{-1}\mathrm{Tr}_{G})\cdot\mathcal{O}\) and \(\Lambda^{1-e}=\mathcal{M}^{1-e}\cap\Lambda=\mathrm{Tr}_{G}\cdot\mathcal{O}\). Hence \([\mathcal{M}^{1-e}:\Lambda^{1-e}]_{\mathcal{O}}=|G|\), and so we obtain the first equality.
Similarly, we also have the following diagram of \(\mathcal{O}\)-lattices with exact rows
\[\begin{CD}0@>{}>{}>({}^{\mathcal{M}}\Lambda)^{1-e}@>{}>{}>{}^{\mathcal{M}}\Lambda@>{}>{}>({}^{\mathcal{M}}\Lambda)_{e}@>{}>{}>0\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 0@>{}>{}>\Lambda^{1-e}@>{}>{}>\Lambda@>{}>{}>\Lambda_{e}@>{}>{}>0.\end{CD} \tag{6.1}\]
Then by Lemma 2.1 we have \([\Lambda:{}^{\mathcal{M}}\Lambda]_{\mathcal{O}}=[\Lambda^{1-e}:({}^{\mathcal{M}}\Lambda)^{1-e}]_{\mathcal{O}}\cdot[\Lambda_{e}:({}^{\mathcal{M}}\Lambda)_{e}]_{\mathcal{O}}\). By maximality of \({}^{\mathcal{M}}\Lambda\), the subset \(({}^{\mathcal{M}}\Lambda)^{1-e}\) is the largest \(\mathcal{M}^{1-e}\)-sublattice contained in \(\Lambda^{1-e}\). Since \(\mathcal{M}^{1-e}\cong\mathcal{O}\), we find that \(\Lambda^{1-e}\), an \(\mathcal{O}\)-lattice, is already an \(\mathcal{M}^{1-e}\)-sublattice, so that the left vertical map of (6.1) is an equality. Hence we have \([\Lambda_{e}:({}^{\mathcal{M}}\Lambda)_{e}]_{\mathcal{O}}=[\Lambda:{}^{\mathcal{M}}\Lambda]_{\mathcal{O}}\). But \([\Lambda:{}^{\mathcal{M}}\Lambda]_{\mathcal{O}}=[\mathcal{M}:\Lambda]_{\mathcal{O}}\) by Proposition 5.1, and thus we obtain the desired result.
**Theorem 6.2**.: _Let \(\mathcal{O}\) be a Dedekind domain with field of fractions \(K\). Assume that \(K\) is a global field and that \(\mathcal{O}\neq K\). Let \(G\) be a finite group such that \(|G|\) is invertible in \(K\). Set \(s=2\) if \(G\) is abelian and \(s=3\) otherwise. Let \(\mathcal{M}\) be a maximal \(\mathcal{O}\)-order in \(K[G]\) containing \(\mathcal{O}[G]\). Let \(n\) be a positive integer and let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}\). Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any \(\mathcal{O}[G]/(\mathrm{Tr}_{G})\)-lattice \(X\) such that \(KX\) is free of rank \(n\) over \(K[G]/(\mathrm{Tr}_{G})\), there exists a free \(\mathcal{O}[G]/(\mathrm{Tr}_{G})\)-sublattice \(Z\) of \(X\) such that \([X:Z]_{\mathcal{O}}\) divides \(\mathcal{I}\cdot|G|^{-2n}\cdot[\mathcal{M}:\mathcal{O}[G]]_{\mathcal{O}}^{sn}\). Moreover, if \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 then we can take \(\mathcal{I}=\mathcal{O}\)._
Proof.: Let \(\Lambda=\mathcal{O}[G]\). Then \(\mathcal{M}_{e}\) is a maximal \(\mathcal{O}\)-order of \(K[G]_{e}\) containing \(\Lambda_{e}=\mathcal{O}[G]/(\mathrm{Tr}_{G})\). Note that if \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 then so does \(\mathcal{M}_{e}\). By Lemma 6.1 we have \([\mathcal{M}_{e}:\Lambda_{e}]_{\mathcal{O}}=|G|^{-1}[\mathcal{M}:\Lambda]_{ \mathcal{O}}\). By Proposition 5.1, \(J:=({}^{\mathcal{M}}\Lambda)_{e}\) is a two-sided ideal of \(\mathcal{M}_{e}\) contained in \(\Lambda_{e}\). Then we have
\[[\mathcal{M}_{e}:\Lambda_{e}]_{\mathcal{O}}\cdot[\mathcal{M}_{e}:J]_{\mathcal{ O}}=[\mathcal{M}_{e}:\Lambda_{e}]_{\mathcal{O}}^{2}\cdot[\Lambda_{e}:J]_{ \mathcal{O}}=|G|^{-2}\cdot[\mathcal{M}:\Lambda]_{\mathcal{O}}^{3}.\]
Therefore we obtain the desired result by applying Theorem 4.5 for the \(\mathcal{O}\)-order \(\Lambda_{e}\), the maximal \(\mathcal{O}\)-order \(\mathcal{M}_{e}\) and the ideal \(J\).
## 7. Application: approximation of normal integral bases
We refer the reader to [10] for an overview of normal integral bases, on which there is a vast literature. In this section, we consider examples of applications of the algebraic machinery of previous sections to the approximation of normal integral bases.
Beyond the base field and the isomorphism type of the Galois group, the following result does not use any arithmetic information about the Galois extensions concerned.
**Theorem 7.1**.: _Let \(K\) be a number field and let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}_{K}\). Let \(G\) be a finite group and let \(\mathcal{M}\) be a maximal \(\mathcal{O}_{K}\)-order in \(K[G]\) containing \(\mathcal{O}_{K}[G]\). Set \(s=2\) if \(G\) is abelian and \(s=3\) otherwise. Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}_{K}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any Galois extension \(L/K\) with \(\mathrm{Gal}(L/K)\cong G\), there exists \(\alpha\in\mathcal{O}_{L}\) such that \([\mathcal{O}_{L}:\mathcal{O}_{K}[\mathrm{Gal}(L/K)]\cdot\alpha]_{\mathcal{O}_{K}}\) divides \(\mathcal{I}\cdot[\mathcal{M}:\mathcal{O}_{K}[G]]_{\mathcal{O}_{K}}^{s}\). Moreover, if \(\mathcal{M}\) satisfies the equivalent conditions of Lemma 2.2 then we can take \(\mathcal{I}=\mathcal{O}_{K}\)._
Proof.: The normal basis theorem says that for a finite Galois extension of fields \(L/K\) we have \(L\cong K[\operatorname{Gal}(L/K)]\) as \(K[\operatorname{Gal}(L/K)]\)-modules. Therefore the desired result now follows easily from Theorem 5.3 with \(n=1\) and \(\mathcal{O}=\mathcal{O}_{K}\).
_Remark 7.2_.: An explicit formula for \([\mathcal{M}:\mathcal{O}_{K}[G]]_{\mathcal{O}_{K}}\) is given in Corollary 5.6. In particular, a weak but general bound is that \([\mathcal{M}:\mathcal{O}_{K}[G]]_{\mathcal{O}_{K}}^{s}\) divides \(|G|^{\lceil s|G|/2\rceil}\).
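As a hypothetical numerical instance of Theorem 7.1: for \(K=\mathbb{Q}\) and \(G=S_{3}\) (so \(s=3\)), Corollary 5.7 gives \([\mathcal{M}:\mathbb{Z}[S_{3}]]=54\), so the divisor in Theorem 7.1 is \(\mathcal{I}\cdot 54^{3}\); the following one-line check confirms that this is consistent with the weak general bound of Remark 7.2.

```python
# Illustrative consistency check for Remark 7.2 with G = S_3: the exact
# divisor 54^3 from Corollary 5.7 divides the weak general bound
# |G|^(ceil(s|G|/2)) = 6^9 (here s = 3 and |G| = 6).
from math import ceil

s, order, exact = 3, 6, 54 ** 3
weak = order ** ceil(s * order / 2)
assert weak % exact == 0   # 54^3 = 157464 divides 6^9 = 10077696
```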
By making the further assumption that the extensions concerned are at most tamely ramified, we obtain the following result with a stronger conclusion.
**Theorem 7.3**.: _Let \(K\) be a number field, let \(\mathcal{K}\) be any non-zero ideal of \(\mathcal{O}_{K}\), and let \(G\) be a finite group. Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}_{K}\), that can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any at most tamely ramified Galois extension \(L/K\) with \(\operatorname{Gal}(L/K)\cong G\), there exists \(\alpha\in\mathcal{O}_{L}\) such that \([\mathcal{O}_{L}:\mathcal{O}_{K}[\operatorname{Gal}(L/K)]\cdot\alpha]_{\mathcal{O}_{K}}\) divides \(\mathcal{I}\). Moreover, if \(\mathcal{O}_{K}[G]\) satisfies the equivalent conditions of Lemma 2.2 then we can take \(\mathcal{I}=\mathcal{O}_{K}\)._
Proof.: For an at most tamely ramified Galois extension \(L/K\) with \(\operatorname{Gal}(L/K)\cong G\), we have that \(\mathcal{O}_{L}\) is a locally free \(\mathcal{O}_{K}[G]\)-lattice of rank \(1\) by [10, Chapter I, §3, Corollary 2], for example. Therefore the desired result now follows easily from Proposition 4.1 with \(\mathcal{O}=\mathcal{O}_{K}\) and \(\Gamma=\mathcal{O}_{K}[G]\).
_Remark 7.4_.: Improved bounds can be obtained in special cases. For example, if \(G\) is a finite group with no irreducible symplectic characters (e.g. \(G\) is abelian or of odd order), then every (at most) tamely ramified Galois extension \(L/\mathbb{Q}\) with \(\operatorname{Gal}(L/\mathbb{Q})\cong G\) has a normal integral basis by a special case of Taylor's proof [23] of a conjecture of Fröhlich (see [10, §I] for an overview). Moreover, if \(G\) is a finite abelian group then Leopoldt's theorem [11] (see also [12]) implies that for every Galois extension \(L/\mathbb{Q}\) with \(\operatorname{Gal}(L/\mathbb{Q})\cong G\), there exists \(\alpha\in\mathcal{O}_{L}\) such that \([\mathcal{O}_{L}:\mathbb{Z}[\operatorname{Gal}(L/\mathbb{Q})]\cdot\alpha]\) divides \([\mathcal{M}:\mathbb{Z}[G]]\), where \(\mathcal{M}\) is the unique maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\). By contrast, Theorems 7.1 and 7.3 are very general and their short proofs use little or no arithmetic information about the particular field extensions concerned.
## 8. Application: approximation of strong Minkowski units
In this section, we consider examples of applications of the algebraic machinery of previous sections to the approximation of strong Minkowski units.
**Definition 8.1**.: Let \(L/K\) be a Galois extension of number fields and let \(G=\operatorname{Gal}(L/K)\). Let \(\mu_{L}\) denote the roots of unity of \(L\). An element \(\varepsilon\in\mathcal{O}_{L}^{\times}/\mu_{L}\) is said to be
* a _Minkowski unit_ of \(L/K\) if \(\mathbb{Q}\otimes_{\mathbb{Z}}(\mathcal{O}_{L}^{\times}/\mu_{L})=\mathbb{Q}[G] \cdot\varepsilon\),
* a _strong Minkowski unit_ of \(L/K\) if \(\mathcal{O}_{L}^{\times}/\mu_{L}=\mathbb{Z}[G]\cdot\varepsilon\).
**Lemma 8.2**.: _Let \(L/K\) be a Galois extension of number fields and let \(G=\operatorname{Gal}(L/K)\). Then \(L/K\) has a Minkowski unit if and only if \(K\) is equal to either \(\mathbb{Q}\) or an imaginary quadratic field. Moreover, if either \(L\) is totally real or \(K\) is imaginary quadratic, then \(\mathbb{Q}\otimes_{\mathbb{Z}}(\mathcal{O}_{L}^{\times}/\mu_{L})\cong\mathbb{Q}[G]/(\operatorname{Tr}_{G})\) as \(\mathbb{Q}[G]/(\operatorname{Tr}_{G})\)-modules (and as \(\mathbb{Q}[G]\)-modules)._
Proof.: By definition, \(L/K\) has a Minkowski unit if and only if \(\mathbb{Q}\otimes_{\mathbb{Z}}\mathcal{O}_{L}^{\times}\) is cyclic as a \(\mathbb{Q}[G]\)-module. By a theorem of Herbrand (see [13, Chapter I, §4.3], for example) there is an isomorphism \((\mathbb{Q}\otimes_{\mathbb{Z}}\mathcal{O}_{L}^{\times})\oplus\mathbb{Q}\cong\mathbb{Q}[S_{\infty}]\) of \(\mathbb{Q}[G]\)-modules, where \(S_{\infty}\) denotes the set of infinite places of \(L\). So the existence of a Minkowski unit is equivalent to \(\mathbb{Q}[S_{\infty}]\) being cyclic as a \(\mathbb{Q}[G]\)-module, which in turn is equivalent to \(S_{\infty}\) being transitive as a \(G\)-set.
This occurs precisely when \(K\) has a unique infinite place, that is, precisely when \(K\) is equal to either \(\mathbb{Q}\) or an imaginary quadratic field. If either \(L\) is totally real or \(K\) is imaginary quadratic, then the unique infinite place of \(K\) splits completely in \(L/K\) and thus \(\mathbb{Q}[S_{\infty}]\cong\mathbb{Q}[G]\) as \(\mathbb{Q}[G]\)-modules.
_Remark 8.3_.: The existence of strong Minkowski units (which some authors refer to as Minkowski units) in special cases has been studied in numerous articles.
If \(L/\mathbb{Q}\) is Galois with \(L\) a CM field whose maximal totally real subfield is \(L^{+}\), and \(\varepsilon\) is a Minkowski unit of \(L^{+}/\mathbb{Q}\), then by [22, Theorem 4.12] we have that \([\mathcal{O}_{L}^{\times}:\mu_{L}\mathcal{O}_{L^{+}}^{\times}]=1\) or \(2\), and so \([\mathcal{O}_{L}^{\times}/\mu_{L}:\mathbb{Z}[\mathrm{Gal}(L/\mathbb{Q})]\cdot\varepsilon]\) divides \(2[\mathcal{O}_{L^{+}}^{\times}/\{\pm 1\}:\mathbb{Z}[\mathrm{Gal}(L^{+}/\mathbb{Q})]\cdot\varepsilon]\). Thus \(\varepsilon\) is also a Minkowski unit of \(L/\mathbb{Q}\), and we obtain stronger analogues of Theorem 8.5 and Corollary 8.8 in this situation.
The above results can be strengthened for extensions of prime degree.
**Theorem 8.10**.: _Let \(p\) be an odd prime and let \(k\) be a positive integer. Then there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any cyclic field extension \(L/K\) with \([L:K]=p\) and \(K\) equal to either \(\mathbb{Q}\) or an imaginary quadratic field, there exists a Minkowski unit \(\varepsilon\in\mathcal{O}_{L}^{\times}/\mu_{L}\) such that \([\mathcal{O}_{L}^{\times}/\mu_{L}:\mathbb{Z}[\mathrm{Gal}(L/K)]\cdot \varepsilon]\) divides \(i\)._
Proof.: Let \(G\) be the cyclic group of order \(p\) and let \(\mathcal{M}=\mathbb{Z}[G]/(\mathrm{Tr}_{G})\). Then \(\mathcal{M}\cong\mathbb{Z}[\zeta_{p}]\), which is a maximal \(\mathbb{Z}\)-order. By Corollary 4.2 there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any \(\mathcal{M}\)-lattice \(X\) such that \(\mathbb{Q}X\) is free of rank \(1\) as a \(\mathbb{Q}[G]/(\mathrm{Tr}_{G})\)-module, there exists a free \(\mathcal{M}\)-sublattice \(Y\) of \(X\) such that \([X:Y]\) divides \(i\). Note that if \(\varepsilon\in X\) is a free \(\mathcal{M}\)-generator of \(Y\), then \(Y=\mathcal{M}\cdot\varepsilon=\mathbb{Z}[G]\cdot\varepsilon\). The desired result now follows from Lemma 8.2 since the hypotheses ensure that either \(L\) is totally real or \(K\) is imaginary quadratic.
The following result is not new, as it is the combination of [10] (see also [1, Corollary]) and an easy consequence of [12, Theorem]; we include it for completeness.
**Corollary 8.11**.: _Let \(p\) be a prime such that \(3\leq p\leq 19\). Then every cyclic field extension \(L/K\) with \([L:K]=p\) and \(K\) equal to either \(\mathbb{Q}\) or an imaginary quadratic field has a strong Minkowski unit._
Proof.: In the proof of Theorem 8.10, adding the hypothesis that \(p\leq 19\) implies that \(\mathcal{M}\cong\mathbb{Z}[\zeta_{p}]\) has trivial class group and so satisfies the equivalent conditions of Lemma 2.2. Hence we can take \(i=1\) by Remark 4.3, and this implies the desired result.
_Remark 8.12_.: It is interesting to compare Theorem 8.10 to (i) [1, Theorem] when \(K=\mathbb{Q}\) and (ii) [12, Theorem] when \(K\) is imaginary quadratic. Result (i) considers cyclic extensions \(L/\mathbb{Q}\) of odd prime degree \(p\) and gives sufficient conditions on ideals of \(\mathbb{Z}[\zeta_{p}]\) of norm equal to the class number \(h_{L}\) of \(\mathcal{O}_{L}\) for both the existence and non-existence of a strong Minkowski unit of \(L/\mathbb{Q}\). The proof uses the fact that \(\mathcal{O}_{L}^{\times}/\{\pm 1\}\) contains a free \(\mathbb{Z}[\zeta_{p}]\)-submodule of index \(h_{L}\) generated by a cyclotomic unit. Result (ii) is analogous and uses elliptic units. By contrast, the proof and statement of Theorem 8.10 do not depend on the particular extension \(L/K\).
## 9. Application: rational points on abelian varieties
In this section, we consider examples of applications of the algebraic machinery of previous sections to the Galois module structure of rational points of abelian varieties. By the Mordell-Weil theorem, for every abelian variety \(A\) over a number field \(K\), the group \(A(K)/A(K)_{\mathrm{tors}}\) is a free \(\mathbb{Z}\)-module of finite rank. If \(L/K\) is a Galois extension of number fields, then \(A(L)/A(L)_{\mathrm{tors}}\) is a \(\mathbb{Z}[\mathrm{Gal}(L/K)]\)-lattice, so is amenable to study via our methods.
**Theorem 9.1**.: _Let \(G\) be a finite group and let \(k\) be a positive integer. Set \(s=2\) if \(G\) is abelian and \(s=3\) otherwise. Then there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any Galois extension of number fields \(L/K\) with \(\mathrm{Gal}(L/K)\cong G\), and any abelian variety \(A/K\) such that \(\mathbb{Q}\otimes_{\mathbb{Z}}A(L)\) is cyclic_
_as a \(\mathbb{Q}[G]\)-module, there exists \(\varepsilon\in A(L)/A(L)_{\rm tors}\) such that \([A(L)/A(L)_{\rm tors}:\mathbb{Z}[G]\cdot\varepsilon]\) is finite and divides \(i\cdot[\mathcal{M}:\mathbb{Z}[G]]^{s}\), where \(\mathcal{M}\) is any maximal \(\mathbb{Z}\)-order in \(\mathbb{Q}[G]\) containing \(\mathbb{Z}[G]\)._
_Remark 9.2_.: An explicit formula for \([\mathcal{M}:\mathbb{Z}[G]]\) is given in Corollary 5.6. In particular, a weak but general bound is that \([\mathcal{M}:\mathbb{Z}[G]]^{s}\) divides \(|G|^{\lceil s|G|/2\rceil}\).
_Remark 9.3_.: Let \(G\) be a finite group. The isomorphism class of a finite-dimensional \(\mathbb{Q}[G]\)-module \(X\) is entirely determined by the values of \(\dim_{\mathbb{Q}}X^{H}\) as \(H\) runs over a set of representatives of the set of cyclic subgroups of \(G\) up to conjugacy (see [10, §13.1, Corollary to Theorem 30\({}^{\prime}\)]). In particular, \(X\) is free of rank \(1\) if and only if \(\dim_{\mathbb{Q}}X^{H}=[G:H]\) for all cyclic subgroups \(H\) of \(G\) up to conjugacy.
Proof of Theorem 9.1.: By Theorem 5.3, there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any \(\mathbb{Z}[G]\)-lattice \(X\) such that \(\mathbb{Q}\otimes_{\mathbb{Z}}X\) is free of rank \(1\) as a \(\mathbb{Q}[G]\)-module, there exists a free \(\mathbb{Z}[G]\)-sublattice \(Y\) of \(X\) such that \([X:Y]\) divides \(i\cdot[\mathcal{M}:\mathbb{Z}[G]]^{s}\). By Remark 4.6, \(i\) also has the property that given any \(\mathbb{Z}[G]\)-lattice \(X\) such that \(\mathbb{Q}\otimes_{\mathbb{Z}}X\) is cyclic as a \(\mathbb{Q}[G]\)-module, there exists a cyclic \(\mathbb{Z}[G]\)-sublattice \(Y\) of \(X\) such that \([X:Y]\) divides \(i\cdot[\mathcal{M}:\mathbb{Z}[G]]^{s}\). In particular, this holds for \(X=A(L)/A(L)_{\rm tors}\) after fixing an isomorphism \(G\cong\operatorname{Gal}(L/K)\).
**Theorem 9.4**.: _Let \(p\) be an odd prime and let \(k\) be a positive integer. Then there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any cyclic extension \(L/K\) of number fields with \([L:K]=p\) and any abelian variety \(A/K\) such that \(\operatorname{rank}_{\mathbb{Z}}A(K)=0\) and \(\operatorname{rank}_{\mathbb{Z}}A(L)=p-1\), there exists \(\varepsilon\in A(L)/A(L)_{\rm tors}\) such that \([A(L)/A(L)_{\rm tors}:\mathbb{Z}[\operatorname{Gal}(L/K)]\cdot\varepsilon]\) is finite and divides \(i\)._
Proof.: Let \(G\) be the cyclic group of order \(p\) and let \(\mathcal{M}=\mathbb{Z}[G]/({\rm Tr}_{G})\). Then \(\mathcal{M}\cong\mathbb{Z}[\zeta_{p}]\), which is a maximal \(\mathbb{Z}\)-order. By Corollary 4.2 there exists a positive integer \(i\), which can be chosen to be coprime to \(k\), with the following property: given any \(\mathcal{M}\)-lattice \(X\) such that \(\mathbb{Q}X\cong\mathbb{Q}[G]/({\rm Tr}_{G})\) as \(\mathbb{Q}[G]/({\rm Tr}_{G})\)-modules, there exists a free \(\mathcal{M}\)-sublattice \(Y\) of \(X\) such that \([X:Y]\) divides \(i\). After fixing an isomorphism \(G\cong\operatorname{Gal}(L/K)\), the desired result now follows since the rank hypotheses ensure that \(\mathbb{Q}\otimes_{\mathbb{Z}}(A(L)/A(L)_{\rm tors})\cong\mathbb{Q}[G]/({\rm Tr}_{G})\) as \(\mathbb{Q}[G]\)-modules (see Remark 9.3) and hence as \(\mathbb{Q}[G]/({\rm Tr}_{G})\)-modules.
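The rank bookkeeping in the preceding proof can be made completely explicit. For \(G\) cyclic of prime order \(p\), the irreducible \(\mathbb{Q}[G]\)-modules are the trivial module \(\mathbb{Q}\) and \(\mathbb{Q}(\zeta_{p})\), so writing \(\mathbb{Q}\otimes_{\mathbb{Z}}(A(L)/A(L)_{\mathrm{tors}})\cong\mathbb{Q}^{a}\oplus\mathbb{Q}(\zeta_{p})^{b}\), the hypotheses \(\operatorname{rank}_{\mathbb{Z}}A(K)=0\) and \(\operatorname{rank}_{\mathbb{Z}}A(L)=p-1\) force \((a,b)=(0,1)\). A toy Python sketch of this count, with hypothetical helper names:

```python
# Illustrative count behind Remark 9.3 as used in Theorem 9.4: for G
# cyclic of prime order p, a Q[G]-module is Q^a + Q(zeta_p)^b, with
# dim X^G = a (since Q(zeta_p)^G = 0) and dim X = a + b(p - 1).
def multiplicities(p, fixed_dim, total_dim):
    a = fixed_dim
    b, rem = divmod(total_dim - a, p - 1)
    assert rem == 0 and b >= 0
    return a, b

# rank A(K) = 0 and rank A(L) = p - 1 give exactly one copy of
# Q(zeta_p), i.e. Q (x) X is isomorphic to Q[G]/(Tr_G).
for p in (3, 5, 7):
    assert multiplicities(p, 0, p - 1) == (0, 1)
```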
_Remark 9.5_.: Note that for \(\mathbb{Q}\otimes_{\mathbb{Z}}(A(L)/A(L)_{\rm tors})\) to be cyclic as a \(\mathbb{Q}[G]/({\rm Tr}_{G})\)-module, it is necessary that \(\operatorname{rank}_{\mathbb{Z}}A(L)=0\) or \(p-1\).
**Corollary 9.6**.: _Let \(p\) be a prime such that \(3\leq p\leq 19\). Then given any cyclic extension of number fields \(L/K\) with \([L:K]=p\) and any abelian variety \(A/K\) such that \(\operatorname{rank}_{\mathbb{Z}}A(K)=0\) and \(\operatorname{rank}_{\mathbb{Z}}A(L)=p-1\), there exists \(\varepsilon\in A(L)/A(L)_{\rm tors}\) such that \(A(L)/A(L)_{\rm tors}=\mathbb{Z}[\operatorname{Gal}(L/K)]\cdot\varepsilon\)._
Proof.: In the proof of Theorem 9.4, the additional hypothesis that \(p\leq 19\) implies that \(\mathcal{M}\cong\mathbb{Z}[\zeta_{p}]\) has trivial class group and so satisfies the equivalent conditions of Lemma 2.2. Hence we can take \(i=1\) by Remark 4.3, and this implies the desired result.
**Theorem 9.7**.: _Let \(p\) be a prime, let \(F\) be an imaginary quadratic field with discriminant coprime to \(p\), and let \(\mathcal{K}\) be a non-zero ideal of \(\mathcal{O}_{F}\). Then there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}_{F}\), which can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any cyclic extension of number fields \(L/K\) with \([L:K]=p\) and such that \(K\) contains \(F\), and any elliptic curve \(E/K\) with complex multiplication by \(\mathcal{O}_{F}\) and with \(\operatorname{rank}_{\mathbb{Z}}E(K)=0\) and \(\operatorname{rank}_{\mathbb{Z}}E(L)=2(p-1)\), there exists \(\varepsilon\in E(L)/E(L)_{\rm tors}\) such that \([E(L)/E(L)_{\rm tors}:\mathcal{O}_{F}[\operatorname{Gal}(L/K)]\cdot\varepsilon]_{\mathcal{O}_{F}}\) divides \(\mathcal{I}\)._
Proof.: Let \(G\) be the cyclic group of order \(p\). The discriminant of \(\mathbb{Z}[\zeta_{p}]\) is a power of \(p\) and in particular is coprime to the discriminant of \(\mathcal{O}_{F}\). Hence \(\mathbb{Q}(\zeta_{p})\) and \(F\) are linearly disjoint over \(\mathbb{Q}\). Moreover, by [1, III, SS3, Proposition 17] we have \(\mathcal{O}_{F}[\zeta_{p}]=\mathcal{O}_{F(\zeta_{p})}\). Thus \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\cong\mathcal{O}_{F}[\zeta_{p}]\) is a maximal \(\mathcal{O}_{F}\)-order. By Corollary 4.2 there exists a non-zero ideal \(\mathcal{I}\) of \(\mathcal{O}_{F}\), which can be chosen to be coprime to \(\mathcal{K}\), with the following property: given any \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\)-lattice \(X\) such that \(F\otimes_{\mathcal{O}_{F}}X\cong F[G]/(\mathrm{Tr}_{G})\) as \(F[G]/(\mathrm{Tr}_{G})\)-modules, there exists a free \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\)-sublattice \(Y\) of \(X\) such that \([X:Y]_{\mathcal{O}_{F}}\) divides \(\mathcal{I}\).
Let \(E\), \(L\) and \(K\) be as in the theorem and fix an isomorphism \(G\cong\mathrm{Gal}(L/K)\). By assumption, \(E\) has CM by \(\mathcal{O}_{F}\) defined over \(K\). The commuting Galois action and action by endomorphisms then give \(E(L)/E(L)_{\mathrm{tors}}\) the structure of an \(\mathcal{O}_{F}[G]\)-lattice. Moreover, as \(\operatorname{rank}_{\mathbb{Z}}E(K)=0\), it is in fact an \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\)-lattice, and since \(\operatorname{rank}_{\mathbb{Z}}E(L)=2(p-1)\), we have that \(\dim_{\mathbb{Q}}F\otimes_{\mathcal{O}_{F}}(E(L)/E(L)_{\mathrm{tors}})=2(p-1)\). Since \(F\) and \(\mathbb{Q}(\zeta_{p})\) are linearly disjoint, the unique \(F[G]/(\mathrm{Tr}_{G})\)-module with these properties is \(F[G]/(\mathrm{Tr}_{G})\) itself. Therefore \(E(L)/E(L)_{\mathrm{tors}}\) is an \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\)-lattice \(X\) with \(F\otimes_{\mathcal{O}_{F}}X\cong F[G]/(\mathrm{Tr}_{G})\), and the desired result follows.
_Remark 9.8_.: Since \(\mathbb{Q}(\zeta_{p})\) and \(F\) are linearly disjoint over \(\mathbb{Q}\), the \(F[G]/(\mathrm{Tr}_{G})\)-module \(F\otimes_{\mathcal{O}_{F}}(E(L)/E(L)_{\mathrm{tors}})\) is cyclic if and only if \(\operatorname{rank}_{\mathbb{Z}}E(L)\) is either \(0\) or \(2(p-1)\).
**Corollary 9.9**.: _Let \(p\) be a prime and let \(F\) be an imaginary quadratic field with discriminant coprime to \(p\) such that \(\mathcal{O}_{F(\zeta_{p})}\) has trivial class group. Then for every cyclic extension of number fields \(L/K\) such that \(K\) contains \(F\) and \([L:K]=p\), and for every elliptic curve \(E/K\) with complex multiplication by \(\mathcal{O}_{F}\) and with \(\operatorname{rank}_{\mathbb{Z}}E(K)=0\) and \(\operatorname{rank}_{\mathbb{Z}}E(L)=2(p-1)\), we have that \(E(L)/E(L)_{\mathrm{tors}}\) is free as an \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\)-module, where \(G=\operatorname{Gal}(L/K)\)._
Proof.: In the proof of Theorem 9.7, the additional hypothesis that \(\mathcal{O}_{F(\zeta_{p})}\) has trivial class group ensures that \(\mathcal{O}_{F}[G]/(\mathrm{Tr}_{G})\cong\mathcal{O}_{F(\zeta_{p})}\) satisfies the equivalent conditions of Lemma 2.2. Hence we can take \(\mathcal{I}=\mathcal{O}_{F}\) by Remark 4.3, and this implies the desired result.
|
2309.10856 | Non-equilibrium critical scaling and universality in a quantum simulator | Universality and scaling laws are hallmarks of equilibrium phase transitions
and critical phenomena. However, extending these concepts to non-equilibrium
systems is an outstanding challenge. Despite recent progress in the study of
dynamical phases, the universality classes and scaling laws for non-equilibrium
phenomena are far less understood than those in equilibrium. In this work,
using a trapped-ion quantum simulator with single-ion resolution, we
investigate the non-equilibrium nature of critical fluctuations following a
quantum quench to the critical point. We probe the scaling of spin fluctuations
after a series of quenches to the critical Hamiltonian of a long-range Ising
model. With systems of up to 50 spins, we show that the amplitude and timescale
of the post-quench fluctuations scale with system size with distinct universal
critical exponents. While a generic quench can lead to thermal critical
behaviour, we find that a second quench from one critical state to another
(i.e. a double quench) results in critical behaviour that does not have an
equilibrium counterpart. Our results demonstrate the ability of quantum
simulators to explore universal scaling beyond the equilibrium paradigm. | A. De, P. Cook, K. Collins, W. Morong, D. Paz, P. Titum, G. Pagano, A. V. Gorshkov, M. Maghrebi, C. Monroe | 2023-09-19T18:04:25Z | http://arxiv.org/abs/2309.10856v1 | # Non-equilibrium critical scaling and universality in a quantum simulator
###### Abstract
Universality and scaling laws are hallmarks of equilibrium phase transitions and critical phenomena [1; 2; 3]. However, extending these concepts to non-equilibrium systems is an outstanding challenge. Despite recent progress in the study of dynamical phases [4; 5], the universality classes and scaling laws for non-equilibrium phenomena are far less understood than those in equilibrium [6]. In this work, using a trapped-ion quantum simulator with single-ion resolution, we investigate the non-equilibrium nature of critical fluctuations following a quantum quench to the critical point. We probe the scaling of spin fluctuations after a series of quenches to the critical Hamiltonian of a long-range Ising model. With systems of up to 50 spins, we show that the amplitude and timescale of the post-quench fluctuations scale with system size with distinct universal critical exponents. While a generic quench can lead to thermal critical behaviour, we find that a second quench from one critical state to another (i.e. a double quench) results in critical behaviour that does not have an equilibrium counterpart. Our results demonstrate the ability of quantum simulators to explore universal scaling beyond the equilibrium paradigm.
In recent years, substantial theoretical [4; 6; 7; 8] and experimental [5; 9] progress has been achieved in understanding emergent behaviour of isolated quantum systems out of equilibrium. In this context, non-equilibrium many-body systems can be investigated by measuring quantum dynamics after a quench [10], namely after a change of the Hamiltonian parameters that is much faster than the typical energy scales in the system--which is routinely performed in AMO (atomic, molecular, and optical) systems.
Although such dynamics are extremely complex in general, one would expect that macroscopic observables after a short time become insensitive to the microscopic details [2]. In particular, in the vicinity of a phase transition, the dynamics should give rise to universal critical behaviour which leads to scale-invariant spatio-temporal correlations with universal exponents [11]. In general, universal non-equilibrium phenomena are relevant far beyond the scope of AMO and condensed matter physics, including chemistry, biology and even sociology [12]. Examples ranging from glassy transitions seen in polymers, colloidal gels, and spin glasses [13] to symmetry-breaking transitions in the Universe after the 'Big Bang' [14] all exhibit non-equilibrium critical behaviour.
The unprecedented degree of control over quantum systems in platforms such as trapped ions [15; 16], ultracold atoms [17; 18], nitrogen-vacancy centers [19], superconducting circuits [20; 21] and others [22; 23; 24] has made it possible to probe fundamental questions about non-equilibrium many-body physics including prethermalization [25; 26], many-body localization [27; 28], discrete time crystals [19; 29], and dynamical phase transitions [5; 9]. For example, universal scaling around non-thermal fixed points has been observed with Bose-Einstein condensates that exhibit self-similar behaviour; these observations are, however, not related to an underlying critical behaviour [30; 31; 32]. In contrast, the work reported here is fundamentally tied to the existence of a phase transition and extends the remarkably rich domain of critical phenomena in equilibrium to far-from-equilibrium dynamics.
Recent theoretical works have demonstrated post-quench critical scaling behaviour in the Lipkin-Meshkov-Glick (LMG) model [11], an infinite-range version of the Ising model. Whether a broader class of many-body systems displays similar universal scaling properties is an open question. In this work, we study the dynamics of a transverse-field Ising chain with tunable power-law interactions after a quantum quench. The Hamiltonian of the model reads (\(\hbar=1\)):
\[H=-\sum_{i<j}^{N}J_{ij}(\gamma^{x}\sigma_{i}^{x}\sigma_{j}^{x}+\gamma^{y} \sigma_{i}^{y}\sigma_{j}^{y})+B^{z}\sum_{i}^{N}\sigma_{i}^{z}, \tag{1}\]
where \(\sigma_{i}^{x,y,z}\) are the Pauli matrices acting on the \(i\)'th spin, \(J_{ij}\) is the interaction strength between ions \(i\) and \(j\), \(B^{z}\) is the global transverse magnetic field, and \(N\) is the number of spins. The coefficients \(\left(\gamma^{x},\gamma^{y}\right)\in\left[0,1\right]\), and only one of them is non-zero during a single quench (Methods). The interaction strength falls off approximately following a power-law of the form \(J_{ij}\sim J/|i-j|^{p}\), where \(J>0\) is the effective interaction strength and \(p\) sets the range of the interaction [15]. The exponent \(p\) was tuned to 0.89 for all the experiments and all system sizes (Methods). In order to maintain a well-defined thermodynamic limit, from here onwards we refer to the interaction after it is Kac-normalized as \(\mathcal{J}=\frac{1}{N-1}\sum_{i,j}J_{ij}\)[33]. We encode the quantum spins in the ground state hyperfine manifold of the \({}^{171}\)Yb\({}^{+}\) ions, where \(\left|\downarrow\right\rangle_{z}\equiv\left|{}^{2}S_{1/2},F=0,m_{F}=0\right\rangle\) and \(\left|\uparrow\right\rangle_{z}\equiv\left|{}^{2}S_{1/2},F=1,m_{F}=0\right\rangle\), and we perform high-fidelity state preparation and site-resolved detection using state-dependent fluorescence (Methods) [34].
The transverse-field Ising model exhibits a ground-state phase transition from a disordered paramagnet to a magnetically ordered state. As the ratio of Kac-normalized effective interaction field (\(\mathcal{J}\gamma^{x}\)) to the transverse magnetic field (\(B^{z}\)) is varied across the critical point \(\mathcal{J}\gamma^{x}/B^{z}=1\), the average in-plane magnetization (\(\langle\sigma^{x}\rangle\)) changes from zero (disordered phase) to a non-zero value (ordered phase) in a second-order phase transition (Fig. 1a). By performing quenches to various values of \(\mathcal{J}\gamma^{x}/B^{z}\), we identify the critical point of this phase transition and observe the characteristic divergent fluctuations. We report that, after a single quench, the critical behaviour and exponents are effectively thermal. However, this behaviour changes qualitatively in a sequence of quenches to multiple critical points (Fig. 1b). As we demonstrate later in this work, a double quench gives rise to genuinely non-equilibrium critical behaviour.
We begin with a single quench sequence where all the spins are initialized in the \(\left|\downarrow\downarrow\cdots\downarrow\right\rangle_{z}\) state, which is the ground state of the initial Hamiltonian in the absence of the Ising interaction (Supplementary Information (SI)). Then the spin system is evolved after an interaction quench of the form Eq. (1), in which \(\gamma^{x}=1,\ \gamma^{y}=0\) (Fig. 1a). We measure the total spin \(S_{x}=\sum_{i}^{N}\sigma_{i}^{x}/2\) projected along the direction of interaction (here along \(x\)) and calculate the net correlator defined as
\[\left\langle C_{x}^{2}\right\rangle=\left\langle S_{x}^{2}\right\rangle-\left \langle S_{x}\right\rangle^{2}. \tag{2}\]
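To make the protocol concrete, the following sketch (illustrative only; it assumes ideal power-law couplings \(J_{ij}=J/|i-j|^{p}\) and arbitrary units with \(J=B^{z}=1\), whereas the experimental couplings only approximately follow a power law) exactly evolves a small chain under Eq. (1) after the quench and evaluates the net correlator of Eq. (2):

```python
import numpy as np
from scipy.linalg import expm

N, J, p, Bz = 8, 1.0, 0.89, 1.0  # small chain; illustrative parameters

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site):
    """Embed a single-site operator at `site` into the N-spin space."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, single if k == site else I2)
    return out

Sx = sum(op(sx, i) for i in range(N)) / 2              # total spin S_x
H = Bz * sum(op(sz, i) for i in range(N))              # transverse field
for i in range(N):                                     # Eq. (1) with gamma^x = 1
    for j in range(i + 1, N):
        H -= (J / abs(i - j) ** p) * (op(sx, i) @ op(sx, j))

psi0 = np.zeros(2 ** N, dtype=complex)
psi0[-1] = 1.0  # |down...down>_z, since sz|down> = -|down> in this basis

for t in np.linspace(0.0, 3.0, 7):
    psi = expm(-1j * H * t) @ psi0                     # quench evolution
    ex = np.vdot(psi, Sx @ psi).real
    ex2 = np.vdot(psi, (Sx @ Sx) @ psi).real
    print(f"t = {t:4.1f}   <C_x^2>/N^2 = {(ex2 - ex * ex) / N ** 2:.4f}")
```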
We characterize the dynamics through the net correlator
Figure 1: **a.** Ground-state phase diagram of a long-range transverse-field Ising model with Ising interaction only along the x direction. \(\mathcal{J}\gamma^{x}/B^{z}\) is the ratio of the Kac-normalized effective interaction strength \(\mathcal{J}\gamma^{x}\) to the transverse field strength \(B^{z}\) [see main text]. The ordered and disordered phases are shown in blue and purple colors, respectively. The arrows indicate the quenches into different phases where the solid arrow indicates a quench to the critical point. **b.** Ground-state phase diagram with Ising interaction along x or y direction. The phase boundary is shown in gray dashed lines, with red and green circles indicating the critical points where the quenches are performed. The colored arrows indicate the sequence of quenches starting in the disordered phase. **c.** The experimental sequence starting with all spins initialized along \(\left|\downarrow\right\rangle_{z}\). The first quench is applied with interactions along the x direction, and the evolution is measured by projecting the spins along x. For the second quench, both the interaction and measurement bases are switched from \(x\) to \(y\) direction. In the double-quench experiment, the second quench is applied after evolving under the first quench, but no measurement is performed before the second quench. The curved lines illustrate the long-range interaction among all the spins where the opacity reflects interaction strengths that weaken with distance.
since the Ising symmetry of the Hamiltonian together with the initial magnetization being zero implies that the average magnetization along the \(x\) direction remains zero at all times. The definition of \(\left\langle C_{x}^{2}\right\rangle\) further removes any bias of the average magnetization due to imperfect single-qubit rotations in the experiment. In Fig. 2a, we show the post-quench evolution of \(\left\langle C_{x}^{2}\right\rangle/N^{2}\) with 10 ions, where each quench is performed with a different value of \(B^{z}\) while keeping \(\mathcal{J}\) constant. Overall, we observe, both numerically and experimentally, that the net correlator increases in amplitude and exhibits slower dynamics as \(B^{z}\) is swept from larger to smaller values. This behaviour hints at a continuous dynamical phase transition [6].
Equilibrium phase transitions are commonly identified by defining an order parameter observable, which acquires a nonzero value as the system transitions from the disordered to the ordered phase. However, in the context of non-equilibrium phase transitions, defining an order parameter can be ambiguous and different definitions have been proposed [5; 9; 35]. Analogous to equilibrium phases, one may consider the in-plane magnetization to identify a symmetry-breaking phase transition. Using a mean-field analysis to compute the long-time average of the magnetization, we can identify \(B_{c}^{z}/\mathcal{J}=1\) as the dynamical critical point of the disorder-to-order phase transition, which coincides with the ground-state critical point (SI Secs. I & III). While magnetization remains zero for our chosen initial state, we instead consider the maximum net correlator, \(\mathcal{M}^{2}=\max_{t}\left[\left\langle C_{x}^{2}\right\rangle/N^{2}\right]\), as a proxy for the order parameter; the maximum is chosen to find a large signal in spite of decoherence. In Fig. 2b, we show \(\mathcal{M}^{2}\) as a function of the scaled magnetic field strength \(B^{z}/\mathcal{J}\). While there is no sharp transition for finite system sizes (\(N=10,15,20\)), the order parameter clearly shows an inflection point around \(B^{z}/\mathcal{J}\sim 1\) and a peak at small \(B^{z}\), indicating the onset of ordering. Notably, the observed order parameter qualitatively follows the mean-field prediction in the ordered phase, \(\mathcal{M}^{2}\propto(B^{z}/\mathcal{J})(1-B^{z}/\mathcal{J})\); see dashed line in Fig. 2b. Moreover, one can even capture the finite-size corrections by considering fluctuations at finite system sizes. The solid lines in Fig. 2b depict the function describing the finite-size corrected order parameter, which has the critical point and an overall scale as fit parameters. The inferred critical values are well in agreement with the mean-field prediction (SI Secs. II.A & II.C). Having identified the dynamical critical point, the immediate questions are: What is the nature of the critical behaviour at the phase transition, and does it genuinely go beyond the equilibrium paradigm?
As a first step toward answering these questions, we experimentally scale up the single quench experiment to system sizes up to \(N=50\) ions and observe the net correlator which, at the critical point, characterizes critical fluctuations. As we calibrate the quench Hamiltonian parameters to be at the (mean-field) critical point for all the system sizes, within our experimental uncertainty, we see that the fluctuations grow and evolve more slowly with increasing system size, indicating an emergent universal critical behaviour (Fig. 3a). We numerically model the quench dynamics with experimental parameters for system sizes up to \(N=25\) and verify similar behaviour in Fig. 3b. Such critical behaviour leads to scaling relations which are independent of microscopic length/time scales [36]. Using scaling analysis, we find the net correlator
Figure 2: **a, Net correlator dynamics with 10 ions:** Here we compare the evolution of the experimental (dots) net correlator (\(\left\langle C_{x}^{2}\right\rangle/N^{2}\)) after a single quench with numerical results (solid lines). The latter were obtained by exactly diagonalizing the Hamiltonian in Eq. (1) with experimental parameters. Different colors represent the evolution at different values of \(B^{z}/\mathcal{J}\). The net correlator increases in amplitude and evolves more slowly as \(B^{z}\) is swept from larger to smaller values, except near \(B^{z}/\mathcal{J}\to 0\) where there are no correlations. **b, Phase transition from order parameter:** We report scaled maximum net correlator \(\mathcal{M}^{2}=\max\left[\left\langle C_{x}^{2}\right\rangle/N^{2}\right]\) as a function of \(B^{z}/\mathcal{J}\) for system sizes \(N=10,15,20\). The solid lines are obtained by fitting the experimental data to the finite size corrected order parameter (Eq. (28) of SI), which has the critical point as a fit parameter. The extracted values are \(0.83\,(19),0.88\,(6),1.01\,(9)\) respectively for \(N=10,15,20\); the difference from the predicted critical point \(B^{z}/\mathcal{J}=1\) is due to finite-size effects and experimental imperfections. For simplicity, we use the predicted critical value for studies in Figs. 3 and 4. We further verify the location of the critical point by simulating the (infinite-range) LMG model with \(N=10^{3}\) (dashed line) as a proxy for the mean-field solution. The error bars are statistical fluctuations around the mean value.
satisfies the functional form given by [11]
\[\left\langle C_{x}^{2}\right\rangle=N^{1+\alpha_{1}}f\left(\frac{\mathcal{J}t_{1} }{N^{\zeta_{1}}}\right), \tag{3}\]
where the exponent \(\alpha_{1}\) characterizes the amplitude scaling of fluctuations with system size and \(\zeta_{1}\) describes the dynamical scaling. We verify that the scaling relation and the exponents (\(\alpha_{1}=0.42\,(14)\), \(\zeta_{1}=0.19\,(8)\)) are consistent with the results of the exact simulation (see Fig. 3c,d). The procedure to determine exponents that yield the best collapse of the data is detailed in the SI Sec. VII. Remarkably, the above exponents are also consistent with those at the thermal phase transition of the LMG model [11]\(\alpha_{1}=1/2,\ \zeta_{1}=1/4\) (SI Sec. II.2). Additionally we fit the maximum amplitude of fluctuations against \(N^{\alpha_{1}}\) to obtain the exponent \(\alpha_{1}=0.50\,(4)\) (see the inset of Fig. 3c) which is in excellent agreement with that of thermal equilibrium. Indeed, it is expected that the latter procedure leads to a more precise exponent \(\alpha_{1}\) since the dynamical features are more susceptible to decoherence.
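As an illustration of how such a collapse can be carried out in practice, the sketch below (an added example; the paper's actual procedure, including its weighting, is detailed in SI Sec. VII, and the unweighted score here is only illustrative) scans the two exponents, rescales each curve according to Eq. (3), and scores the overlap, recovering the exponents from synthetic data generated with \(\alpha=1/2\), \(\zeta=1/4\):

```python
import numpy as np

def collapse_score(data, alpha, zeta):
    """Mean spread between rescaled curves; small means good collapse."""
    rescaled = {N: (t / N ** zeta, c / N ** (1 + alpha))
                for N, (t, c) in data.items()}
    lo = max(x[0].min() for x in rescaled.values())
    hi = min(x[0].max() for x in rescaled.values())
    grid = np.linspace(lo, hi, 50)  # common rescaled-time grid
    curves = np.array([np.interp(grid, x, y) for x, y in rescaled.values()])
    return np.mean(np.var(curves, axis=0))

def best_exponents(data):
    alphas = np.linspace(0.0, 1.0, 41)
    zetas = np.linspace(0.0, 0.5, 41)
    scores = [(collapse_score(data, a, z), a, z)
              for a in alphas for z in zetas]
    _, a, z = min(scores)
    return a, z

# Synthetic data obeying Eq. (3) with alpha = 0.5, zeta = 0.25:
rng = np.random.default_rng(0)
data = {}
for N in (10, 20, 30, 50):
    t = np.linspace(0.0, 5.0, 200)
    c = N ** 1.5 * np.sin(t / N ** 0.25) ** 2 + rng.normal(0, 0.01, t.size)
    data[N] = (t, c)
print("recovered (alpha, zeta):", best_exponents(data))
```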
The emergence of the thermal critical exponents does not mean that the system has thermalized. In fact, long-range interacting systems often exhibit prethermalization for a long window in time [35; 37; 26]. Instead, this behaviour is due to the effective thermalization of a _soft mode_, which becomes gapless at the phase transition. This can be understood through a Holstein-Primakoff transformation that maps spins to bosonic variables, \(\sigma_{i}^{x}\rightarrow\frac{1}{\sqrt{N}}(a_{i}+a_{i}^{\dagger})\), a mapping that is valid near a fully polarized state along the \(z\) direction. The lowest energy excitation of the system
Figure 3: **Unscaled (a, b) and scaled (c, d) fluctuations after a single quench.****a,** We report experimental critical fluctuations with system sizes up to \(N=50\) ions. **b,** Numerical simulation of critical fluctuations with the experimental Hamiltonian with up to \(N=25\) ions. We obtain the critical scaling exponents (\(\alpha_{1},\zeta_{1}\)) by optimizing the weighted Euclidean distance between each of the curves to get the best collapse for the experimental **c,** and simulation **d,** data separately [see main text for details]. We observe remarkable similarity between the exponents found in the experiment and simulations, highlighting the universality of the exponents despite experimental imperfections as well as finite-size effects. We also confirm the scaling exponents by fitting the maximum values of the fluctuation to a power-law fit to \(N^{\alpha_{1}}\) (**Inset c**). Although this method does not capture the full evolution, we get excellent agreement of exponents for both the simulation and the experiment. Note that the fluctuations in the experiment are reduced due to decoherence and imperfect detection fidelity; however, as we can see from the scaled data, the errors are within acceptable range even up to 50 ions. The error bars of the experimental data are statistical fluctuations around the mean value.
corresponds to a collective excitation of a bosonic mode, characterized by the operator \(a\!\equiv\!\sum_{i}a_{i}\), which becomes gapless (_softens_) at the phase transition. The total spin can then be described as \(S_{x}\!\to\!\sqrt{N}x\), where \(x\!\sim\!a\!+\!a^{\dagger}\) may be interpreted as the position operator of a harmonic oscillator with a characteristic frequency \(\Omega\) which vanishes at the phase transition. Applying the equipartition theorem, \(\lim_{t\to\infty}\Omega^{2}\langle x^{2}\rangle_{t}\sim T_{\rm eff}\) at long times, we find that the gapless mode is described by a finite effective temperature (SI Secs. II.A & III.A). In fact, identifying \(\langle x^{2}\rangle\!\sim\!N^{\alpha}\) and \(\Omega\!\sim\!N^{-\zeta}\), the equipartition theorem reveals that the effective temperature obeys the scaling relation \(T_{\rm eff}\!\sim\!N^{\alpha-2\zeta}\). Now, with \(\alpha\to\alpha_{1}=1/2\) and \(\zeta\to\zeta_{1}=1/4\), the effective temperature becomes a constant independent of system size, consistent with thermal equilibrium behaviour. We remark that the soft mode governs not only the behaviour of the infinite-range LMG model but also the power-law decaying experimental interaction matrix, only with a different identification of the soft mode (SI Secs. III & IV).
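Spelling out this exponent arithmetic explicitly (an added aside; the double-quench values \(\alpha_{2}=3/4\), \(\zeta_{2}=1/8\) are the analytical exponents quoted further below):

```latex
% Effective-temperature scaling from the equipartition identifications
% <x^2> ~ N^alpha and Omega ~ N^{-zeta}:
\[
  T_{\mathrm{eff}}\sim\Omega^{2}\langle x^{2}\rangle
    \sim N^{-2\zeta}\cdot N^{\alpha}=N^{\alpha-2\zeta}.
\]
% Single quench (thermal): alpha_1 = 1/2, zeta_1 = 1/4 gives
\[
  T_{\mathrm{eff}}\sim N^{1/2-2\cdot 1/4}=N^{0},
\]
% i.e. a size-independent temperature, whereas the double-quench exponents
% alpha_2 = 3/4, zeta_2 = 1/8 give
\[
  T_{\mathrm{eff}}\sim N^{3/4-2\cdot 1/8}=N^{1/2},
\]
% a temperature that grows with system size -- the non-equilibrium signature.
```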
To break away from the effective thermalization, it has been proposed [11] that preparing an initial state at the critical point and performing a quench to a different critical point can lead to non-equilibrium dynamics, where the scaling exponents differ from those at a thermal or quantum critical point. However, realizing such a scheme in experiment can be challenging, as it requires high-fidelity adiabatic preparation of the non-trivial critical state prior to the quench. In this work, we instead modify this scheme by applying a sequence of critical quenches to explore truly non-equilibrium phenomena.
Experimentally, we first perform a single quench (\(\gamma^{x}\!=\!1,\gamma^{y}\!=\!0\)) to a critical point and evolve until the fluctuations reach their first maxima. Then we switch the interaction from the \(x\) to the \(y\)-direction (Fig. 1c) to apply a second quench i.e. we make \(\gamma^{x}\!=\!0,\gamma^{y}\!=\!1\) (Methods). The intermediate evolution after the single quench brings the system to a critical state when the second quench is applied. Upon the second quench, the dominant fluctuations form along the \(y\)-direction and in Fig. 4a, we show the unscaled fluctuations (\(\langle C_{y}^{2}\rangle\,/N\)) for system sizes up to 50 ions. These fluctuations also obey the scaling relation in Eq. (3), but with the replacement \(C_{x}^{2}\to C_{y}^{2}\), \(t_{1}\to t_{2}\) (time after the latest quench) and with the distinct exponents \(\alpha_{2}\) and \(\zeta_{2}\). We find the optimal collapse for \(\alpha_{2}\!=\!0.63\,(33)\) and \(\zeta_{2}\!=\!0.10\,(17)\) (Fig. 4c). We verify that exact numerical simulation results in very similar critical exponents (Fig. 4b,d). These are also in good agreement with the analytical exponents \(\alpha_{2}\!=\!3/4\) and \(\zeta_{2}\!=\!1/8\) (SI Sec. II.D). Finally, we remark that the effective temperature now scales as \(T_{\rm eff}\!\sim\!N^{\alpha_{2}-2\zeta_{2}}\), which shows a nontrivial scaling with system size, underscoring a dramatic departure from equilibrium critical behaviour.
Experimental decoherence causes the observed fluctuations to be damped for both single and double quenches. We see that the unscaled 50 ion fluctuations after the double quench are significantly damped (Fig. 4a). The major sources of decoherence, which scale with the system size, remain within acceptable thresholds for system sizes \(N<50\), but these errors start to dominate for \(N\geq 50\) (Methods). This effect is more adverse for the double-quench sequence than the single-quench, since the former involves longer evolution under two quenches. For completeness, we have included all the 50 ion data in Fig. 4a,c, but excluded it when determining the best collapse exponent. Fitting the maximum amplitudes of the fluctuations to \(N^{\alpha_{2}}\) yields the exponent \(\alpha_{2}\!=\!0.69\,(9)\), with tighter error bounds (Inset Fig. 4c). Errors in identifying the peak fluctuation result in an erroneous switch time between the two quenches, contributing to imperfect exponents. This effect can be reproduced in the simulation with exact experimental parameters, and correction for such errors in further simulations results in exponents that are well in agreement with the analytically predicted non-equilibrium values (SI Sec. V.D).
In this work, we have demonstrated the ability to identify, both numerically and experimentally, the dynamical critical point of a disorder-to-order phase transition in a 1D transverse-field Ising model. We have observed the non-equilibrium critical behaviour upon single and double quenches with up to 50 ions and extracted the universal scaling exponents. Demonstrating the universal scaling behaviour highlights the self-verification ability of quantum simulators in a regime that is difficult to simulate on classical computers. While the decay of the experimental spin-spin interactions deviates from the exact power-law models with \(p<1\) (see SI Sec. V.B), the resulting critical behaviour follows the latter models closely, a feature that is also reflected in the spin-spin correlation function (see SI Sec. V.3). Furthermore, we theoretically predict that the observed double-quench critical scaling is only the first in an infinite hierarchy of universal critical behaviours that emerge in a sequence of multiple quenches (SI Sec. II.D), an exciting direction to investigate in the future.
## References
* [1] Landau L.D. and Lifshitz E.M. _Statistical Physics. Part 1_. Pergamon Press, Oxford, 1 edition, (1958).
* [2] Sachdev S. _Quantum Phase Transitions_. Cambridge University Press, 2 edition, (2011).
* [3] Goldenfeld N. _Lectures On Phase Transitions And The Renormalization Group_. CRC Press, 1 edition, (1992).
* [4] Heyl M. Dynamical quantum phase transitions: A brief survey. _EPL_, **125**(2):26601, (2019).
* [5] Zhang J. et al. Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator. _Nature_, **551**(7682):601-604, (2017).
* [6] Marino J., Eckstein M., Foster M. M., and Rey A. M. Dynamical phase transitions in the collisionless pre-thermal states of isolated quantum systems: theory and experiments. _Reports on Progress in Physics_, (2022).
* [7] Heyl M., Polkovnikov A., and Kehrein S. Dynamical Quantum Phase Transitions in the Transverse-Field Ising Model. _Phys. Rev. Lett._, **110**(13):135704, (2013).
* [8] Li B. W. et al. Probing critical behavior of long-range transverse-field Ising model through quantum Kibble-Zurek mechanism. _arXiv:2208.03060_, (2022).
* [9] Jurcevic P. et al. Direct observation of dynamical quantum phase transitions in an interacting many-body system. _Phys. Rev. Lett._, **119**(8):080501, (2017).
* [10] Calabrese P., Essler F. H. L., and Fagotti M. Quantum Quench in the Transverse-Field Ising Chain. _Phys. Rev. Lett._, **106**(22):227203, (2011).
* [11] Titum P. and Maghrebi M. F. Nonequilibrium criticality in quench dynamics of long-range spin models. _Phys. Rev. Lett._, **125**(4):040602, (2020).
* [12] Haken H. Cooperative phenomena in systems far from thermal equilibrium and in nonphysical systems. _Rev. Mod. Phys._, **47**(1):67-121, (1975).
* [13] Li H., Jin Y., Jiang Y., and Chen J.Z.Y. Determining the nonequilibrium criticality of a Gardner transition via a hybrid study of molecular simulations and machine learning. _Proceedings of the National Academy of Sciences_, **118**(11):e2017392118, (2021).
* [14] Zurek W.H. Cosmological experiments in condensed matter systems. _Physics Reports_, **276**(4):177-221, (1996).
* [15] Monroe C., Campbell W. C., Duan L.-M., Gong Z.-X., Gorshkov A. V., Hess P. W., Islam R., Kim K., Linke N. M., Pagano G., Richerme P., Senko C., and Yao N. Y. Programmable quantum simulations of spin systems with trapped ions. _Rev. Mod. Phys._, **93**(2):025001, (2021).
Figure 4: **Unscaled (a, b) and scaled (c, d) fluctuations after a double quench.****a,** We plot the unscaled fluctuations along \(y\) direction at the predicted critical points for system sizes of \(N=10\)–\(50\) ions. **b,** Numerical simulation of the double quench dynamics under the same experimental Hamiltonian for system sizes of \(N=10\)–\(25\) ions. The second quench is applied when the fluctuations following the first quench reach their maxima and the time \(t_{2}\) is counted after the second quench. We apply the same scaling collapse technique as for the single quench to find the best scaling exponents \((\alpha_{2},\zeta_{2})\) for the experimental \(\mathbf{c}\), and simulation **d,** data separately. We observe that the critical fluctuations do not monotonically grow for increasing system sizes, as would be expected from the scaling relations. This effect can be attributed to the imperfect switching time between the first and second quench; a nearly perfect collapse can be reproduced numerically using the precise switch times (SI Sec. V.C). We also report exponents found by a power-law fit of the maximum fluctuations which agree more closely with the analytical prediction (**Inset c.**). While determining the experimental critical exponents, we have excluded \(50\) ion data (gray) [see main text for details]. The error bars of the experimental data are statistical fluctuation around the mean value.
* [16] Blatt R. and Roos C. F. Quantum simulations with trapped ions. _Nature Physics_, **8**(4):277-284, (2012).
* [17] Schreiber M., Hodgman S. S., Bordia P., Luschen H. P., M. H. Fischer, Vosk R., Altman E., Schneider U., and Bloch I. Observation of many-body localization of interacting fermions in a quasirandom optical lattice. _Science_, **349**(6250):842-845, (2015).
* [18] Bernien H., Schwartz S., Keesling A., Levine H., Omran A., Pichler H., Choi S., Zibrov A. S., Endres M., Greiner M., Vuletic V., and Lukin M. D. Probing many-body dynamics on a 51-atom quantum simulator. _Nature_, **551**(7682):579-584, (2017).
* [19] Choi S., Choi J., Landig R., Kucsko G., Zhou H., Isoya J., Jelezko F., Onoda S., Sumiya H., Khemani V., Keyserlingk C. V., Yao N. Y., Demler E., and Lukin M. D. Observation of discrete time-crystalline order in a disordered dipolar many-body system. _Nature_, **543**(7644):221-225, (2017).
* [20] Wallraff A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. _Nature_, **431**(7005):162-167, (2004).
* [21] Xu K. et al. Probing dynamical phase transitions with a superconducting quantum simulator. _Science Advances_, **6**(25):eaba4935, (2020).
* [22] Chang D. E., Douglas J. S., Gonzalez-Tudela A., Hung C.-L, and Kimble J. H. Colloquium: Quantum matter built from nanoscopic lattices of atoms and photons. _Rev. Mod. Phys._, **90**(3):031002, (2018).
* [23] Ye J., Kimble J. H., and Katori H. Quantum state engineering and precision metrology using state-insensitive light traps. _Science_, **320**(5884):1734-1738, (2008).
* [24] Raimond J. M., Brune M., and Haroche S. Manipulating quantum entanglement with atoms and photons in a cavity. _Rev. Mod. Phys._, **73**(3):565-582, (2001).
* [25] Gring M. et al. Relaxation and prethermalization in an isolated quantum system. _Science_, **337**(6100):1318-1322, (2012).
* [26] Neyenhuis B. et al. Observation of prethermalization in long-range interacting spin chains. _Science Advances_, **3**(8):e1700672, (2017).
* [27] Smith J. et al. Many-body localization in a quantum simulator with programmable random disorder. _Nature Physics_, **12**(10):907-911, (2016).
* [28] Choi J. Y. et al. Exploring the many-body localization transition in two dimensions. _Science_, **352**(6293):1547-1552, (2016).
* [29] Zhang J. et al. Observation of a discrete time crystal. _Nature_, **543**(7644):217-220, (2017).
* [30] Eigen C., Glidden J. A. P., Lopes R., Cornell E. A., Smith R. P., and Hadzibabic Z. Universal prethermal dynamics of Bose gases quenched to unitarity. _Nature_, **563**(7730):221-224, (2018).
* [31] Erne S., Buecker R., Gasenzer T., Berges J., and Schmiedmayer J. Observation of universal dynamics in an isolated one-dimensional Bose gas far from equilibrium. _Nature_, **563**(7730):225-229, (2018).
* [32] Prufer M. et al. Observation of universal dynamics in a spinor Bose gas far from equilibrium. _Nature_, **563**(7730):217-220, (2018).
* [33] Kac M. and Thompson C. J. Critical behavior of several lattice models with long-range interaction. _Journal of Mathematical Physics_, **10**(8):1373-1386, (1969).
* [34] Olmschenk S., Younge K. C., Moehring D. L., Matsukevich D. N., Maunz P., and Monroe C. Manipulation and detection of a trapped Yb\({}^{+}\) hyperfine qubit. _Phys. Rev. A_, **76**(5):052314, (2007).
* [35] Zunkovic B., Heyl M., Knap M., and Silva A. Dynamical quantum phase transitions in spin chains with long-range interactions: Merging different concepts of nonequilibrium criticality. _Phys. Rev. Lett._, **120**(13):130601, (2018).
* [36] Cardy J. _Scaling and renormalization in statistical physics_, volume 5. Cambridge university press, (1996).
* [37] Halimeh J. C., Zauner-Stauber V., McCulloch I. P., de Vega I., Schollwock U., and Kastner M. Prethermalization and persistent order in the absence of a thermal phase transition. _Phys. Rev. B_, **95**(2):024302, (2017).
* [38] Deslauriers L., Haljan P. C., Lee P. J., Brickman K.-A., Blinov B. B., Madsen M. J., and Monroe C. Zero-point cooling and low heating of trapped \({}^{111}\)Cd\({}^{+}\) ions. _Phys. Rev. A_, **70**(4):043408, (2004).
* [39] Molmer K. and Sorensen A. Multiparticle entanglement of hot trapped ions. _Phys. Rev. Lett._, **82**(9):1835-1838, (1999).
* [40] Pagano G. et al. Quantum approximate optimization of the long-range Ising model with a trapped-ion quantum simulator. _PNAS_, **117**(41), (2020).
* [41] Tan W. L., Becker P., Liu F., Pagano G., Collins K.S., De A., Feng L., Kaplan H.B., Kyprianidis A., Lundgren R., et al. Domain-wall confinement and dynamics in a quantum simulator. _Nature Physics_, 17(6):742-747, 2021.
## I Methods
**State preparation and readout:** The quantum simulator used in this experiment is based on \({}^{171}\)Yb\({}^{+}\) ions trapped in all three directions in a 3-layer Paul trap [38] with transverse center of mass (COM) mode frequencies ranging from \(\nu_{COM}\)= (4.64 to 4.73) MHz and axial COM mode frequencies ranging from \(\nu_{x}\)=(0.23 to 0.53) MHz depending on the system size (N=10-50), with the axial frequency being lowered to accommodate more ions in a linear chain. Before each experimental cycle, the ions are Doppler cooled in all three directions by a 369.5 nm laser beam, 10 MHz red-detuned from the \({}^{2}S_{1/2}\) to \({}^{2}P_{1/2}\) transition. We use the same laser to optically pump all the ions to initialize them in the low-energy hyperfine qubit state, \(\ket{\downarrow_{z}}\equiv{}^{2}S_{1/2}\ket{F=0,m_{F}=0}\), with \(>99\,\%\) fidelity. In addition to Doppler cooling, we apply the resolved sideband cooling method to bring the ions to their motional ground state with \(>90\,\%\) fidelity. After the Hamiltonian evolution, we apply global \(\pi/2\) rotations using composite BB1 pulses to rotate the spin projection along the \(x\) or \(y\) direction of the Bloch sphere onto the \(z\) direction. We then measure the magnetization of each spin using state-dependent fluorescence by applying a beam resonant with the \({}^{2}S_{1/2}\ket{F=1}\Longleftrightarrow\,^{2}P_{1/2}\ket{F=0}\) transition. The ions scatter photons if they are projected in the \(\ket{\uparrow_{z}}\) state, and appear bright, while in the \(\ket{\downarrow_{z}}\) state, the number of scattered photons is negligible and the ions appear dark. A finite-conjugate NA = 0.4 objective lens system (total magnification of 70\(\times\)) collects scattered 369.5 nm photons and images them onto an Andor iXon Ultra 897 EMCCD camera, which allows us to perform site-resolved magnetization and correlation measurements with an average fidelity of 97 %. No state preparation and measurement (SPAM) correction has been applied to the data presented in this work. More details of this experimental apparatus can be found in our previous works [29; 15; 42].
**Generating XX and YY type Ising interaction:** The global spin-spin interaction in the trapped-ion system under consideration is generated by applying a spin-dependent force via non-copropagating 355 nm pulsed laser beams that uniformly illuminate the ion chain. The pair of beams have a relative wavevector difference along the transverse motional direction of the ions. These beams are controlled by acousto-optic modulators which impart beatnote frequencies at \(\nu_{COM}\pm\mu\), and phases (\(\phi_{b}\), \(\phi_{r}\)), respectively, where \(\mu\) is the symmetric detuning from the COM mode (\(\approx 56\) kHz). These two tones respectively drive the blue (BSB) and red (RSB) sideband transitions, which, following the Molmer-Sorensen (MS) protocol [39], generate an effective Hamiltonian
\[H=\sum_{i=1}^{N}\sum_{m=1}^{N}\frac{\eta_{i,m}\Omega_{i}}{2}\left[a_{m}e^{i\delta_{m}t}e^{i\phi_{M}}+a_{m}^{\dagger}e^{-i\delta_{m}t}e^{-i\phi_{M}}\right]\sigma_{i}^{\phi_{s}}, \tag{4}\]
where \(\eta_{i,m}\) is the Lamb-Dicke parameter for ion \(i\) and mode \(m\), \(\Omega_{i}\) is the Rabi frequency at ion \(i\), \(a_{m}^{\dagger},a_{m}\) are the creation and annihilation operators of motional quanta for the \(m\)th motional mode, and \(\delta_{m}=\mu-\nu_{m}\) is the MS detuning from the \(m\)th motional mode frequency \(\nu_{m}\). \(\sigma_{i}^{\phi_{s}}=\cos\phi_{s}\sigma_{i}^{x}+\sin\phi_{s}\sigma_{i}^{y}\), where the spin-phase is \(\phi_{s}=\frac{\phi_{b}+\phi_{r}+\pi}{2}\) and the motional-phase is \(\phi_{M}=\frac{\phi_{b}-\phi_{r}}{2}\) for the phase-sensitive realization of the MS scheme [44]. The unitary time evolution operator under this Hamiltonian (\(U(t)\sim e^{-iHt}\)) can be found by taking the Magnus expansion, which after appropriate approximations leads to an effective Hamiltonian [15]
\[H=\sum_{i,j}J_{ij}\sigma_{i}^{\phi_{s}}\sigma_{j}^{\phi_{s}}. \tag{5}\]
In the far detuned limit (\(\delta_{m}\gg\eta\Omega\)), where the virtual couplings to the phonon modes are sufficiently suppressed, the analytical form of the Ising coupling between ions \(i\) and \(j\) is given by [15]
\[J_{ij}=\Omega^{2}\nu_{R}\sum_{m}\frac{b_{im}b_{jm}}{\mu^{2}-\nu_{m}^{2}}\approx \frac{J}{|i-j|^{p}}, \tag{6}\]
where \(\nu_{R}=h\delta k^{2}/(8\pi^{2}M)\) is the recoil frequency, and \(b_{im}\) is the eigenvector matrix element of the \(i\)-th ion's participation in the \(m\)-th motional mode (\(\sum_{i}\lvert b_{im}\rvert^{2}=\sum_{m}\lvert b_{im}\rvert^{2}=1\)), \(M\) is the mass of the single ion. \(J\) is the effective interaction strength obtained by a power-law fit of the interaction matrix elements and \(J/2\pi\) ranges within (0.25 to 0.4) kHz in the experiment for different system sizes. If we set \(\phi_{r}=0,\phi_{b}=\pi\), then \(\phi_{M}=\pi/2\) and \(\phi_{s}=\pi\), which makes the Hamiltonian of Eq. (5) an effective \(\sigma^{x}\sigma^{x}\) interaction. We can change this phase by changing the input waveform to the acousto-optic-modulator. Similarly, we set \(\phi_{r}=0,\phi_{b}=0\) to obtain an effective \(\sigma^{y}\sigma^{y}\) interaction. In the double quench experiment, we switch these waveform phases to switch between interactions along different Bloch sphere directions.
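As a numerical illustration of Eq. (6), the sketch below (an added example in arbitrary units; for simplicity it assumes a uniformly spaced chain, whereas real chains have non-uniform spacing, and the prefactor \(\Omega^{2}\nu_{R}\) is dropped since it only sets the overall scale \(J\)) builds the transverse normal modes, evaluates the resulting \(J_{ij}\) for a beatnote detuned just above the COM mode, and fits the approximate power-law exponent \(p\):

```python
import numpy as np

N = 12
wx = 5.0                      # transverse COM frequency (arbitrary units)
K = np.zeros((N, N))          # transverse Hessian of the idealized chain
for i in range(N):
    for j in range(N):
        if i != j:
            K[i, j] = 1.0 / abs(i - j) ** 3   # Coulomb coupling ~ 1/d^3
    K[i, i] = wx ** 2 - K[i].sum()
w2, b = np.linalg.eigh(K)     # mode frequencies nu_m = sqrt(w2), vectors b[:, m]
nu = np.sqrt(w2)

mu = nu.max() + 0.1           # beatnote detuned just above the COM mode
Jmat = np.zeros((N, N))       # Eq. (6) without the Omega^2 * nu_R prefactor
for m in range(N):
    Jmat += np.outer(b[:, m], b[:, m]) / (mu ** 2 - nu[m] ** 2)

d = np.arange(1, N)           # fit J(r) ~ J / r^p over inter-ion distances
Jr = np.array([np.mean([Jmat[i, i + r] for i in range(N - r)]) for r in d])
slope, _ = np.polyfit(np.log(d), np.log(Jr), 1)
print(f"fitted exponent p = {-slope:.2f}")
```

Detuning closer to (or further from) the COM mode shifts the fitted exponent toward \(p\to 0\) (or \(p\to 3\)), mirroring the tunability discussed in the following paragraphs.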
We further apply a common offset of \(2B^{z}\) to the frequencies of BSB and RSB tones which in the rotating frame of the qubit, results in an effective transverse field term \(B^{z}\sum_{i}^{N}\sigma_{i}^{z}\) in the Hamiltonian of Eq. (5) [15]. The magnetic field strength \(B^{z}\) is chosen such that \(B^{z}\ll\delta_{m}\) for the rotating frame approximation to be valid.
The approximate power law exponent can be theoretically tuned within the range \(0<p<3\). However, in this experiment, we kept the exponent \(\approx 0.89\) for all the system sizes by tuning the axial trap frequency (\(\nu_{x}\)) and motional detuning (\(\mu\)). We note that the experimental interaction matrix deviates from a pure power-law decay to an exponential decay at large distances, especially for large system sizes (SI Sec VI). In principle, one can tune this exponent by changing only the detuning (\(\mu\)) (see Eq. (6)), but changing the axial trap frequency (\(\nu_{x}\)) for different system sizes results in more self-consistent scaling of the exact spin-spin coupling matrix [40].
**Experimental error sources:** One of the main challenges in scaling up the system size is to maintain the fidelity of the quantum simulation experiments. The trapped-ion platform suffers from various sources of decoherence, such as stray magnetic and electric fields, mode frequency drifts, off-resonant motional excitation, spontaneous emission, and additional spin-motion coupling, which cause the evolution to depart from the ideal simulation [15]. One such important source, which becomes significant in the large system size limit, is the off-resonant excitation of the motional modes causing residual spin-motion entanglement [40; 41]. In order to trap longer linear chains while maintaining the same interaction profile, we need to operate at a lower axial confinement, which can become as low as \(\sim 200\) Hz for \(N=40-50\). At such low axial confinement, the trapped ions are more susceptible to electric field noise and background collisions [46]. The conventional laser cooling methods start to become inefficient in cooling the ions to their motional ground states, and as a result, errors due to phonon evolution get introduced into the Hamiltonian evolution. To the lowest order, such an error can be modelled as an effective bit flip error during measurement [40]. Additional cooling methods such as EIT (electromagnetically induced transparency) cooling [47] and sympathetic cooling [46] would be useful in mitigating the effects of such errors.
Another source of bit-flip error is imperfect detection. Off-resonant pumping from the detection beam limits our detection fidelity to about 98 %. When a large number of ions are trapped in a linear chain, ions near the center of the chain are closer together than the ones at the edges. A random bit-flip error can be introduced due to leakage of light from neighbouring ions, which might cause a dark ion to appear bright and vice versa. In the Extended Data Fig. 5 we show that a bit flip error can qualitatively explain part of the damping that we observe in the experimental net correlator. More details about various noise sources in this apparatus can be found in previous works [40; 42; 48].
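The damping effect of such bit flips is easy to see in a toy model (added here for illustration; the flip probability `eps` and the perfectly correlated input shots are made-up assumptions, not experimental values): flipping each detected spin independently with probability \(\epsilon\) systematically shrinks the measured net correlator.

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.02        # assumed per-ion bit-flip probability (illustrative)
N, runs = 10, 4000

# Ideal, perfectly correlated shots: every spin up or every spin down per
# run, which maximizes the net correlator at <C_x^2> = N^2 / 4.
signs = rng.choice([-1, 1], size=runs)
shots = np.repeat(signs[:, None], N, axis=1)

def net_correlator(shots):
    Sx = shots.sum(axis=1) / 2.0
    return np.mean(Sx ** 2) - np.mean(Sx) ** 2

# Flip each detected spin independently with probability eps.
flips = rng.random(shots.shape) < eps
noisy = np.where(flips, -shots, shots)
print("ideal :", net_correlator(shots))
print("noisy :", net_correlator(noisy))
```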
**Jackknife error estimation:** In the experiments reported in this work, we repeat the experiment and measurement sequence 400 times to reduce the quantum projection noise. To estimate the standard errors of the two-body correlators, we have implemented a Jackknife resampling technique[49]. In this method, we construct a distribution of net correlators by randomly sampling 399 experimental runs, each time leaving out only one run. We then calculate the variance of the distribution which corresponds to the standard error of the net correlator.
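A minimal sketch of this resampling procedure follows (an added illustration; the `shots` array and its ±1 encoding are assumptions, and the "variance of the distribution" described above is implemented via the standard leave-one-out jackknife standard-error formula):

```python
import numpy as np

def net_correlator(shots):
    """<C_x^2> = <S_x^2> - <S_x>^2 estimated from single-shot outcomes."""
    Sx = shots.sum(axis=1) / 2.0          # total spin for each run
    return np.mean(Sx ** 2) - np.mean(Sx) ** 2

def jackknife_se(shots):
    n = shots.shape[0]
    # Leave out one run at a time, re-estimating the correlator each time.
    thetas = np.array([net_correlator(np.delete(shots, k, axis=0))
                       for k in range(n)])
    # Standard jackknife standard-error formula.
    return np.sqrt((n - 1) / n * np.sum((thetas - thetas.mean()) ** 2))

rng = np.random.default_rng(1)
shots = rng.choice([-1, 1], size=(400, 10))   # fake +-1 data for illustration
print(net_correlator(shots), "+/-", jackknife_se(shots))
```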
|
2309.16599 | Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot
Translation | Zero-shot translation (ZST), which is generally based on a multilingual
neural machine translation model, aims to translate between unseen language
pairs in training data. The common practice to guide the zero-shot language
mapping during inference is to deliberately insert the source and target
language IDs, e.g., <EN> for English and <DE> for German. Recent studies have
shown that language IDs sometimes fail to navigate the ZST task, making them
suffer from the off-target problem (non-target language words exist in the
generated translation) and, therefore, difficult to apply the current
multilingual translation model to a broad range of zero-shot language
scenarios. To understand when and why the navigation capabilities of language
IDs are weakened, we compare two extreme decoder input cases in the ZST
directions: Off-Target (OFF) and On-Target (ON) cases. By contrastively
visualizing the contextual word representations (CWRs) of these cases with
teacher forcing, we show that 1) the CWRs of different languages are
effectively distributed in separate regions when the sentence and ID are
matched (ON setting), and 2) if the sentence and ID are unmatched (OFF
setting), the CWRs of different languages are chaotically distributed. Our
analyses suggest that although they work well in ideal ON settings, language
IDs become fragile and lose their navigation ability when faced with off-target
tokens, which commonly exist during inference but are rare in training
scenarios. In response, we employ unlikelihood tuning on the negative (OFF)
samples to minimize their probability such that the language IDs can
discriminate between the on- and off-target tokens during training. Experiments
spanning 40 ZST directions show that our method reduces the off-target ratio by
-48.0% on average, leading to a +9.1 BLEU improvement with only an extra +0.3%
tuning cost. | Changtong Zan, Liang Ding, Li Shen, Yibin Lei, Yibing Zhan, Weifeng Liu, Dacheng Tao | 2023-09-28T17:02:36Z | http://arxiv.org/abs/2309.16599v1 | # Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation
###### Abstract
Zero-shot translation (ZST), which is generally based on a multilingual neural machine translation model, aims to translate between unseen language pairs in training data. The common practice to guide the zero-shot language mapping during inference is to deliberately insert the source and target language IDs, e.g., \(<\)EN\(>\) for English and \(<\)DE\(>\) for German. Recent studies have shown that language IDs sometimes fail to navigate the ZST task, making them suffer from the off-target problem (non-target language words exist in the generated translation) and, therefore, making it difficult to apply the current multilingual translation model to a broad range of zero-shot language scenarios. To understand when and why the navigation capabilities of language IDs are weakened, we compare two extreme decoder input cases in the ZST directions: _Off-Target_ (OFF) and _On-Target_ (ON) cases. By contrastively visualizing the contextual word representations (CWRs) of these cases with teacher forcing, we show that 1) the CWRs of different languages are effectively distributed in separate regions when the sentence and ID are matched (ON setting), and 2) if the sentence and ID are unmatched (OFF setting), the CWRs of different languages are chaotically distributed. Our analyses suggest that although they work well in ideal ON settings, language IDs become fragile and lose their navigation ability when faced with off-target tokens, which commonly exist during inference but are rare in training scenarios. In response, we employ unlikelihood tuning on the negative (OFF) samples to minimize their probability such that the language IDs can discriminate between the on- and off-target tokens during training. Experiments conducted on the IWSLT, OPUS-100 (v1.0), WMT-5, and TED benchmarks spanning 40 ZST directions show that our method reduces the **off-target ratio** by **-48.0%** on average, leading to a **+9.1 bilingual evaluation understudy (BLEU)** improvement with only an extra **+0.3% tuning cost** on WMT-5. To facilitate reproducibility, we will publicly release our code at github.com/zanchangtong/UNIONS.
Artificial Intelligence, Natural Language Processing, Zero-Shot Translation, Off-Target Problem, Negative Samples.
## 1 Introduction
Machine translation (MT) [1, 2, 3, 4, 5, 6, 7, 8, 9] has had a profound impact on people worldwide, revolutionizing various fields such as communication, writing, and travel, among others. However, the growth of machine translation has not been equally beneficial for all languages, particularly for low-resource languages such as Zulu, Yoruba, and Uzbek, which lack the necessary quantity of training data in each translation direction. As the number of languages increases, ensuring the availability of large-scale supervised data for training models becomes increasingly impractical due to the quadratic growth exhibited by the number of translation directions. Consequently, zero-shot translation (ZST) [10, 11, 12, 13, 14] has emerged as an intriguing solution, garnering attention from the research community. ZST leverages a unified multilingual neural machine translation (MNMT) model [15, 16, 17, 18, 19, 20, 21, 22] to translate between language pairs that are absent from the training dataset, as depicted in Figure 1.
Due to the transferability of knowledge between languages, an MNMT [12, 23, 24, 25, 26] model trained on multiple languages with a shared encoder-decoder framework possesses some degree of ZST ability, which is particularly useful for low-resource languages. However, the ZST task is still challenging since it requires one model to complete thousands of translation tasks with the guidance of the target language ID.
Recent studies [11, 12] have highlighted the primary obstacle to achieving satisfactory ZST performance: the _off-target problem_. This issue arises when the language ID fails to effectively guide the associated model, leading to the inclusion of off-target tokens in the translation process. In our experiments, this problem is prevalent, with an incidence rate as high as **99.5%**. One line of research attributes this issue to the _spurious correlations_[11, 27] between source languages and specific target languages. Such correlations arise due to the common practice of training models to translate all non-central languages into a single central language (typically English). Another perspective suggests that the problem lies in the _missing ingredient_[10, 28], which fails to map different languages into a universal representation space during training. This absence of proper mappings confuses the decoding process and affects lexical choices. However, most of these studies overlook the fact that each translated word is predicted based on the context words contained in the decoder input, which are derived from the same language during training but may differ during inference. This discrepancy stems from the exposure bias between teacher forcing-based training paradigms and autoregressive inference.
In this paper, we revisit the off-target problem by exploring the gap between MNMT training and ZST inference. It is well known that a translation model generates outputs sequentially, conditioned on context words, which are ground-truth words during training but previous model outputs during inference [29]. This gap inevitably leads the constructed MNMT model to predict ZST results conditioned on both correct (on-target) and erroneous (off-target) decodings during inference. For further analysis, we focus on the contextual word representations (CWRs) output by the decoder and consider two extreme cases according to whether the language and ID are matched: _On-Target_ (ON) and _Off-Target_ (OFF). Interestingly, the CWRs of these two cases behave differently in the supervised and ZST directions. 1) In the ON case, the CWRs are nicely distributed in separate regions based on the language ID, even in the ZST directions. 2) In the OFF case, different languages are still distributed in separate regions in supervised directions, but they are chaotically distributed in the ZST directions. In summary, the language ID loses its navigation ability only when the ID and decoder input are unmatched in ZST. Therefore, we argue that the _language ID has the potential to navigate arbitrary translation directions but becomes fragile when facing off-target decoder inputs due to the gap between training and inference_.
To address this issue, we propose an unlikelihood tuning method with _inexpensive-to-obtain_ negative (language- and ID-mismatched) samples by minimizing the probability of the negative samples. In this way, the language ID is encouraged to discriminate between the on/off-target tokens, whose capacity is hard to cultivate in the vanilla MNMT training procedure due to the lack of off-target samples but is required during ZST inference. To construct the negative samples, we replace the target language ID in each translation pair with an ID that is randomly sampled from all languages except the current source and target languages. We optimize the unlikelihood objective on these negative samples to minimize their probability and simultaneously optimize the MNMT objective on the (positive) translation samples to maintain the supervised translation ability. Additionally, as we cannot access the ZST samples during training, we select the model according to the degree of separation between the CWR distributions.
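A minimal PyTorch sketch of this combined objective follows (an added illustration: the `model` interface, the tensor shapes, and the absence of label smoothing are our assumptions, and the paper's exact formulation may differ in such details). The positive pairs keep the correct target language ID and receive the usual cross-entropy, while the negative copies, whose target ID has been swapped for a randomly sampled wrong language, receive the unlikelihood term \(-\log(1-p)\) on the reference tokens:

```python
import torch
import torch.nn.functional as F

def unions_style_loss(model, src, tgt_in, tgt_out, neg_tgt_in, pad_id):
    # Positive (likelihood) term: standard MNMT cross-entropy, where the
    # decoder input `tgt_in` starts with the correct target language ID.
    logits = model(src, tgt_in)                       # assumed shape (B, T, V)
    nll = F.cross_entropy(logits.transpose(1, 2), tgt_out,
                          ignore_index=pad_id)

    # Negative (unlikelihood) term: `neg_tgt_in` is the same decoder input
    # with the target language ID replaced by a random wrong one; we push
    # down the probability of the (now mismatched) reference tokens.
    neg_logits = model(src, neg_tgt_in)               # (B, T, V)
    logp = F.log_softmax(neg_logits, dim=-1)
    tok_logp = logp.gather(-1, tgt_out.unsqueeze(-1)).squeeze(-1)  # (B, T)
    mask = tgt_out.ne(pad_id).float()
    # -log(1 - p(y_t)) per reference token under the wrong language ID.
    p = tok_logp.exp().clamp(max=1 - 1e-6)
    ul = -(torch.log1p(-p) * mask).sum() / mask.sum()

    return nll + ul
```

In practice one would weight the two terms and construct `neg_tgt_in` by replacing the leading target-ID token with an ID sampled uniformly from all languages except the current source and target, as described above.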
Experimentally, continued training with our method upon the existing baseline models1 substantially reduces the off-target ratio by -16.7%, -79.2%, -60.0%, and -14.8% on the IWSLT-4, OPUS-100 (v1.0), WMT-5, and TED benchmarks, respectively, thus significantly improving the resulting translation quality by +4.0, +14.6, +9.1, and +1.3 SacreBLEU points. The main **contributions** are as follows.
Footnote 1: To ensure effective performance, the baseline models are trained with their optimal training scripts.
* We show that language IDs have the potential to navigate arbitrary translation directions but become fragile when facing off-target decoder inputs due to the gap between the MNMT training and ZST inference processes.
* We propose a method to minimize the probability of negative (language- and ID-mismatched) samples with unlikelihood tuning for the pretrained MNMT models.
* Our _simple-but-sufficient_ method exhibits significant and consistent ZST improvements (up to +20.8 bilingual evaluation understudy (BLEU) points, achieved by reducing the off-target ratio by 88.2%) over the baseline while maintaining its supervised translation quality.
The rest of this paper is organized as follows. §2 discusses some related works. §3 introduces the background of the study. In §4, we revisit the gap between the MNMT training and ZST inference processes and reveal the impact of off-target tokens. Our approach, **UN**likelihood tuning **O**n **N**egative **S**amples (UNIONS), is presented in §5. The experimental setup is described in §6. The main experimental results are reported in §7, followed by analysis experiments in §8. Finally, conclusions are presented in §9.
## 2 Related Works
### _Multilingual Neural Machine Translation_
MNMT [12, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 30] aims to facilitate translation between multiple languages using a single model, typically by employing a designated token to indicate the target language for a given source sentence. Johnson et al. [18] introduced utilizing an artificial token at the beginning of the input sentence to specify the desired target language and enable the use of multilingual translation systems with a single model. Blackwood et al. [19] proposed a task-specific attention model, which demonstrated improvements over a fully shared model in terms of translation quality. Aharoni et al. [21] took a significant step toward developing a massively multilingual translation model by expanding the number of languages supported by MNMT to 102. Zhang et al. [12] further enhanced the translation performance of their massively multilingual translation model by leveraging larger model capacities. They also discovered that language-specific modeling and deep architectures improve ZST, albeit falling short in terms of addressing the off-target problem. Ustun et al. [25] explored the unsupervised setting, where the translation is performed
between languages with only monolingual data, by incorporating denoising adapters on top of pretrained language models. Recently, Sun et al. [26] introduced alternative signals, such as phonetic, romanized, and transliterated inputs, to MNMT to enhance the transferability of the constructed model across different languages.

Fig. 1: Zero-shot translation (**ZST**) aims to transfer the navigation ability of the target language ID into translation directions that do not exist in the training process.
However, these methods are not specifically tailored for the ZST task. In contrast, our method focuses on enhancing the navigation ability of the target language ID and can serve as a complementary plug-in algorithm to augment these existing approaches. This consideration will be further explored in our subsequent studies.
### _MNMT-Based ZST_
The existing works have mainly focused on two perspectives to mitigate the off-target translation problem. 1) Introducing more inductive regularizers. Arivazhagan et al. [10] forced their encoder to present different sentences with language-invariant representations to improve its generalization ability. Jin and Xiong [14] found that adding more target language information has a significant impact on translation performance, especially for ZST. Wang et al. [13] focused on the uncertainty property and proposed the use of data cleaning and vocabulary masking to reduce the off-target ratio.
2) Introducing more positive samples, e.g., non-English sentence pairs. Gu et al. [11] noted that the spurious correlation issue leads to inferior zero-shot performance, and they constructed positive paired samples offline, which is a similar approach to that of Fan et al. [31]. In addition, Zhang et al. [12] found that the synthetic parallel data generated by online self-training also help with ZST. However, it is difficult to construct positive datasets for the more than \(7100\times 7100\) language pairs2 in the world.
Footnote 2: [https://en.wikipedia.org/wiki/Lists_of_languages](https://en.wikipedia.org/wiki/Lists_of_languages)
In contrast, we reveal the gap between the MNMT training and ZST inference processes and propose a simple and effective method to bridge this gap, which has the potential to complement the existing approaches with a simple strategy: performing continued training with our method on their existing checkpoints. Although Pan et al. [32] also used negative samples, their method completely differs from ours, as they employed contrastive learning on output encoder representations while training from scratch, whereas we address the training-inference gap by performing simple unlikelihood tuning.
### _Exposure Bias of Sequence Generation_
Exposure bias [33] refers to the discrepancy between the training and inference processes in sequence generation tasks. During training, models are conditioned on ground-truth tokens, whereas during inference, they predict tokens based on their previous outputs. Numerous methods have been developed to address this issue. Scheduled sampling, proposed by Bengio et al. [34], gradually replaces the ground-truth tokens with tokens predicted by the model itself. To alleviate overcorrection and noise perturbations, Zhang et al. [29] introduced a sentence-level oracle, improving the predicted distribution.
Another approach involves training models using non-maximum likelihood estimation (non-MLE) objectives to mitigate exposure bias. Minimum risk training (MRT), proposed by Shen et al. [35], directly optimizes the model parameters with respect to arbitrary evaluation metrics. Bahdanau et al. [36] employed an actor-critic algorithm from the reinforcement learning (RL) domain to train models for sequence generation, enabling the training procedure to approximate the testing process and directly optimize the evaluation scores. Additionally, Du and Ji [37] combined RL-based imitation learning with a pointer-generator framework as their base model.
Several studies have focused on analyzing exposure bias. Wang and Sennrich [38] and Schmidt [39] established a link between exposure bias and the generation gap resulting from distribution and domain shifts. He et al. [40] discovered the self-recovery ability possessed by language models, which can counteract the harmful effects of exposure bias. From the perspective of imitation learning, Arora et al. [41] connected the degeneration problem encountered by large language models to exposure bias.
However, most existing methods primarily address supervised tasks. In our work, we identify a specific type of exposure bias in ZST that leads to the off-target problem. We aim to alleviate the misleading effect of model outputs produced in the wrong language.
### _Unlikelihood Training_
To address the degeneration problem encountered during neural language generation, Welleck et al. [42] highlighted the drawback of the likelihood loss itself and first proposed a complementary unlikelihood training objective, which forces unlikely samples to be assigned lower probabilities by the model. This method has been further explored in dialog tasks by Li et al. [43], who demonstrated its effectiveness in generating more consistent and coherent human-like dialog. Moreover, Song et al. [44] proposed a bidirectional encoder representations from transformers (BERT)-based dialog model and demonstrated the benefits of incorporating unlikelihood training with nondialogue inference data to enhance the understanding capabilities of the resulting model. Additionally, Nogueira dos Santos et al. [45] used the unlikelihood loss for ranking and proposed a generative information retrieval approach. Hosseini et al. [46] proposed the combination of an unlikelihood objective with a reference-based setup for input sentences to model negation with pretrained BERT [47]. Recently, Wang et al. [48] proposed using unlikelihood training on a visual dialog model to reduce the probability of producing wrong answers and achieve state-of-the-art performance.
In this work, we focus on the ZST task and propose a new unlikelihood training method for our constructed negative translation samples to reduce the probability of off-target tokens, thus alleviating the severe off-target problem.
## 3 Background
In this section, we present the standard frameworks for neural machine translation (NMT) and MNMT models.
### _Neural Machine Translation_
NMT is a field that focuses on converting sentences from a source language into target language representations using neural networks. Initially, the early works in NMT primarily utilized recurrent neural networks (RNNs) as their foundational models. However, in recent years, encoder-decoder transformers [49] have emerged as the dominant architectures due to their superior parallelization capabilities and impressive performance. Both RNN and transformer models heavily rely on the teacher forcing-based training method and the autoregressive inference paradigm to effectively implement a translation system.
During the training stage of NMT, given a sample \(S_{i}=(X_{i},Y_{i})=\left(x_{1}^{i},\ldots,x_{n}^{i};y_{1}^{i},\ldots,y_{m}^{i}\right)\), the encoder maps the source sentence into the latent feature space to obtain \(H^{i}\). Then, the decoder predicts each token based on both the encoder output \(H^{i}\) and the previous tokens \(y_{<t}^{i}\) in the target sentence as follows:
\[\mathbf{H}^{i}=\mathit{ENC}\left(x_{1}^{i},\ldots,x_{n}^{i}\right)\]
\[P\left(y_{t}^{i}\mid x^{i},y_{<t}^{i}\right)=\mathit{DEC}\left(H^{i},y_{<t}^{i}\right)\]
where \(t\) represents the position of a token in the sentence, and \(P\) denotes the model output probability of the target token. During the training process of the NMT model, we provide the preceding ground-truth tokens \(y_{<t}^{i}\) to assist with predicting the current token. The training objective is to maximize the log-likelihood of the training data with the following loss function:
\[\mathcal{L}_{ENC,DEC}=-\sum_{i=1}^{N}\sum_{t=1}^{m}\log\left(P\left(y_{t}^{i}\mid x^{i},y_{<t}^{i}\right)\right)\]
During the inference stage, our objective is to translate sentences into the target language representation without having access to any ground-truth tokens. The autoregressive inference paradigm utilizes the model to predict each token individually. Specifically, we replace the ground-truth tokens \(y_{<t}^{i}\) with the model-generated tokens from the previous steps and continue the loop until a stop token appears.
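To make this gap concrete, the following minimal sketch shows the greedy autoregressive loop; `next_token_logits` is a hypothetical stand-in for the composed \(\mathit{ENC}\)/\(\mathit{DEC}\) forward pass and is not tied to any particular toolkit.

```python
# Minimal sketch of greedy autoregressive decoding. `next_token_logits` is a
# hypothetical callback standing in for DEC(ENC(x), y_<t); it returns a list
# of scores over the vocabulary for the next position.
from typing import Callable, List

def greedy_decode(next_token_logits: Callable[[List[int], List[int]], List[float]],
                  src_ids: List[int], bos_id: int, eos_id: int,
                  max_len: int = 128) -> List[int]:
    prev = [bos_id]  # unlike training, no ground-truth tokens are available
    for _ in range(max_len):
        logits = next_token_logits(src_ids, prev)
        y_t = max(range(len(logits)), key=logits.__getitem__)
        prev.append(y_t)   # the model now conditions on its OWN output, so an
        if y_t == eos_id:  # early off-target token contaminates every later step
            break
    return prev[1:]
```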
### _Multilingual Neural Machine Translation_
Following Johnson et al. [18], the objective of MNMT is to train a model that can handle multiple translation tasks by incorporating language IDs to provide translation direction guidance. To be more specific, the prediction process of MNMT in one direction can be formulated as follows. Given a source language ID \(l_{s}\) and a target language ID \(l_{t}\), the predicted probability distribution of the \(t\)-th target token in our encoder-decoder-based translation system is:
\[\mathbf{H}^{i}=\mathit{ENC}\left(l_{s},x_{1}^{i},\ldots,x_{n}^{i}\right)\]
\[P\left(y_{t}^{i}\mid x^{i},y_{<t}^{i},l_{t}\right)=\mathit{DEC}\left(H^{i},y_{<t}^{i},l_{t}\right)\]
where the target language ID \(l_{t}\) and the decoder input \(y_{<t}\) work together to determine which language to translate into. Compared with bilingual translation, we jointly optimize multiple translation tasks and use language IDs to navigate the translation process, which is more efficient for translating between many languages. Furthermore, the MNMT model can translate between language pairs that are not present in the training data by appropriately setting the IDs in the ZST directions [18].
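For illustration, a sample-construction sketch under this convention is given below (source ID on the encoder input, target ID on the decoder input, as in our setting described in §6.1); the angle-bracket token names are illustrative.

```python
# Sketch of MNMT sample construction with language-ID tokens (ID placement as
# described in this paper: source ID for the encoder, target ID for the decoder).
def make_mnmt_sample(src: str, tgt: str, l_s: str, l_t: str):
    enc_input = f"<{l_s}> {src}"   # e.g. "<Zh> ..." for the encoder
    dec_input = f"<{l_t}> {tgt}"   # e.g. "<En> Good morning" for the decoder
    return enc_input, dec_input

# A zero-shot direction is requested simply by choosing a (l_s, l_t) pair that
# never co-occurred during training, e.g. make_mnmt_sample(x, y, "Nl", "Ro").
```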
## 4 Rethinking the Off-Target Problem
In this section, we first introduce our MNMT model and highlight the disparity between training it with MNMT data and inferring it with ZST tasks. Next, we delve into an analysis of when and why the MNMT model produces off-target translations, specifically focusing on the role of CWRs and providing insights into the reasons behind these deviations.
### _Multilingual Machine Translation Model_
We first present the key settings used to train the base MNMT model on the IWSLT-4 dataset for exploratory analysis purposes. As IWSLT-4 is a multialigned dataset, we concatenate sentences from all languages to train a 40k SentencePiece [50] vocabulary and then use it to tokenize all data into subword units. To distinguish between different translation tasks, we prepend the corresponding language IDs to the source and target sentences. More detailed settings are presented in §6.1.
### _Gap Between MNMT Training and ZST Inference_
As mentioned by Ranzato et al. [33], an NMT model is typically trained to predict the next token in a sequence given the previous tokens. However, during the testing phase, the model is supposed to generate the entire sequence from the beginning. This discrepancy, commonly known as exposure bias, can make the generation process fragile due to the accumulation of errors.
In this paper, we establish a connection between exposure bias and ZST. We show that the gap between the MNMT training and ZST testing processes is a crucial cause of the terrible off-target problem. To elaborate, we present the formulation of ZST as follows:
\[\mathbf{P}_{t}=\mathit{DEC}\left(\mathit{ENC}\left(X,l_{s}\right),l_{t},P_{<t}\right) \tag{1}\]
where the translation direction \(l_{s}\to l_{t}\) does not exist during MNMT training and \(P_{<t}\) denotes the previous model outputs. During MNMT training, the teacher forcing paradigm forces the decoder input \(P_{<t}\) to be the ground-truth tokens in the correct language \(l_{t}\). During the inference stage, this condition is broken [29]. Autoregressive inference predicts the target words based on the previous model outputs, and erroneous tokens in \(P_{<t}\) (we only consider off-target tokens, i.e., tokens in a language other than \(l_{t}\)) commonly appear.
### _Impact of Off-Target Tokens_
To understand how off-target tokens in the decoder input affect the translation flow, we focus on the output CWR of the decoder, which is computed with Equation 1, and consider two extreme cases, the on-target and off-target cases, in supervised translation and ZST.
* **Supervised on-target setting**: CWRs of supervised directions (En\(\rightarrow\)XX) with ground-truth decoder input.
* **Zero-shot on-target setting**: CWRs of zero-shot directions (Nl\(\rightarrow\)XX) with ground-truth decoder input.
* **Supervised off-target setting**: CWRs of supervised directions (En\(\rightarrow\)XX); the decoder input is in Nl with the same content as the source sentence.
* **Zero-shot off-target setting**: CWRs of zero-shot directions (Nl\(\rightarrow\)XX); the decoder input is in En with the same content as the source sentence.
Notably, we include Nl\(\rightarrow\)En, which is available in the training data, in both zero-shot settings for comparison, as off-target translations are usually in English. To address the language coverage bias [51], we sampled 200 sentences with the same content. We perform visualization with t-distributed stochastic neighbor embedding (t-SNE) [52], as sketched below.
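A minimal sketch of this visualization step is given below, assuming the decoder hidden states have already been collected into an array with one language label per point; we rely on scikit-learn's `TSNE` for the projection.

```python
# Sketch of the CWR visualization: project decoder hidden states to 2D with
# t-SNE and color the points by target language.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_cwrs(cwrs: np.ndarray, langs: list, title: str) -> None:
    # cwrs: (num_tokens, hidden_dim); langs: one language label per token
    pts = TSNE(n_components=2, random_state=0).fit_transform(cwrs)
    for lang in sorted(set(langs)):
        mask = np.array(langs) == lang
        plt.scatter(pts[mask, 0], pts[mask, 1], s=4, label=lang)
    plt.legend(); plt.title(title); plt.show()
```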
First, we compare the **on-target settings**, in which the language ID and decoder input are matched. As shown in Figure 2(a), with the matched language ID and decoder input in the supervised directions, the CWRs are well grouped in separate regions according to their languages; this may be because an MNMT model translating English sentences is unlikely to predict off-target words, which is consistent with previous work [53]. For the zero-shot directions, in Figure 2(b), we can see a similar phenomenon to the CWR distributions in the supervised directions. This suggests that the _language ID is able to navigate an arbitrary translation flow into the correct language under the ideal on-target setting_.
We next set all ZST decoder inputs to be off-target tokens (**off-target setting**), as presented in Figures 2(c) and (d). Contrary to the phenomenon observed in the supervised directions (c), the CWRs in the zero-shot directions (d) are chaotically distributed, which means that the MNMT model does not know which language the text should be translated into. Therefore, we argue that the _language ID tends to be fragile and loses its navigation ability when faced with off-target tokens_.
Fig. 2: **Comparative visualization among different CWRs** on the IWSLT-4 dataset. The on-target settings in (a) and (b) are distributed in separate regions, which means that the language ID can navigate the translation flow. Contrary to the supervised off-target setting in (c), the MNMT model predicts CWRs with a mixed distribution in (d), showing that, under the distraction of off-target tokens, the navigation ability of the language ID is lost in ZST.
## 5 Methodology
In this section, we introduce the problem definition and present our proposed method, **UN**likelihood tuning **O**n **N**egative **S**amples (UNIONS).
### _Problem Definition_
Given an MNMT training dataset \(D=\left(D_{l_{1}\leftrightarrow En},\ldots,D_{l_{N}\leftrightarrow En}\right)\) covering the languages \(L=\left\{l_{1},\ldots,l_{N},En\right\}\), where \(D_{l_{s}\to l_{t}}=\left(X^{l_{s}},Y^{l_{t}}\right)\), the parallel data in the \(N\times(N-1)\) ZST directions are only available for evaluation. Although all languages exist in the training dataset, the goal of UNIONS is to train an MNMT model that handles arbitrary ZST directions.
### _General Framework_
We design the UNIONS method with the goal of enhancing the ZST translation performance of a pretrained MNMT model. To achieve this, we first initialize the model with pretrained MNMT parameters. Next, we simultaneously minimize the likelihood loss on the supervised training data and the unlikelihood loss on the coupled negative samples. This approach helps address the off-target problem.
The general objective function is formulated as follows:
\[\mathcal{F}=\arg\min\sum_{l=l_{1}}^{l_{N}}\mathcal{L}_{MLE}\left(X^{l},Y^{En},\theta\right)+\mathcal{L}_{MLE}\left(X^{En},Y^{l},\theta\right)+\mathcal{L}_{UL}\left(X^{l},\tilde{Y}^{En},\theta\right)+\mathcal{L}_{UL}\left(X^{En},\tilde{Y}^{l},\theta\right)\]
where \(\theta\) denotes the parameters of the model, \(\left(X,\tilde{Y}\right)\) represents the coupled negative samples of \(\left(X,Y\right)\), \(\mathcal{L}_{MLE}\) is the likelihood loss induced on the supervised training data, and \(\mathcal{L}_{UL}\) represents the unlikelihood loss induced on our negative samples.
The commonly used early-stopping strategy relies on the validation loss induced on the training dataset. However, this strategy is not suitable for predicting ZST performance, as the dev set only comprises data in the supervised directions. It is possible for a model with a lower validation loss to achieve a worse ZST score. Hence, we propose an additional approach for selecting the final model based on the CWR separation degree in the zero-shot off-target setting.
The details of each part are discussed in the following subsections.
#### 5.2.1 Likelihood Tuning on Supervised Samples
We minimize the maximum likelihood estimation (MLE) loss on the supervised training data and aim to inherit the translation ability of the pretrained MNMT model. Specifically, we optimize multiple bilingual translation tasks on the multilingual translation dataset \(\mathcal{D}\), which consists of En\(\rightarrow\)XX and XX\(\rightarrow\)En bilingual corpora. Given a sentence pair \(\left(X_{i},Y_{i}\right)=\left(x_{1},x_{2},...,x_{n};y_{1},y_{2},...,y_{m}\right)\) with an index \(i\) in \(\mathcal{D}\), where \(l_{s},l_{t}\) are the source and target languages, respectively, \(x,y\) denote the words of \(X_{i},Y_{i}\) with lengths of \(n,m\), respectively.
As shown in the center of Figure 3, we append the corresponding language IDs \(\left\langle l_{s}\right\rangle,\left\langle l_{t}\right\rangle\) to the sentence pairs to construct positive translation samples, where the IDs distinguish between different translation tasks; e.g., the Zh\(\rightarrow\)En sample can be denoted as "\(\left\langle\text{Zh}\right\rangle\) 早上好" and "\(\left\langle\text{En}\right\rangle\) Good morning", with ID tokens "\(\left\langle\text{Zh}\right\rangle\)" and "\(\left\langle\text{En}\right\rangle\)", respectively. First, we feed the source sentence into the encoder to obtain the encoder output feature, which consists of token-level features. Based on the encoder output and the decoder input, the decoder predicts the probability of the current token.
Fig. 3: **The training scheme of UNIONS. Given an on-target translation pair, we feed the source/target sentences into the encoder/decoder with the corresponding IDs and optimize the likelihood objective, i.e., \(\mathcal{L}_{likelihood}\). For negative samples, whose only difference from the translation samples is the off-target language ID, we minimize the unlikelihood loss, i.e., \(\mathcal{L}_{unlikelihood}\).**
Then, we optimize the encoder-decoder-based translation model, e.g., a transformer [49], with the following likelihood loss objective:
\[\mathcal{L}_{MLE}\left(X^{l_{s}},Y^{l_{t}},\theta\right)=-\sum_{\left(X_{i},Y_{i}\right)\in\mathcal{D}}\sum_{t=1}^{m}\log P(y_{t}|l_{s},l_{t},X_{i},y_{<t},\theta),\]
where \((X_{i},Y_{i})\) is the translation sentence pair for training and \(y_{<t}\) denotes the input words of the decoder. Multilingual machine translation maximizes the probability of predicting correct tokens conditioned on the source sentence, the language IDs, and the ground-truth decoder inputs.
Considering the multiple directions contained in the training data, the final objective function of the likelihood training can be formulated as follows:
\[\mathcal{F}_{MNMT}=\arg\min\sum_{l=l_{1}}^{l_{N}}\mathcal{L}_{MLE}\left(X^{l},Y^{En},\theta\right)+\mathcal{L}_{MLE}\left(X^{En},Y^{l},\theta\right)\]
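In practice, this likelihood term reduces to standard token-level cross-entropy; a minimal PyTorch sketch is given below, where the padding index used for masking is an assumption on our part.

```python
# Sketch of the token-level MLE loss: -sum_t log P(y_t | x, y_<t).
# The padding index used for masking is an assumption, not from the paper.
import torch
import torch.nn.functional as F

def mle_loss(logits: torch.Tensor, targets: torch.Tensor, pad_id: int = 1):
    # logits: (batch, tgt_len, vocab); targets: (batch, tgt_len)
    return F.cross_entropy(logits.transpose(1, 2), targets,
                           ignore_index=pad_id, reduction="sum")
```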
#### 5.2.2 Unlikelihood Tuning on Negative Samples
We also tune the pretrained MNMT model with negative samples to bridge the gap between the MNMT training and ZST inference processes. Specifically, as illustrated on the right side of Figure 3, for each translation sentence pair with language IDs \(l_{s},l_{t}\), we build a negative language ID set \(L_{ne}=\{l\in L,l\neq l_{t}\) and \(l\neq l_{s}\}\). During training, we randomly select a language ID \(l_{ne}\) from \(L_{ne}\) to replace \(l_{t}\), i.e., "\(\langle\text{En}\rangle\)" \(\rightarrow\) "\(\langle\text{De}\rangle\)", to construct a negative sample pair \(\left(X^{l_{s}},\tilde{Y}^{l_{ne}}\right)\). As they are not consistent with the target language ID \(l_{ne}\), we call the tokens of the target sentence "off-target tokens". The model is then trained on the constructed negative samples with the following unlikelihood loss:
\[\mathcal{L}_{UL}\left(X^{l_{s}},\tilde{Y}^{l_{ne}},\theta\right)=-\sum_{\left(X_{i},\tilde{Y}_{i}\right)\in\mathcal{D}}\sum_{t=1}^{m}\log\left(1-P(\tilde{y}_{t}|l_{s},l_{ne},X_{i},\tilde{y}_{<t},\theta)\right) \tag{2}\]
In this way, MNMT minimizes the probability of predicting off-target tokens conditioned on off-target decoder inputs. The objective function of unlikelihood training can be expressed as follows:
\[\mathcal{F}_{UL}=\arg\min\sum_{l=l_{1}}^{l_{N}}\mathcal{L}_{UL}\left(X^{l},\tilde{Y}^{En},\theta\right)+\mathcal{L}_{UL}\left(X^{En},\tilde{Y}^{l},\theta\right)\]
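A minimal sketch of both ingredients of this subsection, the random negative-ID sampling and the unlikelihood loss of Eq. (2), is given below; the small probability clamp is a numerical-stability assumption rather than part of the formulation.

```python
# Sketch of negative-sample construction and the unlikelihood loss of Eq. (2).
import random
import torch

def sample_negative_id(langs: list, l_s: str, l_t: str) -> str:
    # replace l_t by a random ID that is neither the source nor the target
    return random.choice([l for l in langs if l not in (l_s, l_t)])

def unlikelihood_loss(logits: torch.Tensor, neg_targets: torch.Tensor,
                      pad_id: int = 1, eps: float = 1e-6) -> torch.Tensor:
    # -sum_t log(1 - P(y~_t | l_s, l_ne, x, y~_<t)); eps avoids log(0)
    probs = logits.softmax(dim=-1)
    p_tok = probs.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1)
    token_loss = -torch.log((1.0 - p_tok).clamp(min=eps))
    return (token_loss * (neg_targets != pad_id)).sum()
```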
#### 5.2.3 Model Selection Indicator
As access to ZST samples is not permitted during training, we cannot directly select the final model according to the loss scores obtained on the dev set. Therefore, we select the final model based on the CWR separation degree in the zero-shot off-target setting. Given the CWRs \(P^{l_{i}}\) of a target language \(l_{i}\) with a mean of \(P^{l_{i}}_{mean}\) and a distance metric of \(\textbf{Dis}()\) between the two distributions, the separation degree is computed as follows:
\[Sep=\frac{2\sum_{i=1}^{N}\sum_{j=i+1}^{N}\textbf{Dis}\left(P^{l_{i}},P^{l_{j}}\right)}{(N+1)\sum_{i=1}^{N}\textbf{Dis}\left(P^{l_{i}},P^{l_{i}}_{mean}\right)}, \tag{3}\]
where \(N\) is the number of languages in \(L\), and we set \(\textbf{Dis}(\cdot,\cdot)\) as the average distance between the CWR points of two different distributions.
As training proceeds, the separation degree increases, and we select the final model with converged scores. Specifically, we regard a model with changes below 0.01 as converged, and for efficient computation purposes, we only use 100 training samples and consider the languages included in the ZST test set.
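For concreteness, Eq. (3) can be computed as in the following sketch, where `Dis` is taken as the average pairwise Euclidean distance between two CWR point sets.

```python
# Sketch of the separation-degree indicator Sep of Eq. (3).
import numpy as np

def dis(a: np.ndarray, b: np.ndarray) -> float:
    # average Euclidean distance between all point pairs of two CWR sets
    return float(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).mean())

def separation_degree(cwrs_per_lang: list) -> float:
    # cwrs_per_lang: one (num_points, dim) array per target language
    n = len(cwrs_per_lang)
    inter = sum(dis(cwrs_per_lang[i], cwrs_per_lang[j])
                for i in range(n) for j in range(i + 1, n))
    intra = sum(dis(p, p.mean(axis=0, keepdims=True)) for p in cwrs_per_lang)
    return 2.0 * inter / ((n + 1) * intra)
```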
## 6 Task Setup
We evaluate UNIONS on ZST tasks spanning 40 directions and two types of base models to verify the effectiveness and universality of our approach.
### _Multilingual Machine Translation Model_
We begin by examining models trained on English-centric multilingual translation datasets and proceed to evaluate these models directly on extensive zero-shot translations (e.g., non-English translations XX\(\leftrightarrow\)XX). We take the following three translation benchmarks into consideration.
* **IWSLT-4:** Following Qu and Watanabe [54], we use the IWSLT-17 dataset to evaluate the performance of the models, extracting four languages (En, Ro, It, Nl) from MMCR4NLP [55]. IWSLT-4 is a multialigned dataset with 145k training sentences for each language.
* **OPUS-100 (v1.0)**: We also conduct experiments on the OPUS-100 (v1.0) dataset from Yang et al. [53]. OPUS-100 [12] is an English-centric dataset that has 55 M samples with a maximum of 1 M sentence pairs for each language pair. It consists of parallel corpora between En and 100 other languages. Following Yang et al. [53], we construct OPUS-100 (v1.0) by removing 5 languages ("An, Dz, Hy, Mn, Yo") without training or testing data and removing all duplicate test sentence pairs from the training and testing sets.
* **WMT-5**: To further evaluate the extremely unbalanced large-scale scenario, we adopt 4 popular WMT parallel training datasets, including WMT14 En+De (4.5 M), WMT14 En+Fr (35.7 M), WMT16 En+Ro (608 k) and WMT17 En+Zh (20.5 M). To prevent language coverage bias during the evaluation, we use the multialigned Flores-200 devtest set3 [56] for all translations.
Footnote 3: [https://github.com/facebookresearch/flores/blob/main/flores200/README.md](https://github.com/facebookresearch/flores/blob/main/flores200/README.md)
We compare our model with two strong baselines, including vanilla and TLP\(\&\)TGP:
* **Vanilla:** We use the vanilla MNMT model by closely following the optimal model and training settings of Yang et al. [53], Qu and Watanabe [54], except for the language ID setting, where we follow Arivazhagan et al. [10], Pan et al. [32] and prepend the source and target language IDs to the inputs of the encoder and decoder, respectively. In contrast, Yang et al. [53], Qu and Watanabe [54] prepended the target ID to the encoder input. This vanilla model is further tuned with continued training for a fair comparison with our method.
* **TLP\(\&\)TGP [53]**: On the OPUS-100 (v1.0) dataset, we also compare the proposed method with TLP\(\&\)TGP,
which regularize MNMT models at both the representation and gradient levels. We compare our results with those reported by Yang et al. [53].
We conduct experiments on the fairseq [57] toolkit with a transformer [49] as the MNMT backbone and select the final baseline model according to the loss induced on the dev sets. To balance the distributions of different parallel corpora, we follow Arivazhagan et al. [22] and use a temperature-based sampling method with \(T=5\), as sketched below. We tokenize IWSLT-4 via a 40k vocabulary and split the words in the OPUS-100 (v1.0) and WMT-5 corpora into subword units using a 64k SentencePiece [50] vocabulary trained on their corresponding training sets.
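The temperature-based sampling can be sketched as follows, assuming the standard formulation in which a language pair with data fraction \(q_{l}\) is sampled with probability \(p_{l}\propto q_{l}^{1/T}\).

```python
# Sketch of temperature-based sampling over language pairs (T = 5), assuming
# the standard formulation p_l ∝ (n_l / sum_k n_k)^(1/T).
import numpy as np

def sampling_probs(pair_sizes: dict, T: float = 5.0) -> dict:
    q = np.array(list(pair_sizes.values()), dtype=float)
    q /= q.sum()                  # empirical data distribution
    p = q ** (1.0 / T)            # temperature flattening
    return dict(zip(pair_sizes, p / p.sum()))
```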
For IWSLT-4, we use a 5-layer transformer with 8 attention heads, an embedding size of 512, an inner size of 2048, a training process with a dropout of 0.3, an lr of 5e-4, a 16k batch size, a label smoothing parameter of 0.1, and 100k update steps. In experiments conducted on the large-scale OPUS-100 (v1.0) and WMT-5 datasets, we use Transformer-big with 6 layers and 16 attention heads and set the warmup parameter to 4k, the batch size to 524k, the label smoothing parameter to 0.1, the dropout parameter to 0.1, and the attention dropout parameter to 0.1. In addition, we set the lr to 5e-4 for OPUS-100 (v1.0) and to 7e-4 for WMT-5.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{7}{c}{**IWSLT-4**} \\ \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Nl-Ro**} & \multicolumn{2}{c}{**Nl-It**} & \multicolumn{2}{c}{**It-Ro**} & \multirow{2}{*}{**AVG**} \\ & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \\ \hline \multicolumn{8}{c}{_SacreBLEU Scores_ \(\uparrow\)} \\ Vanilla & 12.2 & 9.7 & 12.9 & 11.9 & 13.1 & 11.3 & 11.9 \\ +UNIONS & **15.6\({}^{\ddagger}\)** & **13.0\({}^{\ddagger}\)** & **16.1\({}^{\ddagger}\)** & **17.4\({}^{\ddagger}\)** & **18.1\({}^{\ddagger}\)** & **15.1\({}^{\ddagger}\)** & **15.9** \\ \(\Delta\) & +3.4 & +3.3 & +3.2 & +5.5 & +5.0 & +3.8 & +4.0 \\ \hline \multicolumn{8}{c}{_OTR Scores_ \% \(\downarrow\)} \\ Reference & 2.8 & 4.3 & 2.9 & 0.7 & 0.8 & 4.4 & 2.7 \\ Vanilla & 12.6 & 21.7 & 12.0 & 25.0 & 20.6 & 23.0 & 19.2 \\ +UNIONS & **2.4** & 3.8 & **2.7** & 0.8 & **1.1** & **4.2** & **2.5** \\ \(\Delta\) & -10.2 & -17.9 & -9.3 & -24.2 & -19.5 & -18.8 & -16.7 \\ \hline \hline \end{tabular}
\end{table} TABLE I: ZST performance achieved on the IWSLT-4 dataset. “\(\Delta\)”: improvements over the vanilla model. Underline: the averaged results. **Bold**: the best results. \({}^{\ddagger}\): statistically significant improvement (\(p<0.01\)).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{**OPUS-100 (v1.0)**} \\ \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Fr-De**} & \multicolumn{2}{c}{**Ru-Fr**} & \multicolumn{2}{c}{**Nl-De**} & \multicolumn{2}{c}{**Zh-Ru**} & \multicolumn{2}{c}{**Zh-Ar**} & \multicolumn{2}{c}{**Nl-Ar**} & \multirow{2}{*}{**AVG**} \\ & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \\ \hline \multicolumn{14}{c}{_SacreBLEU Scores_ \(\uparrow\)} \\ TLP\&TGP [53] & 6.6 & **14.2** & 16.7 & **21.4** & **16.2** & 8.6 & 12.9 & 14.2 & 14.7 & **12.6** & **11.8** & **4.6** & 12.9 \\ Vanilla & 3.3 & 3.0 & 4.8 & 5.4 & 4.7 & 4.3 & 3.7 & 5.1 & 4.4 & 5.4 & 1.4 & 0.9 & 3.9 \\ +UNIONS & **14.9\({}^{\ddagger}\)** & 12.3\({}^{\ddagger}\) & **17.1\({}^{\ddagger}\)** & 19.1\({}^{\ddagger}\) & 16.0\({}^{\ddagger}\) & **15.2\({}^{\ddagger}\)** & **23.0\({}^{\ddagger}\)** & **14.5\({}^{\ddagger}\)** & **25.2\({}^{\ddagger}\)** & 11.6\({}^{\ddagger}\) & 8.8\({}^{\ddagger}\) & 1.8 & **15.0** \\ \(\Delta\) & +11.6 & +9.3 & +12.3 & +13.7 & +11.3 & +10.9 & +19.3 & +9.4 & +20.8 & +6.2 & +7.4 & +0.9 & +11.1 \\ \hline \multicolumn{14}{c}{_OTR Scores_ \% \(\downarrow\)} \\ TLP\&TGP [53] & 3.3 & 2.9 & 4.5 & 7.1 & 3.4 & 6.0 & 3.8 & 6.4 & 4.1 & 10.0 & 2.3 & 4.9 \\ Vanilla & 89.8 & 98.7 & 96.2 & 92.5 & 98.0 & 97.4 & 91.6 & 96.0 & 93.5 & 85.2 & 98.7 & 94.8 & 93.7 \\ +UNIONS & **7.9** & **19.9** & **8.8** & **8.0** & **14.0** & **11.2** & **29.6** & **7.9** & **21.9** & **5.6** & **27.3** & **12.8** & **14.6** \\ \(\Delta\) & -81.9 & -78.8 & -87.5 & -84.5 & -83.9 & -86.2 & -65.0 & -88.2 & -71.7 & -79.6 & -71.0 & -71.9 & -79.2 \\ \hline \hline \end{tabular}
\end{table} TABLE II: ZST performance achieved on the OPUS-100 (v1.0) dataset. “\(\Delta\)”: improvements over the vanilla model. Underline: the averaged results. **Bold**: the best results. \({}^{\ddagger}\): statistically significant improvement (\(p<0.01\)).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{**WMT-5**} \\ \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Fr-De**} & \multicolumn{2}{c}{**Zh-De**} & \multicolumn{2}{c}{**Ro-De**} & \multicolumn{2}{c}{**Zh-Fr**} & \multicolumn{2}{c}{**Ro-Fr**} & \multicolumn{2}{c}{**Ro-Zh**} & \multirow{2}{*}{**AVG**} \\ & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & \(\leftarrow\) & \(\rightarrow\) & \(\leftarrow\) & & \(\rightarrow\) & \\ \hline \multicolumn{10}{c}{_SacreBLEU Scores_\(\uparrow\)} \\ \multicolumn{10}{c}{_Vanilla_} & 10.8 & 5.2 & 19.3 & 1.9 & 5.8 & 5.0 & 15.3 & 1.6 & 5.3 & 6.4 & 2.8 & 11.4 & 7.6 \\ +UNIONS & **29.6\({}^{\ddagger}\)** & **22.0\({}^{\ddagger}\)** & **27.0\({}^{\ddagger}\)** & **14.1\({}^{\ddagger}\)** & **8.5\({}^{\ddagger}\)** & **7.8\({}^{\ddagger}\)** & **27.2\({}^{\ddagger}\)** & **19.7\({}^{\ddagger}\)**
For fine-tuning, we set the warmup parameter to 1, the lr to 5e-5, the batch size to 1k, and the number of updates to 0.5k for IWSLT-4; we set the lr to 7e-5, the batch size to 32k, and the number of updates to 0.5k for WMT-5; and we set the lr to 5e-5, the batch size to 32k, and the number of updates to 5k for OPUS-100 (v1.0). For evaluation purposes, we save checkpoints every 50 updates for IWSLT-4/WMT-5 and every 500 updates for OPUS-100 (v1.0). We choose the final model according to the indicator defined in §5.2.3.
All models are trained on Tesla-A100 GPUs. Note that, to conduct a fair comparison with Vanilla, we report the results obtained with a continued training setting that is identical to the tuning steps of our UNIONS method. We adopt **SacreBLEU** [58] to evaluate the translation accuracy, where we generate translations with a beam size of 5. We also compute the off-target ratio (**OTR**) scores of the generated translations with a publicly available language detector4 [59, 60], as sketched below. These two evaluation metrics are also used in the adapter-tuned translation model experiments.
Footnote 4: [https://fasttext.cc/docs/en/language-identification.html](https://fasttext.cc/docs/en/language-identification.html)
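A sketch of the OTR computation is given below; it assumes the `fasttext` Python package and the public `lid.176.bin` model from footnote 4, whose predictions use `__label__xx` codes.

```python
# Sketch of the off-target ratio (OTR): the share of hypotheses whose detected
# language differs from the intended target language.
import fasttext

def otr_score(hypotheses: list, target_lang: str,
              model_path: str = "lid.176.bin") -> float:
    lid = fasttext.load_model(model_path)
    off = 0
    for hyp in hypotheses:
        labels, _ = lid.predict(hyp.replace("\n", " "))
        if labels[0] != f"__label__{target_lang}":
            off += 1
    return 100.0 * off / max(len(hypotheses), 1)
```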
### _Adapter Tuned Translation Model_
Adapter tuning [61, 62, 63] adapts large pretrained language models (PLMs) for downstream tasks by incorporating lightweight residual layers into each model layer. During training, the adapter layers are fine-tuned using downstream data while the other parameters remain frozen. We take the open source **denoising adapters (D.A.) model5[25]** as our base model and tune it with UNIONS, similar to the MNMT models in SS6.1. D.A. is a two-stage tuned model based on mBART50 [64]. It contains 1) training adapters for each language with a denoising task for 37 languages6 containing monolingual data with a maximum of 20 million sentences per language and 2) cross-attention trained on the TED talks [65] dataset by selecting 20 languages7 with training sizes ranging from 214k to 18k parallel sentences. During translation, the target language ID and the corresponding adapter determine the translation flow. While D.A. only evaluates translation performance with English as the reference language, we further enhance its translation ability for non-English languages.
Footnote 5: [https://europe.naverlabs.com/research/natural-language-processing/efficient-multilingual-machine-translation-2](https://europe.naverlabs.com/research/natural-language-processing/efficient-multilingual-machine-translation-2)
Footnote 6: Languages with monolingual data: Ar, He, Ko, It, Ja, Zh, Fr, Pt, Tr, Ro, Pl, Vi, De, Fa, Cs, Th, My, Hi, Es, Nl, Hv, Uk, Id, St, Lv, Et, Lt, Ur, Kk, Sk, Bg, Hu, Se, El, Da, and Be.
Footnote 7: Languages with En\(\leftrightarrow\)XX parallel data: Ar, He, Ru, Ko, It, Ja, Zh, Fr, Pt, Tr, Ro, Pl, Vi, De, Fa, Cs, Th, My, and Hi.
During the tuning process of UNIONS, we use the same TED talk dataset and set 1 warmup step, an lr of 1e-5, 1024 max tokens, and 1K updates. We save a checkpoint every 100 updates. The evaluation process remains the same as that used in MNMT.
## 7 Experiments
In this section, we conduct extensive experiments spanning 40 ZST directions to verify the effectiveness and universality of our UNIONS method.
### _UNIONS Achieves Considerably Improved ZST Performance_
The main results obtained on the three benchmarks in Tables I, II and III show that our method achieves consistent and significant improvements over the vanilla model for both large- and small-scale datasets, as well as with different model sizes (Transformer-small is used for IWSLT, while Transformer-big is used for OPUS-100 (v1.0) and WMT-5). In particular, our method effectively reduces the off-target ratios compared to those of the baseline by \(-16.7\%/-79.2\%/-60.0\%\) for the IWSLT, OPUS, and WMT-5 benchmarks, respectively, thus significantly improving the resulting translation quality by \(+4.0/+14.6/+9.1\) averaged SacreBLEU score points.
Additionally, we observe that MNMT models trained on large-scale datasets, such as OPUS-100 and WMT-5, face more severe off-target problems. Moreover, UNIONS achieves the greatest improvement on the difficult OPUS-100 (v1.0) benchmark, which has the largest number of languages. For IWSLT-4, the improvement is minimal, which we attribute to the lower upper bound of an MNMT model trained on a low-resource dataset. In comparison with previous works utilizing OPUS, our model outperforms TLP\(\&\)TGP by +2.1 SacreBLEU points and -2.0\% OTR score.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{**TED**} \\ \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c}{\(l_{bi}\to l_{bi}\)} & \multicolumn{4}{c}{\(l_{mono}\to l_{mono}\)} & \multicolumn{4}{c}{\(l_{bi}\to l_{mono}\)} & \multicolumn{4}{c}{\(l_{mono}\to l_{bi}\)} & \multicolumn{4}{c}{**AVG**} \\ & Ko\(\rightarrow\) It & Zh\(\rightarrow\) Fr & Pl\(\rightarrow\) De & My\(\rightarrow\) Hi & Es\(\rightarrow\) Nl & Lt\(\rightarrow\) Et & Zh\(\rightarrow\) Nl & Pl\(\rightarrow\) Et & Uk\(\rightarrow\) Cs & Fi\(\rightarrow\) De & \\ \hline \multicolumn{10}{c}{_SacreBLEU Scores_ \(\uparrow\)} \\ D.A. [25] & 5.5 & 9.0 & 8.2 & 1.0 & 7.2 & 3.6 & 4.1 & 4.9 & 9.2 & 9.6 & 6.2 \\ +**UNIONS** & **6.3** & **10.3\({}^{\ddagger}\)** & **9.0** & **1.2** & **11.4\({}^{\ddagger}\)** & **4.4** & **5.3\({}^{\ddagger}\)** & **5.8** & **9.9** & **11.6\({}^{\ddagger}\)** & **7.5** \\ \(\Delta\) & +0.8 & +1.3 & +0.8 & +0.2 & +4.2 & +0.8 & +1.2 & +0.9 & +0.7 & +2.0 & +1.3 \\ \hline \multicolumn{10}{c}{_OTR Scores_ \% \(\downarrow\)} \\ \multirow{2}{*}{**Reference**} & 1.3 & 0.5 & 1.1 & 1.2 & 3.5 & 8.1 & 3.4 & 6.2 & 2.6 & 0.9 & 2.9 \\ D.A. [25] & 20.2 & 12.2 & 21.1 & 14.7 & 46.3 & 24.3 & 40.5 & 23.7 & 11.2 & 18.3 & 25.2 \\ +**UNIONS** & **5.3** & **4.1** & **7.2** & **7.6** & **15.4** & **8.1** & **14.7** & **10.9** & **6.1** & **4.8** & **8.4** \\ \(\Delta\) & -14.9 & -8.1 & -13.9 & -7.1 & -31.0 & -16.2 & -25.8 & -12.8 & -5.1 & -13.4 & -14.8 \\ \hline \end{tabular}
\end{table} TABLE IV: **ZST performance achieved on the TED dataset. “\(\Delta\)”: improvements over the baseline D.A. model. Underline: the averaged results. Bold: the best results. \({}^{\ddagger}\): statistically significant improvement (\(p<0.01\)). \(l_{mono}\) only has monolingual data during training, while \(l_{bi}\) exists in the multilingual translation dataset.**
### _UNIONS Provides Greater Benefits for Target Languages that are Close to English_
We consider the influence of the similarity between languages on the performance of the model. According to the OTR scores obtained on OPUS-100 (v1.0) and WMT-5 and shown in Tables II and III, we find that our method is particularly effective for ZST tasks with target languages that are similar to English, e.g., Zh\(\rightarrow\)Ru (\(-88.2\%\)) in Table II and Zh\(\rightarrow\)De (\(-97.7\%\)), Zh\(\rightarrow\)Fr (\(-98.5\%\)), and Zh\(\rightarrow\)Ro (\(-73.6\%\)) in Table III. These languages are all in the Indo-European language family. In contrast, for their reverse directions, where the target languages are in non-English families, the achieved improvements are smaller, e.g., Ru\(\rightarrow\)Zh (\(\Delta 23.2\%\)) in Table II and De\(\rightarrow\)Zh (\(\Delta 75.6\%\)), Fr\(\rightarrow\)Zh (\(\Delta 60.5\%\)), and Ro\(\rightarrow\)Zh (\(\Delta 59.5\%\)) in Table III, where \(\Delta\) represents the gap between the OTR scores obtained in these directions and those obtained in the forward directions.
We attribute this interesting phenomenon to the fact that our method is able to significantly enhance the navigation capabilities of the IDs of languages that are close to the central language (English), which are weaker in the vanilla model.
### _Adapter-Tuned Translation Model_
As reported in Table IV, our method achieves an average SacreBLEU score improvement of +1.3 and reduces the off-target ratio by -14.8% compared to that of the baseline D.A. model. These results demonstrate that UNIONS also effectively addresses off-target problems in fine-tuned translation models based on PLMs.
We utilize \(l_{mono}\) and \(l_{bi}\) to indicate whether the current language has bitext data during the D.A. training process. For instance, in the Zh\(\rightarrow\)Nl case, we use \(l_{bi}\to l_{mono}\) to denote that the training set of D.A. has Zh\(\leftrightarrow\)En bitext data and Nl monolingual data. We find that UNIONS is more effective for target languages that only have monolingual data, e.g., Es\(\rightarrow\)Nl (-31.0% OTR score), Lt\(\rightarrow\)Et (-16.2% OTR score), Zh\(\rightarrow\)Nl (-25.8% OTR score) and Pl\(\rightarrow\)Et (-12.8% OTR score). One possible explanation for this observation is that the language IDs, which are trained solely on self-supervised objectives using monolingual data, have more delicate navigation capabilities, making them more susceptible to improvement through the UNIONS approach.
## 8 Analysis
To provide some insights to better understand our proposed method, i.e., UNIONS, we conduct extensive analyses from different perspectives to show 1) the maintained supervised translation performance, 2) the effectiveness of our model selection approach, 3) the negligible computational cost, and 4) the rejuvenated navigation capability of language IDs.
### _UNIONS Maintains the Supervised Translation Performance of the Model_
As our approach aims to optimize the zero-shot performance of a pretrained MNMT model, one may doubt whether the supervised translation performance is affected. To dispel this concern, we report the averaged SacreBLEU scores obtained for the supervised directions, including translating from English (En \(\rightarrow\) XX) and translating into English (XX \(\rightarrow\) En), on three benchmarks in Table V.
UNIONS demonstrates improvements of +0.1 and +0.2 average SacreBLEU scores over the vanilla baseline on IWSLT-4 and OPUS-100 (v1.0), respectively. Meanwhile, the average performance drop on WMT-5 is negligible (-0.1 SacreBLEU score). Overall, UNIONS achieves comparable performance to that of the vanilla MNMT model for all benchmarks (with an average translation performance improvement of +0.1 SacreBLEU score), demonstrating that _our model successfully maintains its supervised translation ability_.
### _Effectiveness of Our Model Selection Approach During Training_
As mentioned above, the CWR separation degree in §5.2.3 is used to select the checkpoints. To validate the effectiveness of our approach, we tune a trained MNMT model on OPUS-100 (v1.0) with UNIONS for 10K steps and report the SacreBLEU and OTR scores produced on zero-shot test sets during the training process in Figure 4.
As seen, 1) the indicator \(Sep\) easily chooses the best checkpoint at step 2.5K out of 10K, where the model has decent translation performance and a relatively low OTR score, showing the _effectiveness of our proposed proxy model selection indicator_ in §5.2.3; 2) the dynamics of the OTR scores exhibit a significant decline first and then stabilize, demonstrating the effectiveness of reducing the off-target ratios; and 3) the BLEU dynamics yield higher results in the early stages (approximately 2K steps), then gradually decrease after 3K steps, and finally still exceed those of the untuned MNMT model. This may be due to overfitting of the unlikelihood loss on negative samples, and stabilizing the learning process should be explored in future work. Luckily, our indicator selects a good checkpoint that achieves better OTR and BLEU scores before the unstable learning dynamics appear.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**IWSLT-4**} & \multicolumn{3}{c}{**OPUS-100 (v1.0)**} & \multicolumn{3}{c}{**WMT-5**} \\ \cline{2-10} & **En\(\rightarrow\)XX** & **XX\(\rightarrow\)En** & **AVG** & **En\(\rightarrow\)XX** & **XX\(\rightarrow\)En** & **AVG** & **En\(\rightarrow\)XX** & **XX\(\rightarrow\)En** & **AVG** \\ \hline
**Vanilla** & 27.8 & 31.8 & 29.8 & 25.7 & 32.3 & 29.0 & 33.1 & **31.7** & **32.4** \\ +UNIONS & **27.9** & **32.0** & **29.9** & **25.8** & **32.7** & **29.2** & **33.2** & 31.4 & 32.3 \\ \(\Delta\) & +0.1 & +0.2 & +0.1 & +0.1 & +0.4 & +0.2 & +0.1 & -0.3 & -0.1 \\ \hline \hline \end{tabular}
\end{table} TABLE V: **Supervised translation performance comparison. “\(\Delta\)”: improvements over the vanilla model. Bold: the best results. Underline: average scores obtained for all supervised directions.**
### _UNIONS Requires a Negligible Computational Cost_
The training cost is a critical factor for the practical implementation of neural network models [66], and users often prefer methods that are both effective and efficient. To evaluate the efficiency of UNIONS, we report the computation budget required for tuning models trained on large-scale datasets, including OPUS-100 (v1.0) and WMT-5. Specifically, we use the number of GPU hours to quantitatively compare UNIONS with the vanilla MNMT training process.
As shown in Table VI, approximately 139.6 and 276.1 GPU hours are needed to train the MNMT models on OPUS-100 (v1.0) and WMT-5, respectively. In comparison, UNIONS requires only 2.5 and 0.8 GPU hours to fine-tune the models, thereby reducing the off-target ratios and thus boosting the ZST translation performance. Considering the significant improvements shown in Table II and Table III, _the additional training cost ratio of UNIONS, i.e., 1.8% for OPUS-100 (v1.0) and 0.3% for WMT-5, is negligible._
### _The Navigation Capabilities of Language IDs in ZST Can be Rejuvenated_
To understand whether our method can rejuvenate the navigation capabilities of language IDs facing off-target tokens in ZST, we use our model and reconduct the visualization analysis in SS4.3 for comparison with the vanilla model.
As depicted in Figure 5, in contrast with Figure 2(d), the MNMT model clearly divides the embeddings of different languages into separate regions under the zero-shot off-target setting after tuning with UNIONS. This phenomenon is similar to the zero-shot on-target setting in Figure 2(b) and the supervised settings in Figure 2(a) and Figure 2(c), where the model exhibits no confusion regarding the translation direction. The clearer separation is also consistent with the lower OTR score (2.5 vs. 19.2) reported in Table I. This suggests that _UNIONS rejuvenates the lost navigation capabilities of language IDs_, confirming our claim.
## 9 Conclusion
In this paper, we first revisit the off-target problem in zero-shot machine translation (ZST) and show that language IDs are able to guide the translation flow while remaining fragile when faced with off-target tokens, which commonly exist during inference but are rare during training. To address this issue, we propose a _simple but sufficient_ method, UNIONS, to minimize the probability of _easy-to-construct_ negative (language- and ID-mismatched) samples and bridge the gap between the MNMT training and ZST inference processes. Our method possesses a simple training strategy that can improve any pretrained MNMT model: continued tuning with UNIONS.
Experimentally, UNIONS effectively reduces the off-target ratios in translation tasks and improves the resulting translation quality with a negligible extra computational cost. Encouragingly, UNIONS provides more gains for larger-scale datasets, making it particularly beneficial for industry-level machine translation participants.
In future work, we will further investigate the impact of our method on non-English-centric data, e.g., parallel data between non-English languages. Meanwhile, it will be interesting to further design more effective methods to boost the zero-shot translation ability via post-training or tuning a pretrained generative language model, e.g., GPT-3 [67], BLOOM [68], and LLaMA [69].
Fig. 4: **Training dynamics exhibited by UNIONS on OPUS. We present the SacreBLEU and OTR scores according to the number of training steps. The dashed line indicates the final model selected by \(Sep\) in §5.2.3.**
Fig. 5: **Visualization of the CWRs generated by our model in the zero-shot off-target setting in §4.3. Our model divides and navigates different languages into separate regions.** |
2309.05283 | Shape effect on ice melting in flowing water | Iceberg melting is a critical factor for climate change, contributing to rising sea levels. However, the shape of an iceberg is an often neglected aspect of its melting process. Our study investigates the influence of different ice shapes and ambient flow velocities on melt rates by conducting direct numerical simulations. Our study focuses on the ellipsoidal shape, with the aspect ratio as the control parameter. It plays a crucial role in the melting process, resulting in significant variations in the melt rate between different shapes. Without flow, the optimal shape for a minimal melt rate is the disk (2D) or sphere (3D), due to the minimal surface area. However, as the ambient flow velocity increases, the optimal shape changes with the aspect ratio. We find that ice with an elliptical shape (when the long axis is aligned with the flow direction) can melt up to 10% slower than a circular shape when exposed to flowing water. We provide a quantitative theoretical explanation for this optimal shape, based on the competition between surface area effects and convective heat transfer effects. Our findings provide insight into the interplay between phase transitions and ambient flows, contributing to our understanding of the iceberg melting process and highlighting the need to consider the aspect ratio effect in estimates of iceberg melt rates. | Rui Yang, Christopher J. Howland, Hao-Ran Liu, Roberto Verzicco, Detlef Lohse | 2023-09-11T07:42:26Z | http://arxiv.org/abs/2309.05283v1 | # Shape effect on ice melting in flowing water
###### Abstract
Iceberg melting is a critical factor for climate change, contributing to rising sea levels. However, the shape of an iceberg is an often neglected aspect of its melting process. Our study investigates the influence of different ice shapes and ambient flow velocities on melt rates by conducting direct numerical simulations. Our study focuses on the ellipsoidal shape, with the aspect ratio as the control parameter. It plays a crucial role in the melting process, resulting in significant variations in the melt rate between different shapes. Without flow, the optimal shape for a minimal melt rate is the disk (2D) or sphere (3D), due to the minimal surface area. However, as the ambient flow velocity increases, the optimal shape changes with the aspect ratio. We find that ice with an elliptical shape (when the long axis is aligned with the flow direction) can melt up to 10% slower than a circular shape when exposed to flowing water. We provide a quantitative theoretical explanation for this optimal shape, based on the competition between surface area effects and convective heat transfer effects. Our findings provide insight into the interplay between phase transitions and ambient flows, contributing to our understanding of the iceberg melting process and highlighting the need to consider the aspect ratio effect in estimates of iceberg melt rates.
## 1 Introduction
With global warming, more icebergs are breaking off from Antarctica, accelerating sea level rise (Scambos _et al._, 2017). Additionally, melting icebergs and sea ice floes also contribute significantly to climate and environmental change, including freshwater supply (Huppert & Turner, 1978), biological productivity (Wadham _et al._, 2019), and carbon sequestration (Duprat _et al._, 2016), making the understanding of iceberg melting rates crucial for comprehending the interplay between icebergs and the climate (Cenedese & Straneo, 2023). To describe the melt rate of icebergs, different parameterization models have been proposed, such as employing
empirical relations for turbulent heat transfer over a flat plate to predict iceberg melt rates (Weeks & Campbell, 1973), considering meltwater-plume effect on the melt rate with ambient flows (FitzMaurice _et al._, 2017), and coupling effects of turbulence, buoyant convection, and waves (Martin & Adcroft, 2010).
Various lab experiments and direct numerical simulations have been conducted to investigate the interaction between ice melting and ambient flow in the laboratory scale, including ice melting in still and flowing water (Dumore _et al._, 1953; Vanier & Tien, 1970; Hao & Tao, 2002; Hester _et al._, 2021; Weady _et al._, 2022), in Rayleigh-Benard convection (Davis _et al._, 1984; Dietsche & Muller, 1985; Favier _et al._, 2019; Yang _et al._, 2023_b_), which involves a plate being heated from below and a plate being cooled from above (Lohse & Xia, 2010; Ecke & Shishkina, 2023), and in vertical convection (Wang _et al._, 2021\(c\); Yang _et al._, 2022). The density anomaly effect in freshwater causes distinct flow regimes and ice melting morphology under different temperatures (Wang _et al._, 2021\(b\),_a_,_c_). Salinity results in even more complex mushy layer structures (Worster, 1997), where convection also occurs (Du _et al._, 2023). These studies highlight the complex interplay between ice melting and ambient flow, leading to distinct ice morphologies and melt rates.
Icebergs and ice floes exhibit considerable variations in shape and size (Gherardi & Lagomarsino, 2015), with horizontal extents ranging from a few meters to several hundred kilometers. Despite extensive research on iceberg melt rates using models, experiments, and simulations, the potential effects of the aspect ratio on the melt rate did not get much attention. One of the experimental results is that the overall melt rate strongly depends on the aspect ratio (Hester _et al._, 2021; Cenedese & Straneo, 2023). However, this effect of the aspect ratio is still poorly understood. Therefore, to enhance the accuracy of iceberg melting predictions, it is imperative to consider the aspect ratio in models of iceberg melt rates.
In this study, we investigate the influence of the ice shape (ellipse aspect ratio) on melt rates through numerical simulations and theoretical analysis. We focus on the scenario of ice melting in cross-flow, as detailed in Methods, and neglect buoyancy effects and basal melting due to our focus on the top view of the melting process, distinguishing our work from previous studies (Couston _et al._, 2021; Hester _et al._, 2021). Our aim is to understand how the aspect ratio affects ice melt rates by conducting a series of numerical simulations. Our findings reveal that the aspect ratio plays a crucial role in the melting process, leading to substantial variations in melt rates. We further propose a theoretical model for the dependence of melt rates on the control parameters that agrees well with the simulation results. In this model, two contributions to the total melt rate are distinguished - surface contact-induced melt and advective flow-induced melt. The observed strong aspect ratio dependence can be quantitatively understood.
## 2 Numerical method and set-up
We numerically integrate the velocity field \(\mathbf{u}\) and the temperature field \(\theta\) according to the Navier-Stokes equations, neglecting the effect of buoyancy induced by temperature differences. The melting process is modeled by the phase-field method, which has been widely used in previous studies (Favier _et al._, 2019; Hester _et al._, 2021; Couston _et al._, 2021; Yang _et al._, 2022, 2023_b_). In this technique, the phase field variable \(\phi\) is integrated in time and space and smoothly transitions from a value of 1 in the solid to a value of 0 in the liquid. The equations are non-dimensionalized by inflow speed \(U_{0}\) as the velocity scale, ice effective diameter \(D\) as the length scale, and the temperature difference between the ambient flow and the ice \(T_{0}\) as the temperature scale. The non-dimensionalized quantities include three velocity components \(u_{i}\) with \(i=1,2,3\), the pressure \(p\), the temperature \(\theta\), and the phase field
scalar \(\phi\). The dimensionless governing equations read
\[\nabla\cdot\mathbf{u}=0,\] \[\partial_{t}\mathbf{u}+\nabla\cdot(\mathbf{uu})=-\nabla p+\frac{1}{Re}\left( \nabla^{2}\mathbf{u}-\frac{\phi\mathbf{u}}{\eta}\right),\] \[\partial_{t}\theta+\nabla\cdot(\mathbf{u}\theta)=\frac{1}{RePr}\nabla^{2 }\theta+St\partial_{t}\phi,\] \[\frac{\partial\phi}{\partial t}=\frac{6}{5StC\,RePr}\left[\nabla^{2 }\phi-\frac{1}{\beta^{2}}\phi(1-\phi)(1-2\phi+C\theta)\right],\]
where \(\beta\) is the diffuse interface thickness, typically set to the mean grid spacing, and \(\eta\ll 1\) is the parameter of the volume-penalty term \(\phi\mathbf{u}/\eta\), which damps the velocity to zero inside the solid. \(C\) is the phase mobility parameter related to the Gibbs-Thomson relation. We choose \(C=1\), which may overestimate the Gibbs-Thomson effect at high curvature. We therefore avoid too extreme values of \(\gamma\), for which the high-curvature regions might be inaccurate. More details can be found in previous studies (Yang _et al._, 2023; Hester _et al._, 2020). Simulations are performed using the second-order staggered finite difference code AFiD, which has been extensively validated and used to study a wide range of turbulent flow problems (Ostilla-Monico _et al._, 2015; Yang _et al._, 2016; Liu _et al._, 2022), including phase-change problems (Favier _et al._, 2019; Hester _et al._, 2021; Couston _et al._, 2021; Ravichandran & Wettlaufer, 2021; Yang _et al._, 2022, 2023_b_).
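To make the update rule concrete, the following minimal sketch advances the \(\phi\) equation above with an explicit finite-difference step in one dimension; this is not the AFiD implementation, the temperature field is frozen, and all numerical values (resolution, parameters, boundary treatment) are illustrative assumptions only.

```python
import numpy as np

# Minimal 1-D sketch of the explicit phase-field update (not the AFiD code).
Re, Pr, St, C = 1000.0, 7.0, 4.0, 1.0
nx = 400
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
beta = dx                                              # diffuse-interface thickness
phi = 0.5 * (1.0 + np.tanh((0.3 - x) / (2.0 * beta)))  # solid (phi = 1) for x < 0.3
theta = 1.0 - phi                                      # frozen: warm liquid, cold solid
mob = 6.0 / (5.0 * St * C * Re * Pr)                   # prefactor of the phi equation

def phase_field_step(phi, dt):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]                  # crude zero-gradient boundaries
    reaction = phi * (1.0 - phi) * (1.0 - 2.0 * phi + C * theta) / beta**2
    return phi + dt * mob * (lap - reaction)

dt = 0.2 * beta**2 / mob                               # explicit-stability margin
for _ in range(5000):
    phi = phase_field_step(phi, dt)

# the ice front is tracked as the phi = 1/2 iso-contour
print("interface near x =", x[np.argmin(np.abs(phi - 0.5))])
```

Note that the \(\beta^{-2}\) reaction term, not the Laplacian, limits the stable explicit time step here, which is why the sketch scales \(dt\) with \(\beta^{2}\) (equal to \(dx^{2}\) for this choice of interface thickness).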
The control parameters of the system are the Reynolds number \(Re\), which measures the dimensionless flow strength; the Prandtl number \(Pr\), which is the ratio between the kinematic viscosity \(\nu\) and the thermal diffusivity \(\kappa\); the Stefan number \(St\), which is the ratio between latent and sensible heat; and the aspect ratio \(\gamma\) of the initial ice shape, defined as the ratio of its width to its length, which will be the focus of this article:
\[Re=\frac{U_{0}D}{\nu},\qquad Pr=\frac{\nu}{\kappa},\qquad St=\frac{\mathcal{L}}{c_{p}\Delta T},\qquad\gamma=\frac{l_{w}}{l_{l}}. \tag{1}\]
Figure 1: (a). An illustration of the setup for ice melting in flowing water. The inflow is set at the left boundary with unidirectional velocity and uniform temperature \(\theta_{0}\). (b). Zoom-in on the ice object; \(l_{w}\) and \(l_{l}\) represent the width and the length of the object, respectively. (c)-(f) Snapshots of the temperature field of ice melting in flowing water for different aspect ratios \(\gamma\), namely \(\gamma=0.25\) (c), \(0.44\) (d), \(1\) (e), and \(2.25\) (f). The corresponding movies are provided as supplementary material. (g)-(j) Snapshots of \(\partial\phi/\partial t\) (see text for more details), corresponding to the cases in (c)-(f); the color encodes the local melt rate \(\dot{m}\), i.e. the normalized \(\partial\phi/\partial t\), over the surface at different times. Also shown is the shift distance \(\widetilde{D}_{X}\) of the centroid (normalized by the corresponding \(l_{l}\)) as a function of dimensionless time.
Here, \(U_{0}\) is the inflow velocity, \(l_{w}\) and \(l_{l}\) are the initial width and length of the ice shape (shown in fig. 1(b)), \(\mathcal{L}\) is the latent heat, \(c_{p}\) is the specific heat capacity, \(\Delta T\) is the temperature difference between inflow and the ice. Due to the large parameter space, some of the control parameters have to be fixed in order to make the study feasible. For the time being, we fix \(Pr=7\) and \(St=4\) as the values for water at \(20^{\rm o}\)C, unless specified otherwise. Later we also investigate the effect of \(Pr\) and \(St\) independently. Our simulations cover a parameter range of \(0\leqslant Re\leqslant 10^{3}\) (\(Re=10^{3}\) corresponds to flow speed \(U=5~{}cm/s\) and ice diameter \(D=2~{}cm\)) and \(0.1\leqslant\gamma\leqslant 2.25\).
The flow is vertically confined by two parallel boundaries with free-slip boundary conditions for the velocity and adiabatic conditions for the temperature (see fig. 1(a)). In the simulations, we prescribe an ice object with area \(A_{0}\) and effective diameter \(D=2\sqrt{A_{0}/\pi}\) in a domain of length \(L=20D\) and width \(W_{z}=5D\). Both two-dimensional (\(L/W_{z}=4\)) and three-dimensional (\(L/W_{z}=L/W_{x}=4\)) simulations are conducted. The long and short axis lengths of the ellipse are \(l_{l}=D/\sqrt{\gamma}\) and \(l_{w}=D\sqrt{\gamma}\), respectively. Note that the finite cross-stream dimension of the computational domain could in principle lead to blockage effects. To ensure that this does not affect our results, we performed a test for circular shapes at \(Re=400\) with different \(W_{z}\), which showed that the melt rate has converged at \(W_{z}=5D\). Initially, the simulations are run with the ice object held fixed at zero velocity and \(\theta_{i}=0\), with melting disabled until the flow reaches the fully developed stage; then the melting process is switched on. The velocity and temperature at the left and right boundaries are set to the uniform values \(U_{0}\) and \(T_{0}\) by a penalty force. The initial temperature of the ice is set to the melting temperature \(\theta_{i}=0\), and the ambient fluid is set to the inflow temperature \(\theta=1\). The position of the ice front is tracked as the iso-contour \(\phi=1/2\) as time evolves. A grid convergence test is shown in fig. 2.
## 3 Shape evolution and melt rate dependence on parameters
We first investigate the evolution of the ice shape, which depends on the initial shape and the flow strength. Temperature fields for different initial shapes at fixed area are shown in fig. 1(c-f) (\(\gamma\) increases from (c) to (f)). The temperature field reveals the complex interaction between the melting and the detaching wake vortices. The ice develops distinct front and back shapes: the back is more deformed than the front, owing to the detaching vortices in the wake flow, which induce a reduced and non-uniform heat flux.
To study the local melt rate at the ice front, we further plot \(\partial\phi/\partial t\) in fig. 1(g-j), whose
Figure 2: Resolution convergence test. (a). The normalized area as a function of dimensionless time for different resolutions at \(Re=400\). (b). The contour plots of the ice surface at \(t=8t_{0}\) (gray dashed line in (a)). Based on this, our final choice for \(Re=400\) is \(N=512\).
value is zero in the solid and liquid phases. At the solid-liquid interface, its value provides insight into the local melt rate. It reveals that two parts of the ice melt faster (reddish color): the front, due to the inflow, and the back, due to the wake flow. This melting characteristic is similar to that of dissolution (Ristroph _et al._, 2012; Mac Huang _et al._, 2015) and to what was found in previous experiments on melting spheres (Hao & Tao, 2002), where the front tends to melt faster than the back. This occurs because the liquid at the back of the solid is cooler than at the front due to mixing induced by the wake. We investigate the front-to-back asymmetry in the melting by plotting the distance of the ice centroid from its initial position over time in fig. 1(g-j). For small \(\gamma\), we observe a significant shift of the centroid to the back, while for large \(\gamma\), the larger cross-section results in a stronger detaching flow, which enhances melting at the back and thus prevents a significant movement of the centroid (see fig. 1(f)). As melting progresses, the front of the shape remains rounded, while different back shapes are obtained for different \(\gamma\). These shapes can be attributed to the presence of the detaching vortices, whose shedding occurs in an oscillating manner at the top and bottom of the body. As a result, melting is favored at the top and bottom, creating a wedge-like shape.
The complicated ice-water interaction affects not only the melting shape but also the melt rate. Fig. 3(a) shows the normalized total area change over time for different aspect ratios \(\gamma\). For increasing \(\gamma\), the melt curves exhibit a non-monotonic trend, with the melt rate first decreasing and then increasing. We measure the time taken for the ice to completely melt as \(t_{f}\), and take \(\overline{f}=1/t_{f}\) to be the mean melt rate. Fig. 3(b) shows the normalized melt rate \(\overline{f}/f_{0}\) as a function of \(\gamma\) for different \(Re\), where \(f_{0}(Re)\) is the mean melt rate for the circle (\(\gamma=1\)). Here, the same trend is observed, and \(\overline{f}\) depends not only on \(\gamma\) but also on \(Re\). In the absence of flow (\(Re=0\)), the minimum melt rate occurs at the optimal aspect ratio \(\gamma_{min}=1\), i.e. for a circular shape. As \(Re\) increases, \(\gamma_{min}\) decreases until it converges to around \(\gamma_{min}\approx 0.5\). The same trend is also observed in 3D simulations of cylinders with cross-sections of different aspect ratios.
Our findings demonstrate that ice with an elliptical shape (with the long axis aligned with the flow direction) can melt up to 10% more slowly than a circular shape when exposed to flowing water. This previously neglected shape factor may have implications for accurately estimating the melt rate of icebergs in models that neglected it (Weeks & Campbell, 1973; Martin & Adcroft, 2010; FitzMaurice _et al._, 2017).
Figure 3: (a). The area \(A\) normalized by \(A_{0}\) as a function of time normalized by \(t_{0}=D/U_{0}\) for different \(\gamma\) and fixed \(Re=10^{3}\), \(Pr=7\), and \(St=4\). (b). Overall melt rate \(\overline{f}/f_{0}\) (from initial to complete melt), normalized by the overall melt rate \(f_{0}\) at \(\gamma=1\), as a function of \(\gamma\) for different \(Re\). The inset image shows snapshots of temperature fields from 3D simulations (\(Re=10^{3}\)). The dashed line is the theoretical curve from eq. (6), which is expected to hold for larger \(Re\).
The non-intuitive nature of this phenomenon raises a further question about its physical mechanism.
## 4 Theoretical model for the melt rate
To quantitatively explain the result, we consider the total ice mass budget for the melting process:
\[\frac{dA(t)}{dt}=-P(\gamma,t)\,v_{n}, \tag{1}\]
where \(P(\gamma,t)\) is the perimeter of the ice (in 2D) and \(v_{n}(t)\) is the surface-averaged melt speed. Assuming the elliptical shape does not change significantly with time (i.e., \(\gamma(t)\approx\gamma(t=0)=\gamma\)), we obtain the ellipse perimeter from the complete elliptic integral:
\[P(\gamma,t)=\sqrt{\frac{A(t)}{\pi\gamma}}\int_{0}^{2\pi}\sqrt{1-(1-\gamma^{2})\sin^{2}\alpha}\,d\alpha, \tag{2}\]
where \(\alpha\) is the angular coordinate along the ellipse. In the absence of flow, \(v_{n}(t)\) would be uniform along the surface. The presence of a flow around the body increases the temperature gradient at the ice front, which enhances the heat transfer and also makes the melt speed non-uniform, \(v_{n}(\alpha,t)\). The flow creates a viscous boundary layer and a thermal boundary layer, of thicknesses \(\delta_{\nu}\) and \(\delta_{\theta}\), respectively. The ratio of the boundary layer thicknesses is set by the Prandtl number; \(Pr=\nu/\kappa=7\) in our case implies \(\delta_{\nu}>\delta_{\theta}\), as illustrated in fig. 4(a-b). In order to understand the coupling between the shape of the object and the flow around it, we consider the Stefan boundary condition, i.e. the (dimensionless) surface-averaged melt speed \(\tilde{v}_{n}\) is related to the surface-averaged heat flux \(\overline{Nu}\):
\[\tilde{v}_{n}=\frac{v_{n}}{U_{0}}=-\frac{1}{U_{0}}\frac{\kappa c_{p}}{\mathcal{ L}}\frac{\partial T}{\partial n}=\frac{\overline{Nu}}{StPrReA^{1/2}}, \tag{3}\]
where we can use the relation from the well-known scaling of thermal boundary layer thickness (Meksyn 1961; Grossmann & Lohse 2004) for the laminar boundary layer:
\[\overline{Nu}\sim\delta_{\theta}^{-1}\sim Re_{l_{w}}^{1/2}Pr^{1/3}/C(Pr), \tag{4}\]
with \(Re_{l_{w}}=Re\,l_{w}/D\) the Reynolds number based on the cross-stream length \(l_{w}\), and \(C(Pr)\) an infinite alternating series given in Meksyn (1961). In the limit of large \(Pr\), the series for \(C(Pr)\) converges to \(C(Pr)=1\). Substituting eq. (2) and eq. (3) into eq. (1), we obtain an ODE which is easily solved, with the result
\[A(t)=A_{0}\left(1-\frac{t}{t_{f}}\right)^{\frac{4}{3}}, \tag{5}\]
\[t_{f}^{-1}\sim P(\gamma)\gamma^{1/4}\frac{Pr^{-2/3}}{C(Pr)}St^{-1}. \tag{6}\]
Thus the overall melt rate \(\overline{f}\) depends on \(\gamma\) as \(\overline{f}=t_{f}^{-1}\sim P(\gamma)\gamma^{1/4}\), where \(\gamma^{1/4}\) originates from the scaling of \(\overline{Nu}\) and represents the advective-flow-induced melt, while \(P(\gamma)\) represents the surface-contact-induced melt. The contributions of \(P(\gamma)\) and \(\gamma^{1/4}\) are both presented in fig. 4(c): \(P(\gamma)\) exhibits a symmetric curve as a function of \(\gamma\) with a minimum at \(\gamma=1\), representing melting without flow, whereas \(\gamma^{1/4}\) increases monotonically, indicating that the ambient flow enhances the melt rate. The overall melt rate \(\overline{f}\sim P(\gamma)\gamma^{1/4}\) has a non-monotonic trend with \(\gamma\), with a minimum at \(\gamma_{\min,\text{theory}}=0.48\) (which is essentially the
same as the value \(\gamma_{\min,\mathrm{sim}}\simeq 0.5\) observed in our numerical simulations). Fig. 3(b) shows the resulting total melt rate curve, which agrees well with the numerical results at high \(Re\). For relatively low \(Re\), and in particular \(Re=0\), the advective flow is weak, meaning \(\overline{Nu}\) is close to 1 instead of satisfying the boundary layer relation eq. (4).
In summary, the non-monotonic relation and the shift of the minimum melt rate point are physically explained by the competition between the melt driven by surface contact and the melt driven by the advective flow. The former is related to the surface area, with a minimum at \(\gamma=1\), while the latter is driven by the ambient flow, leading to a monotonic increase in melt rate with increasing \(\gamma\). We confirmed that the assumption of a fixed shape as the ice melts is valid by comparing the measured perimeter with that calculated from eq. (2) in fig. 5(a). The ratio remains close to 1, except for large \(\gamma\), which is attributed to the strong wake flow (see fig. 1(f)). The breakdown of the assumption at large \(\gamma\) also explains the deviation in fig. 3(b) at large \(\gamma\).
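The location of the theoretical minimum can be reproduced with a few lines of code. The sketch below assumes the fixed-shape perimeter of eq. (2) (using the \(\gamma\to 1/\gamma\) symmetry of the ellipse perimeter at fixed area) and minimizes \(P(\gamma)\gamma^{1/4}\) numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import ellipe

# Sketch: minimize the theoretical melt rate f ~ P(gamma)*gamma**(1/4)
# of eqs. (2) and (6) at fixed area.
def perimeter(gamma, area=1.0):
    g = min(gamma, 1.0 / gamma)            # P(gamma) = P(1/gamma) at fixed area
    a = np.sqrt(area / (np.pi * g))        # semi-major axis at fixed area
    return 4.0 * a * ellipe(1.0 - g**2)    # 4*a*E(m) with m = 1 - gamma**2

melt_rate = lambda gamma: perimeter(gamma) * gamma**0.25
res = minimize_scalar(melt_rate, bounds=(0.1, 2.25), method="bounded")
print(f"gamma_min ~ {res.x:.2f}")
```

Running this returns \(\gamma_{\min}\approx 0.48\), consistent with the value quoted above.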
## 5 Melt rate scalings - comparison of theory and numerical result
In order to test the scaling law \(A(t)\sim(1-t/t_{f})^{4/3}\) derived in the previous section, we conducted simulations with varying \(\gamma\) and fixed \(Re=10^{3}\), \(Pr=7\), and \(St=4\). The evolution of the area of the object is depicted in fig. 5(b), along with the scaling law given by eq. (5). The trends from the simulations for different \(\gamma\) all follow the \(4/3\) scaling, in agreement with the theoretical result. Our findings demonstrate that, regardless of the physical mechanism causing the body to ablate, be it erosion (Ristroph _et al._, 2012), dissolution (Mac Huang _et al._, 2015), or melting, the scaling laws associated with the vanishing of the body are analogous.
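As a quick consistency check, the mass budget can also be integrated numerically: with \(P\sim A^{1/2}\) from eq. (2) and \(\overline{Nu}\sim A^{1/4}\) from eq. (4), eq. (1) reduces to \(dA/dt\sim-A^{1/4}\), whose solution is the \(4/3\) power law. A minimal sketch (with an arbitrary prefactor \(k\)):

```python
import numpy as np

# Sketch: P ~ A**(1/2) and v_n ~ Nu/A**(1/2) ~ A**(1/4)/A**(1/2) turn eq. (1)
# into dA/dt = -k*A**(1/4); forward Euler recovers the 4/3 law of eq. (5).
k, dt = 1.0, 1e-4
A, t, hist = 1.0, 0.0, []
while A > 1e-6:
    hist.append((t, A))
    A -= dt * k * A**0.25
    t += dt
t_f = t
for ti, Ai in hist[:: len(hist) // 5]:
    print(f"t/t_f={ti/t_f:.2f}  A={Ai:.4f}  (1-t/t_f)^(4/3)={(1 - ti/t_f)**(4/3):.4f}")
```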
Besides the dependence of the melt rate on \(\gamma\), we can also obtain the scaling dependence of the melt rate on \(Pr\) and \(St\) from eq. (6). To validate this scaling law, we performed simulations with varying \(St\) in fig. 5(c) while keeping \(\gamma\) fixed at 1 and \(Pr\) fixed at 7. At large \(St\) (relevant for iceberg melting in cold water), the melt rate from the simulations follows \(\overline{f}\sim St^{-1}\), which is consistent with the scaling law from eq. (6). At small \(St\), the scaling deviates from \(St^{-1}\) because the ice melts so fast that the melted fluid still surrounds the ice,
Figure 4: (a). A schematic of the flow and temperature fields for different \(\gamma\). The steady outer flow consists of warm water, while the attached flow near the body forms a boundary layer (dashed line) containing the melted fluid. It also shows that the separation point of the flow moves downstream as \(\gamma\) decreases. (b). Zoomed-in view of the velocity and temperature boundary layers, the former defined by the velocity and the latter by the temperature gradient. (c). The theoretical curve from eq. (6), including \(P(\gamma)\), \(\gamma^{1/4}\), and \(P(\gamma)\gamma^{1/4}\) as a function of \(\gamma\). Here, \(Pr=7\), \(St=4\).
regardless of the ambient flow. This trend has also been observed in the study of melting in convection (Favier _et al._, 2019). We also conducted simulations with varying \(Pr\) in fig. 5(d) while keeping \(\gamma\) fixed at 1 and \(St\) fixed at 4. The simulation results follow \(\overline{f}\sim Pr^{-2/3}/C(Pr)\) from eq. (6), which implies the simplified scaling \(\overline{f}\sim Pr^{-2/3}\) for large \(Pr\), in agreement with our results. In brief, our simulation results for varying \(St\) and \(Pr\) both show good agreement with the scaling law derived in eq. (6).
## 6 Conclusions and Outlook
Through a combination of simulations and theoretical modeling, we conducted a comprehensive investigation of the melting dynamics of a disk/elliptical shape of ice immersed in a cross-flow, taking into account the coupled dynamics between flow and ice melting for various initial shapes. Our results demonstrate that in the presence of an incoming flow, the front keeps a rounded shape during the melting process, while the back of the ice exhibits different shapes depending on the initial shape. Our simulations reveal that the presence of detaching vortices behind the ice plays a crucial role in shaping the melting pattern of the object.
Furthermore, we observed that the shape of the ice also has a significant effect on the melt rate. In the absence of flow, the circular shape (\(\gamma=1\)) melts more slowly than elliptical shapes with \(\gamma\neq 1\). However, in the presence of an external flow, some elliptic cylindrical bodies (\(\gamma<1\)) melt less rapidly than circular cylinders of the same volume. This optimal aspect ratio depends on the flow strength, represented by \(Re\). Our physical understanding of this dependence of the melt rate on the initial aspect ratio comes from the competition between the surface-contact-induced melt (\(\sim P(\gamma)\)) and the advective-flow-induced melt (\(\sim\gamma^{1/4}\)). By assuming a laminar boundary layer, we derived a model (eqs. (5) and (6)) that accurately predicts the overall melt rate and its scaling as a function of \(\gamma\), \(Pr\), and \(St\). Our findings provide insight into the rich coupled dynamics between the ice-water interface and ambient flows and demonstrate the importance of considering the ice shape when predicting melt rates.
The approach employed in this study, which combines fully resolved direct numerical simulations and theoretical modeling, offers the possibility to explain other phase-change problems coupled with advective flows. We caution that our findings reveal only a subset of the numerous factors that influence the dynamics of the ice-water system. In future investigations, it would be worthwhile to further explore additional factors such as the buoyancy force (Favier _et al._, 2019; Couston _et al._, 2021; Wang _et al._, 2021; Yang _et al._, 2023), melting near sidewalls or basal walls in the presence of ambient flow, and the effect of dissolved salt (Huppert & Turner, 1978; Yang _et al._, 2023_a_; Du _et al._, 2023).
Figure 5: (a). The ratio between the measured perimeter from the simulations and the ideal elliptical perimeter from eq. (2) for varying \(\gamma\) at \(Re=400\). A ratio close to \(1\) means that our assumption is valid. (b). The normalized area \(A/A_{0}\) as a function of \(1-t/t_{f}\) for varying \(\gamma\) at \(Re=400\). All curves follow the \(4/3\) scaling theoretically derived in eq. (5). (c). The melt rate as a function of \(St\) for fixed \(\gamma=1\) and \(Pr=7\). The dashed gray line represents \(\overline{f}\sim St^{-1}\). (d). The melt rate as a function of \(Pr\) for fixed \(\gamma=1\) and \(St=4\). The dashed gray line represents \(\overline{f}\sim Pr^{-2/3}/C(Pr)\).
These topics have significant relevance to the modeling of geophysical and climatological large-scale processes. Additionally, we plan to investigate the interaction between the motion of solid objects and phase change, as this holds significant potential for further exploration since icebergs can move and rotate.
## Funding
We acknowledge PRACE for awarding us access to MareNostrum in Spain at the Barcelona Computing Center (BSC) under the project 2020235589 and project 2021250115 and the Netherlands Center for Multiscale Catalytic Energy Conversion (MCEC). We also acknowledge the support by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
## Declaration of interests
The authors report no conflict of interest.
|
2310.20148 | Decision-Making for Autonomous Vehicles with Interaction-Aware
Behavioral Prediction and Social-Attention Neural Network | Autonomous vehicles need to accomplish their tasks while interacting with
human drivers in traffic. It is thus crucial to equip autonomous vehicles with
artificial reasoning to better comprehend the intentions of the surrounding
traffic, thereby facilitating the accomplishments of the tasks. In this work,
we propose a behavioral model that encodes drivers' interacting intentions into
latent social-psychological parameters. Leveraging a Bayesian filter, we
develop a receding-horizon optimization-based controller for autonomous vehicle
decision-making which accounts for the uncertainties in the interacting
drivers' intentions. For online deployment, we design a neural network
architecture based on the attention mechanism which imitates the behavioral
model with online estimated parameter priors. We also propose a decision tree
search algorithm to solve the decision-making problem online. The proposed
behavioral model is then evaluated in terms of its capabilities for real-world
trajectory prediction. We further conduct extensive evaluations of the proposed
decision-making module, in forced highway merging scenarios, using both
simulated environments and real-world traffic datasets. The results demonstrate
that our algorithms can complete the forced merging tasks in various traffic
conditions while ensuring driving safety. | Xiao Li, Kaiwen Liu, H. Eric Tseng, Anouck Girard, Ilya Kolmanovsky | 2023-10-31T03:31:09Z | http://arxiv.org/abs/2310.20148v2 | Decision-Making for Autonomous Vehicles with Interaction-Aware Behavioral Prediction and Social-Attention Neural Network
###### Abstract
Autonomous vehicles need to accomplish their tasks while interacting with human drivers in traffic. It is thus crucial to equip autonomous vehicles with artificial reasoning to better comprehend the intentions of the surrounding traffic, thereby facilitating the accomplishments of the tasks. In this work, we propose a behavioral model that encodes drivers' interacting intentions into latent social-psychological parameters. Leveraging a Bayesian filter, we develop a receding-horizon optimization-based controller for autonomous vehicle decision-making which accounts for the uncertainties in the interacting drivers' intentions. For online deployment, we design a neural network architecture based on the attention mechanism which imitates the behavioral model with online estimated parameter priors. We also propose a decision tree search algorithm to solve the decision-making problem online. The proposed behavioral model is then evaluated in terms of its capabilities for real-world trajectory prediction. We further conduct extensive evaluations of the proposed decision-making module, in forced highway merging scenarios, using both simulated environments and real-world traffic datasets. The results demonstrate that our algorithms can complete the forced merging tasks in various traffic conditions while ensuring driving safety.
Autonomous Vehicles, Interaction-Aware Driving, Imitation Learning, Neural Networks, Traffic Modeling
## I Introduction
One of the challenges in autonomous driving is interpreting the driving intentions of other human drivers. The communication between on-road participants is typically non-verbal, and relies heavily on turn/brake signals, postures, eye contact, and subsequent behaviors. In uncontrolled traffic scenarios, e.g., roundabouts [1], unsignalized intersections [2], and highway ramps [3], drivers need to negotiate their order of proceeding. Fig. 1 illustrates a forced merging scenario at a highway entrance, where the ego vehicle in red attempts to merge into the highway before the end of the ramp. This merging action affects the vehicle behind in the lane being merged, and different social traits of its driver can result in different responses to the merging intent. A cooperative driver may choose a lane change to promote the merging process, while an egoistic driver may maintain a constant speed and disregard the merging vehicle. Therefore, understanding the latent intentions of other drivers can help the ego vehicle resolve conflicts and accomplish its task.
In this paper, we specifically focus on the highway forced merging scenario illustrated in Fig. 1 and the objective of transitioning the ego vehicle onto the highway from the ramp in a timely and safe manner. The difficulty of developing suitable automated driving algorithms for such scenarios is exacerbated by the fact that stopping on the ramp in non-congested highway traffic could be dangerous.
Forced merging has been addressed in the automated driving literature from multiple directions. In particular, learning-based methods have been investigated to synthesize controllers for such interactive scenarios. End-to-end planning methods [4] have been proposed to generate control inputs from Lidar point clouds [5] and RGB images [6]. Reinforcement Learning (RL) algorithms have also been considered to learn end-to-end driving policies [7, 8]. A comprehensive survey of RL methods for autonomous driving applications is presented in [9]. Meanwhile, Imitation Learning-based methods have been exploited to emulate expert driving behaviors [10, 11]. However, end-to-end learning-based controllers lack interpretability and are limited in providing safety guarantees in unseen situations. To address these concerns, researchers have explored the integration of learning-based methods with planning and control techniques. Along these lines, Model Predictive Control (MPC) algorithms have been integrated with Social-GAN [12] for trajectory prediction and planning [13]. Meanwhile, Inverse RL methods have also been explored to predict drivers' behavior for planning purposes [14, 15, 16]. However, the learning-based modules in these systems may have limited capability to generalize and transfer to unobserved scenarios or behaviors.
There also exists extensive literature on modeling the interactive behaviors between drivers using model-based approaches.
Fig. 1: Schematic diagram of the highway forced merging scenario: An on-ramp ego vehicle in red is merging onto the highway while interacting with the highway vehicles in grey.
Assuming drivers maximize their rewards [14], game-theoretic approaches have been proposed to model traffic interactions, such as the level-\(k\) hierarchical reasoning framework [17], potential games [18], and Stackelberg games [15, 19]. In the setting of level-\(k\) game-theoretic models, approaches to estimating drivers' reasoning levels have been proposed [20]. A novel Leader-Follower Game-theoretic Controller (LFGC) has been developed for decision-making in forced merging scenarios [21]. However, solving game-theoretic problems can be computationally demanding and has limited scalability to larger numbers of interacting drivers or longer prediction horizons. To account for the uncertainty in the interactions, probabilistic methods, leveraging either a Bayesian filter [21, 22] or a particle filter [23] with a Partially Observable Markov Decision Process (POMDP) [21, 24], have also been implemented to encode and estimate the uncertain intent of other drivers as hidden variables.
In this paper, we consider the Social Value Orientation (SVO) from social psychology studies [25] to model drivers' interactions. The SVO quantifies subjects' tendencies toward social cooperation [26] and has been previously used to model drivers' cooperativeness during autonomous vehicle decision-making in [15, 27]. In addition, researchers have combined SVO-based rewards with RL to generate pro-social autonomous driving behaviors [28, 29] or synthesize realistic traffic simulation with SVO agents [30]. In our proposed behavioral model, we consider both social cooperativeness and the personal objectives of the interacting drivers. Leveraging a Bayesian filter, we propose a decision-making module that accounts for pairwise interactions with other drivers, and computes a reference trajectory for the ego vehicle under the uncertain cooperative intent of other drivers. The method proposed in this paper differs from the previous work [31] in the following aspects: 1) Instead of using an action space with a few coarse action primitives, we synthesize a state-dependent set of smooth and realistic trajectories as our action space. 2) We design a Social-Attention Neural Network (SANN) architecture to imitate the behavioral model that structurally incorporates the model-based priors and can be transferred to various traffic conditions. 3) We develop a decision-tree search algorithm for the ego vehicle's decision-making, which guarantees safety and scalability. 4) We conduct an extensive evaluation of the behavioral model in predicting real-world trajectory and demonstrate the decision-making module capabilities in forced merging scenarios on both simulations and real-world datasets, which is not done in [31].
The proposed algorithm has several distinguishing features:
1. The behavioral model incorporates both the driver's social cooperativeness and personal driving objectives, which produces rich and interpretable behaviors.
2. The proposed decision-making module handles the uncertainties in the driving intent using a Bayesian filter and generates smooth and realistic reference trajectories for the downstream low-level vehicle controller.
3. Differing from pure learning-based methods, the designed SANN incorporates social-psychological model-based priors. It imitates the behavioral model and is transferable across different traffic conditions while providing better online computation efficiency.
4. The decision-making module utilizes an interaction-guided decision tree search algorithm, which ensures probabilistic safety and scales linearly with the number of interacting drivers and prediction horizons.
5. The behavioral model is evaluated in predicting real-world trajectories. This model demonstrates good quantitative accuracy in short-term prediction and provides qualitative long-term behavioral prediction.
6. The decision-making module is tested in the forced merging scenarios on a comprehensive set of environments without re-tuning the model hyperparameters. The proposed method can safely merge the ego vehicle into the real-world traffic dataset [32] faster than the human drivers, and into diverse Carla [33] simulated traffic with different traffic conditions.
This paper is organized as follows: In Sec. II, we describe the model preliminaries, including the vehicle kinematics model, the action space with the lane-change modeling, and the choice of model hyperparameters. In Sec. III, we present the behavioral model, which can be utilized to predict interacting drivers' trajectories given their latent driving intentions. In Sec. IV, we discuss the SANN architecture that imitates the behavioral model and is suitable for online deployment. In Sec. V, we introduce the decision-making module of our ego vehicle together with a decision tree search algorithm that incorporates the SANN and improves computation efficiency. In Sec. VI, we report the results of real-world trajectory prediction using the behavioral model for the forced merging test on a real-world dataset and in simulations. Finally, conclusions are given in Sec. VII.
## II System and Model Preliminaries
In this paper, we design a modularized algorithm architecture for decision-making and control of the autonomous (ego) vehicle in the forced merging scenario. In this framework (see Fig. 2), we develop a parameterized behavioral model for modeling the behavior of interacting drivers. Leveraging this model and observed traffic interactions, we can estimate the latent driving intentions of interacting drivers as model parameters. Thereby, we can predict their future trajectories in response to the action of the ego vehicle. Based on the observations, and the predictions, a high-level decision-making module optimizes a reference trajectory for the ego vehicle merging into the target highway lane while ensuring its safety
Fig. 2: Proposed algorithm architecture for autonomous vehicles decision-making and control.
in the traffic. A low-level controller controls the vehicle throttle and steering angle to track the reference trajectory.
This section first introduces the vehicle kinematics model in Sec. II-A. Then, we present a state-dependent action space (in Sec. II-B) of the vehicle, which is a set of trajectories synthesized from the kinematics model. We discuss the detailed lane change trajectory modeling in Sec. II-C, together with model hyperparameter identification from a naturalistic dataset [32].
### _Vehicle Kinematics_
We use the following continuous-time bicycle model [34] to represent the vehicle kinematics,
\[\left[\begin{array}{c}\dot{x}\\ \dot{y}\\ \dot{\varphi}\\ \dot{v}\end{array}\right]=\left[\begin{array}{c}v\cos(\varphi+\beta)\\ v\sin(\varphi+\beta)\\ \frac{v}{l_{r}}\sin(\beta)\\ a\end{array}\right]+\tilde{w}, \tag{1}\] \[\beta=\arctan\left(\frac{l_{r}}{l_{r}+l_{f}}\tan\delta\right),\]
where \(x\), \(v\), and \(a\) are the longitudinal position, velocity, and acceleration of the vehicle center of gravity (CoG), respectively; \(y\) is the lateral position of the CoG; \(\varphi\) is the heading angle of the vehicle; \(\beta\) is the sideslip angle; \(\delta\) is the front wheel steering angle; \(l_{f}\) and \(l_{r}\) denote the distances from the vehicle CoG to the front and rear wheel axles, respectively; \(\tilde{w}\in\mathbb{R}^{4}\) is a disturbance representing unmodeled dynamics.
We assume that all the highway vehicles, together with the ego vehicle, follow this kinematics model. We then derive discrete-time kinematics from (1) assuming zero-order hold with the sampling period of \(\Delta T\) sec. This leads to the discrete-time kinematics model,
\[s_{k+1}^{i}=f(s_{k}^{i},u_{k}^{i})+\tilde{w}_{k}^{i},\;i=0,1,2,\ldots, \tag{2}\]
where the subscript \(k\) denotes the discrete time instance \(t_{k}=k\Delta T\) sec; the superscript \(i\) designates a specific vehicle, where we use \(i=0\) to label the ego vehicle and \(i=1,2\ldots\) for other interacting vehicles; \(s_{k}^{i}=[x_{k}^{i},y_{k}^{i},\varphi_{k}^{i},{v_{k}^{i}}]^{T}\) and \(u_{k}^{i}=[a_{k}^{i},\delta_{k}^{i}]^{T}\) are the state and control vectors of vehicle \(i\) at time instance \(t_{k}\). Then, we can use this discrete kinematics model to synthesize vehicle trajectories. Note that there are other vehicle kinematics and dynamics models that could potentially represent vehicle behaviors [34] more realistically. The model (2) was chosen as it provides adequate accuracy for the purpose of decision-making (planning) of the ego vehicle while it is simple and computationally efficient [35].
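For concreteness, the following sketch implements one discrete step \(f\) of eq. (2) by sub-stepping the continuous model (1) with forward Euler; the sampling period \(\Delta T\), the geometry \(l_{f},l_{r}\), and the test state are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

# Sketch of the discrete step f in eq. (2): forward-Euler sub-stepping of the
# continuous kinematic bicycle model (1) over one sampling period.
def bicycle_step(s, u, dT=0.25, l_f=1.5, l_r=1.5, n_sub=10):
    x, y, phi, v = s
    a, delta = u
    beta = np.arctan(l_r / (l_f + l_r) * np.tan(delta))  # sideslip angle
    h = dT / n_sub
    for _ in range(n_sub):
        x += h * v * np.cos(phi + beta)
        y += h * v * np.sin(phi + beta)
        phi += h * (v / l_r) * np.sin(beta)
        v += h * a
    return np.array([x, y, phi, v])

s0 = np.array([0.0, 0.0, 0.0, 20.0])   # state [x, y, heading, speed]
print(bicycle_step(s0, u=np.array([1.0, 0.02])))
```

Any consistent discretization (e.g., Runge-Kutta) could be substituted here without changing the planning interface.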
### _Trajectory Set as Action Space_
Given the initial vehicle state and the kinematics model (2), neglecting the disturbance \(\tilde{w}_{k}^{i}\) and using different control signal profiles, we can synthesize various trajectories of duration \(N\Delta T\) sec for trajectory prediction and planning. For the \(i\)th vehicle at time \(t_{k}\), we assume that the vehicle's action space is \(\Gamma(s_{k}^{i})=\left\{\gamma^{(m)}(s_{k}^{i})\right\}_{m=1}^{M}\), where each individual element \(\gamma^{(m)}(s_{k}^{i})=\left\{s_{n}^{i}\right\}_{n=k}^{k+N+1}\) is a trajectory of time duration \(N\Delta T\) sec synthesized using a distinct control sequence \(\left\{u_{n}^{i}\right\}_{n=k,\ldots,k+N}\) via the kinematics model (2). We select \(225\) different control sequences such that the number of considered trajectories is finite, i.e., \(M\leq 225\) for all \(\Gamma(s_{k}^{i})\) and all initial states \(s_{k}^{i}\). As shown in Fig. 3, the action space \(\Gamma(s_{k}^{i})\) depends on the current vehicle state \(s_{k}^{i}\) for two reasons: with a fixed control sequence \(\left\{u_{n}^{i}\right\}_{n}\), the resulting trajectory from (2) varies with the initial condition \(s_{k}^{i}\); moreover, a safety filter is applied to \(\Gamma(s_{k}^{i})\) such that all trajectories that intersect the road boundaries are removed, which also depends on \(s_{k}^{i}\). The chosen \(225\) control sequences generate trajectories that encompass a set of plausible driving behaviors (see Fig. 3) and suffice for the tasks of trajectory prediction and planning. Note that the trajectory set can easily be enlarged with more diverse control sequences if necessary.
Meanwhile, we assume a complete lane change takes \(T_{\text{lane}}=N_{\text{lane}}\Delta T\) sec to move \(w_{\text{lane}}\) meters from the center line of the current lane to that of an adjacent lane. We set \(N_{\text{lane}}<N\) to allow trajectory sets to contain complete lane change trajectories. As shown in Fig. 3(a), the trajectory set comprises 109 regular driving trajectories for an on-ramp vehicle that intends to merge. This trajectory set covers a variety of driver actions such as lane keeping with longitudinal acceleration/deceleration, merging at constant longitudinal speed, merging with longitudinal acceleration/deceleration, accelerating/decelerating before or after merging, etc. Moreover, we also include the behavior of aborting a lane change (see Fig. 3(b)). For a lane-changing vehicle, this is a common "change-of-mind" behavior to avoid collision with nearby highway vehicles. For longitudinal behaviors, we also assume speed and acceleration/deceleration limits of \([v_{\min},v_{\max}]\) and \([a_{\min},a_{\max}]\) for all vehicles at all times. Namely, the trajectory sets and control sequences satisfy \(a_{n}^{i}\in[a_{\min},a_{\max}]\) and \(v_{n}^{i}\in[v_{\min},v_{\max}]\) for all \(s_{n}^{i}\in\gamma^{(m)}(s_{k}^{i}),n=k,\ldots,k+N+1\) and for all \(\gamma^{(m)}(s_{k}^{i})\in\Gamma(s_{k}^{i}),m=1,\ldots,M(s_{k}^{i})\). The speed limits are commonly known quantities on highways, and the longitudinal acceleration/deceleration is typically limited by the vehicle's performance limits.
Fig. 3: Examples of trajectory set \(\Gamma(s_{k}^{i})\) with duration \(6\) sec: (a) A trajectory set of \(M=109\) encompasses behaviors of lane keeping, lane change, and coupled longitudinal and lateral behavior (e.g., lane change with longitudinal acceleration/deceleration). (b) A trajectory set of \(M=129\) contains actions of lane change abortion and re-merge after aborting the previous lane change. Other normal lane change trajectories are in semi-transparent lines. Likewise, the lane change abortion behaviors are also coupled with various longitudinal acceleration/deceleration profiles.
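A sketch of how such a state-dependent trajectory set can be constructed is given below. For brevity it rolls out constant \((a,\delta)\) primitives on a \(15\times 15\) grid (225 combinations) as a stand-in for the paper's \(225\) control sequences, which also contain time-varying lane-change profiles; the road-boundary values are assumptions, and the sketch reuses the `bicycle_step` helper from Sec. II-A.

```python
import itertools
import numpy as np

# Sketch: build Gamma(s_k) by rolling out candidate control primitives through
# the kinematic model and filtering infeasible trajectories.
A_GRID = np.linspace(-6.0, 6.0, 15)      # [a_min, a_max] in m/s^2
D_GRID = np.linspace(-0.05, 0.05, 15)    # steering angles in rad
V_MIN, V_MAX = 2.0, 34.0                 # speed limits identified in Sec. II-C
Y_MIN, Y_MAX = -1.75, 8.75               # assumed road boundaries in m

def rollout(s0, a, delta, N=24, dT=0.25):        # N*dT = 6 sec horizon
    s = np.array(s0, dtype=float)
    traj = [s.copy()]
    for _ in range(N):
        s = bicycle_step(s, np.array([a, delta]), dT)
        traj.append(s)
    return np.array(traj)

def trajectory_set(s0):
    out = []
    for a, d in itertools.product(A_GRID, D_GRID):
        traj = rollout(s0, a, d)
        if (traj[:, 3].min() >= V_MIN and traj[:, 3].max() <= V_MAX
                and traj[:, 1].min() >= Y_MIN and traj[:, 1].max() <= Y_MAX):
            out.append(traj)                     # keep only safe candidates
    return out

print(len(trajectory_set([0.0, 0.0, 0.0, 20.0])), "feasible trajectories")
```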
### _Trajectory Hyperparameters and Lane Change Behavior_
We use the naturalistic highway driving High-D dataset [32] to identify the model hyperparameters, i.e., \(v_{\min}\), \(v_{\max}\), \(a_{\min}\), \(a_{\max}\), \(w_{\text{lane}}\), and \(T_{\text{lane}}\). The statistics visualized in Fig. 4 are obtained from data of 110,500 vehicles driven over 44,500 kilometers. The minimum speed is set to \(v_{\min}=2\;\mathrm{m}/\mathrm{s}\) since most of the vehicles have speeds higher than that, and the maximum speed limit of the dataset gives \(v_{\max}=34\;\mathrm{m}/\mathrm{s}\). The majority of longitudinal accelerations and decelerations of High-D vehicles are within the range \([a_{\min},a_{\max}]=[-6,6]\;\mathrm{m}/\mathrm{s}^{2}\). The lane width is \(w_{\text{lane}}=3.5\;\mathrm{m}\), as in the High-D dataset. We select \(T_{\text{lane}}=N_{\text{lane}}\Delta T=4\;\mathrm{sec}\) since most of the High-D vehicles take between \(4\) and \(6\;\mathrm{sec}\) to change lanes. We keep these hyperparameters fixed for the following discussion and experiments. Note that these parameters can be identified similarly, possibly with different values, in other scenarios if necessary.
In terms of lane change behaviors, given an acceleration sequence \(\left\{a_{k}^{i}\right\}_{k}\), we can derive the steering profile \(\left\{\delta_{k}^{i}\right\}_{k}\) of a lane change trajectory from 5th order polynomials [36],
\[x(t|\{p_{j}\}) =p_{0}+p_{1}t+p_{2}t^{2}+p_{3}t^{3}+p_{4}t^{4}+p_{5}t^{5}, \tag{3}\] \[y(t|\{q_{j}\}) =q_{0}+q_{1}t+q_{2}t^{2}+q_{3}t^{3}+q_{4}t^{4}+q_{5}t^{5},\]
which represents a vehicle lane change between time \(t=0\) and \(t=T_{\text{lane}}\;\mathrm{sec}\). Such lane change trajectories are commonly used in vehicle trajectory planning and control [37, 38].
Suppose, without loss of generality, that the lane change starts and ends at \(t_{0}=0\) and \(t_{N_{\text{lane}}}=T_{\text{lane}}\;\mathrm{sec}\), respectively. The following procedure is utilized to determine the trajectory and steering profile during a lane change: At time \(t_{k}=k\Delta T\;\mathrm{sec}\), given vehicle state \(s_{k}^{i}=[x_{k}^{i},y_{k}^{i},\varphi_{k}^{i},v_{k}^{i}]^{T}\) and lateral target lane center \(y_{\text{target}}\), we first solve for the coefficients \(\{p_{k,j}\}\) and \(\{q_{k,j}\}\) in (3) from the following two sets of boundary conditions,
\[\left\{\begin{array}{ll}x(t_{k})=x_{k}^{i},\;\;\dot{x}(t_{k})=v_{k}^{i},& \;\ddot{x}(t_{k})=a_{k}^{i},\\ y(t_{k})=y_{k}^{i},&\;\dot{y}(t_{k})=\dot{y}_{k}^{i},&\;\ddot{y}(t_{k})=\ddot{ y}_{k}^{i},\end{array}\right. \tag{4}\]
\[\left\{\begin{array}{ll}&x(T_{\text{lane}})=x_{k}^{i}+v_{k}^{i}(T_{\text{ lane}}-t_{k})\\ &+\frac{1}{2}a_{k}^{i}(T_{\text{lane}}-t_{k})^{2},\\ \dot{x}(T_{\text{lane}})=v_{k}^{i}+a_{k}^{i}(T_{\text{lane}}-t_{k}),&\;\ddot{x }(T_{\text{lane}})=a_{k}^{i},\\ y(T_{\text{lane}})=y_{\text{target}},\;\dot{y}(T_{\text{lane}})=0,&\;\ddot{y}(T _{\text{lane}})=0,\end{array}\right.\]
where we assume initial/terminal conditions \(\dot{y}(0)=\dot{y}(T_{\text{lane}})=0\) and \(\ddot{y}(0)=\ddot{y}(T_{\text{lane}})=0\), i.e., zero lateral velocity and acceleration at the beginning and the end of a lane change. Recursively, initial conditions \(\dot{y}(t_{k})=\dot{y}_{k}^{i}\) and \(\ddot{y}(t_{k})=\ddot{y}_{k}^{i}\) at step \(k\) can be computed using (3) with the coefficients \(\{q_{k-1,j}\}\) at previous step \(k-1\); and we assume a constant longitudinal acceleration \(a_{k}^{i}\) throughout the lane change process. Then, we can compute \(s_{k+1}^{i}=[x_{k+1}^{i},y_{k+1}^{i},\varphi_{k+1}^{i},v_{k+1}^{i}]^{T}\) from (3) using the following equations,
\[x_{k+1}^{i}=x(t_{k+1}|\{p_{k,j}\}_{j}),\;y_{k+1}^{i}=y(t_{k+1}|\{q_{k,j}\}_{j}), \tag{5}\] \[\varphi_{k+1}^{i}=\arctan\left(\dot{y}(t_{k+1}|\{q_{k,j}\}_{j})/\dot{x}(t_{k+1}|\{p_{k,j}\}_{j})\right),\] \[v_{k+1}^{i}=\dot{x}(t_{k+1}|\{p_{k,j}\}_{j}).\]
Repeating this procedure for \(k=0,1,\ldots,N_{\text{lane}}-1\), we can synthesize a smooth lane change trajectory \(\left\{s_{k}^{i}\right\}_{k=0,\ldots,N_{\text{lane}}}\) with a corresponding acceleration sequence \(\left\{a_{k}^{i}\right\}_{k=0,\ldots,N_{\text{lane}}-1}\). Fig. 5 illustrates this approach to lane change modeling. Given an acceleration sequence \(\left\{a_{k}^{i}\right\}_{k}\) (see Fig. 5b) from one of the \(225\) control sequences \(\left\{u_{k}^{i}\right\}_{k}\), we leverage (3) to produce a smooth lane change trajectory that qualitatively matches a real-world (High-D) lane change trajectory. Meanwhile, the resulting steering angle profile \(\left\{\delta_{k}^{i}\right\}_{k}\) (see Fig. 5c) is similar to those observed in human driving [39, 40].
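The coefficient computation amounts to solving a \(6\times 6\) linear system assembled from the boundary conditions (4). A minimal sketch for the lateral polynomial \(y(t|\{q_{j}\})\) of eq. (3), using the hyperparameters \(T_{\text{lane}}=4\) sec and \(w_{\text{lane}}=3.5\) m identified above:

```python
import numpy as np

# Sketch: solve for the quintic coefficients {q_j} from the position,
# velocity, and acceleration constraints at the start and end of the
# lane change, then sample the resulting lateral path.
def quintic_coeffs(t0, y0, yd0, ydd0, T, yT, ydT=0.0, yddT=0.0):
    def rows(t):
        return np.array([
            [1.0,   t, t**2,   t**3,     t**4,     t**5],   # y(t)
            [0.0, 1.0, 2*t,  3*t**2,  4*t**3,   5*t**4],    # y'(t)
            [0.0, 0.0, 2.0,    6*t,  12*t**2,  20*t**3],    # y''(t)
        ])
    M = np.vstack([rows(t0), rows(T)])
    b = np.array([y0, yd0, ydd0, yT, ydT, yddT])
    return np.linalg.solve(M, b)

T_lane, w_lane = 4.0, 3.5
q = quintic_coeffs(0.0, 0.0, 0.0, 0.0, T_lane, w_lane)
ts = np.linspace(0.0, T_lane, 9)
y = sum(qj * ts**j for j, qj in enumerate(q))
print(np.round(y, 3))   # smooth 0 -> 3.5 m lateral transition
```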
## III Social Behavior Modeling
In this section, we model the two components of drivers' driving incentives that motivate them to take action from the trajectory sets defined in Sec. II. The first component consists of each individual driver's objectives as a personal reward in Sec. III-A. The second component uses an SVO-based reward to incorporate the drivers' social cooperativeness (see Sec. III-B). In Sec. III-C, we integrate this reward model into the interacting vehicle's decision-making process.
### _Driver's Driving Objectives and Personal Rewards_
Similar to the previous work [31], we model the personal reward of the \(i\)th driver who interacts with an adjacent vehicle
Fig. 4: Histogram of vehicle driving statistics in the High-D dataset [32]: (a) Time duration for a complete lane change. (b) Longitudinal velocity. (c) Longitudinal acceleration/deceleration (y-axis in log scale).
Fig. 5: A lane change trajectory synthesized using a given acceleration sequence: (a) Synthesized trajectory using our algorithm compared with a real-world lane change trajectory in the High-D dataset. (b) Designed acceleration sequence \(\left\{a_{k}^{i}\right\}_{k}\). (c) Derived steering sequence \(\left\{\delta_{k}^{i}\right\}_{k}\) from (2).
\(j\) using the following formula,
\[r\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})|w^{i}\Big{)}=\neg c\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\Big{)}\cdot\Big{[}h(s_{k+n_{2}}^{i},s_{k+n_{2}}^{j})\quad\tau(s_{k+n_{2}}^{i})\quad e\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\right)\Big{]}\cdot w^{i}, \tag{6}\]
where \(s_{k}^{i},s_{k}^{j}\) are the current states of vehicles \(i,j\); \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})=\{s_{k+n}^{i}\}_{n=n_{1}}^{n_{2}}\subset\gamma(s_{k}^{i})\) is a segment of the trajectory \(\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i})\), with \(0\leq n_{1}\leq n_{2}\leq N+1\); \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\) is defined likewise; \(\neg\) is the logical negation operator; \(w^{i}\in\mathbb{R}^{3}\) is a vector of weights, so that the personal reward is a weighted summation of the personal objectives \(h\), \(\tau\), and \(e\). Here, \(c\), \(h\), \(\tau\), and \(e\) are four functions that capture different aspects of drivers' driving objectives:
1. Collision avoidance \(c\in\{0,1\}\): \(c(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j}))=1\) implies vehicle \(i\) following trajectory \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\) collides with vehicle \(j\) which follows trajectory \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\), and \(c=0\) indicates that two vehicles' trajectories are free of collision with each other. This is used to penalize collisions between trajectories.
2. Safety consciousness \(h\in[0,1]\): If vehicle \(j\) is the leading vehicle of vehicle \(i\), \(h(s_{k+n_{2}}^{i},s_{k+n_{2}}^{j})\) computes a normalized Time-to-Collision (\(TTC\)) at the end of the corresponding trajectories \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\). A larger \(h\) implies a larger \(TTC\) with respect to the leading vehicle. The safety consciousness term encourages vehicles to keep an appropriate headway distance and be conscious of potential collisions.
3. Travelling time \(\tau\in[0,1]\): \(\tau(s_{k+n_{2}}^{i})\) measures the closeness between the vehicle's final state in the trajectory \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\) and its destination, where a larger value implies a shorter distance to the goal. Including \(\tau\) in the reward reflects the objective of shortening the traveling time, e.g., merging onto the highway as soon as possible for on-ramp vehicles.
4. Control effort \(e\in[0,1]\): \(e\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\right)\) takes a lower value if \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})\) is a lane-changing trajectory segment or generated with longitudinal acceleration/deceleration. The control effort captures drivers' desire to keep the lane and constant speed to avoid both longitudinal and lateral maneuvers.
We refer the readers to our previous work [31] for more detailed descriptions of the four functions. Similar methods that model the driver's driving objectives have also been reported in [41, 42, 43, 21, 44]. The weight \(w^{i}\) is the latent model parameter in the reward function \(r(\cdot|w^{i})\). Different weights reflect distinct personal goals and, therefore, embed various driving behaviors. For example, a driver considering personal reward with \(w^{i}=[0,0,1]^{T}\) may keep the lane and drive at a constant speed, thereby maximizing the reward via minimizing the control effort. Another driver with weights \(w^{i}=[1,0,0]^{T}\) tries to maximize the headway distance and might change lanes to overtake a leading vehicle if there is one.
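In code, eq. (6) is a collision-gated weighted sum. The sketch below leaves the component functions as injected callables, since their exact definitions follow [31]; only the structure of the reward is shown.

```python
import numpy as np

# Sketch of eq. (6): the personal reward is a weighted sum of the safety,
# travel-time, and effort terms, gated by the collision indicator c. The
# callables h, tau, e, c are placeholders for the definitions in [31].
def personal_reward(traj_i, traj_j, w, h, tau, e, c):
    if c(traj_i, traj_j):              # c = 1: colliding pair, reward is zero
        return 0.0
    features = np.array([h(traj_i, traj_j), tau(traj_i), e(traj_i)])
    return float(features @ w)         # r = [h, tau, e] . w^i
```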
### _Social Value Orientation and Multi-modal Reward_
The personal reward function \(r(\cdot|w^{i})\) captures drivers' decision-making as maximizing their own gain in the traffic interaction. However, this model does not encode the behaviors of cooperation and competition. For example, a highway driver observing the merging intention of an on-ramp vehicle might slow down to yield. In social psychology studies [25, 26], the notion of Social Value Orientation (SVO) was proposed to model this cooperative/competitive behavior in experimental games, and it has more recently been applied to autonomous driving [15]. Taking inspiration from this work, we use the driver's SVO to incorporate the personal reward with the driver's tendency toward social cooperation.
To this end, we assume each vehicle \(i\) interacts with the adjacent vehicle \(j\in A(i)\), where \(A(i)\) contains indices of all the adjacent vehicles around vehicle \(i\). We model driver \(i\)'s intention using a multi-modal reward function of the form,
\[\begin{split}& R\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\mathbf{\gamma}_{n_{1}}^{n_{2}}(\mathbf{s}_{k}^{-i})|\sigma^{i},w^{i}\Big{)}\\ &=\alpha(\sigma^{i})\cdot r\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})|w^{i}\Big{)}\\ &+\beta(\sigma^{i})\cdot\mathbb{E}_{j\in A(i)}\left[r\Big{(}\gamma_{n_{1}}^{n_{2}}(s_{k}^{j}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{i})|w^{j}\Big{)}\right],\end{split} \tag{7}\]
where \(\mathbf{s}_{k}^{-i}=[s_{k}^{0},s_{k}^{1},s_{k}^{2},\dots]\) is the aggregated state of all the adjacent vehicles of vehicle \(i\); \(\mathbf{\gamma}_{n_{1}}^{n_{2}}(\mathbf{s}_{k}^{-i})=[\gamma_{n_{1}}^{n_{2}}(s_{k}^{0}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{1}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{2}),\dots]\) concatenates one possible trajectory segment \(\gamma_{n_{1}}^{n_{2}}(s_{k}^{j})\) for each vehicle \(j\in A(i)\). The SVO \(\sigma^{i}\) is another latent model parameter. It takes one of four values corresponding to the four SVO categories, and the values of \(\alpha(\sigma^{i})\) and \(\beta(\sigma^{i})\) are specified as follows
\[(\alpha,\beta)=\left\{\begin{array}{cl}(0,1)&\text{if $\sigma^{i}=$ ``altruistic'',}\\ (1/2,1/2)&\text{if $\sigma^{i}=$ ``prosocial'',}\\ (1,0)&\text{if $\sigma^{i}=$ ``egoistic'',}\\ (1/2,-1/2)&\text{if $\sigma^{i}=$ ``competitive''.}\end{array}\right. \tag{8}\]
In (7), \(\alpha(\sigma^{i})\) weighs the self-reward while \(\beta(\sigma^{i})\) weighs an averaged reward to the other vehicles. We also note that the weight \(w^{j}\) is an internal parameter of vehicle \(j\) and is a latent variable affecting the decision of vehicle \(i\). Similar to [31], we assume \(w^{j}=[1/3,1/3,1/3]\) in Eq. (7) for \(j\in A(i)\). The rationale behind this assumption is that an altruistic or prosocial (or competitive) driver of vehicle \(i\) is likely to cooperate (or compete) with other drivers in all three objectives if they do not know others' actual intentions.
Using this multi-modal reward, we can model each driver's intention to achieve their personal objectives (reflected in \(w^{i}\)) and, to a certain extent, cooperate with others (encoded in \(\sigma^{i}\)). For example, suppose two highway drivers with the same personal weights \(w^{i}=[0,0,1]^{T}\) encounter a merging on-ramp vehicle. An "egoistic" highway driver values the control effort heavily and is therefore likely to keep the lane at a constant speed and ignore the merging vehicle. On the contrary, a "prosocial" highway driver might consider changing lanes or slowing down to facilitate the on-ramp merging action, such that the net reward in (7) is larger.
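The SVO mixing of eqs. (7)-(8) reduces to a small lookup table; a minimal sketch:

```python
# Sketch of eqs. (7)-(8): the orientation sigma selects (alpha, beta), which
# blend the self-reward with the mean reward granted to the adjacent vehicles
# (the latter computed with the uniform weights w^j = [1/3, 1/3, 1/3]).
SVO_WEIGHTS = {
    "altruistic":  (0.0,  1.0),
    "prosocial":   (0.5,  0.5),
    "egoistic":    (1.0,  0.0),
    "competitive": (0.5, -0.5),
}

def multimodal_reward(self_reward, neighbour_rewards, sigma):
    alpha, beta = SVO_WEIGHTS[sigma]
    mean_other = sum(neighbour_rewards) / len(neighbour_rewards)
    return alpha * self_reward + beta * mean_other
```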
### _Driving Behavior Model_
Using the multi-modal reward, we can decode/infer drivers' intentions from their actions/trajectories, which can be represented by the model parameters \(w^{i},\sigma^{i}\). We formalize this process into a behavioral model,
\[\gamma^{*}(s_{k}^{i})=\operatorname*{argmax}_{\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i })}\;Q\left(\mathbf{s}_{k}^{-i},\gamma(s_{k}^{i})|\sigma^{i},w^{i}\right), \tag{9}\]
where \(\gamma^{*}(s^{i}_{k})\) is the resulting reference trajectory for vehicle \(i\) and \(Q\) denotes the corresponding cumulative reward function. This cumulative reward admits the following form,
\[\begin{split}& Q\left(\mathbf{s}^{-i}_{k},\gamma(s^{i}_{k})|\sigma^{i},w^{i}\right)=\underset{\mathbf{\gamma}(\mathbf{s}^{-i}_{k})\in\mathbf{\Gamma}(\mathbf{s}^{-i}_{k})}{\mathbb{E}}\\ &\left[\sum_{n=0}^{\left\lfloor N/N^{\prime}\right\rfloor}\lambda^{n}R\Big{(}\gamma_{nN^{\prime}}^{(n+1)N^{\prime}}(s^{i}_{k}),\mathbf{\gamma}_{nN^{\prime}}^{(n+1)N^{\prime}}(\mathbf{s}^{-i}_{k})|\sigma^{i},w^{i}\Big{)}\right],\end{split} \tag{10}\]
where \(\lambda\in(0,1)\) is a discount factor; the summation is the cumulative reward of vehicle \(i\) over an \(N\Delta T\) sec look-ahead/prediction horizon, obtained according to (7); \(N^{\prime}\Delta T\) sec (\(N^{\prime}<N\)) denotes the sampling period at which the driver updates its decision; \(\lfloor x\rfloor\) denotes the largest integer lower bound of \(x\in\mathbb{R}\); the expectation averages the reward over all possible aggregated trajectories in the set,
\[\mathbf{\Gamma}(\mathbf{s}^{-i}_{k})=\left\{\mathbf{\gamma}(\mathbf{s}^{-i}_{k}):\gamma(s^{j}_ {k})\in\Gamma(s^{j}_{k}),j\in A(i)\right\}. \tag{11}\]
After obtaining the optimal \(\gamma^{*}(s^{i}_{k})\) using (9), we compute the control signal \(u^{i}_{n}=[a^{i}_{n},\delta^{i}_{n}]^{T}\) at each time step \(n=k,\dots,k+N^{\prime}\) to track this reference trajectory \(\gamma^{*}(s^{i}_{k})\) for one sampling period of \(N^{\prime}\Delta T\) sec. Then, we update the reference trajectory using (9) after \(N^{\prime}\Delta T\) sec. In this way, using the behavioral model (9), we formalize the decision-making process of a highway driver motivated by the reward (10) and a combination of the social-psychological model parameters \(\sigma^{i},w^{i}\), while the driving behaviors are realized by a receding-horizon optimization-based trajectory-tracking controller with a horizon of \(\lfloor N/N^{\prime}\rfloor\). However, solving this problem online can be computationally demanding. Instead, we can use a neural network to learn the solutions of (9) offline from a dataset, thereby imitating this behavioral model for online deployment.
## IV Interaction-Aware Imitation Learning with Attention Mechanism
Based on (9), the decision-making process of vehicle \(i\) is deterministic given \(s^{i}_{k}\), \(\mathbf{s}^{-i}_{k}\). Instead of learning a one-hot encoding and to include stochasticity in the decision-making process, we prescribe a policy distribution from (10) using a softmax decision rule [45] according to,
\[\begin{split}&\pi\left(\gamma(s^{i}_{k})|\sigma^{i},w^{i},s^{i}_{ k},\mathbf{s}^{-i}_{k}\right)\\ &\propto\exp\left[Q\left(\mathbf{s}^{-i}_{k},\gamma(s^{i}_{k})|\sigma ^{i},w^{i}\right)\right].\end{split} \tag{12}\]
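A minimal sketch of this decision rule is given below, including the zero probabilities assigned to unsafe trajectories as noted next; the 225-trajectory set and the safety filter of Sec. II-B are assumed to be computed elsewhere.

```python
import numpy as np

def softmax_policy(q_values, unsafe):
    """Softmax decision rule (12) over the 225 candidate trajectories.

    q_values: cumulative rewards Q from (10), shape (225,).
    unsafe: boolean array marking trajectories filtered out in Sec. II-B.
    """
    p = np.exp(q_values - np.max(q_values))  # stabilized exponentials
    p[unsafe] = 0.0                          # zero probability for unsafe trajectories
    return p / p.sum()
```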
Note that \(\pi\) takes values in \(\mathbb{R}^{225}\), where we assign zero probabilities in \(\pi\) for the unsafe trajectories \(\gamma_{\text{ unsafe}}(s^{i}_{k})\notin\Gamma(s^{i}_{k})\) filtered out in Sec. II-B. Then, we can learn a neural network mapping \(\pi_{NN}\) for imitating the actual behavioral model \(\pi\) using minimization of a modified Kullback-Leibler divergence according to the loss function,
\[\begin{split}&\mathcal{L}\left(\pi,\pi_{NN}|\sigma^{i},w^{i},s^{i }_{k},\mathbf{s}^{-i}_{k}\right)\\ &=\sum\limits_{m=1}^{225}\Big{\{}\pi\left(\gamma^{(m)}(s^{i}_{k} )\right)\cdot\log\Big{[}\pi\left(\gamma^{(m)}(s^{i}_{k})\right)+\epsilon\Big{]} \\ &\qquad-\pi\left(\gamma^{(m)}(s^{i}_{k})\right)\cdot\log\Big{[}\pi_{NN} \left(\gamma^{(m)}(s^{i}_{k})\right)+\epsilon\Big{]}\Big{\}},\end{split} \tag{13}\]
where a positive constant \(\epsilon\ll 1\) is chosen to avoid zero probability inside the logarithm for numerical stability, and for simplicity, we omit the terms \(\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\) in the notation of \(\pi\), \(\pi_{NN}\). This loss function \(\mathcal{L}(\cdot)\geq 0\) measures the similarity between two discrete probability distributions where smaller loss implies more similar distributions.
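Assuming both policies are given as probability vectors, this loss can be sketched in a few lines of PyTorch; the default \(\epsilon=10^{-8}\) is an illustrative choice, as the text only requires \(\epsilon\ll 1\).

```python
import torch

def modified_kl_loss(pi, pi_nn, eps=1e-8):
    """Modified KL divergence (13); pi and pi_nn are probability vectors of size 225."""
    return torch.sum(pi * (torch.log(pi + eps) - torch.log(pi_nn + eps)))
```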
We design a Social-Attention Neural Network (SANN) architecture (see Fig. 6) that comprises three components: The input normalization (Sec. IV-A) derives a set of normalized vectors from inputs \(\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\) using the highway structural information. Then, we generate a set of feature vectors via an interaction-aware learning process using the attention backbone in Sec. IV-B, and we present the attention mechanism in Sec. IV-C. Finally, using the learned feature vectors, the policy head imitates the policy distribution \(\pi\) from the behavioral model (Sec. IV-D).
### _Input Normalization_
Given different lane dimensions as labeled in Fig. 1, we aim to normalize the inputs \(\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\) accordingly to facilitate the neural network training. The normalization produces input vectors \(\bar{s}^{i}_{k}\) for vehicle \(i\) and a set of vectors \(\{\bar{s}^{j}_{k}\}\) for \(j\in A(i)\) according to,
\[\bar{s}^{\iota}_{k}=\left[\begin{array}{c}\left(x^{\iota}_{k}-\nicefrac{{l^{\iota}}}{{2}}-x _{\text{ramp}}\right)/l_{\text{ramp}}\\ \left(x^{\iota}_{k}+\nicefrac{{l^{\iota}}}{{2}}-x_{\text{ramp}}\right)/l_{\text{ramp}}\\ \left(y^{\iota}_{k}-y_{\text{ramp}}\right)/w_{\text{ramp}}\\ \left(v^{\iota}_{k}-v_{\text{min}}\right)/(v_{\text{max}}-v_{\text{min}})\\ w^{\iota}\end{array}\right],\quad\iota\in\{i\}\cup A(i), \tag{14}\]
where \(l^{\iota}\) is the wheelbase length of vehicle \(\iota\); the first two elements of the feature vector (14) are the normalized longitudinal coordinates of the vehicle's rear end and front end; \(w^{\iota}=[1/3,1/3,1/3]^{T}\) for all \(\iota\in A(i)\) per Sec. III-B.
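A sketch of this normalization is given below; the container `road` holding the structural constants of Fig. 1 is a hypothetical convenience, not part of the model.

```python
import numpy as np

def normalize_state(x, y, v, l, w_vec, road):
    """Build the 7-dimensional normalized input of (14) for one vehicle.

    road is assumed to carry the structural constants x_ramp, l_ramp, y_ramp,
    w_ramp, v_min, v_max labeled in Fig. 1; l is the wheelbase length.
    """
    scalars = [
        (x - l / 2 - road["x_ramp"]) / road["l_ramp"],   # rear end, longitudinal
        (x + l / 2 - road["x_ramp"]) / road["l_ramp"],   # front end, longitudinal
        (y - road["y_ramp"]) / road["w_ramp"],           # lateral position
        (v - road["v_min"]) / (road["v_max"] - road["v_min"]),  # speed
    ]
    return np.concatenate([scalars, w_vec])              # append weights w (3 entries)
```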
Fig. 6: Schematic diagram of our SANN architecture: The attention backbone takes the normalized input vectors and produces their corresponding feature vectors via the attention mechanism [46]. The policy head fits a policy distribution \(\pi_{NN}\) from the resulting feature vectors incorporating the driver \(i\)’s social value orientation \(\sigma^{i}\).
### _Attention Backbone_
We first use a multi-layer perceptron (MLP), i.e., a fully connected neural network, to expand the dimension of the inputs \(\{\bar{s}_{k}^{\iota}\}_{\iota}\) individually from \(\mathbb{R}^{7}\) to \(\mathbb{R}^{225}\) according to,
\[z_{\ell}=\sigma_{\text{ReLU}}\left(W_{\ell}z_{\ell-1}+b_{\ell}\right),\ \ \ \ell=1,\ldots,L, \tag{15}\]
where \(W_{\ell}\) and \(b_{\ell}\) are the network parameters of the \(\ell\)th layer; \(\sigma_{\text{ReLU}}(z)=\max\left\{0,z\right\}\) is an element-wise ReLU activation function; \(L\in\mathbb{Z}\) is the number of layers; the inputs are \(z_{0}=\bar{s}_{k}^{\iota}\in\mathbb{R}^{7}\), \(\iota\in\{i\}\cup A(i)\); the outputs of the MLP are learned vectors \(z_{k}^{\iota}=z_{L}\in\mathbb{R}^{225}\), \(\iota\in\{i\}\cup A(i)\). Then, we combine the learned vectors \(\{z_{k}^{\iota}\}_{\iota}\) into one matrix \(Z=[z_{k}^{i},\ldots,z_{k}^{j},\ldots]^{T}\) where each row corresponds to a learned feature vector \((z_{k}^{\iota})^{T}\). The row dimension of \(Z\) can vary with the number of interacting vehicles in \(A(i)\), which is undesirable for forming a batch tensor in neural network training [47]. Thus, we consider a maximum of \(N_{z}-1\) adjacent vehicles in \(A(i)\) such that we can append zero rows to \(Z\) and construct \(Z\in\mathbb{R}^{N_{z}\times 225}\). Moreover, we use a mask matrix \(H\in\mathbb{R}^{N_{z}\times N_{z}}\) to mark down the indices of the appended rows for the later masked self-attention process. The element of \(H\) in the \(i\)th row and \(j\)th column attains \(H_{i,j}=-\infty\) if the \(i\)th or \(j\)th row vector in \(Z\) is an appended zero vector, and attains \(H_{i,j}=0\) otherwise.
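The zero padding and mask construction can be sketched as follows, assuming the per-vehicle MLP outputs are already available as a list of tensors.

```python
import torch

def pad_and_mask(z_list, n_z=9, d=225):
    """Stack per-vehicle vectors into Z (with zero padding) and build the mask H.

    z_list holds the MLP outputs z_k for vehicle i and up to n_z - 1 neighbors.
    """
    Z = torch.zeros(n_z, d)
    Z[: len(z_list)] = torch.stack(z_list)
    H = torch.zeros(n_z, n_z)
    H[len(z_list):, :] = float("-inf")  # rows of appended zero vectors
    H[:, len(z_list):] = float("-inf")  # columns of appended zero vectors
    return Z, H
```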
Subsequently, we pass \(Z\) through three identical cascaded blocks (see Fig. 6) using the following formula,
\[\begin{split}\bar{Z}_{\ell}&=\texttt{Attention}(Z_{ \ell-1}|W_{Q,\ell},W_{K,\ell},W_{V,\ell}),\\ Z_{\ell}&=\sigma_{\text{ReLU}}(\bar{Z}_{\ell})+Z_ {\ell-1},\ \ \ell=1,2,3,\end{split} \tag{16}\]
where \(\texttt{Attention}(\cdot)\) denotes the masked self-attention block [46]; \(W_{Q,\ell}\in\mathbb{R}^{225\times|Q|}\), \(W_{K,\ell}\in\mathbb{R}^{225\times|Q|}\), and \(W_{V,\ell}\in\mathbb{R}^{225\times|V|}\) are the parameters named query, key, and value matrix of the \(\ell\)th masked self-attention; the inputs are \(Z_{0}=Z\) and each layer \(\ell=1,2,3\) produces \(Z_{\ell}\in\mathbb{R}^{N_{z}\times 225}\); the summation outside \(\sigma_{\text{ReLU}}\) corresponds to the bypass connection in Fig. 6 from the beginning of a masked self-attention block to the summation symbol \(\oplus\). This bypass connection is called a residual connection [48] and is designed to mitigate the vanishing gradient issue in the back-propagation of deep neural networks. We chose to cascade three such blocks by trading off empirical prediction performance against computational cost, in comparison with variants using different numbers of blocks.
### _Attention Mechanism_
In the \(\ell\)th attention block (16), we leverage the attention mechanism to interchange information between row vectors of the matrix \(Z_{\ell-1}\) in the learning process. Specifically, the \(\ell\)th masked self-attention block can be represented using the following set of equations,
\[Q_{\ell}=Z_{\ell-1}W_{Q,\ell}, \tag{17a}\] \[K_{\ell}=Z_{\ell-1}W_{K,\ell},\] (17b) \[V_{\ell}=Z_{\ell-1}W_{V,\ell},\] (17c) \[E_{\ell}=(Q_{\ell}K_{\ell}^{T})\circ\left[1/\sqrt{|Q|}\right]_{N_{z}\times N_{z}}+H,\] (17d) \[P_{\ell}=\texttt{Softmax}(E_{\ell},\text{dim=1}),\] (17e) \[\bar{Z}_{\ell}=P_{\ell}V_{\ell}, \tag{17f}\]
where the row vectors of the matrices \(Q_{\ell}\), \(K_{\ell}\), \(V_{\ell}\) are called query, key, and value vectors, respectively, learned from the corresponding row vectors in the matrix \(Z_{\ell-1}\); \(|Q|\) and \(|V|\) are the dimensions of the query and value vectors; \([x]_{a\times b}\) is a matrix of size \(a\times b\) with all entries equal to \(x\); \(\circ\) is the element-wise Hadamard product; the element \(e_{i,j}\) in the \(i\)th row and \(j\)th column of \(E_{\ell}\) represents a normalized similarity score induced by the dot product between the \(i\)th query vector in \(Q_{\ell}\) and the \(j\)th key vector in \(K_{\ell}\); \(E_{\ell}\) essentially encodes how similar the row vectors in \(Z_{\ell-1}\) are to each other; (17e) applies \(\texttt{Softmax}(\cdot)\) to each column of \(E_{\ell}\); the element \(p_{i,j}\) in the \(i\)th row and \(j\)th column of \(P_{\ell}\) is equal to \(\exp(e_{i,j})/\sum_{k}\exp(e_{k,j})\) such that each column vector of \(P_{\ell}\) is a weight vector; each row vector in \(\bar{Z}_{\ell}\) is a weighted summation of the value vectors in \(V_{\ell}\) using the weights learned in \(P_{\ell}\).
Notably, in the first layer \(\ell=1\), the mask \(H\) in (17d) sets the similarities between appended zero row vectors and other row vectors in \(Z_{0}=Z\) to \(-\infty\). Subsequently, if the \(i\)th or \(j\)th row vector in \(Z\) is an appended zero vector, (17e) results in weights \(p_{i,j}=0\) in \(P_{1}\) and (17f) yields the \(i\)th or \(j\)th row of \(\bar{Z}_{1}\) also being a zero vector. Furthermore, the computation in (16) inherits the zero row vectors from \(\bar{Z}_{1}\) to \(Z_{1}\). Inductively, through \(\ell=1,2,3\), the attention backbone preserves the zero rows in \(Z_{0}=Z\). Eventually, the attention backbone outputs \(Z_{3}=[f_{k}^{i},\ldots,f_{k}^{j},\ldots]^{T}\) where each row vector \((f_{k}^{\iota})^{T}\) is a learned feature vector corresponding to the input row vector \((z_{k}^{\iota})^{T}\) in \(Z=[z_{k}^{i},\ldots,z_{k}^{j},\ldots]^{T}\).
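For illustration, one such block can be sketched in PyTorch as below; the sketch uses the standard row-wise softmax of scaled dot-product attention [46] and zeroes out the fully-masked (padded) rows afterwards.

```python
import math
import torch

def masked_self_attention(Z, H, W_Q, W_K, W_V):
    """One masked self-attention block in the style of (17a)-(17f)."""
    Q, K, V = Z @ W_Q, Z @ W_K, Z @ W_V          # (17a)-(17c)
    E = (Q @ K.T) / math.sqrt(W_Q.shape[1]) + H  # (17d), scaled similarity scores
    P = torch.softmax(E, dim=-1)                 # (17e), one weight vector per row
    P = torch.nan_to_num(P)                      # fully-masked (padded) rows -> 0
    return P @ V                                 # (17f)
```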
### _Policy Head_
Similar to (7), we use the SVO category \(\sigma^{i}\) of the \(i\)th driver to combine the learned information \(\{f_{k}^{j}\}_{j\in A(i)}\) from adjacent vehicles \(j\in A(i)\) with \(f_{k}^{i}\) of driver \(i\) and obtain a single vector \(\bar{f}_{k}^{i}\in\mathbb{R}^{225}\) according to,
\[\bar{f}_{k}^{i}=\alpha(\sigma^{i})f_{k}^{i}+\beta(\sigma^{i})\Sigma_{j\in A(i)}f_{k }^{j}. \tag{18}\]
We use another MLP similar to (15) with a bypass residual connection. This is followed by a final \(\texttt{Softmax}\) activation function, which admits the following form,
\[\left[\texttt{Softmax}(z)\right]_{i}=\exp(z_{i})/\Sigma_{j}\exp(z_{j}), \tag{19}\]
where \(z\) is a column vector and \(z_{i}\) is the \(i\)th element in \(z\). The \(\texttt{Softmax}\) calculates a probability distribution as the policy distribution output \(\pi_{NN}\in\mathbb{R}^{225}\).
The SANN architecture provides several advantages:
1. The normalization process normalizes the input information using lane and vehicle dimensions, which improves the prediction robustness to different highway structural dimensions and vehicle model types.
2. The learning process is interaction-aware. The attention backbone interchanges information between each feature vector corresponding to each interacting driver, which captures the inter-traffic dependencies in the personal reward (6).
3. The learning process is cooperation-aware. The policy head fuses the learned features using the driver's SVO \(\sigma^{i}\). This process emulates (7) and introduces the notion of cooperation/competition into learning.
4. The SANN incorporates the behavioral model priors \(\sigma^{i},w^{i}\) that are later estimated by a Bayesian filter. This offers better online transferability to different drivers.
5. The SANN is permutation invariant, namely, interchanging the order of the inputs in \(\{\bar{s}_{k}^{j}\}_{j\in A(i)}\) will not alter the value of the learned features \(\{f_{k}^{j}\}_{j\in A(i)}\) or affect the final policy \(\pi_{NN}\). Since the interacting drivers \(j\in A(i)\) are geographically located in a 2D plane, the SANN-learned policy should not be affected by the artificial information carried in the input order.
6. The SANN can handle variable numbers of inputs up to a maximum number \(N_{z}\). This offers better transferability to different traffic conditions/densities.
These properties are not necessarily preserved by other networks, e.g., Graph Neural Networks [49] or LSTMs [50]. We then use this neural network behavioral model to predict the trajectories of interacting vehicles for the decision-making of our ego vehicle in the forced merging scenario.
## V Decision-Making Under Cooperation Intent Uncertainty
We use a Bayesian filter to infer drivers' latent model parameters from observed traffic interactions (Sec. V-A). Then, based on the behavioral model, we incorporate the predictions of interacting drivers' behavior into a receding-horizon optimization-based controller to generate reference trajectories for the ego vehicle (Sec. V-B). In Sec. V-C, we use an interaction-guided decision tree search algorithm to solve this optimization problem and integrate it with the SANN for prediction. The learned SANN improves online prediction efficiency, while the tree-search algorithm offers a probabilistic safety guarantee and good scalability.
### _Bayesian Inference of Latent Driving Intentions_
At each time step \(k\), we assume the ego vehicle \(0\) interacts with adjacent vehicles \(i\in A(0)\) and can observe the nearby traffic states \(s_{k}^{i},\mathbf{s}_{k}^{-i}\). Assuming that the \(i\)th driver's decision-making process follows the policy (12), we need to infer the latent social psychological parameters \(\sigma^{i},w^{i}\) in order to predict the future behavior/trajectory of vehicle \(i\). We assume an observation history of the traffic around vehicle \(i\) is available,
\[\xi_{k}^{i}=[s_{0}^{i},\mathbf{s}_{0}^{-i},s_{1}^{i},\mathbf{s}_{1}^{-i},\ldots,s_{k}^ {i},\mathbf{s}_{k}^{-i}], \tag{20}\]
which contains the state \(s_{n}^{i}\) of vehicle \(i\) and the aggregated state \(\mathbf{s}_{n}^{-i}\) of adjacent vehicles around vehicle \(i\) for all time steps \(n=0,1,\ldots,k\). Then, we use the following Bayesian filter to recursively estimate a posterior distribution of the latent parameters \(\sigma^{i},w^{i}\) from \(\xi_{k}^{i}\).
**Proposition 1**.: _Given a prior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)\) and assuming the unmodeled disturbance \(\tilde{w}_{k}^{i}\sim\mathcal{N}(0,\Sigma)\) in (2) follows a zero-mean Gaussian distribution, the posterior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k+1}^{i}\right)\) admits the following form,_
\[\begin{split}&\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k+1}^{i} \right)=\frac{1}{N\left(\xi_{k+1}^{i}\right)}\cdot\\ &\quad D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}) \cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right),\end{split} \tag{21}\]
_where \(N\left(\xi_{k+1}^{i}\right)\) is a normalization factor, and_
\[\begin{split}& D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i}, \mathbf{s}_{k}^{-i})=\\ &\sum_{\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i})}\Big{[}\mathbb{P} \left(\tilde{w}_{k}^{i}=s_{k+1}^{i}-\gamma_{1}^{1}(s_{k}^{i})\right)\\ &\qquad\cdot\pi\left(\gamma(s_{k}^{i})|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\Big{]}.\end{split} \tag{22}\]
We note that (22) represents the transition probability of a driver moving from \(s_{k}^{i}\) to \(s_{k+1}^{i}\) following the kinematics (2) and the policy (12). We initialize the Bayesian filter with a uniform distribution. Meanwhile, we can replace \(\pi\) in (22) with \(\pi_{NN}\) for faster online Bayesian inference. We provide a proof of the proposition in the following:
Proof.: We apply Bayes' rule to rewrite the posterior distribution according to,
\[\begin{split}&\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k+1}^{i} \right)=\mathbb{P}\left(\sigma^{i},w^{i}|s_{k+1}^{i},\mathbf{s}_{k+1}^{-i},\xi_{k} ^{i}\right)\\ &=\mathbb{P}\left(s_{k+1}^{i}|\sigma^{i},w^{i},\xi_{k}^{i}\right)\cdot\frac{\mathbb{P}\left(\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}{ \mathbb{P}\left(s_{k+1}^{i},\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}\cdot \mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)\\ &=\frac{1}{N\left(\xi_{k+1}^{i}\right)}\cdot D(s_{k+1}^{i}, \sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i})\cdot\mathbb{P}\left(\sigma^{i},w^ {i}|\xi_{k}^{i}\right),\end{split}\]
where we define \(N\left(\xi_{k+1}^{i}\right)=\frac{\mathbb{P}\left(s_{k+1}^{i},\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}{\mathbb{P}\left(\mathbf{s}_{k+1}^{-i}|\xi_{k}^{i}\right)}\) and rewrite the transition probability,
\[\begin{split}& D(s_{k+1}^{i},\sigma^{i},w^{i},s_{k}^{i}, \mathbf{s}_{k}^{-i})=\mathbb{P}\left(s_{k+1}^{i}|\sigma^{i},w^{i},\xi_{k}^{i}\right) \\ &=\sum_{\gamma(s_{k}^{i})\in\Gamma(s_{k}^{i})}\Big{[}\mathbb{P} \left(\tilde{w}_{k}^{i}=s_{k+1}^{i}-\gamma_{1}^{1}(s_{k}^{i})\right)\\ &\qquad\cdot\pi\left(\gamma(s_{k}^{i})|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\Big{]}.\end{split} \tag{23}\]
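For illustration, one step of this filter over the finite hypothesis set of \((\sigma^{i},w^{i})\) pairs can be sketched as follows; `noise_pdf`, `first_steps`, and `policy_probs` are assumed precomputed from (2) and (12).

```python
import numpy as np

def bayes_update(prior, s_next, first_steps, policy_probs, noise_pdf):
    """Posterior update (21)-(22) over the finite hypotheses (sigma^i, w^i).

    prior: array (n_hyp,) holding P(sigma, w | xi_k).
    first_steps: array (M, dim_s); row m is gamma_1^1(s_k^i) of trajectory m.
    policy_probs: array (n_hyp, M); policy (12) for each hypothesis.
    noise_pdf: evaluates the Gaussian density of the disturbance in (2).
    """
    # Transition likelihood D of (22), marginalized over candidate trajectories.
    likelihoods = np.array([noise_pdf(s_next - g) for g in first_steps])
    D = policy_probs @ likelihoods       # one value per hypothesis
    posterior = D * prior                # unnormalized (21)
    return posterior / posterior.sum()   # division by N(xi_{k+1})
```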
### _Receding-horizon Optimization-based Control_
Leveraging the posterior from (21), we use a receding-horizon optimization-based controller to incorporate the trajectory predictions (12) of interacting vehicles \(i\in A(0)\) and plan a reference trajectory for the ego vehicle according to
\[\gamma^{*}(s_{k}^{0})=\operatorname*{argmax}_{\gamma(s_{k}^{0})\in\Gamma(s_{k}^ {0})}Q_{0}\left(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})\right). \tag{24}\]
Similar to Sec. III-C, we compute the control signal \(u_{n}^{0}=[a_{n}^{0},\delta_{n}^{0}]^{T}\) at each time step \(n=k,\ldots,k+N^{\prime}\) to track this reference trajectory \(\gamma^{*}(s_{k}^{0})\) for one control sampling period of \(N^{\prime}\Delta T\) sec. Then, we update the reference trajectory using (24) after \(N^{\prime}\Delta T\) sec. Meanwhile, the cumulative reward function \(Q_{0}\left(\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})\right)\) admits the following form,
\[\begin{split}& Q_{0}\Big{(}\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})\Big{)}= \frac{1}{|A(0)|}\sum_{i\in A(0)}\\ &\Big{[}\mathbb{E}_{\sigma^{i},w^{i}\sim\mathbb{P}\left(\sigma^{i},w ^{i}|\xi_{k}^{i}\right)}\,Q_{0}^{\prime}\Big{(}\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})| \sigma^{i},w^{i}\Big{)}\Big{]},\end{split} \tag{25}\]
where the function value \(Q_{0}^{\prime}\) is computed according to,
\[\begin{split}& Q_{0}^{\prime}\Big{(}\mathbf{s}_{k}^{-0},\gamma(s_{k}^{0})| \sigma^{i},w^{i}\Big{)}=\mathbb{E}_{\gamma(s_{k}^{i})\sim\pi\left(\cdot|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)}\\ &\Big{[}\sum\limits_{n=0}^{\lfloor N/N^{\prime}\rfloor}\lambda^{n}\,r_{0}\Big{(}\gamma_{nN^{ \prime}}^{(n+1)N^{\prime}}(s_{k}^{0}),\gamma_{nN^{\prime}}^{(n+1)N^ {\prime}}(s_{k}^{i})\Big{)}\Big{]},\end{split} \tag{26}\]
and the ego vehicle acts to minimize its traveling time and avoid collisions; accordingly, the ego reward function \(r_{0}\) attains the following form,
\[r_{0}\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{0}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{i}) \right)=-\mathbb{C}\left(\gamma_{n_{1}}^{n_{2}}(s_{k}^{0}),\gamma_{n_{1}}^{n_{2}}(s_{k}^{ i})\right)\cdot\tau(
expectation of the reward function with respect to the behavioral model parameters \(\sigma^{i},w^{i}\), while (26) samples trajectory predictions \(\gamma(s^{i}_{k})\) of vehicle \(i\) from the policy \(\pi\) conditioned on \(\sigma^{i},w^{i}\). In (26), \(\pi\) can be replaced by \(\pi_{NN}\) learned by the SANN to speed up the computations. Nonetheless, solving the problem (24) requires an exhaustive search over the trajectory set \(\Gamma(s^{0}_{k})\), which can be computationally demanding. Instead, we can treat the trajectory set \(\Gamma(s^{0}_{k})\) as a decision tree (see Fig. 3), and a tree-search-based algorithm can be developed to improve the computation efficiency.
### _Interaction-Guided Decision Tree Search_
```
0:\(\Gamma(s^{0}_{k})\), \(A(0)\), \(\pi_{NN}\), and \(\xi^{i}_{k}\), \(\Gamma(s^{i}_{k})\), \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k-1}\right)\) for all \(i\in A(0)\)
1: initialize \(Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)=0\) for all \(\gamma(s^{0}_{k})\in\Gamma(s^{0}_{k})\)
2:\(\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\propto\left[\sum_{\gamma(s^ {i}_{k})\in\Gamma(s^{i}_{k})}\mathbb{P}\left(\tilde{w}^{i}_{k}=s^{i}_{k+1}- \gamma^{1}_{1}(s^{i}_{k})\right)\cdot\pi_{NN}\left(\gamma(s^{i}_{k})|\sigma^{i },w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\right)\right]\cdot\mathbb{P}\left( \sigma^{i},w^{i}|\xi^{i}_{k-1}\right)\) for all \(i\in A(0)\)\(\triangleright\) Bayesian filter using SANN imitated behavioral model
3:\(\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)=\sum_{\sigma^{i},w^{i}} \pi_{NN}\left(\gamma(s^{i}_{k})|\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k }\right)\cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\) for all \(i\in A(0)\)\(\triangleright\) interaction behavior prediction using SANN imitated behavioral model
4: sort \(A(0)\) w.r.t. \(\left((x^{0}_{k}-x^{i}_{k})^{2}+(y^{0}_{k}-y^{i}_{k})^{2}\right)^{1/2}\), \(i\in A(0)\) in an ascending order
5:for\(i\in A(0)\)do\(\triangleright\) prioritize search over closer interactions
6:for\(\gamma(s^{0}_{k})\in\Gamma(s^{0}_{k})\)do\(\triangleright\) parallel search
7:for\(n=0,\ldots,\lfloor N/N^{\prime}\rfloor\)do\(\triangleright\) over prediction horizons
8:\(c_{n}=\sum_{\gamma(s^{i}_{k})\in\Gamma(s^{i}_{k})}\mathbb{P}\left(\gamma(s^ {i}_{k})|\xi^{i}_{k}\right)\)\(\cdot c\left(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k}),\gamma^{(n+1)N^{\prime}} _{nN^{\prime}}(s^{i}_{k})\right)\)\(\triangleright\) probability of collision with \(i\)
9:if\(c_{n}>0.5\) then \(\triangleright\) trim unsafe decision tree branch
10:\(\Gamma(s^{0}_{k})\leftarrow\Gamma(s^{0}_{k})\setminus\Gamma_{\text{unsafe}}(s^{0}_{k})\), \(\Gamma_{\text{unsafe}}(s^{0}_{k}):=\left\{\gamma\in\Gamma(s^{0}_{k})\,|\,\gamma^{(n +1)N^{\prime}}_{nN^{\prime}}=\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k})\right\}\), and terminate all parallel search branches in \(\Gamma_{\text{unsafe}}(s^{0}_{k})\)
11:else
12:\(Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)\gets Q_{0}\left(\mathbf{s} ^{-0}_{k},\gamma(s^{0}_{k})\right)+\lambda^{n}\cdot r_{0}\left(\gamma^{(n+1)N^ {\prime}}_{nN^{\prime}}(s^{0}_{k}),\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{i} _{k})\right)\cdot\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)\)\(\triangleright\) update discounted cumulative reward
13:endif
14:endfor
15:endfor
16:endfor
17:\(\gamma^{*}(s^{0}_{k})=\operatorname*{argmax}_{\gamma(s^{0}_{k})\in\Gamma(s^ {0}_{k})}Q_{0}\left(\mathbf{s}^{-0}_{k},\gamma(s^{0}_{k})\right)\)
18:return\(\gamma^{*}(s^{0}_{k})\)
```
**Algorithm 1** Interaction-Guided Decision Tree Search
For online deployment, we incorporate the SANN-imitated behavioral model \(\pi_{NN}\) into the decision tree search Algorithm 1 to facilitate the Bayesian filtering, the trajectory prediction of interacting vehicles, and the decision-making of the ego vehicle. As shown in Fig. 3, the trajectory set \(\Gamma(s^{0}_{k})\) can be viewed as a decision tree rooted at the current state \(s^{0}_{k}\). Each trajectory \(\gamma(s^{0}_{k})\) is a branch in the decision tree \(\Gamma(s^{0}_{k})\), each state in a trajectory is a node, and each (directed) edge encodes reachability from one node to another in a certain trajectory/branch. Meanwhile, two distinct branches \(\gamma^{(m_{1})}(s^{0}_{k})\), \(\gamma^{(m_{2})}(s^{0}_{k})\) can share the same trajectory segment, i.e., \((\gamma^{(m_{1})})^{n_{1}}_{0}(s^{0}_{k})=(\gamma^{(m_{2})})^{n_{1}}_{0}(s^{0}_{k })=\gamma^{n_{1}}_{0}(s^{0}_{k})\). Algorithm 1 searches over this decision tree, updates the cumulative reward (25) for each branch, and trims unsafe branches, thereby improving the searching efficiency. For example, in Fig. 3(b), the on-ramp ego vehicle is trying to merge while there is a following highway vehicle in grey. The normal lane-changing trajectories/branches share the same subset of initial trajectory segments \(\gamma^{n_{1}}_{0}(s^{0}_{k})\) and are highly likely to cause a collision with the highway vehicle. Therefore, we can trim all the lane change trajectories from the decision tree (shown by semi-transparent lines in Fig. 3(b)) and terminate further searches along these unsafe branches.
This process is formalized in Algorithm 1. In lines 2 and 3, we use the SANN behavioral model \(\pi_{NN}\) to update the posterior distribution in the Bayesian filter (see Proposition 1) and to predict trajectory distributions that are used to compute the reward (25). Since a closer interacting vehicle is more likely to collide with the ego vehicle in the near future, we rank the indices of the adjacent vehicles in \(A(0)\) in ascending order with respect to their Euclidean distances to our ego vehicle. The three for-loops in lines 5, 6, and 7 of Algorithm 1 search over interactions with different vehicles, branches, and prediction horizons of a branch, respectively. In lines 8-13, the sorted set \(A(0)\) prioritizes collision checks with closer interacting vehicles and trims the unsafe branches as early as possible. We trim all branches sharing the common set of nodes \(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k})\) if the ego vehicle on trajectory segment \(\gamma^{(n+1)N^{\prime}}_{nN^{\prime}}(s^{0}_{k})\) has a probability of collision with the \(i\)th vehicle higher than a threshold of 0.5. Otherwise, we update the cumulative reward according to line 12 in Algorithm 1. Eventually, in line 17, we solve (24) by simply choosing the branch with the maximum cumulative reward. We also note that the three for-loops enable linear scalability of this algorithm with respect to both the number of interacting drivers in \(A(0)\) and the number of prediction horizons \(\lfloor N/N^{\prime}\rfloor\).
## VI Simulation and Experimental Results
We first present both qualitative and quantitative results of real-world trajectory prediction using our behavioral model and the Bayesian filter in Sec. VI-A. We report the performance of the SANN imitation learning in Sec. VI-B. We provide extensive evaluations of the proposed decision-making framework in forced merging tasks using our simulation environment (Sec. VI-C), the real-world High-D traffic dataset [32] (Sec. VI-D), and the Carla simulator [33] (Sec. VI-E). We note that we **do not** need to re-tune the model hyperparameters or re-train the SANN for the different forced merging evaluation environments. The demonstration videos are available at [https://xiaolisean.github.io/publication/2023-10-31-TCST2024](https://xiaolisean.github.io/publication/2023-10-31-TCST2024).
### _Real-world Trajectory Prediction_
We present qualitative (Fig. 8) and quantitative results (Fig. 7) of predicting real-world trajectories using our behavioral model and the Bayesian filter. In our first task, we aim to reproduce a real-world trajectory from a naturalistic High-D [32] dataset. In a High-D traffic segment (Fig. 8) of 12 sec, we identify a target vehicle \(i\) (in green) that is overtaking its
leading vehicle \(2\) (in orange). We initialize a virtual vehicle (in red) at \(t_{k}=0\) using the actual vehicle state \(s^{i}_{k}\), and we set \(\sigma^{i}=\) "competitive", \(w^{i}=[0,1,0]^{T}\) in the behavioral model (9). Afterwards, we control this virtual vehicle using (9) assuming no tracking error, and we set the control sampling period to \(N^{\prime}\Delta T=0.5\) sec. We compare the synthesized trajectories using our behavioral model with the actual ones in Fig. 8. Our behavioral model demonstrates a good prediction accuracy from 0-6 sec, which is adequate for a predictive controller with a sampling period \(N^{\prime}\Delta T\leq 6\) sec. We also note that the prediction error is large at \(t_{k}=12\) sec because the error accumulates over longer prediction windows. Our prediction captures the overtaking trajectories and qualitatively demonstrates the effectiveness of our method in modeling real-world driving behavior.
Fig. 7 summarizes the error statistics of the trajectory prediction. In online application scenarios, given an observed interaction history \(\xi^{i}_{k}\), we recursively infer the latent behavioral model parameters \(\sigma^{i},w^{i}\) using the Bayesian filter (21) as a posterior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\). Subsequently, the interacting vehicles' trajectories are predicted as a distribution using policy (12) according to \(\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)=\sum_{\sigma^{i},w^{i}} \pi\left(\gamma(s^{i}_{k})|\sigma^{i},w^{i},s^{i}_{k},\mathbf{s}^{-i}_{k}\right) \cdot\mathbb{P}\left(\sigma^{i},w^{i}|\xi^{i}_{k}\right)\). We quantify the prediction error between the actual trajectory \(\hat{\gamma}(s^{i}_{k})=\left\{\hat{s}^{i}_{n}\right\}_{n=k}^{k+k^{\prime}}\) and the predicted trajectory distribution \(\mathbb{P}\left(\gamma(s^{i}_{k})|\xi^{i}_{k}\right)\) using the following metric
\[\begin{split}&\operatorname{\mathbf{err}}\left(\hat{\gamma}(s^{i}_{ k}),\xi^{i}_{k}\right)=\mathbb{E}_{\gamma(s^{i}_{k})\sim\mathbb{P}\left( \gamma(s^{i}_{k})|\xi^{i}_{k}\right)}\\ &\left[\frac{1}{k^{\prime}+1}\sum\limits_{s^{i}_{n}\in\gamma(s^{ i}_{k}),\,n=k}^{n=k+k^{\prime}}\left\|\begin{bmatrix}x^{i}_{n}-\hat{x}^{i}_{n}\\ y^{i}_{n}-\hat{y}^{i}_{n}\end{bmatrix}\right\|_{2}\right],\end{split} \tag{26}\]
where \(k^{\prime}\Delta T\) in seconds is the duration of the actual trajectory \(\hat{\gamma}(s^{i}_{k})\). This metric computes the expected \(\ell_{2}\)-norm error in position prediction averaged over time steps.
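Assuming the predicted trajectories have been sampled into arrays, this metric can be sketched as:

```python
import numpy as np

def prediction_error(actual_xy, predicted_trajs, traj_probs):
    """Expected average l2 position error of the metric above.

    actual_xy: array (k'+1, 2) of observed positions.
    predicted_trajs: array (M, k'+1, 2) of predicted positions per trajectory.
    traj_probs: array (M,), P(gamma | xi_k) for each predicted trajectory.
    """
    # l2 error at each time step, averaged over the trajectory duration.
    per_traj = np.linalg.norm(predicted_trajs - actual_xy, axis=2).mean(axis=1)
    # Expectation over the predicted trajectory distribution.
    return float(np.dot(traj_probs, per_traj))
```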
We sample traffic segments of different durations from the High-D dataset, and each traffic segment is bisected at time instance \(t_{k}\) into a training segment \(\xi^{i}_{k}\) and a prediction segment \(\hat{\gamma}(s^{i}_{k})\) corresponding to a sampled vehicle \(i\). We apply the aforementioned procedure to each training segment \(\xi^{i}_{k}\) and calculate the prediction error (26) using the corresponding prediction segment \(\hat{\gamma}(s^{i}_{k})\). Meanwhile, in the sequel, we assume \(w^{i}\) lies in a finite set
\[W=\left\{\begin{matrix}[0,0,1]^{T},\,[0,1,1]^{T}/2,\,[0,1,0]^{T},\,[1,1,1]^{T}/3,\\ [1,0,1]^{T}/2,\,[1,1,0]^{T}/2,\,[1,0,0]^{T}\end{matrix}\right\} \tag{27}\]
Fig. 8: Reproducing real-world overtaking trajectory: The trajectories of the virtual vehicle \(i\) (in red) are synthesized using our behavioral model (9) considering its driver of model parameters \(\sigma^{i}=\) “competitive”, \(w^{i}=[0,1,0]^{T}\). This virtual “competitive” driver overtakes the vehicle \(2\in A(i)\) minimizing the traveling time \(\tau\). The resulting trajectories in red match the actual trajectories in green.
Fig. 7: Error statistics of trajectory prediction comprising 6,774 High-D trajectories totaling 28,078 driving seconds: (a) Each grid cell reports the mean of prediction errors using \(\xi^{i}_{k}\), \(\hat{\gamma}(s^{i}_{k})\) of the same lengths. (b) Each subplot visualizes the mean prediction errors (red lines) corresponding to \(\hat{\gamma}(s^{i}_{k})\) of the same duration versus variable durations of \(\xi^{i}_{k}\). We use red shaded areas to denote one standard deviation and blue shaded areas to represent the minimum/maximum error bounds.
to reduce the dimension of the parameter space \((\sigma^{i},w^{i})\). Subsequently, we have 22 possible combinations of \((\sigma^{i},w^{i})\): we assign 7 different \(w^{i}\in W\) to each of the three categories \(\sigma^{i}\neq\) "altruistic"; if \(\sigma^{i}=\) "altruistic", the weights \(w^{i}\) do not matter, as defined in (7), (8).
As shown in Fig. 7(a), the mean prediction errors are below \(4\ \mathrm{m}\) for predictions 1-9 sec ahead using training segments of 1-18 sec. We also note that longer training segments \(\xi_{k}^{i}\) (see Fig. 7(b)) reduce both the standard deviation and the maximum of the prediction error. However, for longer prediction windows of 7-9 sec (see Fig. 7(b)), the error accumulates and leads to larger standard deviations and mean errors, which are also observed in Fig. 8. For shorter prediction windows of 1-6 sec, we have a maximum error below \(5\ \mathrm{m}\) in most cases and the standard deviation is smaller than \(1.5\ \mathrm{m}\) (see Fig. 7(b)). The results in Fig. 8 and Fig. 7 provide evidence that our algorithms have good quantitative accuracy over shorter prediction windows and good qualitative accuracy over longer prediction windows. Based on these considerations, we set the trajectory length to \(N\Delta T=6\ \mathrm{sec}\), so that we have good prediction performance over a shorter prediction window of 6 sec. This duration covers a complete lane change of \(T_{\text{lane}}=4\ \mathrm{sec}\) and suffices for the task of predicting interacting vehicles' trajectories for ego vehicle control.
### _Imitation Learning with SANN_
The goal of imitation learning is to train the SANN to mimic the behavioral model; namely, the predicted policy \(\pi_{NN}\) should accurately match the actual one \(\pi\) from the behavioral model (12). We leverage the High-D dataset [32] to synthesize realistic traffic \(s_{k}^{i},\mathbf{s}_{k}^{-i}\) for training the SANN. We randomly sample a frame from the High-D traffic together with a target vehicle \(i\), whereby we can extract the states \(s_{k}^{i},\mathbf{s}_{k}^{-i}\) of vehicle \(i\) and its interacting vehicles. Together with sampled parameters \(\sigma^{i},w^{i}\), we can compute the actual policy distribution \(\pi\left(\cdot|\sigma^{i},w^{i},s_{k}^{i},\mathbf{s}_{k}^{-i}\right)\) using (12). We repeat this procedure to collect a dataset of 183,679 data points. We decompose the dataset into training (70%), validation (15%), and test (15%) datasets. The MLP in the Attention Backbone has three layers of sizes 7, 32, and 225, respectively. The MLP in the Policy Head has two layers of sizes 225 and 225. Furthermore, we set \(|Q|=32\), \(|V|=225\), and the maximum number of feature vectors to \(N_{z}=9\).
We use Python with Pytorch [47] to train the SANN using the training dataset and the loss function (13). We train the SANN for 1000 epochs using a Batch Stochastic Gradient Descent algorithm with momentum (batch size of 200). We set the initial learning rate and momentum to \(0.01\) and \(0.9\), respectively. We also adopt a learning rate scheduler that decreases the learning rate by half every 200 epochs. The training process takes in total 7 hours (25 sec per epoch) on a computer with 16 GB RAM and an Intel Xeon CPU E3-1264 V3. We evaluate the performance of the SANN in the task of predicting the policy distributions in the test dataset, and the results are reported in Fig. 9. For the majority of the test cases, the SANN achieves losses smaller than 0.4, which corresponds to good statistical performance. Moreover, in the example (see Fig. 9) with a relatively larger loss of 0.8514, the learned policy (in blue) also qualitatively matches the actual one (in red). To conclude, the SANN imitates the proposed behavioral model with good accuracy. Hence, as presented in Algorithm 1, we use this SANN-learned policy to perform Bayesian filtering and trajectory prediction to facilitate the online decision-making of our ego vehicle.
### _Forced Merging in Simulations_
We set up a simulation environment (see Fig. 10) where the interacting vehicle is controlled by the behavioral model (9) with \(\sigma^{1}=\) "altruistic". We use the proposed Algorithm 1 to control the ego vehicle. For this and the following experiments, we set the control sampling period to \(N^{\prime}\Delta T=0.5\ \mathrm{sec}\) for both Algorithm 1 and the behavioral model. Instead of passively inferring the driver's intention from its behavior, Algorithm 1 controls the ego vehicle and exhibits a merging strategy with proactive interaction to test the interacting driver's intention.
Specifically, in Fig. 10(b), the interacting driver 1 first drives at a constant speed from \(0\) to \(0.5\) sec. Meanwhile, from \(0\) to \(1\) sec, the ego vehicle 0 is not certain if driver
Fig. 9: Test statistics and examples of imitation learning on the test dataset. The histogram presents the loss statistics between the SANN-learned policy distributions \(\pi_{NN}\) and the actual ones \(\pi\) computed from the behavioral model (12). Five qualitative examples are visualized in call-out boxes: the \(y\)-axis shows the probability of driver \(i\) taking a certain trajectory \(\gamma(s_{k}^{i})\). For comparison, policies \(\pi\) and \(\pi_{NN}\) are plotted above and below the dashed line, respectively, in mirror reflection.
1 will yield the right of way for its merging attempt and, therefore, it tentatively merges after vehicle 1 with longitudinal deceleration. Meanwhile, in the time interval between \(0.5\) and \(1.5\) sec, the "altruistic" driver of vehicle 1 notices the merging intention of the ego vehicle and decelerates to promote merging. Through the aforementioned interactions, the ego vehicle gradually updates its belief about the latent model parameters \(\sigma^{1},w^{1}\) of driver 1. As shown in Fig. 10(a), our algorithm correctly infers the "altruistic" identity of driver 1. Subsequently, the ego vehicle aborts the merging action due to safety concerns and accelerates longitudinally to build up a speed advantage for future merging. Then, in the time interval between \(2.0\) and \(5.5\) sec, being aware of the yielding behavior, the ego vehicle re-merges before vehicle 1 with longitudinal acceleration. This simulation example provides evidence that our method can effectively interpret the driving intentions of other drivers. Moreover, our decision-making module can leverage this information to facilitate the forced merging task while ensuring the safety of the ego vehicle.
### _Forced Merging in Real-world Traffic Dataset_
We further evaluate the performance of our method on the High-D [32] real-world traffic dataset. There are 60 recordings in the High-D dataset, where recordings 58-60 correspond to highways with ramps. In recordings 58-60, we identify in total 75 on-ramp vehicles (High-D target vehicles) that merge into highways. For each one of the 75 vehicles, we extract its initial state at the recording frame when it appears on the ramp. Then, we initialize a virtual ego vehicle using this state and control this virtual ego vehicle using our decision-making module. Other vehicles are simulated using the actual traffic recordings, where we neglect the interaction between the virtual ego vehicle and the High-D target vehicle. Eventually, we generate 75 forced merging test cases by repeating this procedure over all target vehicles. We visualize two of the 75 test cases as examples in Figs. 11 and 12. The two interacting vehicles 1 and 2 in Fig. 11 are approximately driving at constant speeds. Therefore, our decision-making module accelerates the virtual ego vehicle to a comparable speed and merges it in between the two interacting vehicles. In the second example (see Fig. 12), our decision-making module can interpret the yielding behavior of vehicle 2 and thereby merges the ego vehicle onto the highway in a timely manner. In both examples, our algorithm is able to complete the forced-merging task faster than the real-world human drivers (in green boxes).
Meanwhile, we identify a test case as a failure if our method fails to merge the virtual ego vehicle into the highway before the end of the ramp, or if the ego vehicle collides with other vehicles or road boundaries. Otherwise, we consider it a success case. In the three recordings, we have 18, 21, and 36 test cases, respectively. Our method achieves a success rate of 100% in all recordings. We also report the time it took to merge the virtual ego vehicle in Fig. 13. Compared to the actual human driver of the target vehicle (green bars),
Fig. 11: A forced merging evaluation example on the High-D dataset: Our ego vehicle (in red) accelerates first to create sufficient gaps between highway vehicles 1 and 2, which are driving approximately at constant speeds. The ego vehicle successfully merges into the highway in \(5.2\) sec which is faster than the actual human driver (in green).
Fig. 10: A forced merging example of proactive interaction in a simulation: (a) Each subplot reports a posterior distribution \(\mathbb{P}\left(\sigma^{i},w^{i}|\xi_{k}^{i}\right)\) at a certain time instance \(t_{k}\) from the Bayesian filter (21). The \(x\)-axis shows the 7 cases of \(w^{1}\in W\), and the \(y\)-axis shows the three different SVO categories, with \(\sigma^{i}=\) “altruistic” standing alone; (b) Each subplot visualizes a frame of the highway interaction between the ego vehicle and vehicle 1, controlled by Algorithm 1 and the behavioral model (9), respectively.
our decision-making module can merge the ego vehicle into the highway in a shorter time. Meanwhile, for 64 out of the 75 test cases, our method completes the task within \(5\ \mathrm{sec}\), while we model a complete lane change with \(T_{\text{lane}}=4\ \mathrm{sec}\) in Sec. II-C. The results demonstrate that our method can successfully complete the forced merging task with traffic in real-world recordings in a timely and safe manner. Though the traffic is realistic in the dataset, we also note that there is no actual interaction between the virtual ego vehicle and the other vehicles. Namely, the recorded vehicles in the dataset do not respond to the actions of our virtual ego vehicle.
### _Forced Merging in Diverse Carla Traffics_
We set up a forced merging scenario in the Carla simulator [33] (see Fig. 15) and test our decision-making module in diverse and reactive simulated traffic. The vehicles in the traffic are controlled by the default Carla Traffic Manager. In the previous experiments, we assumed that the ego vehicle was able to accurately track the reference trajectories \(\gamma^{*}(s_{k}^{0})\) from Algorithm 1. To mimic the real application scenario of our algorithm, we developed a PID controller for reference trajectory tracking. As visualized in the system diagram of Fig. 2, Algorithm 1 in the high-level decision-making module updates optimal reference trajectories \(\gamma^{*}(s_{k}^{0})\) every \(N^{\prime}\Delta T=0.5\ \mathrm{sec}\). Meanwhile, to track this reference trajectory \(\gamma^{*}(s_{k}^{0})\), the low-level PID controller computes the steering and throttle signal of the actual vehicle plant (simulated by Carla using Unreal Engine) at 20 Hz.
Fig. 16 illustrates our method's capability to merge the ego vehicle into a dense highway platoon in Carla. From \(0\) to \(4.60\) sec, the ego vehicle attempts to merge between vehicles 1 and 2 and eventually aborts the lane change due to safety concerns. From \(4.60\) to \(6.12\) sec, vehicle 3 is decelerating due to a small headway distance, while vehicle 2 is accelerating, seeing an enlarged headway distance as a result of the acceleration of vehicle 1 before \(4.60\) sec. Consequently, the gap between vehicles 2 and 3 is enlarged by this series of interactions from \(4.60\) to \(6.12\) sec. Taking advantage of these interactions, our ego vehicle decides to accelerate and merge before vehicle 3 at \(4.60\) sec.
Moreover, we are interested in quantifying the state of the traffic that the ego vehicle directly interacts with, i.e., the traffic in the lane near the ramp. We first define the traffic Region Of Interest (ROI) as the traffic in the near-ramp lane between the longitudinal positions \(x_{\text{ramp}}-l_{\text{ramp}}\) and \(x_{\text{ramp}}\) (see Fig. 1). Subsequently, we define the following two variables: we compute the average traffic flow as the number of vehicles entering this traffic ROI over the entire simulation time, and we calculate the traffic density as the number of vehicles located in this traffic ROI divided by the length of this region \(l_{\text{ramp}}\) at a certain time instance. Then, we average the traffic density over measurements taken every \(\Delta T=0.05\ \mathrm{sec}\), which yields the average traffic density.
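A sketch of these two measurements, assuming the ROI entry and occupancy counts have been logged during simulation, is given below.

```python
import numpy as np

def average_traffic_metrics(entry_count, counts_over_time, l_ramp, sim_time):
    """Average flow and density of the traffic ROI defined above.

    entry_count: number of vehicles that entered the ROI during the simulation.
    counts_over_time: vehicles inside the ROI at each dT = 0.05 s sample.
    """
    avg_flow = entry_count / sim_time                             # vehicle / sec
    avg_density = np.mean(np.asarray(counts_over_time)) / l_ramp  # vehicle / meter
    return avg_flow, avg_density
```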
The two variables quantify the average flow and density of the near-ramp traffic that the ego vehicle directly interacts with. For the example in Fig. 16, the traffic in the ROI has an average traffic density of \(0.055\ \mathrm{vehicle/meter}\) and an average traffic flow of \(0.3646\ \mathrm{vehicle/sec}\). We also note that the vehicles controlled by the default Carla Traffic Manager have no interpretation of other drivers' intentions and are therefore unlikely to yield to the ego vehicle. This introduces more difficulty in merging into a dense vehicle platoon (such as the example in Fig. 16), even for human drivers.
Therefore, we introduce another variable, i.e., the probability of yielding, to obtain more diversified evaluations of our method. Specifically, for traffic with a zero probability of yielding, we control all vehicles in the traffic using the default Carla Traffic Manager. When it is set to a non-zero value of \(X\%\), the vehicles in the traffic ROI have an \(X\%\) probability
Fig. 12: Another forced merging evaluation example on the High-D dataset: The interacting vehicle 2 changes its lane in order to promote the merging action of the on-ramp vehicle. Our ego vehicle (in red) merges into the highway once observing the lane change behavior of vehicle 2. The ego vehicle successfully merges into the highway in \(5.72\ \mathrm{sec}\) which is faster than the actual human driver (in green).
Fig. 13: Forced merging test statistics in High-D: The results are reported separately for the three recordings, and our method achieves a 100% success rate. (Upper) We visualize the time-to-merge of the virtual ego vehicles (red bars) in comparison with the ones of the actual High-D vehicles (green bars). (Lower) Three histograms visualize the number of cases in which our method takes certain seconds to merge the virtual ego vehicle.
of yielding to the on-ramp ego vehicle. The yielding behavior is generated by the Cooperate Forced Controller [51]. Similar to the Intelligent Driver Model (IDM) [52] that controls a vehicle to follow the leading vehicle ahead, the Cooperate Forced Controller is a car-following controller derived from the IDM, but assuming the on-ramp ego vehicle is the leading vehicle.
We test our algorithm under various traffic conditions with different settings. As shown in Fig. 14, we achieve a 100% success rate among 200 test cases with different traffic densities and different probabilities of yielding for interacting vehicles. In the majority of the test cases, the ego vehicle takes less than \(9\) sec to merge. Generally, in denser traffic, our algorithm needs more time to interact with more vehicles and, thus, more time to merge. With a higher probability of yielding, the algorithm takes less time to complete the merging tasks on average. Notably, our algorithm achieves a 100% success rate even in traffic with a zero probability of yielding.
## VII Conclusions
In this paper, we proposed an interaction-aware decision-making module for autonomous vehicle motion planning and control. In our approach, interacting drivers' intentions are modeled as hidden parameters in the proposed behavioral model and are estimated using a Bayesian filter. Subsequently, interacting vehicles' behaviors are predicted using the proposed behavioral model. For the online implementation, the Social-Attention Neural Network (SANN) was designed and utilized to imitate the behavioral model in predicting the interacting drivers' behavior. The proposed approach is easily transferable to different traffic conditions. In addition, the decision tree search algorithm was proposed for faster online computation; this algorithm leads to linear scalability with respect to the number of interacting drivers and prediction horizons. Finally, a series of studies, based on naturalistic traffic data and simulations, were performed to demonstrate the capability of the proposed decision-making module. In particular, we have shown that the behavioral model has good prediction accuracy, and the proposed algorithm is successful in merging the ego vehicle in various simulations and real
Fig. 14: Forced merging test statistics in the Carla simulator with different traffic conditions: The \(x\)-axis shows four different traffic settings where vehicles have different probabilities of being willing to yield to the ego vehicle. We have 50 test cases for each of the four traffic settings. Each dot is a single test case where the color and the size of the dot reflect the time needed to merge the ego into the highway. The \(y\) and \(z\) axes report the average density and flow of the traffic in the ROI.
Fig. 15: A forced merging example in the Carla simulator: the ego vehicle is the black vehicle in the middle lower side of each subplot with its trajectories in green lines and reference trajectories (from our decision-making module) in red lines.
world traffic scenarios, without the need for retuning the model hyperparameters or retraining the SANN.
|
2302.00089 | Mind the (optimality) Gap: A Gap-Aware Learning Rate Scheduler for Adversarial Nets | Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that the loss of an ideal adversarial net is a constant known a priori. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to $27\%$ in Frechet Inception Distance for image generation and $3\%$ in test accuracy for domain adaptation. | Hussein Hazimeh, Natalia Ponomareva | 2023-01-31T20:36:40Z | http://arxiv.org/abs/2302.00089v1 |
# Mind the (optimality) Gap: A Gap-Aware Learning Rate Scheduler for Adversarial Nets
###### Abstract
Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that the loss of an ideal adversarial net is a constant known a priori. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to \(27\%\) in Frechet Inception Distance for image generation and \(3\%\) in test accuracy for domain adaptation.
## 1 Introduction
Adversarial networks have proved successful in generative modeling (Goodfellow et al., 2014), domain adaptation (Ganin et al., 2016), fairness (Zhang et al., 2018), privacy (Abadi and Andersen, 2016), and other domains. Generative Adversarial Nets (GANs) are a foundational example of this class of models (Goodfellow et al., 2014). Given a finite sample from a target distribution, a GAN aims to generate more samples from that distribution. This is achieved by training two competing networks. A generator \(G\) transforms noise samples into the sample space of the target distribution, and a discriminator \(D\) attempts to distinguish between the real and generated samples. To generate realistic samples, \(G\) is trained to fool \(D\). Adversarial nets used in domains other than generative modeling follow the same principle of training two competing networks.
Training an adversarial net typically requires solving a non-convex, non-concave min-max optimization problem, which is notoriously challenging (Razaviyayn et al., 2020). In practice, first-order methods are commonly used as a heuristic for this problem. One popular choice is Stochastic Gradient Descent Ascent (SGDA), which is an extension of SGD that takes gradient descent and ascent steps over the min and max problems, respectively1. SGDA and its adaptive variants (e.g., based on Adam) are the defacto standard for optimizing adversarial nets (Ganin et al., 2016; Radford et al., 2016; Arjovsky et al., 2017). These methods require choosing two base learning rates2; one for each competing network. However, adversarial nets are very sensitive to the learning rates (Lucic et al., 2018), and careful choices are needed to maintain a balance between the competing networks. In practice, the same learning rate is often used for both networks (Wang et al., 2021), even though decoupled rates can lead to improvements (Heusel et al., 2017). The base learning rates typically used in the literature are constant, but could also be decayed during training. In either case, these rates do not depend on the current state of the network.
Footnote 1: The steps could be simultaneous or alternating.
Footnote 2: We use the term base learning rate to refer to the base learning rate in adaptive optimizers and to the learning rate of SGDA.
In this paper, we argue that a dynamic choice of the base learning rate that responds to the current state of the adversarial net can significantly enhance training. Specifically, we propose a learning rate scheduler that dynamically changes the base learning rate of existing optimizers (e.g., Adam), based on the current loss of the network. Our scheduler is
driven by the following key observation: in many popular formulations, the loss of an ideal adversarial net is a constant known a priori. For example, an ideal GAN is one in which the distributions of the real and generated samples match. Therefore, we can define an optimality gap, which refers to the gap (absolute difference) between the losses of the current and ideal adversarial nets.
Our main hypothesis is that adversarial nets with smaller optimality gaps tend to perform better--we present empirical evidence that verifies this hypothesis on different loss functions and datasets. Motivated by this hypothesis, our proposed scheduler keeps track of the optimality gap. At each optimization step, the scheduler decides whether to increase or decrease the base learning rate of the adversary (e.g., discriminator), in order to keep the optimality gap relatively small. The base learning rate of the competing network (e.g., generator) is kept constant, since controlling the loss of the adversary (through its base rate) effectively modifies that of the competing network3.
Footnote 3: If the game is zero-sum, an increase in the objective of the adversary will lead to a decrease in the objective of the competing network with an equal magnitude (and vice versa).
We demonstrate the effectiveness of the scheduler empirically in two popular use cases: GANs for image generation and Domain Adversarial Neural Nets (DANN) (Ganin et al., 2016) for domain adaptation. We observe that the scheduler significantly reduces the need for tuning (by \(\sim 10\)x in many cases) and can lead to statistically significant improvements in the main performance metrics (image quality or accuracy) on five benchmark datasets.
**Contributions:** **(i)** We present statistical evidence showing that GANs with smaller optimality gaps tend to generate higher quality samples (see Sec. 2). **(ii)** Motivated by the latter evidence, we propose a novel scheduler that adapts the base learning rate of the adversary to keep the optimality gap relatively small and maintain a balance with the competing network (see Sec. 3). **(iii)** We carry out a large-scale statistical study on GANs and DANN to compare the performance of the scheduler with popular alternatives. Specifically, we study how the tuning budget and weight initialization affect performance by systematically training over 25,000 GANs. The results indicate that the scheduler can reduce the need for tuning by \(\sim\) 10x, improve Frechet Inception Distance in GANs by up to \(27\%\), and improve accuracy in DANN by up to \(3\%\) (see Sec. 4). We provide a simple open-source implementation4 of the scheduler that can be used with any existing optimizer.
Footnote 4: [https://github.com/google-research/google-research/tree/master/adversarial_nets_lr_scheduler](https://github.com/google-research/google-research/tree/master/adversarial_nets_lr_scheduler)
### Related Work
Gradient-based methods for non-convex, non-concave min-max problems are known to face difficulties during training and may generally fail to achieve even simple notions of stationarity (Razaviyayn et al., 2020). In the context of GANs, there has been active research on stabilizing training (with different notions of stability). One important line of work introduces new loss functions or formulations that may be more amenable to first-order methods (e.g., via additional smoothness or avoiding vanishing gradients) (Li et al., 2015; Arjovsky et al., 2017; Mao et al., 2017; Zhao et al., 2017; Nowozin et al., 2016; Gulrajani et al., 2017). Another related approach is to augment existing GAN loss functions with regularizers or perform simple modifications to SGDA (which may be interpreted as regularization) to improve stability (Che et al., 2017; Mescheder et al., 2017; Nagarajan and Kolter, 2017; Yadav et al., 2018; Mescheder et al., 2018; Xu et al., 2020). Improved architectures have also been vital in successfully training GANs, e.g., see Radford et al. (2016); Neyshabur et al. (2017); Lee et al. (2021) and the references therein. See also Karras et al. (2020) for improving stability using data augmentation. Fundamental to all the approaches described above is the choice of the (base) learning rates, which effectively controls the balance between the competing networks. The base rates used in the literature are typically fixed, but may also be decayed during training. In either setting, the base rates used do not take into account the current state of the network. The main novelty of our scheduler is that it uses the current state of the network (gauged by the optimality gap) when modifying the learning rate.
## 2 Adversarial Nets and their Ideal Loss
We start this section by briefly reviewing a few popular variants of GANs and discussing how their ideal loss can be determined a priori. Then, in Section 2.1.1, we discuss how the quality of generated samples correlates with the optimality gap. Finally, in Section 2.2, we introduce DANN and discuss how to estimate its ideal loss.
### Generative Adversarial Nets (GANs)
First, we introduce some notation. Let \(\mathbb{P}_{r}\) be the real distribution and \(\mathbb{P}_{n}\) be some noise distribution. The generator \(G\) is a function that maps samples from \(\mathbb{P}_{n}\) to the sample space of \(\mathbb{P}_{r}\) (e.g., space of images). We define \(\mathbb{P}_{g}\) as the distribution of \(\tilde{x}:=G(z)\) where \(z\sim\mathbb{P}_{n}\), i.e., \(\mathbb{P}_{g}\) is the distribution of generated samples. The discriminator \(D\) is a function that maps samples (real or generated) to a real value.
**Standard GAN and its Ideal Loss.** The standard GAN introduced by Goodfellow et al. (2014) can be written as:
\[\min_{G}\max_{D}\ \ \mathbb{E}_{x\sim\mathbb{P}_{r}}\log D(x)+\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\log\big{(}1-D(\tilde{x})\big{)},\]
where \(D\) in this case outputs a probability. In practice, we have a finite sample from \(\mathbb{P}_{r}\), so it is replaced by the corresponding empirical distribution. Moreover, the expectation over \(\mathbb{P}_{g}\) is estimated by sampling from the noise distribution.
We say that a GAN is ideal if the generated and real samples follow the same distribution, i.e., \(\mathbb{P}_{g}=\mathbb{P}_{r}\). When the standard GAN is ideal, the objective function becomes:
\[\max_{D}\ \ \mathbb{E}_{x\sim\mathbb{P}_{r}}\Big{[}\log D(x)+\log\big{(}1-D(x)\big{)}\Big{]}.\]
The solution to the problem above is given by \(D(x)=0.5\) for all \(x\) in the support of \(\mathbb{P}_{r}\). Thus, the optimal objective is \(-\log(4)\). Throughout the paper, we will be focusing on the loss, i.e., the negative of the utility discussed above. We will denote the optimal loss of \(D\) in an ideal GAN by \(V^{*}\), so in this case \(V^{*}=\log(4)\). This quantity allows for computing the optimality gap, which is essential for the operation of the scheduler.
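For completeness, here is the standard pointwise derivation behind \(D(x)=0.5\) (following Goodfellow et al. (2014)): for each \(x\), the integrand is maximized by solving

\[\frac{d}{dD}\Big{[}\log D+\log\big{(}1-D\big{)}\Big{]}=\frac{1}{D}-\frac{1}{1-D}=0\ \Longrightarrow\ D=\frac{1}{2},\]

and substituting back gives \(\log\frac{1}{2}+\log\frac{1}{2}=-\log(4)\), i.e., \(V^{*}=\log(4)\).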
**Popular GAN Variants.** While the standard GAN is conceptually appealing, the gradients of the generator may vanish early on during training. To mitigate this issue, Goodfellow et al. (2014) proposed the non-saturating GAN (NSGAN), which uses the same objective for \(D\), but replaces the objective of \(G\) with another that (directly) maximizes the probability of the generated samples being real; see Table 1. Similar to the standard GAN, the optimal discriminator loss of an ideal NSGAN is \(V^{*}=\log(4)\).
Many follow-up works have proposed alternative loss functions and divergence measures in an attempt to improve the quality of the generated samples, e.g., see Arjovsky et al. (2017); Mao et al. (2017); Nowozin et al. (2016); Li et al. (2017) and Wang et al. (2021) for a survey. In Table 1, we present the objective functions of two popular GAN formulations: Wasserstein GAN (WGAN) and least-squares GAN (LSGAN) (Arjovsky et al., 2017; Mao et al., 2017). WGAN uses a similar formulation to the standard GAN but drops the log, and \(D\) outputs a logit (not a probability). Arjovsky et al. (2017) show that under an optimal k-Lipschitz discriminator, WGAN minimizes the Wasserstein distance between the real and generated distributions. LSGAN uses squared-error loss as an alternative to cross-entropy, and Mao et al. (2017) motivate this by noting that squared-error loss typically leads to sharper gradients.
Similar to an ideal standard GAN, the optimal discriminator losses of ideal WGAN and LSGAN are known constants; see the last column of Table 1 (these constants are derived by plugging \(\mathbb{P}_{g}=\mathbb{P}_{r}\) into the discriminator loss).
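As a short check of the last column, plugging \(\mathbb{P}_{g}=\mathbb{P}_{r}\) into the discriminator losses of Table 1 gives

\[V^{*}_{\mathrm{WGAN}}=-\mathbb{E}_{x\sim\mathbb{P}_{r}}[D(x)]+\mathbb{E}_{x\sim\mathbb{P}_{r}}[D(x)]=0,\qquad V^{*}_{\mathrm{LSGAN}}=\min_{D}\,\mathbb{E}_{x\sim\mathbb{P}_{r}}\big{[}(D(x)-1)^{2}+D(x)^{2}\big{]}=\tfrac{1}{4}+\tfrac{1}{4}=0.5,\]

where the LSGAN minimum is attained pointwise at \(D(x)=0.5\).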
#### 2.1.1 Correlation between the Optimality Gap and Sample Quality
For all the GAN formulations in Table 1, it is known in theory that if the model capacity is sufficiently high, solving the optimization problem to global optimality leads to an ideal GAN (Goodfellow et al., 2014; Arjovsky et al., 2017; Mao et al., 2017). However, in practice, the capacity of the GAN is limited and optimization is done using first-order methods, which are generally not guaranteed to obtain optimal solutions. Thus, obtaining an ideal GAN in practice is generally infeasible. However, as we demonstrate in our experiments, it is possible to train GANs that are "close enough" to an ideal GAN in terms of the loss. Specifically, given a GAN whose discriminator loss is \(\hat{V}\), we define the optimality gap as \(|\hat{V}-V^{*}|\). Our main hypothesis is:
GANs that achieve smaller optimality gaps tend to generate better samples.
We stress that this hypothesis applies to GANs that are trained with reasonable hyperparameters and initialization. It is possible to obtain GANs whose optimality gap is 0 or close to 0 without training, e.g., initializing a GAN with all-zero weights will lead to a 0 gap in standard GAN.
**Empirical Evidence.** We validate the hypothesis through multiple experiments on MNIST, Fashion MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009), and CelebA (Liu et al., 2015). Next, we briefly discuss one of these experiments and leave the rest to Section 4. We consider generating images from MNIST using a GAN, based on the DCGAN architecture (Radford et al., 2016), and we study different GAN variants (NSGAN, LSGAN, and WGAN). We consider \(100\) sets of hyperparameter values, drawn randomly, on which we train each GAN (see Section 4 for more details). For evaluation, we compute the Frechet Inception Distance (FID) (Heusel et al., 2017), which is a standard metric for assessing image quality.
In Figure 1, we present scatter plots of FID versus the optimality gap; here each point corresponds to a particular hyperparameter configuration. For the three variants of GAN, we observe medium to strong, positive Spearman's rank correlation between FID and the optimality gap. That is, models with a smaller optimality gap tend to have better image quality. The scheduler we develop (in Sec. 3) attempts to keep the optimality gap in check by modifying the learning rate.
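To illustrate how the correlations in Figure 1 are scored, the following sketch computes a Spearman rank correlation; the arrays are synthetic stand-ins for the \(100\) measured (gap, FID) pairs, so the printed value is illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
gap = rng.uniform(0.0, 0.5, size=100)              # stand-in optimality gaps |V_hat - V*|
fid = 10 + 40 * gap + rng.normal(0, 3, size=100)   # stand-in FIDs, loosely increasing in gap
rho, p = spearmanr(gap, fid)                       # rank correlation, as reported in Figure 1
print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")
```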
### Domain Adversarial Neural Nets (DANN)
DANN is another important example of adversarial nets used in domain adaptation (Ganin et al., 2016). Given labelled data from a source domain and unlabelled data from a related target domain, the goal is to train a model that generalizes well on the target. The main principle behind DANN is that for good generalization, the feature representations should be domain-independent (Ben-David et al., 2010). DANN consists of: (i) a feature extractor \(F\) that receives features (from either the source or target data) and generates representations, (ii) a label predictor \(Y\) that classifies the source data based on the representations from the feature extractor, and (iii) a discriminator \(D\) (a probabilistic classifier) that takes the feature representations from the extractor and attempts to predict whether the sample came from the source or target domain. Let \(\mathbb{P}_{s}\) and \(\mathbb{P}_{t}\) be the input distributions of the source and target domains, respectively.
At the population level, DANN solves:
\[\min_{F,Y}\max_{D}\ \ \mathcal{L}_{y}(F,Y)-\lambda\mathcal{L}_{d}(F,D),\]
where \(\mathcal{L}_{y}(F,Y)\) is the risk of the label predictor, \(\lambda\) is a non-negative hyperparameter, and \(\mathcal{L}_{d}(F,D)\) is the discriminator risk defined by:
\[-\mathbb{E}_{x\sim\mathbb{P}_{s}}\log\big{[}D(F(x))\big{]}-\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{t}}\log\big{[}1-D(F(\tilde{x}))\big{]}.\]
We say that DANN is ideal if the distribution of \(F(x),x\sim\mathbb{P}_{s}\) is the same as that of \(F(\tilde{x}),\tilde{x}\sim\mathbb{P}_{t}\). By the same reasoning used for the standard GAN, the optimal discriminator in this ideal case outputs \(0.5\), and thus \(\mathcal{L}_{d}(F,D)=\log(4)\). However, generally, \(\lambda\) controls the extent to which the two distributions discussed above are matched, and thus the optimal \(\mathcal{L}_{d}(F,D)\) generally depends on \(\lambda\). Very small values of \(\lambda\) may5 lead to a discriminator that distinguishes well between the two domains. On the other hand, by increasing \(\lambda\), we can get arbitrarily close to the ideal case (where the discriminator outputs \(0.5\)). In theory, for effective domain transfer, \(\lambda\) needs to be chosen large enough so that the discriminator is well fooled (Ben-David et al., 2010), so for such \(\lambda\)'s we expect the optimal \(\mathcal{L}_{d}(F,D)\) to be roughly close to \(\log(4)\). Finally, similar to GANs, we remark that the ideal case is typically infeasible to achieve in practice (due to several factors, including using first-order methods and limited capacity); but controlling the optimality gap can be useful, as we demonstrate in our experiments.
Footnote 5: Small values are not always guaranteed to lead to a discriminator that distinguishes well. This depends on a combination of factors including the architecture and the input distributions. As a trivial example, if DANN is supplied with identical domains (\(\mathbb{P}_{s}=\mathbb{P}_{t}\)), the optimal discriminator outputs \(0.5\) for any \(\lambda\geq 0\).
## 3 Gap-Aware Learning Rate Scheduling
In Section 2, we presented empirical evidence that validates our hypothesis that GANs with smaller optimality gaps tend to generate higher quality samples. In this section, we put the hypothesis into action and design a learning rate scheduler that attempts to keep the gap relatively small throughout training. Besides the hypothesized improvement in sample quality, keeping the optimality gap small throughout training can mitigate potential drifts in the loss (e.g., the discriminator loss dropping towards zero), which may lead to more stable training. Next, we describe the optimization setup and then introduce the scheduling algorithm.
**Optimization Setup.** We assume that the optimization problem of the adversarial net is cast as a minimization over both the loss of the adversary \(D\) (e.g., the discriminator in a GAN) and the loss of the competing network \(G\) (e.g., the generator in a GAN). We focus on the popular strategy of optimizing the two competing networks simultaneously using (minibatch) SGD6. We use the notation \(\alpha_{d}\) to refer to the learning rate of \(D\). The learning rate scheduler will modify \(\alpha_{d}\) throughout training whereas the learning rate of \(G\) remains fixed. We note that the scheduler can be applied to adaptive optimizers (e.g., Adam or RMSProp) as well-in such cases, \(\alpha_{d}\) will refer to the base learning rate. We denote
Figure 1: Scatter plots of the Frechet Inception Distance (FID) versus the optimality gap on MNIST. Each point corresponds to a particular hyperparameter configuration obtained by random sampling. Lower FID values typically correspond to better image quality. For the three GAN variants, we observe moderate to strong, positive (rank) correlation between FID and the optimality gap. To improve visualization, a small number of outliers was removed–these outliers do not affect correlation and are presented in Appendix A. For WGAN, we removed outliers with FID \(>5\) and applied a log transformation to the gap (since it varies over \(8\) orders of magnitude).
\begin{table}
\begin{tabular}{l l l l} \hline \hline GAN & Discriminator Loss (Minimized) & Generator Loss (Minimized) & Ideal Discriminator Loss \(V^{*}\) \\ \hline Standard & \(-\mathbb{E}_{x\sim\mathbb{P}_{r}}[\log(D(x))]-\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[\log(1-D(\tilde{x}))]\) & \(\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[\log(1-D(\tilde{x}))]\) & \(\log(4)\) \\ \hline NSGAN & \(-\mathbb{E}_{x\sim\mathbb{P}_{r}}[\log(D(x))]-\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[\log(1-D(\tilde{x}))]\) & \(-\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[\log(D(\tilde{x}))]\) & \(\log(4)\) \\ \hline WGAN & \(-\mathbb{E}_{x\sim\mathbb{P}_{r}}[D(x)]+\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[D(\tilde{x})]\) & \(-\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[D(\tilde{x})]\) & 0 \\ \hline LSGAN & \(\mathbb{E}_{x\sim\mathbb{P}_{r}}[(D(x)-1)^{2}]+\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[D(\tilde{x})^{2}]\) & \(\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[(D(\tilde{x})-1)^{2}]\) & 0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Popular GAN variations considered in this work. Both the discriminator and generator losses are minimized. The value \(V^{*}\) denotes the loss of the discriminator in an ideal GAN.
by \(V_{d}\) the current loss of \(D\) (a scalar representing the average of the loss over the whole training data). The scheduler takes \(V_{d}\) and D's ideal loss \(V^{*}\) as inputs and outputs a scalar, which is used as a multiplier to adjust \(\alpha_{d}\).
**Effect of D's learning rate on the optimality gap.** Recall that in our setup \(D\) and \(G\) are simultaneously optimized. During each optimizer update, \(D\) aims to decrease \(V_{d}\) while \(G\) typically aims to increase \(V_{d}\). The optimizer update may increase or decrease \(V_{d}\), depending on how large \(D\)'s learning rate is w.r.t. that of \(G\). If D's learning rate is sufficiently larger, we expect \(V_{d}\) to decrease after the update, and otherwise, we expect \(V_{d}\) to increase. This intuition will be the basis of how the scheduler controls the optimality gap.
Next, we introduce the scheduling mechanism, where we differentiate between two cases: **(i)**\(V_{d}\geq V^{*}\) and **(ii)**\(V_{d}<V^{*}\).
**Scheduling when \(V_{d}\geq V^{*}\).** First, we give an abstract definition of the scheduler and then define the scheduling function formally. In this case, the current loss of \(D\) is larger than \(V^{*}\), so to reduce the gap, we need to decrease \(V_{d}\). As discussed earlier, this effect can be achieved by increasing D's learning rate sufficiently. Therefore, when \(V_{d}\geq V^{*}\), we design the scheduler to increase the learning rate, and we make the increase proportional to the gap \((V_{d}-V^{*})\), so that the scheduler focuses more on larger deviations from optimality.
There are a couple of important constraints that should be taken into account when increasing the learning rate. First, the increase should be bounded because too large of a learning rate will lead to convergence issues. Second, we need to control the rate of increase and ensure the chosen rate works in practice (e.g., too fast of a rate can lead to sharp changes in the loss and cause instabilities). Next, we define a function that satisfies the desired constraints.
We introduce a scheduling function \(f:\mathbb{R}\rightarrow\mathbb{R}\), which takes \(x:=(V_{d}-V^{*})\) as an input and returns a multiplier for the learning rate. That is, the new learning rate of the discriminator (after scheduling) will be \(\alpha_{d}\times f(x)\). To satisfy the constraints discussed above (boundedness and rate control), we introduce two user-specified parameters: \(f_{\text{max}}\in[1,\infty)\) and \(x_{\text{max}}\in\mathbb{R}_{>0}\). The function \(f\) interpolates between the points \((0,1)\) and \((x_{\text{max}},f_{\text{max}})\) and caps at \(f_{\text{max}}\), i.e., \(f(x)=f_{\text{max}}\) for \(x\geq x_{\text{max}}\). Here \(x_{\text{max}}\) is viewed as a parameter that controls the rate of the increase; a larger \(x_{\text{max}}\) leads to a slower rate, and thus the scheduler becomes less stringent. There are different possibilities for interpolation. In our experiments, we tried linear and exponential interpolation and found the latter to work slightly better. Thus, we use exponential interpolation and define \(f\) as:
\[f(x)=\min\Big{\{}[f_{\text{max}}]^{x/x_{\text{max}}},f_{\text{max}}\Big{\}}. \tag{1}\]
Note that since \(f_{\text{max}}\geq 1\), we always have \(f(x)\geq 1\) for \(x\geq 0\), so the learning rate will increase after scheduling. Moreover, the learning rate is not modified when the gap is zero since \(f(0)=1\).
**Scheduling when \(V_{d}<V^{*}\).** In this case, reducing the gap requires increasing \(V_{d}\). This can be achieved by decreasing the learning rate of \(D\). Similar to the previous case, we design the scheduler so that the decrease is proportional to \((V^{*}-V_{d})\) (a non-negative quantity). More formally, we define a scheduling function \(h:\mathbb{R}\rightarrow\mathbb{R}\), which takes \(x:=(V^{*}-V_{d})\) as an input and returns a multiplier for the learning rate, i.e., the new learning rate is \(\alpha_{d}\times h(x)\). Similar to the previous case, we introduce two user-specified parameters \(h_{\text{min}}\in(0,1]\) (the minimum value \(h\) can take) and \(x_{\text{min}}\in\mathbb{R}_{>0}\) to control the decay rate. We define \(h\) as an interpolation between \((0,1)\) and \((x_{\text{min}},h_{\text{min}})\), which is clipped from below at \(h_{\text{min}}\). We use exponential decay interpolation, leading to:
\[h(x)=\max\Big{\{}[h_{\text{min}}]^{x/x_{\text{min}}},h_{\text{min}}\Big{\}}. \tag{2}\]
Since \(h_{\text{min}}\in(0,1]\), we always have \(h(x)\leq 1\) for \(x\geq 0\), implying that the learning rate will decrease after scheduling. We summarize the scheduling mechanism in Algorithm 1.
**Algorithm 1: Gap-Aware Scheduling Algorithm**
**Inputs**: Current loss \(V_{d}\) and ideal loss \(V^{*}\).
**Parameters**: \(x_{\text{min}},x_{\text{max}}\in\mathbb{R}_{>0}\); \(h_{\text{min}}\in(0,1]\); \(f_{\text{max}}\in[1,\infty)\).
1. If \(V_{d}\geq V^{*}\), increase D's learning rate by multiplying it with \(f(V_{d}-V^{*})\) - see (1).
2. If \(V_{d}<V^{*}\), decrease D's learning rate by multiplying it with \(h(V^{*}-V_{d})\) - see (2).
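A minimal Python sketch of Algorithm 1 follows; the class name and API are ours (not the official implementation linked in footnote 4), and the default parameter values mirror the choices reported below in this section.

```python
class GapAwareScheduler:
    """Returns a multiplier for D's base learning rate to keep |V_d - V*| small."""

    def __init__(self, v_star, f_max=2.0, h_min=0.1, x_max=0.1, x_min=0.1):
        self.v_star = v_star  # ideal discriminator loss V*
        self.f_max = f_max    # maximum increase factor, f_max >= 1
        self.h_min = h_min    # minimum decrease factor, 0 < h_min <= 1
        self.x_max = x_max    # gap at which f(x) saturates at f_max
        self.x_min = x_min    # gap at which h(x) saturates at h_min

    def multiplier(self, v_d):
        if v_d >= self.v_star:             # case 1: loss too high, speed D up, Eq. (1)
            x = v_d - self.v_star
            return min(self.f_max ** (x / self.x_max), self.f_max)
        x = self.v_star - v_d              # case 2: loss below ideal, slow D down, Eq. (2)
        return max(self.h_min ** (x / self.x_min), self.h_min)
```

At each update, the rate actually used for \(D\) is \(\alpha_{d}\times\texttt{multiplier}(V_{d})\), while \(G\)'s base rate is left untouched.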
In our experiments, we inspect the optimality gap of GANs trained with and without the scheduler. We observe that the scheduler effectively reduces the optimality gap on all datasets and GAN variants, by up to \(70\)x (see Table 2). In most cases, we also observe that models with smaller gaps tend to have better sample quality.
**Choice of Parameters.** Based on our experiments, we propose setting the same base learning rate for \(G\) and \(D\) (and tuning over the learning rate, if the computational budget allows). Under this setting, in all of our GAN experiments and across all datasets, we fix the parameters: \(h_{\text{min}}=0.1\), \(f_{\text{max}}=2\), \(x_{\text{min}}=x_{\text{max}}=0.1V^{*}\) for NSGAN and LSGAN; and \(x_{\text{min}}=x_{\text{max}}=0.1\) for WGAN. These values were only tuned on MNIST for a very limited number of configurations; see Appendix B for details and intuition. We found these parameters to transfer well to Fashion MNIST, CIFAR-10, and CelebA. In Section 4, we present a sensitivity analysis in which we vary these parameters over multiple orders of magnitude. The results generally indicate that the
scheduler is relatively stable around the default values reported above (but setting these parameters to extreme values may cause instabilities).
For the DANN experiments, we use the same fixed parameters as in GANs (\(x_{\text{max}}=x_{\text{min}}=0.1V^{*}\)), but we consider a single tuning parameter: \(V^{*}\). As discussed in Section 2.2, the optimal discriminator loss in DANN depends on \(\lambda\), but we expect it to be roughly close to \(\log(4)\) (for good choices of \(\lambda\)). In our experiments we tune over \(V^{*}\in[0.5\log(4),\log(4)]\) and demonstrate that DANN is not sensitive to \(V^{*}\), e.g., with only \(10\) random search trials for tuning the base learning rate, \(V^{*}\), and \(\lambda\), optimizing with the scheduler outperforms its no-scheduler counterpart (with the same tuning budget).
**Batch-level Scheduling.** We apply Algorithm 1 at the batch level, i.e., the learning rate is modified at each minibatch update. The motivation behind batch-level scheduling is to keep the loss in check after each update. One popular alternative is to schedule at the epoch level. However, if the epoch involves many batches, the loss may drift drastically throughout one or few epochs (an observation that is common in practice). Scheduling at the batch level can mitigate such drifts early on.
**Estimating the Current Discriminator Loss.** The scheduling algorithm requires access to the discriminator's loss \(V_{d}\) at every minibatch update. The loss should ideally be evaluated over all training examples; however, this is typically inefficient. We resort to an exponential moving average to estimate \(V_{d}\). Specifically, let \(\hat{V}_{d}\) be the current estimate of \(V_{d}\) and denote by \(V_{\text{batch}}\) the loss of the current batch (which is available from the forward pass). The moving average update is: \(\hat{V}_{d}\leftarrow\alpha\hat{V}_{d}+(1-\alpha)V_{\text{batch}}\), where \(\alpha\in[0,1)\) is a user-specified parameter that controls the decay rate. In all experiments, we fix \(\alpha=0.95\) (no tuning was performed) and initialize with \(V^{*}\). We also note that if the training loss is evaluated periodically over the whole dataset (e.g., every few epochs), the moving average can be reinitialized with this value.
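Putting the pieces together, the following sketch runs the batch-level loop with the moving-average estimate (reusing the GapAwareScheduler class above; the Gaussian batch losses are synthetic stand-ins for the forward-pass values):

```python
import math
import random

random.seed(0)
v_star = math.log(4)                       # ideal NSGAN discriminator loss
sched = GapAwareScheduler(v_star, x_max=0.1 * v_star, x_min=0.1 * v_star)
base_lr_d, alpha = 2e-4, 0.95
v_hat = v_star                             # EMA estimate of V_d, initialized at V*
for step in range(5):
    v_batch = v_star + random.gauss(0, 0.3)        # stand-in for the minibatch loss
    v_hat = alpha * v_hat + (1 - alpha) * v_batch  # moving-average update
    lr_d = base_lr_d * sched.multiplier(v_hat)     # base rate for this D update
    print(f"step {step}: V_hat = {v_hat:.3f}, lr_d = {lr_d:.2e}")
```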
## 4 Experiments
We study the performance of the scheduler on GANs for image generation and DANN.
### GANs
GANs are generally sensitive to weight initialization and hyperparameters (especially the learning rate) and require sufficiently large tuning budgets to perform well (Lucic et al., 2018). Thus, our main goal is to study if the learning rate scheduler can improve stability and reduce the need for tuning.
**A Statistical Study.** We perform a systematic study in which we tune GANs under different tuning budgets and repeat experiments over many random seeds. Our study allows for a rigorous understanding of the statistical significance and stability of the results. The study is large-scale as it involves training over 25,000 GANs (for 100s of epochs each) and requires around 6 GPU years on NVIDIA P100. In this respect, we note that a large part of the literature on GANs reports results on a single random seed and manually tunes hyperparameters (without reporting the tuning budget); as reported by Lucic et al. (2018), this may result in misleading conclusions.
**Competing Methods, Datasets, and Architecture.** We compare with popular mechanisms for choosing the learning rate, including using the same learning rate for \(G\) and \(D\), decoupled learning rates (tuned independently) (Heusel et al., 2017), and a classical scheduler that monotonically decays the learning rate. Since our study involves training a large number of GANs (over 25,000), we consider the following standard datasets that allow for reasonable computation time: CelebA, CIFAR, Fashion MNIST, and MNIST. We focus on three popular GAN variants: NSGAN, LSGAN, and WGAN, and use a DCGAN architecture (Radford et al., 2016); see Appendix D for details. Our setup (both datasets and architecture) is standard for large-scale tuning studies of GANs, e.g., see Lucic et al. (2018).
While it would be interesting to consider larger datasets and architectures, we note that performing such a large-scale study may become computationally infeasible. Moreover, we stress that our goal is to understand how the scheduler performs compared to other alternatives, under a clear, fixed tuning budget. Thus, it would be unfair to compare with models in the literature that do not report the tuning budget and the exact tuning procedure.
**Experimental Details.** We use Adam (Kingma and Ba, 2015) as it is the most popular choice for optimizing GANs (Wang et al., 2021), and fix the batch size to \(256\). On MNIST, Fashion MNIST, and CIFAR, we use \(500\) epochs, and \(200\) epochs on CelebA (as it is \(\sim 3\)x larger than the other datasets). To avoid overfitting, we periodically compute FID on the validation set during training, and upon termination return the version of the model with the best FID (this simulates early stopping). We tune over the following key hyperparameters: base learning rate(s), \(\beta_{1}\) in Adam, and the clipping weight in WGAN. We consider two settings when tuning the base learning rate: (i) the same rate for both \(G\) and \(D\), and (ii) two decoupled rates that are tuned independently (Heusel et al., 2017). We report the results of (i) in the main paper and the results of (ii) in the appendix; in both cases, the scheduler outperforms its no-scheduler counterpart. We use \(100\) trials of random search, where in each trial, training is repeated \(5\) times over random seeds to reduce variability. We use FID on the validation set as the tuning metric, and we report the final FID results on a separate test set. See Appendix D for more details.
Figure 2: Plots of the best FID as function of the tuning budget. Following Lucic et al. (2018), for each tuning budget \(k\), we report the mean and 99\(\%\) confidence intervals of the best FID, estimated using 5,000 bootstrap samples of size \(k\) from the original \(100\) tuning runs.
#### 4.1.1 Results
**Tuning Budget and Performance.** Here we compare the performance with and without the scheduler, using the same base learning rate for \(G\) and \(D\); see Appendix C for decoupled rates (Heusel et al., 2017). In Figure 2, we plot the best FID as a function of the tuning budget. The results indicate that on all datasets, all variants of GANs, and almost every computational budget, the scheduler outperforms the (tuned) optimizer without the scheduler. The improvement reaches up to \(27\%\) in some cases, e.g., for WGAN on CelebA. The magnitude of the improvement is more pronounced on CelebA and CIFAR compared to MNIST/Fashion MNIST. This may be attributed to the more complex nature of CelebA and CIFAR, which can require more careful choices of the learning rates. Additionally, we note that the learning rate with the scheduler does not monotonically decrease (as in common learning rate decay); it varies up and down as the training progresses (see Figure C.2 in the appendix).
**Stability.** To get an idea about the stability of the scheduler w.r.t. weight initialization, we pick the best hyperparameters from the tuning study (after \(100\) random search trials), and train each model \(100\) times using random seeds. In Table 2, we report the mean and standard error of both FID and the Inception Score (Salimans et al., 2016). The improvements we saw from using the scheduler in the tuning study (represented by Figure 2) generalize to this experiment, i.e., the performance of the scheduler does not seem to be sensitive to the random seed. For LSGAN without the scheduler, there are significant outlier runs (the standard error is \(\sim 10\)) on CIFAR-10 and CelebA; the same observation was made by Lucic et al. (2018) for LSGAN on the latter datasets. In contrast, for LSGAN with the scheduler, we did not observe outlier runs, and this is evidenced by the small standard error (\(<0.1\)). Thus, for the datasets considered, the scheduler appears to be generally more stable.
**Optimality Gap.** In Table 2, we also report the optimality gap of the tuned models (averaged over \(100\) randomly initialized training runs). Out of the \(12\) dataset/GAN-type pairs, the scheduler achieves a strictly lower optimality gap in \(9\) cases, an equal gap in \(2\) cases, and a worse (statistically insignificant) gap in \(1\) case (LSGAN on MNIST). On CIFAR and CelebA, the scheduler achieves significantly lower gaps, e.g., \(70\)x lower for WGAN on CelebA. For LSGAN on MNIST and Fashion MNIST, the optimizer without the scheduler already achieves small gaps, so the scheduler does not offer noticeable improvements. Generally, the results are in line with our hypothesis that models with smaller optimality gaps tend to generate better samples.
**Sensitivity Analysis.** We study the sensitivity of the scheduler to its parameters: \(h_{\text{min}}\), \(f_{\text{max}}\), \(x_{\text{min}}\), and \(x_{\text{max}}\). Specifically, we vary the value of each parameter (individually) over multiple orders of magnitude and study the change in FID. When varying a given parameter, we fix the other parameters to their default values (discussed in Section 3). The analysis is done on MNIST using the best (tuned) hyperparameters of the GAN, and training is repeated for \(50\) random seeds to account for the variability due to initialization. In Figure 3, we present sensitivity plots for NSGAN, LSGAN, and WGAN.
The results indicate that NSGAN and WGAN are relatively insensitive: a wide range of values (spanning over one order of magnitude) leads to good performance, exceeding that of the no-scheduler baseline. LSGAN has sharp transitions for large values of \(f_{\text{max}}\) (specifically \(>5\)); this is intuitively expected because increasing the learning rate beyond a certain threshold will cause the model to diverge. For very small \(x_{\text{min}}\) and \(x_{\text{max}}\) (\(<0.02\)), LSGAN performs poorly; this is also expected because such small values force the training loss to be almost constant, so essentially the model does not train. We also note that LSGAN is known in the literature to be more sensitive and to suffer from frequent failures even with well-tuned hyperparameters, compared to NSGAN and WGAN (Lucic et al., 2018).
**Optimality Gap of G.** Given that the scheduler only controls the learning rate (and loss) of \(D\), a natural question is: what happens to the loss of \(G\)? In Appendix C, we study the effect of the scheduler on G's loss. Specifically, we measure G's optimality gap, which we define as the absolute difference between G's training and ideal losses. The main conclusion of the experiment is that the scheduler
\begin{table}
\begin{tabular}{|l|c c c c|c c c c|c c c c|} \hline \multirow{2}{*}{GAN/Dataset} & \multicolumn{4}{c|}{FID} & \multicolumn{4}{c|}{Inception Score} & \multicolumn{4}{c|}{Optimality Gap \(\times 10^{3}\)} \\ & MNIST & Fashion & CIFAR & CelebA & MNIST & Fashion & CIFAR & CelebA & MNIST & Fashion & CIFAR & CelebA \\ \hline NS & 1.4 (0.03) & 12.4 (0.04) & 42.1 (0.09) & 16.5 (0.06) & 8.18 (0.01) & 4.11 (0.01) & 6.27 (0.01) & 3.1 (0.01) & 22 (1) & 34 (1) & 203 (6) & 513 (8) \\ NS + Sched. & **1.2** (0.02)* & **12.0** (0.04)* & **41.1** (0.1)* & **15.0** (0.07)* & **8.23** (0.01)* & 4.11 (0.01) & **6.41** (0.01)* & **3.12** (0.0004) & **19** (1) & 34 (1) & **91** (3)* & **238** (5) \\ \hline LS & 1.3 (0.02) & 12.2 (0.04) & 68.1 (10.28) & 40.6 (9.5) & 8.19 (0.01) & 4.04 (0.03) & 6.01 (0.13) & 2.97 (0.05) & **10** (1) & 17 (0.4) & 115 (10) & 246 (8) \\ LS + Sched. & 1.3 (0.02) & **12.1** (0.04) & **40.9** (0.09)* & **14.5** (0.05)* & 8.19 (0.01) & **4.11** (0.01) & **6.4** (0.01)* & **3.12** (0.0004) & 11 (1) & 17 (0.4) & **27** (0.43)* & **111** (3)* \\ \hline W & 1.1 (0.02) & 13.8 (0.06) & 49.9 (0.11) & 23.4 (0.11) & 8.32 (0.01) & 4.1 (0.01) & 5.93 (0.01) & 2.99 (0.01) & 2143 (169) & 346 (35) & 118101 (813) & 9861 (225) \\ W + Sched. & **1.0** (0.02)* & **11.6** (0.04)* & **41.8** (0.17)* & **17.1** (0.11)* & **8.4** (0.01)* & **4.15** (0.01)* & **6.32** (0.02)* & **3.1** (0.01) & **57** (5)* & **117** (9)* & **825** (96)* & **139** (6)* \\ \hline \end{tabular}
\end{table}
Table 2: Frechet Inception Distance (lower is better), Inception Score (higher is better), and the Optimality Gap (multiplied by \(10^{3}\)) on the test set after tuning. Each entry represents the mean and standard error, computed over 100 training runs (initialized with random seeds). Best values are in bold. An asterisk (*) indicates statistical significance based on a two-sample t-test at a level of \(0.01\).
can significantly reduce G's optimality gap, compared to no scheduler.
**Additional Comparisons.** In Appendix C, we compare with two additional alternative strategies for choosing the learning rate: (i) decoupled base learning rates (tuned independently) (Heusel et al., 2017), and (ii) a classical scheduler that monotonically decreases the learning rate. In both cases, the scheduler reduces the need for tuning (by up to 10x) and significantly improves FID (by up to 38%). Moreover, we present a comparison between exponential and linear interpolation for the scheduling functions \(f(x)\) and \(h(x)\).
### DANN
We consider a standard domain adaptation benchmark: MNIST as the source and MNIST-M as the target. MNIST-M consists of MNIST images whose background has been altered (Ganin et al., 2016). We conduct a tuning study to understand how DANN with the scheduler compares to (i) DANN without a scheduler and (ii) a model without domain adaptation (i.e., trained only on the source).
**Experimental Setup.** We use a CNN-based architecture for DANN, similar to that in Ganin and Lempitsky (2015), and optimize using SGD with a batch size of \(256\). We train for \(300\) epochs, computing the validation accuracy at each epoch. At the end of training, we pick the version of the model with the highest validation accuracy (simulating early stopping). Additionally, we tune over the following hyperparameters: learning rate, \(\lambda\), and \(V^{*}\), using \(100\) random search trials, and training is repeated \(5\) times per trial (using random seeds). See Appendix D for details.
**Results.** In Figure 4 (left), we report the test accuracy (on the target) as a function of the tuning budget for DANN with and without the scheduler. The results indicate that the scheduler performs better for every tuning budget. The relative improvement in mean accuracy reaches around \(0.7\%\) at \(100\) trials. We also experimented with a source-only model that does not perform domain adaptation (specifically, DANN with \(\lambda=0\)). The accuracy of the source-only model is 60.4% (with standard error of 0.4%) at 100 trials, which is significantly lower than the two models in Figure 4. In Figure 4 (right), we study the training stability of the model using the optimal hyperparameters (obtained by tuning). Specifically, we report the accuracy of 100 models trained with random initialization. We observe that the scheduler has roughly a 40% smaller interquartile range, suggesting that it leads to more stable training. Moreover, the scheduler significantly improves the lower tail of the accuracy distribution, e.g., the first quartile and minimum (worst-case) accuracy improve by 1% and 3%, respectively.
## 5 Conclusion
We proposed a novel gap-aware learning rate scheduler for adversarial nets. The scheduler monitors the optimality gap (from an ideal network) during training and modifies the base learning rate to keep the gap in check. This is in contrast to the common choices of base learning rates which do not take into account the gap or the current state of the network. Our experiments on GANs for image generation and DANN for domain adaptation demonstrate that the scheduler can significantly improve performance and reduce the need for tuning.
Figure 4: Domain adaptation (MNIST \(\rightarrow\) MNIST-M) using DANN. **[Left]** Test accuracy of the best model as a function of the tuning budget. The 99% confidence intervals are estimated using 5000 bootstrap samples. **[Right]** Training stability: test accuracy of 100 models trained using random initialization and optimal hyperparameters.
Figure 3: Sensitivity plots for the scheduler applied to NSGAN, LSGAN, and WGAN on MNIST. The x-axis is on a log scale. When varying each parameter, we fix the other parameters to the default values. We repeat training 50 times (using random seeds) and report the mean and standard error (represented by the shaded region). A star represents the default parameter value used in the experiments. |
2310.00298 | Compilation Semantics for a Programming Language with Versions | Programming with versions is a paradigm that allows a program to use multiple
versions of a module so that the programmer can selectively use functions from
both older and newer versions of a single module. Previous work formalized
$\lambda_{\mathrm{VL}}$, a core calculus for programming with versions, but it
has not been integrated into practical programming languages. In this paper, we
propose VL, a Haskell-subset surface language for $\lambda_{\mathrm{VL}}$ along
with its compilation method. We formally describe the core part of the VL
compiler, which translates from the surface language to the core language by
leveraging Girard's translation, soundly infers the consistent version of
expressions along with their types, and generates a multi-version interface by
bundling specific-version interfaces. We conduct a case study to show how VL
supports practical software evolution scenarios and discuss the method's
scalability. | Yudai Tanabe, Luthfan Anshar Lubis, Tomoyuki Aotani, Hidehiko Masuhara | 2023-09-30T08:15:11Z | http://arxiv.org/abs/2310.00298v1 | # Compilation Semantics for a Programming Language with Versions
###### Abstract
_Programming with versions_ is a paradigm that allows a program to use multiple versions of a module so that the programmer can selectively use functions from both older and newer versions of a single module. Previous work formalized \(\lambda_{\mathrm{VL}}\), a core calculus for programming with versions, but it has not been integrated into practical programming languages. In this paper, we propose VL, a Haskell-subset surface language for \(\lambda_{\mathrm{VL}}\) along with its compilation method. We formally describe the core part of the VL compiler, which translates from the surface language to the core language by leveraging Girard's translation, soundly infers the consistent version of expressions along with their types, and generates a multi-version interface by bundling specific-version interfaces. We conduct a case study to show how VL supports practical software evolution scenarios and discuss the method's scalability.
Keywords: Type system · Type inference · Version control system.
## 1 Introduction
Updating dependent software packages is one of the major issues in software development. Even though a newer version of a package brings improvements, it also brings the risk of breaking changes, which can make the entire software defective.
We argue that this issue originates from the principle of most programming languages that only allow the use of one version of a package at a time. Due to this principle, developers are faced with the decision to either update to a new, improved version of a package that requires many changes or to remain with an older version. The problem gets worse when a package is indirectly used. This dilemma often results in delays in adopting upgrades, leading to stagnation in software development and maintenance [16, 2].
#### Programming with versions
This paradigm [28, 29, 15, 30] is a recent proposal that allows programming languages to support multiple versions of programming elements at a time so that the developer can flexibly cope with incompatible changes. \(\lambda_{\mathrm{VL}}\) is the core calculus, in which a _versioned value_ encapsulates multiple versions of a value (including a function value). The \(\lambda_{\mathrm{VL}}\) type system checks the consistency of each term so that a value produced in one version is always passed to functions in the same version. The calculus and the type system design are based on coeffect calculus [3, 20].
While \(\lambda_{\mathrm{VL}}\) offers the essential language constructs to support multiple versions in a program, the language is far from practical. For example, with multiple versions of a module, each version of the function must be manually represented inside a versioned value (i.e., a record-like expression). \(\lambda_{\mathrm{VL}}\) is as simple as lambda calculus, yet it has a verbose syntax due to the coeffect calculus. In short, there are aspects of versioning in \(\lambda_{\mathrm{VL}}\) that a surface language compiler can automate.
We propose the functional language VL as a surface language for \(\lambda_{\mathrm{VL}}\) along with its compilation method. In VL, a function name imported from an external module represents a multi-version term, where each occurrence of the function name can reference a different version of the function. The VL compiler translates a program into an intermediate language VLMini, a version-label-free variant of \(\lambda_{\mathrm{VL}}\), determines the version for each name occurrence based on a type and version inference algorithm, and translates it back into a version-specialized Haskell program. VL also offers the constructs to explicitly control versions of expressions, which are useful to keep using an older version for some reason.
This paper presents the following techniques in VL: (a) _an application of Girard's translation_ for translating VL into VLMini, (b) _the bundling_ for making a top-level function act as a versioned value, and (c) _a type and version inference algorithm_ for identifying the version of each expression with respect to the \(\lambda_{\mathrm{VL}}\) type system. Finally, we prove the soundness of the inference system and implement a VL compiler. Code generation converts a VL program into a version-specialized Haskell program using the solution obtained from z3 [18].
Paper Organization. Section 2 introduces incompatibility issues and the fundamental concepts of programming with versions in \(\lambda_{\mathrm{VL}}\) and VL. Section 3 introduces bundling and Girard's translation. Section 4 presents an algorithmic version inference for VL. Section 5 features an implementation of VL, and Section 6 introduces a case study that simulates an incompatible update made in a Haskell library. Finally, Section 7 discusses further language development, presents related work, and concludes the paper.
## 2 Overview
### Motivating Example
First, we will explain a small example to clarify incompatibility issues. Consider a scenario where an incompatible change is made to a dependent package.
Figure 1 shows the package dependencies in a file explorer App based on hash-based file search. This functionality is developed using the system library Dir and the cryptography library Hash. For simplicity, we equate packages and modules here (each package consists of a single module), and we only focus on the version of Hash. The pseudocode is written in a Haskell-like language.
Before its update, App depends on version 1.0.0 of Hash (denoted by \(\dashrightarrow\)). The App's main function implements file search by a string from standard input using mkHash and exists. The function mkHash is in version 1.0.0 of Hash, and it generates a hash value using the MD5 algorithm from a given string. Hash also provides a function match that determines if the argument string and hash value match under mkHash. The function exists is in version 1.0.0 of Dir, which is also dependent on version 1.0.0 of Hash, and it determines if a file with a name corresponding to a given hash exists.
Due to security issues, the developer of App updated Hash to version 2.0.0 (denoted by \(\longrightarrow\)). In version 2.0.0 of Hash, SHA-3 is adopted as the new hash algorithm. Since Dir continues to use version 1.0.0 of Hash, App needs two different versions of Hash. Various circumstances can lead to this situation: Dir may have already discontinued maintenance, or functions in Dir, other than exists, might still require the features provided by version 1.0.0 of Hash.
Figure 1: Minimal module configuration before and after the dependency update, causing an error due to inconsistent version expectations on the dependent package.
Although the update does not modify App, it causes errors within App. Even if a file with an input filename exists, the program returns a Not Found error, contrary to the expected behavior. The cause of the unexpected output lies in the difference between the two versions required within main. In line 6 of App, an SHA-3 hash value is generated by mkHash and assigned to digest. Since exists evaluates hash equivalence using MD5, exists digest compares hashes generated by different algorithms, evaluating to false.
This example highlights the importance of version compatibility when dealing with functions provided by external packages. Using different versions of Hash in separate program parts is fine, but comparing results may be semantically incorrect. Even more subtle changes than those shown in Figure 1 can lead to significant errors, especially when introducing side effects or algorithm modifications that break the application's implicit assumptions. Manually managing version compatibility for all external functions is unfeasible.
In practical programming languages, dependency analysis is performed before the build process to prevent such errors, and package configurations requiring multiple versions of the same package are rejected. However, this approach tends towards conservative error reporting. In cases where a core package, which many other libraries depend on, receives an incompatible change, no matter how minuscule, coordinated updates of diverse packages are required across the entire package ecosystem [2, 29, 31].
### \(\lambda_{\text{VL}}\)
\(\lambda_{\text{VL}}\) [28, 29] is a core calculus designed to follow two principles: (1) enabling simultaneous usage of multiple versions of a package, and (2) ensuring version consistency within a program. \(\lambda_{\text{VL}}\) works by encapsulating related terms across multiple versions into a record-like term, with each inner term tagged by a label indicating the specific module version. Record-like terms that can be accessed at any of their several versions are referred to as _versioned values_, and the associated labels are called _version labels_.
#### 2.2.1 Version Labels
Figure 2 shows the syntax of \(\lambda_{\text{VL}}\). Given modules and their versions, the corresponding set of version labels characterizes the program variations a versioned value can hold. In \(\lambda_{\text{VL}}\), version labels are implicitly generated for all external module-version combinations, in which \(M_{i}\) is unique, with the universal set of these labels denoted by \(\mathcal{L}\). Specifically, in the example illustrated in Figure 1, \(\mathcal{L}=\{l_{1},l_{2}\}\) and \(l_{1}=\{\texttt{Hash}=1.0.0,\texttt{Dir}=1.0.0\},l_{2}=\{\texttt{Hash}=2.0.0,\texttt{Dir}=1.0.0\}\). The size of \(\mathcal{L}\) is proportional to \(V^{M}\), where \(M\) is the number of modules and \(V\) is the maximum number of versions per module (e.g., three modules with two versions each yield \(2^{3}=8\) labels).
#### 2.2.2 Syntax of \(\lambda_{\text{VL}}\)
\(\lambda_{\text{VL}}\) extends \(\ell\mathcal{R}\)PCF [3] and GrMini [20] with additional terms that facilitate introducing and eliminating versioned values. Versioned values can be introduced through versioned records \(\{\overline{l_{i}=t_{i}}\}\) and promotions \([t]\). A versioned record encapsulates related definitions \(t_{1},\ldots,t_{n}\) across multiple versions
and their version labels \(l_{1},\ldots,l_{n}\). For instance, the two versions of mkHash in Figure 1 can be bundled as the following versioned record.
\[\mathit{mkHash}\quad:=\quad\begin{array}{lcl}\{l_{1}=\lambda s.\{-\text{ make MD5 hash }-\},\\ l_{2}=\lambda s.\{-\text{ make SHA-3 hash }-\}\}\end{array}\]
In \(\lambda_{\text{VL}}\), programs are constructed via function application of versioned values. A function application of mkHash to the string s can be written as follows.
\[\mathit{app}\quad:=\quad\begin{array}{lcl}\textbf{let}\ [mkHash^{\prime}]=mkHash\ \textbf{in}\\ \textbf{let}\ [s]=[\text{``compiler.vl''}]\ \textbf{in}\ [mkHash^{\prime}\,s]\end{array}\]
This program (\(\mathit{app}\) hereafter) makes a hash for the string "compiler.vl" and is available for both \(l_{1}\) and \(l_{2}\). The contextual let-binding **let**\([x]=t_{1}\)**in**\(t_{2}\) provides the elimination of versioned values by binding the versioned value of \(t_{1}\) to \(x\), thus making it accessible in \(t_{2}\). Promotion \([t]\) offers an alternative way to introduce versioned values, making any term \(t\) act as a versioned value.
The evaluation of the terms \(t_{i}\) stored in a versioned value \(\{\overline{l_{i}=t_{i}}\}\) or \([t]\) is postponed until a specific version label is later specified. To proceed with the postponed evaluation of a versioned value, we use extraction \(u.l_{k}\). Extraction specifies one version label \(l_{k}\) for the versioned value \(u\) and recursively extracts the inner term \(t_{k}\) corresponding to \(l_{k}\) from \(\{\overline{l_{i}=t_{i}}\}\), and \(t\) from \([t]\), as follows.
\[\mathit{app}\#l_{1}\quad:=\quad\begin{array}{lcl}\textbf{let}\ [mkHash^{\prime}]=mkHash\ \textbf{in}\\ \textbf{let}\ [s]=[\text{``compiler.vl''}]\ \textbf{in}\ [mkHash^{\prime}\,s].l_{1}\\ \longrightarrow^{*}\quad(\lambda s.\{-\text{ make MD5 hash }-\})\text{``compiler.vl''}\\ \longrightarrow\quad\text{ 4dcb6ebe3c6520d1f57c906541cf3823}\end{array}\]
Consequently, \(\mathit{app}\#l_{1}\) evaluates into an MD5 hash corresponding to \(l_{1}\).
#### 2.2.3 Type of Versioned Values
The type of a versioned value is expressed as \(\Box_{r}A\), assigning a set of version labels \(r\), called _version resources_, to a type \(A\). Intuitively, the type of a versioned value represents the versions available to that versioned value. For example, _mkHash_ and \(\mathit{app}\) are typed as follows.
\[\mathit{mkHash}\,:\,\Box_{\{l_{1},l_{2}\}}\ (\mathsf{String}\rightarrow\mathsf{String})\qquad\mathit{app}\,:\,\Box_{\{l_{1},l_{2}\}}\ \mathsf{String}\]
Figure 2: The syntax of \(\lambda_{\text{VL}}\).
The types have \(\{l_{1},l_{2}\}\) as their version resource, illustrating that the versioned values have definitions of \(l_{1}\) and \(l_{2}\). For function application, the type system computes the intersection of the version resource of subterms. Since the promoted term is considered to be available in all versions, the version resource of the entire function application indicates \(\{l_{1},l_{2}\}=\{l_{1},l_{2}\}\cap\mathcal{L}\).
For extractions, the type system verifies if the version resource contains the specified version as follows.
\[\mathit{app}\#l_{1}\,:\,\mathsf{String}\qquad\mathit{app}\#l_{3}\,:\,(\mathit{rejected})\]
Assuming \(\mathcal{L}=\{l_{1},l_{2},l_{3}\}\), \(\mathit{app}\#l_{3}\) is rejected by type checking because the version resource of \(\mathit{app}\) does not contain \(l_{3}\). Conversely, \(\mathit{app}\#l_{1}\) is well-typed, but note that the resultant type loses its version resource. This reflects the design principle that an extracted value may subsequently be used in any version.
The \(\lambda_{\mathrm{VL}}\) type system incorporates the notion of version consistency in addition to the standard notions of preservation and progress. Proofs of these theorems can be found in Appendix 0.C.
### Programming with Versions in VL
Our contributions bring the benefits of programming with versions to VL, a \(\lambda\)-calculus-based functional language. To achieve this, we develop a compilation
Figure 3: The programs in Figure 1 in VL.
method between lambda calculus and VLMini, a version-label-free variant of \(\lambda_{\text{VL}}\), and a version inference algorithm to infer the appropriate version of expressions.
In VL, (1) all versions are available for every module, and (2) the version of each expression is determined by expression-level dependency analysis. This approach differs from existing languages that determine one version for each dependent package. Figure 3 shows how the programs in Figure 1 are interpreted in VL. The VL compiler bundles the interfaces of multiple versions and generates a cross-version interface to make external functions available in multiple versions. The VL type system enforces version consistency in main and selects a newer version if multiple versions are available. Thus, it gives the version label \(\{\texttt{Hash}=2.0.0,\texttt{Dir}=1.0.0\}\) to the dependent expressions of main. As a result, since the Hash version referenced from Dir is no longer limited to 1.0.0, exists digest is evaluated using SHA-3 under the context of Hash version 2.0.0.
Furthermore, VL provides _version control terms_ to convey the programmer's version intentions to the compiler. For example, to force the evaluation in Figure 3 to use MD5, a programmer can rewrite line 7 of App to wrap the call in a version annotation:

\[\textbf{version}\ \{\texttt{Hash}=1.0.0\}\ \textbf{of}\ (\texttt{exists}\ \texttt{digest})\]
The program dictates that exists digest is evaluated within the context of the Hash version 1.0.0. Consequently, both mkHash and match, which depend on exists digest, are chosen to align with version 1.0.0 of Hash. Moreover, VL provides unversion t. It eliminates the dependencies associated with term t, facilitating its collaboration with other versions under the programmer's responsibility, all while maintaining version consistency within its subterm. Thus, VL not only ensures version consistency but also offers the flexibility to control the version of a particular part of the program.
## 3 Compilation
The entire translation consists of three parts: (1) _Girard's translation_, (2) an _algorithmic type inference_, and (3) _bundling_. Figure 4 shows the translation process of a single module. First, through Girard's translation, each version of the
Figure 4: The translation phases for a single module with multiple versions.
VL program undergoes a version-wise translation into a VLMini program. Second, the type inference synthesizes types and constraints for top-level symbols. Variables imported from external modules reference the bundled interface generated in the subsequent step. Finally, to make the external variables act as multi-version expressions, bundling consolidates each version's interface into one VLMini interface. These translations are carried out in order, starting from the most downstream modules of the dependency tree. By resolving all constraints up to the main module, the appropriate version for every external variable is determined.
It is essential to note that the translations focus on generating constraints for dispatching external variables into version-specific code. While implementing versioned records in \(\lambda_{\mathrm{VL}}\) presents challenges, such as handling many version labels and their code clones, our method is a constraint-based approach in VLMini that enables static inference of version labels without their explicit declaration.
In the context of coeffect languages, constraint generation in VL can be seen as the automatic generation of type declarations paired with resource constraints. Granule [20] can handle various resources as coeffects, but it requires type declarations to indicate resource constraints. VL restricts its resources solely to the set of version labels. This specialization enables the automatic collection of version information from external sources outside the codebase.
### An Intermediate Language, VLMini
#### 3.1.1 Syntax of VLMini
Figure 5 shows the syntax of VLMini. VLMini encompasses all the terms of \(\lambda_{\mathrm{VL}}\) except for versioned records \(\{\overline{l_{i}=t_{i}}\}\), the intermediate term \(\langle\overline{l_{i}=t_{i}}\,|\,l_{k}\rangle\), and extractions \(t.l_{k}\). As a result, its terms are analogous to those in \(\ell\mathcal{R}\mathrm{PCF}\) [3] and GrMini [20]. However, VLMini is specialized to treat version resources as coeffects. We also introduce data constructors via the introduction form \(C\,t_{1},...,t_{n}\) and the elimination form **case \(t\) of \(\overline{p_{i}\mapsto t_{i}}\)** for lists and pairs, as well as the version control terms **unversion \(t\)** and **version \(\{\overline{M_{i}=V_{i}}\}\) of \(t\)**. Here, the contextual let of \(\lambda_{\mathrm{VL}}\) is syntactic sugar for the application of a lambda abstraction with a promoted pattern.
\[\textbf{let}\ [x]=t_{1}\ \textbf{in}\ t_{2}\triangleq(\lambda[x].t_{2})\,t_{1}\]
Types, version labels, and version resources are almost the same as in \(\lambda_{\mathrm{VL}}\). Type constructors are added to the types, mirroring the data constructors added to the terms. The remaining difference from \(\lambda_{\mathrm{VL}}\) is type variables \(\alpha\). Since VLMini is a monomorphic language, type variables act as unification variables: they are introduced during type inference and are expected to be resolved to either concrete types or sets of version labels by constraint resolution. To distinguish these two kinds of type variables, we introduce kinds \(\kappa\). The kind Labels is given to type variables that can take a set of labels \(\{\overline{l_{i}}\}\) and distinguishes them from type variables of kind Type during algorithmic type inference.
#### 3.1.2 Constraints
The lower part of Figure 5 shows the constraints generated through bundling and type inference. Dependency constraints comprise _variable dependencies_ and _label dependencies_ in addition to propositional formulae. Variable dependencies \(\alpha\sqsubseteq\alpha^{\prime}\) require that if the version label for \(\alpha^{\prime}\) expects a specific version of a module, then \(\alpha\) expects the same version. Similarly, label dependencies \(\alpha\preceq\langle\!\langle\overline{M_{i}=V_{i}}\rangle\!\rangle\) require that any version label expected for \(\alpha\) must map each \(M_{i}\) to \(V_{i}\). For example, assuming that versions \(1.0.0\) and \(2.0.0\) exist for both modules \(\mathtt{A}\) and \(\mathtt{B}\), the minimal upper bound of the version labels satisfying \(\alpha\preceq\langle\!\langle\mathtt{A}=1.0.0\rangle\!\rangle\) is \(\{\{\mathtt{A}=1.0.0,\mathtt{B}=1.0.0\},\{\mathtt{A}=1.0.0,\mathtt{B}=2.0.0\}\}\). If constraint resolution succeeds, \(\alpha\) is specialized to either of these two labels. \(\Theta\) is a set of type equations resolved by type unification.
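To make the example concrete, the following self-contained Haskell fragment enumerates the candidate version labels and filters them by the label dependency \(\alpha\preceq\langle\!\langle\mathtt{A}=1.0.0\rangle\!\rangle\). The encoding is our own illustration, not the VL implementation:

```
import qualified Data.Map as Map

type Label = Map.Map String String  -- module name, selected version

-- A label dependency <<M = V>> holds of a label iff it maps M to V.
satisfies :: Label -> (String, String) -> Bool
satisfies l (m, v) = Map.lookup m l == Just v

-- All version labels over modules A and B, each with versions 1.0.0 / 2.0.0.
candidates :: [Label]
candidates = [ Map.fromList [("A", a), ("B", b)]
             | a <- ["1.0.0", "2.0.0"], b <- ["1.0.0", "2.0.0"] ]

-- filter (`satisfies` ("A", "1.0.0")) candidates
--   keeps exactly {A=1.0.0, B=1.0.0} and {A=1.0.0, B=2.0.0}
```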
### Girard's Translation for VLMini
We extend Girard's translation from VL (a plain lambda calculus) to VLMini, following Orchard's approach [20].
\[\llbracket n\rrbracket\equiv n\qquad\llbracket x\rrbracket\equiv x\qquad\llbracket\lambda x.t\rrbracket\equiv\lambda[x].\llbracket t\rrbracket\qquad\llbracket t\ s\rrbracket\equiv\llbracket t\rrbracket\,[\llbracket s\rrbracket]\]
The translation replaces VL's lambda abstractions and function applications with VLMini's abstractions over a promoted pattern and promotions of the argument, respectively. At the level of types, it replaces every occurrence of \(A\to B\) with \(\Box_{r}A\to B\) for some version resource \(r\). The translation thus inserts a syntactic annotation \([*]\) at each location where a version resource needs to be addressed; the subsequent type inference computes the resource at each such location and produces constraints that ensure version consistency at that point.

Figure 5: The syntax of VLMini.
Girard's original translation [11] is well known as a translation from the simply-typed \(\lambda\)-calculus to an intuitionistic linear calculus. It replaces every intuitionistic arrow \(A\to B\) with \(!A\multimap B\), unboxing via let-in at abstractions and promoting at applications [20].
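As a concrete sketch, the version-wise translation can be phrased as a simple recursive function over the two ASTs. The datatype and constructor names below are our own simplification, not the VL compiler's actual definitions:

```
-- Source (VL) and target (VLMini) terms, restricted to the fragment above.
data VL     = Num Int | Var String | Lam String VL | App VL VL

data VLMini = MNum Int | MVar String
            | MLamBox String VLMini   -- \[x].t : abstraction over a promoted pattern
            | MApp VLMini VLMini
            | MPromote VLMini         -- [t] : promotion

girard :: VL -> VLMini
girard (Num n)   = MNum n
girard (Var x)   = MVar x
girard (Lam x t) = MLamBox x (girard t)                   -- [[\x.t]] = \[x].[[t]]
girard (App t s) = MApp (girard t) (MPromote (girard s))  -- [[t s]] = [[t]] [ [[s]] ]
```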
### Bundling
Bundling produces an interface encompassing the types and versions of every module version, allowing top-level symbols to act as multi-version expressions. During this process, bundling reviews the interfaces across module versions, identifies symbols that have the same name and the same type once the \(\Box_{r}\) introduced by Girard's translation are removed, and treats them as multiple versions of a single symbol (also discussed in Section 7). In our constraint-based approach, bundling attaches label dependencies derived from the module versions, ensuring they align with the version information in the typing rule for versioned records of \(\lambda_{\mathrm{VL}}\).
For example, assume that an _id_ function taking an \(\mathsf{Int}\) argument is available in versions 1.0.0 and 2.0.0 of module \(\mathtt{M}\), as follows:
\[\mathit{id}:\Box_{\alpha_{1}}(\Box_{\alpha_{2}}\mathsf{Int}\rightarrow\mathsf{Int})\ |\ \mathcal{C}_{1}\qquad\text{(version 1.0.0)}\]
\[\mathit{id}:\Box_{\beta_{1}}(\Box_{\beta_{2}}\mathsf{Int}\rightarrow\mathsf{Int})\ |\ \mathcal{C}_{2}\qquad\text{(version 2.0.0)}\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are version resource variables produced by type inference; they capture the version resources of _id_ and of its argument in version 1.0.0, and \(\mathcal{C}_{1}\) is the set of constraints that the resource variables of version 1.0.0 must satisfy. Likewise for \(\beta_{1}\), \(\beta_{2}\), and \(\mathcal{C}_{2}\). Since the type of _id_ in both versions becomes \(\mathsf{Int}\rightarrow\mathsf{Int}\) via Girard's translation, the two interfaces can be bundled as follows:
\[\mathit{id}:\Box_{\gamma_{1}}(\Box_{\gamma_{2}}\mathsf{Int}\rightarrow\mathsf{Int})\ |\ \mathcal{C}_{1}\land\mathcal{C}_{2}\land\Big(\ (\gamma_{1}\preceq\langle\!\langle\mathtt{M}=1.0.0\rangle\!\rangle\land\gamma_{1}\sqsubseteq\alpha_{1}\land\gamma_{2}\sqsubseteq\alpha_{2})\]
\[\lor\ (\gamma_{1}\preceq\langle\!\langle\mathtt{M}=2.0.0\rangle\!\rangle\land\gamma_{1}\sqsubseteq\beta_{1}\land\gamma_{2}\sqsubseteq\beta_{2})\ \Big)\]
where \(\gamma_{1}\) and \(\gamma_{2}\) are fresh variables introduced by this conversion for the bundled _id_ interface, together with the label and variable dependencies they must satisfy; \(\gamma_{1}\) captures the version resource of the bundled _id_. The generated label dependencies \(\gamma_{1}\preceq\langle\!\langle\mathtt{M}=1.0.0\rangle\!\rangle\) and \(\gamma_{1}\preceq\langle\!\langle\mathtt{M}=2.0.0\rangle\!\rangle\) state that _id_ is available in either version 1.0.0 or 2.0.0 of \(\mathtt{M}\); such label dependencies are generated exclusively4 during bundling. The new variable dependencies state that the bundled interface of _id_ depends on one of the two version interfaces, made explicit by pairing the new resource variables with the corresponding resource variables of each version. These constraints are retained globally, and the type of the bundled interface is used for type-checking modules that import _id_.
Footnote 4: In the type checking rules for **version \(\mathit{l}\) of \(t\)**, type inference exceptionally generates label dependencies. Please see Appendix 0.B.4
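A minimal sketch of this bundling step, again in our own illustrative encoding (fresh variables come from a counter, only the outermost resource variable is tracked, and the real compiler's data structures differ):

```
import Control.Monad.State (State, get, put)

type RVar = Int

data Constraint
  = VarDep RVar RVar                 -- g [= a : inherit the version choices of a
  | LabelDep RVar (String, String)   -- g <<M = V>> : pin module M to version V
  | CAnd Constraint Constraint
  | COr Constraint Constraint

fresh :: State Int RVar
fresh = do
  n <- get
  put (n + 1)
  pure n

-- Bundle one symbol of module m: each triple is (version, resource variable
-- of that version's interface, constraints carried by that version).
-- Assumes at least one version exists (foldr1 is partial on []).
bundle :: String -> [(String, RVar, Constraint)] -> State Int (RVar, Constraint)
bundle m versions = do
  g <- fresh
  let alt (v, a, c) = LabelDep g (m, v) `CAnd` VarDep g a `CAnd` c
  pure (g, foldr1 COr (map alt versions))
```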
## 4 Algorithmic Type Inference
We develop an algorithmic type inference for VLMini, derived from the declarative type system of \(\lambda_{\mathrm{VL}}\) [28, 29]. The inference consists of two judgments: _type synthesis_ and _pattern type synthesis_. The judgment forms are similar to those of Gr [20], which is likewise based on a coeffect calculus. Whereas Gr provides type-checking rules in a bidirectional style [8, 9] to accommodate resource-constraint annotations and performs unification inside type inference, VLMini provides only synthesis rules, and unification is performed after type inference. In addition, Gr supports user-defined data types and multiple computational resources, while VLMini supports only built-in data structures and specializes in version resources. The inference system is sound with respect to the declarative typing of \(\lambda_{\mathrm{VL}}\); the proof is detailed in Appendix 0.D.
Type synthesis takes type variable kinds \(\Sigma\), a typing context \(\Gamma\) of term variables, and a term \(t\) as inputs. Type variable kinds \(\Sigma\) are added to account for distinct unification variables for types and version resources. The synthesis produces as outputs a type \(A\), type variable kinds \(\Sigma^{\prime}\), type constraints \(\Theta\), and dependency constraints \(\mathcal{C}\). The type variable kinds \(\Sigma\) and \(\Sigma^{\prime}\) always satisfy \(\Sigma\subseteq\Sigma^{\prime}\) due to the additional type variables added in this phase.
Figure 6: VLMini algorithmic typing.

Pattern type synthesis takes a pattern \(p\), type variable kinds \(\Sigma\), and a resource environment \(R\) as inputs. It synthesizes as outputs a typing context \(\Gamma\), type variable kinds \(\Sigma^{\prime}\), and type and dependency constraints \(\Theta\) and \(\mathcal{C}\). Pattern type synthesis appears in the inference rules for \(\lambda\)-abstractions and case expressions; it generates a typing context from the input pattern \(p\) for typing \(\lambda\)-bodies and branch expressions in case statements. When checking a nested promoted pattern, the resource context \(R\) captures the version resources inside the pattern.
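Schematically, the two judgments have the following shapes (our notation, summarizing the inputs to the left of \(\Rightarrow\) and the outputs to the right; the pattern judgment additionally takes the matched type \(A\) as input):

\[\Sigma;\,\Gamma\vdash t\Rightarrow A;\ \Sigma^{\prime};\ \Theta;\ \mathcal{C}\qquad\qquad\Sigma;\,R\vdash p:A\Rightarrow\Gamma;\ \Sigma^{\prime};\ \Theta;\ \mathcal{C}\]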
### Pattern Type Synthesis
Pattern type synthesis conveys the version resources captured by promoted patterns to the output typing context. The rules fall into two categories, according to whether the input resource context \(R\) carries resources. The base rules are pVar and p\(\Box\); the remaining rules are resource-aware versions of the corresponding base rules. The resource-aware rules assume they are triggered within a promoted pattern and collect the version resource \(r\) from the resource context.
The rules pVar and [pVar] for variables differ in whether the variable pattern occurs within a promoted pattern. pVar has no resources in the resource context because the pattern is not inside a promoted pattern; it therefore produces the typing context \(x:A\). [pVar] applies to a variable pattern within a promoted pattern, where a resource \(r\) is recorded in the resource context; the rule attaches the collected resource \(r\) to the type \(A\) and outputs the versioned assumption \(x:[A]_{r}\).
The rule p\(\Box\) for promoted patterns propagates version resources to the sub-pattern synthesis. The input type \(A\) is expected to be a versioned type, so the rule generates fresh type variables \(\alpha\) and \(\beta\) and performs the sub-pattern synthesis treating \(A\) as \(\Box_{\alpha}\beta\); the resource \(\alpha\) captured by the promoted pattern is recorded in the resource context. Finally, the rule unifies \(A\) with \(\Box_{\alpha}\beta\) and produces the type constraints \(\Theta^{\prime}\) for type refinement.
### Type Synthesis
The algorithmic typing rules for VLMini, derived from the declarative typing rules of \(\lambda_{\mathrm{VL}}\), are listed in Figure 6. We explain a few important rules in excerpts.
The rule \(\Rightarrow_{\textsc{Abs}}\) generates a type variable \(\alpha\) and, from the binding pattern \(p\) of the \(\lambda\)-abstraction, the typing context \(\Gamma^{\prime}\). The rule then synthesizes a type \(B\) for the \(\lambda\)-body under \(\Gamma^{\prime}\), and the resulting type of the \(\lambda\)-abstraction is \(\alpha\to B\) with the tentatively generated \(\alpha\). Thanks to the syntactic sugar above, the typing of contextual-let is subsumed by \(\Rightarrow_{\textsc{Abs}}\); conversely, \(\lambda\)-abstraction binds not just a single variable but a general pattern, leveraging pattern type synthesis as extended by promoted patterns and data constructors.
The rule \(\Rightarrow_{\textsc{Pr}}\) is the only rule in the entire type inference algorithm that introduces dependency constraints. Intuitively, it infers consistent version resources for the typing context \(\Gamma\). Since we implicitly allow weakening, we generate constraints from the context \(\Gamma^{\prime}\) that contains only the free variables of \(t\), produced by _context grading_, denoted \([\Gamma]_{\mathsf{Labels}}\). Context grading converts every assumption in the input environment into a versioned assumption, assigning the empty set to assumptions that carry no version resource.
Finally, the rule generates constraints from \(\Gamma^{\prime}\) and a fresh type variable \(\alpha\) using the constraint generation judgment defined in the lower part of Figure 6. Its rules assert that the input type variable \(\alpha\) lies below all the resources of the versioned assumptions in the input environment \(\Gamma\). The following judgment is the simplest example, triggered by the type synthesis of \([f\,x]\).

\[r:\mathsf{Labels},\ s:\mathsf{Labels}\,\vdash\,\alpha\sqsubseteq_{c}f:[\mathsf{Int}\to\mathsf{Int}]_{r},x:[\mathsf{Int}]_{s}\rhd\alpha\sqsubseteq r\wedge\alpha\sqsubseteq s\]

The inputs are the type variable \(\alpha\) and the type environment \((f:[\mathsf{Int}\to\mathsf{Int}]_{r},x:[\mathsf{Int}]_{s})\). In this case, the rules generate a variable dependency for each of \(r\) and \(s\), the resources of the two assumptions, and return the constraints combined with \(\wedge\).
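This judgment can be mirrored by a small toy function; the encoding is ours, with a dependency pair (a, r) standing for the constraint a \(\sqsubseteq\) r:

```
type RVar = String

-- Each graded assumption x : [A]_r in the context contributes alpha [= r.
genConstraints :: RVar -> [(String, RVar)] -> [(RVar, RVar)]
genConstraints alpha ctx = [ (alpha, r) | (_, r) <- ctx ]

-- genConstraints "a" [("f", "r"), ("x", "s")] ==> [("a","r"), ("a","s")]
```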
### Extensions
#### 4.3.1 Version Control Terms
The rule for **version \(l\) of \(t\)** uses the same trick as \(\Rightarrow_{\textsc{Pr}}\): it generates label dependencies from the input environment \(\Gamma\) to \(\langle\!\langle l\rangle\!\rangle\). Since **version \(l\) of \(t\)** only instructs the type inference system, the resulting type is that of \(t\). **unversion \(t\)** removes the version resource from the type of \(t\), which is assumed to be a versioned value; we extend Girard's translation so that \(t\) always is one. Since a new resource variable is attached to the term by the promotion surrounding **unversion**, the inference system guarantees version consistency inside and outside the **unversion** boundary. The full list of rules is provided in Appendix 0.B.4.
#### 4.3.2 Data Structures
To support data structures, Hughes et al. observe that coeffectful data types must account for the interaction between the resources inside and outside a constructor [13]. They introduce a derivation algorithm of _push_ and _pull_ operations for an arbitrary type constructor \(K\) to address this.
```
push : forall {a b : Type, r : Labels} . (a, b) [r] -> (a [r], b [r])
push [(x, y)] = ([x], [y])

pull : forall {a b : Type, n m : Labels} . (a [n], b [m]) -> (a, b) [n ⊓ m]
pull ([x], [y]) = [(x, y)]
```
Following their approach, we develop inference rules for pairs and lists. When a function \(f\) is applied to a data-structure value \(p\), the application \(f\,p\) is implicitly interpreted as \(f\,(\mathit{pull}\;p)\). Dually, a pattern match on a data-structure value, **case \(p\) of \(\overline{p_{i}\mapsto t_{i}}\)**, is interpreted as **case \((\mathit{push}\;p)\) of \(\overline{p_{i}\mapsto t_{i}}\)**. Appendix 0.B.5 provides the complete set of extended rules.
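Erasing the grades, push and pull are just re-associations of a box around a pair; the following plain Haskell rendering (our own; the grades \(r\) and \(n\sqcap m\) exist only in VLMini's types, so they do not appear here) shows the data plumbing:

```
newtype Box a = Box a   -- erased stand-in for the graded modality Box_r

push :: Box (a, b) -> (Box a, Box b)
push (Box (x, y)) = (Box x, Box y)

pull :: (Box a, Box b) -> Box (a, b)
pull (Box x, Box y) = Box (x, y)
```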
## 5 Implementation
We implement the VL compiler5 on GHC (v9.2.4), with haskell-src-exts6 (extended with version control terms) as its parser and z3 [18] as its constraint solver. The VL compiler performs code generation by translating VLMini programs back into the \(\lambda\)-calculus, undoing Girard's translation, and then into Haskell ASTs, using the versions given by the resulting version labels.
**Ad-hoc Version Polymorphism via Duplication.** The VL compiler replicates external variables so that homonymous external variables can be assigned individual versions. Duplication is performed before the type checking of individual versions and renames every external variable, along with the type and constraint environments generated from the import declarations. This ad-hoc conversion is necessary because VLMini is monomorphic and its type inference generates constraints by referring only to a variable's name in the type environment; assigning different versions to homonymous variables therefore requires renaming them as a preliminary step of type inference. A further discussion of version polymorphism can be found in Section 7.
**Constraint Solving with z3.** We use \(\mathrm{sbv}\)7 as the binding to z3. The \(\mathrm{sbv}\) library internally converts constraints into SMT-LIB2 scripts [1] and supplies them to z3. Dependency constraints are represented as vectors of symbolic integers, where the length of each vector equals the number of external modules and the elements are unique integers denoting each module's version number. Constraint resolution identifies the vectors expected for the symbolic variables, corresponding to the labels on which the external identifiers in VL should depend. If more than one label satisfies the constraints, the default action is to select the newer one.
Footnote 7: [https://hackage.haskell.org/package/sbv-9.0](https://hackage.haskell.org/package/sbv-9.0)
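A toy example of this style of encoding, using the sbv API (sat, sIntegers, constrain); the module/version numbering below is our own illustration, not the compiler's actual generated script:

```
import Data.SBV

-- One symbolic integer per external module; its value names the selected
-- version (here 1 for 1.0.0, 2 for 2.0.0). "Hash"/"Dir" follow Figure 3.
pickVersions :: IO SatResult
pickVersions = sat $ do
  [hash, dir] <- sIntegers ["Hash", "Dir"]
  constrain $ hash .== 1 .|| hash .== 2   -- Hash exists in two versions
  constrain $ dir .== 1                   -- Dir exists only in 1.0.0
  pure $ hash .== 2                       -- a label dependency <<Hash = 2.0.0>>
```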
## 6 Case Study and Evaluation
### Case Study
We demonstrate that VL programming achieves the two benefits of programming with versions. The case study simulates, in the VL module **Matrix**, the incompatibility introduced in hmatrix,8 a popular Haskell library for numeric linear algebra and matrix computations, and updates the application **Main**, which depends on **Matrix**, to reflect the incompatible changes.
Footnote 8: [https://github.com/haskell-numerics/hmatrix/blob/master/packages/base/CHANGELOG](https://github.com/haskell-numerics/hmatrix/blob/master/packages/base/CHANGELOG)
Table 1 shows the changes introduced in version 0.16 of hmatrix. Up to version 0.15, hmatrix provided a join function for concatenating multiple vectors. The update from version 0.15 to 0.16 replaced join with vjoin. Moreover, several new functions were introduced.
\begin{table}
\begin{tabular}{c|ccc} version & join & vjoin & udot, sortVector, roundVector \\ \hline \(\leq\) 0.15 & available & undefined & undefined \\ \(\geq\) 0.16 & deleted & available & available \\ \end{tabular}
\end{table}
Table 1: Availability of functions in hmatrix before and after the update.
We implement two versions of Matrix to simulate these backward-incompatible changes in VL. Due to the absence of user-defined types in VL, we represent Vector a and Matrix a as List Int and List (List Int), respectively, using List, a partial port of Data.List from the Haskell standard library.
We implement Main to work with the two conflicting versions of Matrix. The left side of Figure 7 shows a snippet of Main in the middle of updating Matrix from version 0.15.0 to 0.16.0. main uses functions from both versions of Matrix together: join and sortVector are available only in versions 0.15.0 and 0.16.0, respectively, hence Main has conflicting dependencies on both versions of Matrix. It would therefore be impossible to build this program in existing languages unless the developer gave up using either join or sortVector.
* **Detecting Inconsistent Versions**: VL accepts Main in two stages. First, the compiler flags a version-inconsistency error: it is unclear which Matrix version the main function depends on, as join requires version 0.15.0 while sortVector requires version 0.16.0. The error prevents such incompatible version combinations, which are not allowed within a single expression.
* **Simultaneous Use of Multiple Versions**: In this case, using join and sortVector simultaneously is acceptable, as their return values are plain vectors and matrices. We therefore apply **unversion \(t\)** to the term \(t\) that must collaborate with other versions. The right side of Figure 7 shows the rewritten snippet of Main, where sortVector vec is replaced by unversion (sortVector vec). Provided no other part of the program depends on a specific version, we can successfully compile and execute main.
### Scalability of Constraint Resolution
We conducted experiments on the constraint resolution time of the VL compiler. In the experiment, we duplicated a VL module into #mod copies renamed List_i and imported each module sequentially. Every module had the same number of versions, denoted #ver. Each module version was implemented identically to List, with top-level symbols distinguished by the module name, such as concat_List_i. The experiments were performed ten times on a Ryzen 9 7950X running Ubuntu 22.04, with #mod and #ver each ranging from 1 to 5.

Figure 7: Snippets of Main before (left) and after (right) rewriting.
Figure 8 shows the average constraint resolution time. The data suggests that the resolution time increases polynomially (at least quadratically) in both #mod and #ver. Several issues in the current implementation contribute to this inefficiency. First, we employ sbv as the z3 interface, which generates numerous redundant variables in the SMT-LIB2 script: for a program of 2600 LOC (with #mod = 5 and #ver = 5), the VL compiler produces 6090 version resource variables, and the sbv library creates SMT-LIB2 scripts with approximately 210,000 intermediate symbolic variables. Second, z3 solves versions for all AST nodes, whereas the compiler's main focus should be on external variables and the subterms of unversion. Third, the current VL nests the constraint network #mod times, combining the networks with \(\vee\) at each bundling, which results in an overly complex constraint network for ordinary programs. Hence, to accelerate constraint solving, we can develop a more efficient constraint compiler for SMT-LIB2 scripts, implement preprocessing to reduce the constraints, and employ greedy constraint resolution for each module.

Figure 8: Constraint resolution time for the duplicated List by #mod \(\times\) #ver.
## 7 Related Work, Future Work, and Conclusion
#### 7.0.1 Managing Dependency Hell
Mainstream techniques for addressing dependency hell stand in stark contrast to our approach, which manages dependencies at a finer granularity. _Containers_ [17] encapsulate each application with all its dependencies in an isolated environment, a container, allowing multiple library versions to coexist on one physical machine; however, they do not handle internal dependencies within a container. _Monorepositories_ [21, 10] version logically distinct libraries within a single repository, allowing updates across multiple libraries with one commit; this eases testing and bug finding but can lower system modularity.
#### 7.0.2 Toward a Language Considering Compatibility
The next step in this research is to embed compatibility tracking within the language system. The current VL considers different version labels incompatible unless a programmer uses unversion. Since many updates maintain backward compatibility and change only minor parts of the previous version, the existing type system is overly restrictive.
To illustrate, consider Figure 3 again with a longer version history. The module Hash uses the MD5 algorithm for mkHash and match throughout the 1.x.x series but adopts the SHA-3 algorithm in version 2.0.0, leaving the other functions unchanged. The hash produced by mkHash version 1.0.1 (an MD5 hash) is compatible with any MD5 hash from the 1.x.x series, so comparing it using match version 1.0.0 is appropriate. However, the current VL compiler has no mechanism to express such compatibility during constraint resolution; the workaround is unversion, which risks an MD5 hash being used with match version 2.0.0.
One promising approach to conveying compatibilities is to integrate semantic versioning [22] into the type system. If version labels are given this semantics, the hash generated in version 1.0.1 is backward compatible with version 1.0.0. Thus, a type system that respects explicitly declared version compatibilities would let VL accept a broader range of programs.
Reliable version information is essential to this goal. Lam et al. [14] emphasize the need for tool support to manage package modifications and the importance of analyzing compatibility through program analysis. _Delta-oriented programming_ [26, 25, 24] could complement this approach by modularizing the addition, overriding, and removal of programming elements, including application conditions for those modifications. This could yield a sophisticated package system that provides fine-grained compatibility information.
Such a language could be an alternative to existing techniques for automatic updating, collectively known as _adaptation_. These methods generate replacement rules based on structural similarities [5, 32] and extract API replacement patterns from migrated code bases [27]; some techniques have library maintainers record refactorings [7, 12] or provide annotations [4] describing how to update client code. However, the reported success rate of these techniques is less than 20% on average [6].
#### 7.0.3 Supporting Type Incompatibility
One apparent limitation of the current VL is that it does not support _type incompatibilities_: VL forces terms of different versions to have the same type, both in the theory (the typing rules of \(\lambda_{\text{VL}}\)) and in the implementation (bundling in VLMini). Supporting type incompatibility is important because it is one of the top causes of error-inducing incompatibilities [23]. The current VL is designed this way because it retains the principle that equates the types of promotions and versioned records in \(\lambda_{\text{VL}}\), easing the formalization of the semantics.
A promising approach to addressing this could be to decouple version inference from type inference and build the version inference on a polymorphic record calculus [19]. The idea stems from the fact that versioned types \(\Box_{\{l_{1},l_{2}\}}A\) are structurally similar to the record types \(\{l_{1}:A,\,l_{2}:A\}\) of \(\Lambda^{\forall,\bullet}\). Since \(\Lambda^{\forall,\bullet}\) allows different element types for different labels and has concrete inference algorithms with polymorphism, implementing version inference on top of \(\Lambda^{\forall,\bullet}\) would also make VL more expressive.
#### 7.0.4 Adequate Version Polymorphism
In the current VL, the version label of a top-level symbol in an imported module is resolved to a single label for the whole program, even though users can select specific versions of external variables with **unversion** inside the importing module. Consider using a generic function like List.concat in Figure 7. If it is used in one part of the program within the context of **Matrix** version 1.0.0, the solution of the resource variable of List.concat version 1.0.0 is confined to \(\{\texttt{Matrix}=1.0.0,\texttt{List}=\ldots\}\). As a result, List.concat version 1.0.0 cannot be used together with **Matrix** version 2.0.0 elsewhere in the program. This problem becomes apparent when defining a generic module such as a standard library.
Addressing this requires introducing full version polymorphism into the core calculus instead of duplication. The idea is to generate a type scheme by solving the constraints of each module during bundling, and to instantiate the type and resource variables at each occurrence of an external variable. Such resource polymorphism is similar to what is already implemented in Gr [20]; however, unlike Gr, VLMini collects constraints on a per-module basis, so we need a well-defined notion of principal type. This extension is future work.
#### 7.0.5 Conclusion
This paper proposes a method for dependency analysis and version control at the expression level by incorporating versions, previously mere identifiers of packages, into the language semantics. This enables the simultaneous use of multiple versions and identifies programs that violate version consistency at the expression level, which is impossible in conventional languages.
Our next step is to extend version labels, which currently only identify versions, to _semantic versions_, and to treat the notion of compatibility within the language semantics. Analogously to the automatic updates performed by modern build tools based on semantic versioning, this would enable incremental updates carried out step by step at the expression level. Cooperating with existing package managers to collect compatibility information at the expression level would make this goal more feasible. |
2309.04189 | High-power intracavity single-cycle THz pulse generation using thin
lithium niobate | Ultrafast laser driven, single-cycle THz pulsed sources hold immense
potential for scientific and industrial applications; however, their limited
average power hinders their widespread application. In particular, applications
where high repetition rates in the multi-MHz region and beyond are required are
more severely affected, due to the lower pulse energies available for frequency
conversion. In this respect, resonant enhancement both in passive and active
resonators is a well-known technique for boosting the efficiency of nonlinear
frequency conversion; however, this route has remained poorly explored for the
generation of broadband THz pulses due to the inadequacy of typically employed
nonlinear crystals. Here, we demonstrate that thin lithium niobate crystals
used intracavity of multimode diode-pumped mode-locked thin-disk lasers are a
promising platform to circumvent these difficulties. Using a 50-{\mu}m thin
lithium niobate plate intracavity of a compact high-power mode-locked thin-disk
laser, we generate milliwatt-level broadband THz pulses with a spectrum
extending up to 3 THz at 44.8 MHz repetition rate, driven by 264 W of
intracavity average power. This approach opens the door to efficient high-power
single-cycle THz generation using affordable nonlinear crystals at very high
repetition rates, scalable to kilowatt-level driving power with low cost and
complexity. | Yicheng Wang, Tim Vogel, Mohsen Khalili, Samira Mansourzadeh, Kore Hasse, Sergiy Suntsov, Detlef Kip, Clara J. Saraceno | 2023-09-08T08:02:55Z | http://arxiv.org/abs/2309.04189v2 | # High-power intracavity single-cycle THz pulse generation using thin lithium niobate
###### Abstract
Ultrafast laser driven, single-cycle THz pulsed sources hold immense potential for scientific and industrial applications; however, their limited average power hinders their widespread application. In particular, applications where high repetition rates in the multi-MHz region and beyond are required are more severely affected, due to the lower pulse energies available for frequency conversion. In this respect, resonant enhancement both in passive and active resonators is a well-known technique for boosting the efficiency of nonlinear frequency conversion; however, this route has remained poorly explored for the generation of broadband THz pulses due to the inadequacy of typically employed nonlinear crystals. Here, we demonstrate that thin lithium niobate crystals used intracavity of multimode diode-pumped mode-locked thin-disk lasers are a promising platform to circumvent these difficulties. Using a 50-\(\upmu\)m thin lithium niobate plate intracavity of a compact high-power mode-locked thin-disk laser, we generate milliwatt-level broadband THz pulses with a spectrum extending up to 3 THz at 44.8 MHz repetition rate, driven by 264 W of intracavity average power. This approach opens the door to efficient high-power single-cycle THz generation using affordable nonlinear crystals at very high repetition rates, scalable to kilowatt-level driving power with low cost and complexity.
The presence of gain, i.e. driving the nonlinear conversion process inside an optical parametric oscillator (OPO) or mode-locked laser, is in this respect beneficial; however, the nonlinear conversion medium needs to be chosen very carefully to maintain the delicate oscillator and/or mode-locking stability. Early on, periodically-inverted gallium arsenide (GaAs) was placed in an OPO, resulting in mW-level THz average power, at the cost of multi-cycle narrowband THz emission [17] due to limited quasi-phase-matching acceptance bandwidths. Photoconductive switches were placed inside a mode-locked Yb fiber oscillator at 135 mW intracavity power, achieving 4.2 \(\upmu\)W of THz power [18]. Such a photoconductive switch requires rather complex fabrication, and the intracavity driving power is limited in ultrafast fiber oscillators. Efforts were also made with Ti:sapphire oscillators, again with limited success [19, 20, 21], where a maximum of 7 \(\upmu\)W of THz average power was achieved using electrically biased low-temperature GaAs as an emitter [20]. More recently, an Yb-bulk oscillator was used with gallium phosphide (GaP) at 22 W intracavity power to achieve 150 \(\upmu\)W of THz power [22]. However, our recent extracavity study [23] shows that, at high average power, nonlinear and linear absorption in GaP introduce strong thermal effects, resulting in an axicon lens for the near-infrared (NIR) pump laser, which acts as a tremendous source of loss for the oscillator. Nonlinear absorption is well known to act as an 'inverse' saturable absorber for mode locking and can also cause mode-locking instabilities and pulse break-up [24, 25]. Thus, significantly higher average power operation cannot be expected with GaP. In this respect, LN is a more promising material for applying higher average power in the multi-100-W regime: it does not suffer from strong multi-photon absorption (MPA) and has a significantly higher damage threshold and effective nonlinearity. However, due to the large group velocity mismatch (GVM) between the THz and the NIR pulse, the most common scheme used so far is the tilted-pulse-front method [26], which is complex, sensitive to alignment and generally not suitable for intracavity use. Thin LN plates have been employed at very high pulse energies at 5 Hz repetition rate, demonstrating 0.71 \(\upmu\)J THz energy, however with multicycle pulses [27]. Using significantly thinner LN on substrates, another emerging area of research is the use of thin LN in a waveguided geometry for integrated THz photonics [28].
Here we propose and implement the use of thin LN plates for high-power intracavity OR. For this purpose, we choose thin-disk oscillators, as they are the only technology allowing multi-kW intracavity powers, achieving orders of magnitude higher intracavity pulse energies and peak powers than any other oscillator platform at high repetition rate [29, 30]; this makes them an ideal platform for intracavity THz generation. We operate an LN plate intracavity of a Kerr-lens mode-locked (KLM) TDL at 264 W of intracavity average power and achieve 1.3 mW of THz average power at 44.8 MHz, surpassing the state-of-the-art results of intracavity THz generation by an order of magnitude. This is achieved with a multimode pump power of 314 W, effectively trading ultrashort-pulse average power for multimode cw power and thus significantly reducing the cost and complexity of the setup.
The crystals used in this experiment are fabricated from a 3-inch, 50 \(\upmu\)m thick x-cut 5-mol.% MgO:LiNbO\({}_{3}\) wafer (congruently melting LN, from NanoLN). The crystal thickness of 50 \(\upmu\)m is chosen to avoid strong GVM in a collinear geometry between the driving laser at 1030 nm and the generated THz radiation. Thinner crystals could possibly be used to achieve even broader THz bandwidth, but at the price of THz conversion efficiency, as well as more engineering effort for fabrication and handling of the crystals. The wafer is cut into dimensions of 17 mm (z) \(\times\) 15 mm (y). This allows us to access the maximum nonlinear coefficient \(d_{33}\) of the crystal for extraordinary polarization input. A crucial aspect for the efficient intracavity use of LN is the anti-reflection (AR) coating for the laser wavelength, which is a very delicate task for such thin crystals. In principle, uncoated samples at Brewster's angle can be used, however this results in strongly reduced output coupling of the THz radiation due to internal reflections of the THz light. In our experiments, our sample is double-side AR-coated at the pump wavelength of 1030 nm using a five-layer dielectric stack deposited with a dual-magnetron sputtering machine (NanoChrome IV, Intlvac). The LN samples are further mounted on a copper frame (Fig. 1b) for intracavity experiments without additional water cooling at up to hundreds of watts.
Figure 1: (a) Schematic of the experimental setup. DM, dispersive mirrors. KM, Kerr medium. BPD, balanced photodetector. (b) Mounted LN sample. (c) Real setup in operation. (d) Thermal camera image of the LN.

The experimental setup is shown in Fig. 1a. The home-built KLM TDL has a separate gain material (Yb-garnet thin disk) and Kerr medium (KM, 3-mm sapphire), offering flexibility in the design of the Kerr-lens saturable absorber and allowing for a wide range of operation parameters. The laser resonator is designed to have one focus between two concave mirrors with radii of curvature (RoC) of 500 mm and 300 mm, respectively. Both the KM and the LN are positioned at this focus. Compared with separate waists for KM and LN, this has two benefits: it reduces the system complexity and it enhances stability when operating in ambient air at \(>\)100 W intracavity power; the stability improvement is due to the reduced self-phase modulation (SPM) turbulence of the air at the waists. The LN is placed close to the waist, with a calculated beam radius of 160 \(\upmu\)m; the position of the plate was chosen following guidelines for optimal conversion efficiency, which we detail in the supplemental document. A hard aperture with a diameter of 3.3 mm is placed close to the end mirror. The pulse circulating intracavity fulfills the soliton law: a total round-trip group delay dispersion of -10000 fs\({}^{2}\) is introduced to compensate the nonlinear phase accumulated within one round trip, originating mostly from the sapphire plate and the air inside the resonator; the thin LN plate contributes only \(\sim\)3% (60 mrad) of the total SPM due to its small thickness. The cavity has a total length of 3.4 m, corresponding to a repetition rate of 44.8 MHz. One of the end mirrors of the cavity is a 1% output coupler, providing enough output power for laser diagnostics and for the electro-optic sampling (EOS) probe beam. Since we do not make use of the full 1% output power, we could close the resonator with a higher-reflectivity mirror (instead of the 1% output coupler) and reach higher efficiency (i.e. operate at lower pump power for the same intracavity power) in future experiments, offering a straightforward path for further improvements. The entire setup works in ambient air inside a dust-proof box. The oscillator and THz generation part have a compact footprint of 40 cm \(\times\) 90 cm.
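As a quick consistency check of these numbers (our arithmetic, assuming a standing-wave resonator so that the round-trip length is twice the cavity length; a length of 3.35 m, which rounds to the quoted 3.4 m, reproduces the stated repetition rate):

\[f_{\mathrm{rep}}=\frac{c}{2L}\approx\frac{2.998\times 10^{8}\ \mathrm{m/s}}{2\times 3.35\ \mathrm{m}}\approx 44.8\ \mathrm{MHz}\]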
With a minimal pump power of 128 W at 969 nm, mode-locking is self-starting; second-harmonic generation from the LN is observed once mode-locking is initiated (Fig. 1c). As shown in Fig. 3a, mode-locking is stable up to a diode pump power of 314 W. At maximum pump power, 264 W intracavity power is achieved with a pulse duration of 115 fs, assuming a soliton pulse shape, as shown in Fig. 2a. The mode-locked spectrum has a full-width-at-half-maximum bandwidth of 10.0 nm centered at 1031.2 nm (Fig. 2b). The time-bandwidth product is 0.323, deviating only slightly from that of an ideal soliton pulse. Kelly sidebands in the optical spectrum indicate that the oscillator operates with strongly localized dispersion and SPM in the resonator, resulting in periodic breathing of the soliton [31]; as a result, the pulses at the position of the LN plate are slightly longer, with 170 fs pulse duration. Radio-frequency spectra with 1 GHz span and 1 MHz span were also recorded, indicating single-pulse operation without Q-switching instabilities (Fig. 2c, 2d).
The THz radiation is collected by a 50.8 mm focal length, 2-inch diameter off-axis parabolic (OAP) mirror with a \(\sim\)5.3 mm diameter hole along its focal axis, placed inside the cavity. A second OAP with 152.4 mm focal length is used to focus the THz light onto a power meter (3A-P-THz, Ophir), a THz camera (Rigi S2, Swiss Terahertz) or an EOS detection crystal. To measure the THz power precisely, 4 high-density polyethylene (HDPE) plates with a total thickness of 32 mm are used to separate the residual pump light and the green second-harmonic signal from the THz signal. As shown in Fig. 3b, we did not observe a strong saturation of the THz power with respect to intracavity power, despite the rather high temperature of the LN sample of approximately 120 °C (Fig. 1d). The main limitation observed in our experiment is instead a damping of the intracavity power with respect to pump power, as seen in Fig. 3a, likely due to thermal and other higher-order nonlinear effects in the LN plate.
At the maximum available intracavity power, we measure a maximum THz power of 510 \(\upmu\)W at a diode pump power of 314 W and an estimated mode-locked intracavity power of 264 W. At this point, the thin LN plate operates at a peak intensity of 0.1 TW/cm\({}^{2}\), free of instabilities or damage, in strong contrast to the low intensities usually applied to materials such as GaP because of MPA limitations. The robustness of LN at high intensity thus compensates for the small thickness required to avoid GVM, and at the same time operation with efficiency comparable to other commonly used materials like GaP is possible. With a measured HDPE transmission of 39%, the corrected THz power is 1.3 mW. It is important to note that the current THz power is measured on only one side of the sample: due to the standing-wave cavity design, THz radiation is generated in both directions, which results in a doubling of the THz power. We note that in future experiments this dual output could be a feature of the system when considering pump-probe experiments. For other applications, the oscillator could be designed in a ring-cavity configuration; this, however, would come at the expense of the available round-trip gain, which would also be divided by two. The THz beam at the focus has a measured cross section of \(\sim\)1.66 \(\times\) 1.37 mm\({}^{2}\) (Fig. 3b).
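For orientation, the power figures above imply (our arithmetic; the factor 2 counts both emission directions of the standing-wave cavity):

\[P_{\mathrm{THz}}=\frac{510\ \upmu\mathrm{W}}{0.39}\approx 1.3\ \mathrm{mW},\qquad\eta=\frac{2\times 1.3\ \mathrm{mW}}{264\ \mathrm{W}}\approx 1\times 10^{-5}\]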
Next, we measured EOS traces at the maximum THz power. The EOS setup consists of a 1 mm GaP detection crystal, a \(\lambda\)/4 plate followed by a Wollaston prism, and a balanced photodetector. Residual pump light is filtered with a 14 mm thick HDPE plate. Dry nitrogen is used to purge the setup, reducing the humidity to 20% to minimize THz absorption and improve the EOS. Along the probe beamline, a manual delay stage is introduced to match the temporal overlap, and a delay line with a shaker at a frequency of 10 Hz and a scanning range of 28 ps is introduced for the EOS. Within a measurement time of \(\sim\)50 s, 478 traces are recorded without lock-in detection and afterwards averaged, as shown in Fig. 4a. Note that chopping the NIR pump beam intracavity is not possible without disturbing mode-locking; for future optimization aimed at enhancing detection via lock-in amplification, extracavity chopping of the THz beam could be considered. The noise trace is obtained by blocking the THz beam. A 2nd-order polynomial fit is applied to the noise trace and the THz trace to filter out the periodic offset caused by the beam pointing of the probe, which originates from imperfect alignment of the shaker. A 10th-order super-Gaussian window of 23 ps (80% of the measurement data) is further applied to the time trace. The clean time trace obtained without lock-in detection indicates a stable and high THz power.
The corresponding THz spectrum spans up to 3 THz with a dynamic range of 60 dB above the noise floor, cf. Fig. 4b. We compare the obtained traces with simulations performed by solving the coupled wave equations for OR, in a similar way to [32], including the response of the 1 mm GaP crystal, OAP/hole filtering effects [33] and echoes from the LN reflections. The low-frequency part up to 1.5 THz is in good agreement with the measurement, and the dips reproduce the expected thickness of the crystal well. The small difference observed at higher frequencies is most likely due to the lack of precise literature data on the THz refractive index of LN beyond \(\sim\)2 THz at room temperature [34] and its changes with temperature. Both the current bandwidth and the output power of the setup can be optimized in the future, as high-power mode-locked TDLs with intracavity powers in the kW range and pulses as short as 50 fs have been demonstrated [35].

Figure 2: Characterization of the KLM TDL: (a) autocorrelation trace. (b) optical spectrum. (c) and (d) radio frequency spectra; RBW, resolution bandwidth.

Figure 3: (a) Intracavity NIR average power vs. multi-mode diode pump power. (b) THz power vs. intracavity NIR average power. Inset, THz beam profile.
In summary, we propose and demonstrate a new, simple and cost-effective approach for high-power THz generation using thin LN plates that is compatible with intracavity operation inside a high-power TDL. In a KLM TDL operating at 264 W intracavity power, 115 fs pulse duration, and 44.8 MHz repetition rate, a high THz power of 1.3 mW (\(\times\)2), with a spectrum spanning up to 3 THz, has been obtained. We would like to highlight that these results show that thin LN is a very economic and robust platform for THz generation in general, not only for intracavity use. While ultra-thin LN films have proven to be a promising platform for integrated THz applications with electric fields at the V/m scale, our results using thin plates in free space, generating kV/cm fields, indicate the versatility of the thin-LN platform, in this form offering simple collinear, broad-bandwidth and wavelength-independent operation. The current approach has potential for further power and bandwidth scaling to tens of mW of THz power in a compact and efficient configuration, for example by increasing the resonator's tolerance to loss with multiple passes on the disk and better thermal management of the nonlinear crystal.
**Funding.** Ruhr-Universitat Bochum (Open Access Publication Funds); Ministerium fur Kultur und Wissenschaft des Landes Nordrhein-Westfalen (terahertz.NRW); Deutsche Forschungsgemeinschaft (287022738TRR 196, 390677874); HORIZON EUROPE European Research Council (805202).
**Acknowledgments.** We would like to thank Ch. Launschkat for his support in the production of the AR coatings.
**Disclosures.** The authors declare no conflicts of interest.
**Data availability.** Data underlying the results presented in this paper are available in Ref. [36].
**Supplemental document.** See Supplement 1 for supporting content.
## References
* [1] J. Kolbel, W. Schirmacher, E. Shalaev, _et al._, Phys. Rev. B **107**, 104203 (2023).
* [2] T. Matsuda, T. Higo, T. Koretsune, _et al._, Phys. Rev. Lett. **130**, 126302 (2023).
* [3] B. Zhou, M. Rasmussen, P. R. Whelan, _et al._, Sensors **23**, 3669 (2023).
* [4] I. Amenabar, F. Lopez, and A. Mendikute, J. Infrared Millim. Terahertz Waves **34**, 152 (2013).
* [5] K. Dutzi, N. Regner, N. Vieweg, _et al._, in _47th IRMMW-THz_ (IEEE, 2022), pp. 1-2.
* [6] T. Dietz, M. Jenne, D. Bauer, _et al._, Opt. Express **28**, 11415 (2020).
* [7] M. Muller, A. Klenke, A. Steinkopff, _et al._, Opt. Lett. **43**, 6037 (2018).
* [8] P. L. Kramer, M. K. R. Windeler, K. Mecseki, _et al._, Opt. Express **28**, 16951 (2020).
* [9] J. Buldt, H. Stark, M. Muller, _et al._, Opt. Lett. **46**, 5256 (2021).
* [10] T. Vogel and C. J. Saraceno, in _CLEO_ (2023), p. SF31.1.
* [11] B. Couillaud, T. W. Hansch, and S. G. MacLean, Opt. Commun. **50**, 127 (1984).
* [12] R. Tanaka, T. Matsuzawa, H. Yokota, _et al._, Opt. Express **16**, 18667 (2008).
* [13] I. Pupeza, T. Eidam, J. Rauschenberger, _et al._, Opt. Lett. **35**, 2052 (2010).
* [14] I. Pupeza, C. Zhang, M. Hogner, _et al._, Nat. Photonics **15**, 175 (2021).
* [15] J. Fischer, J. Drs, F. Labaye, _et al._, Opt. Express **29**, 5833 (2021).
* [16] M. Theuer, D. Molter, K. Maki, _et al._, Appl. Phys. Lett. **93**, 041119 (2008).
* [17] K. L. Vodopyanov, Laser & Photon. Rev. **2**, 11 (2008).
* [18] G. Matthaus, B. Orta, J. Limpert, _et al._, Appl. Phys. Lett. **93**, 261105 (2008).
* [19] N. Sarukura, H. Ohtake, Z. Liu, _et al._, Jpn. J. Appl. Phys. **37**, L125 (1998).
* [20] J. Darmo, T. Muller, G. Strasser, _et al._, Opt. Lett. **27**, 1941 (2002).
* [21] S. Xu, J. Liu, G. Zheng, _et al._, Opt. Express **18**, 22625 (2010).
* [22] M. Hamrouni, J. Drs, N. Modsching, _et al._, Opt. Express **29**, 23729 (2021).
* [23] N. Hekmat, T. Vogel, Y. Wang, _et al._, Opt. Mater. Express **10**, 2768 (2020).
* [24] T. R. Schibli, E. R. Thoen, F. X. Kartner, _et al._, Appl. Phys. B **70**, S41 (2000).
* [25] R. Grange, M. Haiml, R. Paschotta, _et al._, Appl. Phys. B **80**, 151 (2005).
* [26] J. Hebling, G. Almasi, I. Kozma, _et al._, Opt. Express **10**, 1161 (2002).
* [27] D. Jang, J. H. Sung, S. K. Lee, _et al._, Opt. Lett. **45**, 3617 (2020).
* [28] A. Herter, A. Shams-Ansari, F. F. Settembrini, _et al._, Nat. Commun. **14**, 11 (2023).
* [29] F. Saltarelli, I. J. Graumann, L. Lang, _et al._, Opt. Express **27**, 31465 (2019).
* [30] C. J. Saraceno, D. Sutter, T. Metzger, _et al._, J. Eur. Opt. Soc. Rapid Publ. **15**, 15 (2019).
* [31] P. Grelu and N. Akhmediev, Nat. Photonics **6**, 84 (2012).
* [32] T. Hattori and K. Takeuchi, Opt. Express **15**, 8076 (2007).
* [33] J. Faure, J. Van Tilborg, R. A. Kaindl, _et al._, Opt. Quantum Electron. **36**, 681 (2004).
* [34] M. Unferdorben, Z. Szaller, I. Hajdara, _et al._, J. Infrared Millim. Terahertz Waves **36**, 1203 (2015).
* [35] J. Fischer, J. Drs, N. Modsching, _et al._, Opt. Express **29**, 42075 (2021).
* [36] Y. Wang, T. Vogel, M. Khalili, _et al._, Zenodo [https://doi.org/10.5281/zenodo.10043826](https://doi.org/10.5281/zenodo.10043826) (2023).
Figure 4: Intracavity THz characterization. (a) THz time trace of the EOS measurement. (b) Corresponding THz spectrum with simulated spectrum. |
2309.05297 | On Ahn-Hendrey-Kim-Oum question for twin-width of graphs with 6 vertices | Twin-width is a recently introduced graph parameter for finite graphs. It is
an open problem to determine whether there is an $n$-vertex graph having
twin-width at least $n/2$ (due to J. Ahn, K. Hendrey, D. Kim and S. Oum). In an
earlier paper, the author showed that such a graph with less than equal to 5
vertices does not exist. In this article, we show that such a graph with 6
vertices does not exist. More precisely, we prove that each graph with 6
vertices has twin-width at most 2. | Kajal Das | 2023-09-11T08:29:17Z | http://arxiv.org/abs/2309.05297v1 | # On Ahn-Hendrey-Kim-Oum question for Twin-width of graphs with 6 vertices
Kajal Das
**Abstract:** Twin-width is a recently introduced graph parameter for finite graphs. It is an open problem to determine whether there is an \(n\)-vertex graph having twin-width at least \(n/2\) (due to J. Ahn, K. Hendrey, D. Kim and S. Oum). In an earlier paper, the author showed that such a graph with at most 5 vertices does not exist. In this article, we show that such a graph with 6 vertices does not exist. More precisely, we prove that every graph with 6 vertices has twin-width at most 2.
**Mathematics Subject Classification (2020):**05C30, 05C38, 05C76, 68R10.
**Key terms:** Finite graphs, Twin-width, Graphs with 6 vertices, Ahn-Hendrey-Kim-Oum conjecture.
## 1. Introduction
Twin-width is a graph invariant introduced in [1]. It was originally defined for finite simple graphs and later extended to simple infinite graphs. It is used to study the parameterized complexity of graph algorithms and has applications in logic, enumerative combinatorics, etc. Recently, it has appeared in many articles ([1], [2], [3], [4], [5], [6], [7], [8]). Moreover, it has been studied in the context of finitely generated groups [5]. Computing the twin-width of a finite graph is extremely difficult; results are known only for certain well-known graphs, for example complete graphs, path graphs, graphs with at most one cycle, Paley graphs, caterpillar trees, planar graphs, etc.
However, it is an open problem to determine whether there is an \(n\)-vertex graph having twin-width at least \(n/2\) (due to J. Ahn, K. Hendrey, D. Kim and S. Oum; see [1], page 3). In [1], the author proves the following theorem.
**Theorem 1.1**.: _[_1_]_ _Let \(G\) be a graph with at most 5 vertices. Then \(G\) has twin-width at most 2. In particular, the Ahn-Hendrey-Kim-Oum question has a negative answer for graphs with at most 5 vertices._
In this article, we prove the following theorem.
**Theorem 1.2**.: _(Main Theorem) Let \(G\) be a graph with 6 vertices. Then \(G\) has twin-width at most 2. In particular, the Ahn-Hendrey-Kim-Oum question has a negative answer for graphs with 6 vertices._
2306.17770 | MTR++: Multi-Agent Motion Prediction with Symmetric Scene Modeling and
Guided Intention Querying | Motion prediction is crucial for autonomous driving systems to understand
complex driving scenarios and make informed decisions. However, this task is
challenging due to the diverse behaviors of traffic participants and complex
environmental contexts. In this paper, we propose Motion TRansformer (MTR)
frameworks to address these challenges. The initial MTR framework utilizes a
transformer encoder-decoder structure with learnable intention queries,
enabling efficient and accurate prediction of future trajectories. By
customizing intention queries for distinct motion modalities, MTR improves
multimodal motion prediction while reducing reliance on dense goal candidates.
The framework comprises two essential processes: global intention localization,
identifying the agent's intent to enhance overall efficiency, and local
movement refinement, adaptively refining predicted trajectories for improved
accuracy. Moreover, we introduce an advanced MTR++ framework, extending the
capability of MTR to simultaneously predict multimodal motion for multiple
agents. MTR++ incorporates symmetric context modeling and mutually-guided
intention querying modules to facilitate future behavior interaction among
multiple agents, resulting in scene-compliant future trajectories. Extensive
experimental results demonstrate that the MTR framework achieves
state-of-the-art performance on the highly-competitive motion prediction
benchmarks, while the MTR++ framework surpasses its precursor, exhibiting
enhanced performance and efficiency in predicting accurate multimodal future
trajectories for multiple agents. | Shaoshuai Shi, Li Jiang, Dengxin Dai, Bernt Schiele | 2023-06-30T16:23:04Z | http://arxiv.org/abs/2306.17770v2 | # MTR++: Multi-Agent Motion Prediction with Symmetric Scene Modeling and Guided Intention Querying
###### Abstract
Motion prediction is crucial for autonomous driving systems to understand complex driving scenarios and make informed decisions. However, this task is challenging due to the diverse behaviors of traffic participants and complex environmental contexts. In this paper, we propose Motion TRansformer (MTR) frameworks to address these challenges. The initial MTR framework utilizes a transformer encoder-decoder structure with learnable intention queries, enabling efficient and accurate prediction of future trajectories. By customizing intention queries for distinct motion modalities, MTR improves multimodal motion prediction while reducing reliance on dense goal candidates. The framework comprises two essential processes: global intention localization, identifying the agent's intent to enhance overall efficiency, and local movement refinement, adaptively refining predicted trajectories for improved accuracy. Moreover, we introduce an advanced MTR++ framework, extending the capability of MTR to simultaneously predict multimodal motion for multiple agents. MTR++ incorporates symmetric context modeling and mutually-guided intention querying modules to facilitate future behavior interaction among multiple agents, resulting in scene-compliant future trajectories. Extensive experimental results demonstrate that the MTR framework achieves state-of-the-art performance on the highly-competitive motion prediction benchmarks, while the MTR++ framework surpasses its precursor, exhibiting enhanced performance and efficiency in predicting accurate multimodal future trajectories for multiple agents.
Motion Prediction, Transformer, Intention Query, Autonomous Driving
## 1 Introduction
Motion prediction constitutes a pivotal undertaking within the realm of contemporary autonomous driving systems, and it has gained significant attention in recent years due to its vital role in enabling robotic vehicles to understand driving scenarios and make judicious decisions [21, 24, 25, 26, 33, 39, 49, 63, 69]. The core of motion prediction lies in accurately anticipating the future actions of traffic participants by considering observed agent states and complex road maps. However, this task is challenging due to the inherent multimodal behaviors exhibited by agents and the intricacies of the surrounding environment.
To tackle these formidable challenges, prior studies [17, 31, 40] have delved into diverse strategies aimed at encoding the complex scene context. Some works [39, 50] have employed the encoded agent features in motion decoders based on Multi-Layer Perceptrons (MLPs) to directly predict multiple potential future trajectories for the agent. Nonetheless, these methodologies generally exhibit a bias towards predicting the most frequently occurring modes observed in the training data, thereby yielding homogeneous trajectories that inadequately capture the agent's multimodal behaviors. To improve trajectory predictions encompassing all potential future behaviors for the agent, alternative approaches [67, 21] have explored a goal-based strategy. This strategy involves using a dense set of goal candidates representing feasible destinations for the agent. By predicting the probability associated with each candidate being the actual destination, these methods generate a full trajectory for each selected goal candidate. While this strategy reduces trajectory uncertainty during model optimization, the performance of such methods is highly dependent on the density of goal candidates. Fewer candidates lead to decreased performance, while an excessive number of candidates significantly increases computation and memory costs. To address these challenges and enhance multimodal motion prediction while reducing reliance on dense goal candidates, we propose a novel collection of frameworks called Motion TRansformer (MTR) frameworks. These frameworks consist of an initial MTR framework and an advanced MTR++ framework.
In the MTR frameworks, we introduce a novel set of learnable intention queries integrated into a transformer encoder-decoder structure, which facilitates efficient motion prediction by employing each intention query to encompass the behavior prediction of a group of potential trajectories directed towards the same region. Guided by these intention queries, the MTR frameworks optimize two key tasks simultaneously. The first task is global intention localization, which aims to roughly identify the agent's intention, thereby enhancing overall efficiency. The second task is local movement refinement, which strives to adaptively refine the predicted trajectory for each intention, thereby improving accuracy. The proposed MTR frameworks not only foster a stable training process without depending on dense goal candidates but also enable flexible and adaptable prediction by facilitating local refinement for each motion mode.
Specifically, the MTR frameworks introduce distinct learnable intention queries to handle trajectory prediction across different motion modes. To accomplish this, a limited
number of spatially distributed intention points (e.g., 64 in our case) are initially generated for each category. These intention points effectively reduce uncertainty in future trajectories by encompassing both motion direction and velocity. Each intention query represents the learnable position embedding of a specific intention point, assuming responsibility for predicting the future trajectory of that particular motion mode. This approach not only enhances multimodal motion prediction by explicitly leveraging different queries for different modes but also eliminates the necessity of dense goal candidates, as each query assumes responsibility for a substantial destination region. Moreover, the MTR frameworks employ the classification probability of all intention queries to roughly localize the agent's motion intention, while the predicted trajectory of each intention query undergoes iterative refinement through stacked transformer decoder layers. This iterative refinement process involves continually retrieving fine-grained local features of each trajectory. Our experiments show that these two complementary processes have demonstrated remarkable efficacy in predicting multimodal future motion.
In contrast to the initial MTR framework presented in our preliminary version [46], which focuses on the multimodal motion prediction of a single agent, we introduce an advanced MTR++ framework that extends the capability to predict multimodal motion concurrently for multiple agents (see Fig. 1). Instead of individually encoding the scene context around each agent as in the preliminary version, we propose a novel symmetric scene context modeling strategy. This strategy employs a shared context encoder to symmetrically encode the entire scene for each agent, incorporating a novel query-centric self-attention module to jointly capture the intricate scene context information within their respective local coordinate systems. Furthermore, we introduce a mutually-guided intention querying module in the motion decoder network, enabling agents to interact and influence each other's behavior. This facilitates more precise and scene-compliant joint motion prediction for multiple agents. Through these two enhancements, experimental results demonstrate that compared to the initial MTR framework, the MTR++ framework effectively predicts more accurate multimodal future trajectories for multiple agents simultaneously. Additionally, as shown in Fig. 1, the efficiency advantage of the MTR++ framework becomes more pronounced as the number of agents increases.
Our contributions are four-fold: (1) We introduce the MTR frameworks, which incorporate a novel set of learnable intention queries within the transformer encoder-decoder architecture for motion prediction. By customizing intention queries to address distinct motion modalities, the MTR frameworks not only achieve more precise multimodal future trajectory predictions that encompass a wide range of possibilities but also obviate reliance on dense goal candidates. (2) We propose the advanced MTR++ framework for simultaneous multimodal motion prediction of multiple agents. This framework incorporates two key components: a symmetric scene context modeling module that allows for shared context encoding among multiple agents, and a mutually-guided intention querying module that facilitates the interaction of agents' future behaviors and enables the prediction of scene-compliant trajectories. (3) The initial MTR framework achieves state-of-the-art performance on the motion prediction benchmark of Waymo Open Motion Dataset (WOMD) [15], surpassing previous ensemble-free approaches with a remarkable mAP gain of \(+8.48\%\). Additionally, the MTR++ framework further enhances the capabilities of the initial MTR framework, enabling concurrent joint multimodal motion prediction for multiple agents and improving both performance and efficiency. (4) Notably, our initial MTR and MTR++ frameworks won the championship of the highly-competitive Waymo Motion Prediction Challenge in 2022 [57] and 2023 [58], respectively, demonstrating their superiority and effectiveness.
## 2 Related Work
**Scene Context Encoding for Motion Prediction.** The motion prediction task in autonomous driving scenarios requires encoding the input road map and agent history states to generate the agent's future trajectories, and this scene context encoding plays a crucial role in the task. Prior works [4, 6, 9, 13, 35, 40, 66] have commonly employed rasterization techniques to convert the scene context into images, allowing for processing with convolutional neural networks (CNNs). LaneGCN [31] utilizes a lane graph to capture the topological information of the map, and recent works [21, 39, 47, 50] have widely adopted VectorNet [17] for its efficiency and scalability. VectorNet represents both road maps and agent trajectories as polylines. In our MTR frameworks, we also adopt this vector representation. However, instead of constructing a global graph of polylines, we advocate employing a transformer encoder on a locally connected graph. This strategy not only better preserves the input's locality structure but also improves memory efficiency, enabling larger map encodings for long-term motion prediction.
Fig. 1: The comparison of MTR and MTR++ frameworks. The MTR++ framework surpasses its predecessor, MTR, in several key aspects. In subfigure (a), MTR++ demonstrates its ability to predict the future trajectories of multiple agents simultaneously. Notably, in subfigure (b), MTR++ excels in both inference speed and memory efficiency, particularly when dealing with a larger number of interested agents. Additionally, as depicted in subfigure (c), the MTR++ framework outperforms both MTR and all other approaches, achieving superior performance overall.
**Multimodal Future Behavior Modeling.** Given the encoded scene context features, existing works explore diverse strategies for modeling the agent's multimodal future behaviors. Early works [22, 43, 44, 2, 48] suggest generating a set of trajectory samples to approximate the output distribution. Other studies [10, 23, 37, 41, 45] parameterize multimodal predictions with Gaussian Mixture Models (GMMs) to generate compact distributions. The HOME series [18, 19] generates trajectories by sampling a predicted heatmap. IntentNet [8] considers intention prediction as a classification problem involving eight high-level actions, while [33] proposes a region-based training strategy. Goal-based methods [16, 34, 44, 67] represent another category, estimating several agent goal points before completing the full trajectory for each goal.
The large-scale Waymo Open Motion Dataset (WOMD) [15] has recently been introduced for long-term motion prediction. To address this challenge, DenseTNT [21] employs a goal-based strategy to classify trajectory endpoints from dense goal candidates. Other works directly predict the future trajectories based on the encoded agent features [39] or latent anchor embedding [50]. Nonetheless, the goal-based strategy raises efficiency concerns due to the numerous goal candidates, while the direct-regression strategy converges slowly and exhibits a bias to predict homogeneous trajectories since various motion modes are regressed from identical agent features. In contrast, our MTR frameworks employ a small set of learnable intention queries to address these limitations, facilitating the generation of future trajectories with extensive modalities and eliminating numerous goal candidates by employing mode-specific learnable intention queries to predict different motion modes.
**Simultaneous Motion Prediction of Multiple Agents.** In predicting an individual agent's future trajectories, state-of-the-art works [21, 50], including our preliminary version [46], typically customize the scene context encoding for that agent by normalizing all inputs centered on it. This strategy results in computational inefficiencies when predicting motion for multiple agents. To simultaneously predict future trajectories for multiple agents, SceneTransformer [39] encodes all road graphs and agents into a scene-centric embedding applicable to all agents. However, their feature encoding still relies on a global coordinate system centered on an agent of interest (_e.g._, the autonomous vehicle), limiting its performance for off-center agents. Recent works [69, 27] explore encoding the agents' node features in an ego-centric coordinate system, though they generally rely on hand-crafted relation graphs and alternating node-edge updating strategies. In contrast, our MTR++ framework introduces symmetric scene context modeling for all agents with innovative query-centric self-attention, operating on a straightforward polyline graph using the native transformer encoder module with relative position encoding, thereby promoting more efficient and concise shared scene context encoding.
To enable the behavioral interaction of multiple agents, recent research M2I [47] introduces a triad of models, initially employing a relation predictor to categorize two interacting agents as influencer and reactor, followed by the sequential generation of their future trajectories via a marginal predictor and a conditional predictor, respectively. Conversely, our MTR++ framework integrates mutually-guided intention queries, fostering the behavioral interaction of more than two agents within a unified model, wherein their predicted future behaviors naturally interact through stacked transformer decoder layers, thereby yielding superior scene-compliant trajectories with higher efficiency for multiple agents.
**Transformer.** Transformer [51] has been extensively employed in natural language processing [3, 12] and computer vision [5, 54, 53, 14, 52, 64]. Our approach draws inspiration from DETR [5] and its subsequent works [11, 29, 32, 65, 62, 70, 36], particularly DAB-DETR [32], where the object query serves as the positional embedding of an anchor box. Motivated by their notable success in object detection, we introduce the innovative concept of learnable intention query to model multimodal motion prediction with prior intention points. Each intention query is tasked with predicting a specific motion mode and enables iterative motion refinement by integrating with transformer decoders.
## 3 MTR for Multimodal Motion Prediction
We propose Motion TRansformer (MTR), which adopts a novel transformer encoder-decoder architecture incorporating iterative motion refinement for predicting multimodal future motion. The overall framework is presented in Figure 2. In Sec. 3.1, we introduce our encoder network for scene context modeling. In Sec. 3.2, we present the motion decoder network with the novel concept of intention query for predicting multimodal trajectories. Finally, in Sec. 3.3, we introduce the optimization process of our framework.
### _Transformer Encoder for Scene Context Modeling_
The forthcoming actions of the agents are greatly influenced by their interactions and the road map. To incorporate this contextual information into the model, prior approaches have employed diverse techniques, such as constructing a comprehensive interactive graph [17, 21] or condensing map features into agent-specific features [39, 50]. However, we argue that preserving the locality structure of the scene context, particularly the road map, is crucial. Thus, we introduce a transformer encoder network that utilizes local self-attention to better capture this structural information.
**Input Representation with Single Focal Agent.** We adopt the vectorized depiction [17] to arrange both input trajectories and road maps as polylines. When forecasting the motion of an individual focal agent, we employ the _focal-agent-centric_ approach [67, 21, 50], which normalizes all inputs to the global coordinate system centered on this agent.
Concretely, the past states of \(N_{a}\) agents are denoted as \(S_{\text{A}}^{(\text{g})}\in\mathbb{R}^{N_{a}\times T_{h}\times C_{a}}\) (where "g" indicates the global reference frame). Here, \(T_{h}\) represents the duration of the historical observations, and \(C_{a}\) corresponds to the dimensionality of the state information, encompassing factors such as position, orientation, and velocity. Zero-padding is applied to the positions of absent frames in trajectories comprising fewer than \(T_{h}\) frames. The road map is represented as \(S_{\text{M}}^{(\text{g})}\in\mathbb{R}^{N_{m}\times n\times C_{m}}\), where \(N_{m}\) indicates the number of map
polylines, \(n\) represents the number of points in each polyline, and \(C_{m}\) signifies the number of attributes for each point (e.g., location and road type). Both \(S_{\text{A}}^{(\text{g})}\) and \(S_{\text{M}}^{(\text{g})}\) are encoded utilizing a PointNet-like [42] polyline encoder as:
\[F_{\text{A}}^{(\text{g})}=\phi\left(\text{MLP}(S_{\text{A}}^{(\text{g})}) \right),\quad F_{\text{M}}^{(\text{g})}=\phi\left(\text{MLP}(S_{\text{M}}^{( \text{g})})\right), \tag{1}\]
where \(\text{MLP}(\cdot)\) represents a multi-layer perceptron, while \(\phi\) denotes max-pooling, employed to encapsulate each polyline's feature as agent features \(F_{\text{A}}^{(\text{g})}\in\mathbb{R}^{N_{a}\times D}\) and map features \(F_{\text{M}}^{(\text{g})}\in\mathbb{R}^{N_{m}\times D}\) with a feature dimension of \(D\).
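For illustration, a minimal PyTorch sketch of such a PointNet-like polyline encoder follows; the layer sizes and the input dimension in the usage example are our own assumptions, not necessarily the configuration of the released code.

```python
import torch
import torch.nn as nn

class PolylineEncoder(nn.Module):
    """PointNet-like polyline encoder of Eq. (1): per-point MLP followed by max-pooling."""

    def __init__(self, in_dim: int, d: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, d), nn.ReLU(),
            nn.Linear(d, d),
        )

    def forward(self, polylines: torch.Tensor) -> torch.Tensor:
        # polylines: (N, n, C) -> per-point features (N, n, D) -> max-pool phi -> (N, D)
        return self.mlp(polylines).max(dim=1).values

# Usage: encode N_m = 768 map polylines with n = 20 points and C_m = 9 attributes each.
enc = PolylineEncoder(in_dim=9)
feats = enc(torch.randn(768, 20, 9))  # (768, 256)
```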
These two types of polyline features are concatenated to form the following input token features, denoted as \(F_{\text{AM}}^{(\text{g})}=[F_{\text{A}}^{(\text{g})},F_{\text{M}}^{(\text{g} )}]\in\mathbb{R}^{(N_{a}+N_{m})\times D}\). The positions of these tokens are denoted as \(P_{\text{AM}}^{(\text{g})}=[P_{\text{A}}^{(\text{g})},P_{\text{M}}^{(\text{g} )}]\in\mathbb{R}^{(N_{a}+N_{m})\times 2}\), where we utilize the most recent positions for agent tokens (denoted as \(P_{\text{A}}^{(\text{g})}\in\mathbb{R}^{N_{a}\times 2}\)) and polyline centers for map tokens (denoted as \(P_{\text{M}}^{(\text{g})}\in\mathbb{R}^{N_{m}\times 2}\)).
**Scene Context Encoding with Local Transformer Encoder.** The local structure of scene context is vital for motion prediction. For instance, the relationship between two parallel lanes is essential for modeling lane-changing behavior, but utilizing attention on a globally connected graph treats all lane relations equally. Therefore, we incorporate prior knowledge into the context encoder by employing local attention, which better preserves the locality structure and is more memory-efficient. Specifically, the attention module of each transformer encoder layer can be expressed as:
\[F^{(\text{g})}_{\text{AM}[i]}=\text{MHSA}\big{(}\text{Q}:[F^{(\text{g})}_{\text{AM}[i]},\text{PE}(P^{(\text{g})}_{\text{AM}[i]})],\ \text{K}:\{[F^{(\text{g})}_{\text{AM}[j]},\text{PE}(P^{(\text{g})}_{\text{AM}[j]})]\}_{j\in\Omega(i)},\ \text{V}:\{F^{(\text{g})}_{\text{AM}[j]}\}_{j\in\Omega(i)}\big{)}, \tag{2}\]
where \(i\in\{1,\dots,N_{a}+N_{m}\}\). \(\Omega(i)\) indicates the index set of the \(k\) nearest neighbors of the \(i\)-th token. \(\text{MHSA}(\cdot_{\text{query}},\cdot_{\text{key}},\cdot_{\text{value}})\) denotes the multi-head self-attention layer [51]. \(\text{PE}(\cdot)\) signifies the sinusoidal positional encoding of input tokens. \(F^{(\text{g})}_{\text{AM}[i]}\in\mathbb{R}^{D}\) is the output feature of the \(i\)-th token of this encoder layer. Thanks to this local self-attention, our framework can encode a considerably larger scene context.
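A sketch of how the neighborhood sets \(\Omega(i)\) can be realized as a \(k\)-nearest-neighbor attention mask is shown below; the exact masking mechanics in the released implementation may differ.

```python
import torch

def knn_attention_mask(positions: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Boolean (N, N) mask with True where token j is among the k nearest tokens to i."""
    dist = torch.cdist(positions, positions)             # pairwise distances
    knn_idx = dist.topk(k, dim=1, largest=False).indices # (N, k) neighbor indices
    mask = torch.zeros_like(dist, dtype=torch.bool)
    mask.scatter_(1, knn_idx, True)
    return mask

# nn.MultiheadAttention masks out positions where attn_mask is True,
# so pass the complement to restrict attention to the k neighbors:
# attn_out, _ = mha(q, kv, kv, attn_mask=~knn_attention_mask(pos, k=16))
```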
By stacking multiple transformer encoder layers, the encoder network generates the token features \(F^{(\text{g})}_{\text{AM}}\in\mathbb{R}^{(N_{a}+N_{m})\times D}\). We decompose these features to obtain the agent history features \(F_{\text{A}}^{(\text{g, past})}\in\mathbb{R}^{N_{a}\times D}\) and map features \(F_{\text{M}}^{(\text{g})}\in\mathbb{R}^{N_{m}\times D}\), where the agent history features will be further enhanced as \(F_{\text{A}}^{(\text{g})}\in\mathbb{R}^{N_{a}\times D}\) by the following dense future prediction module. Note that in the following sections, we employ the same notations for convenience, referring to \(F_{\text{A}}^{(\text{g})}\in\mathbb{R}^{N_{a}\times D}\) and \(F_{\text{M}}^{(\text{g})}\in\mathbb{R}^{N_{m}\times D}\) to represent the agent features and map features, respectively, which are encoded by the context encoder.
**Dense Future Prediction for All Agents.** Interactions with other agents significantly influence the behaviors of our focal agent. Existing methods, such as hub-host networks [71], dynamic relational reasoning [30], and social spatial-temporal networks [61], mainly focus on learning past interactions but often overlook future trajectory interactions. To compensate for this limitation, we propose a method that densely predicts future states for all agents using a straightforward regression head on the encoded history features \(F_{\text{A}}^{(\text{g, past})}\), as follows
\[S_{\text{A}}^{(\text{g, future})}=\text{MLP}(F_{\text{A}}^{(\text{g, past})}), \tag{3}\]
where \(S_{\text{A}}^{(\text{g, future})}\in\mathbb{R}^{N_{a}\times(T_{f}\times 4)}\) includes the future position and velocity of each agent, and \(T_{f}\) denotes the number of future frames to be predicted. The predicted trajectories \(S_{\text{A}}^{(\text{g, future})}\) are encoded using the same polyline encoder as in Eq. (1), producing features \(F_{\text{A}}^{(\text{g, future})}\in\mathbb{R}^{N_{a}\times D}\). These features are combined with \(F_{\text{A}}^{(\text{g, past})}\in\mathbb{R}^{N_{a}\times D}\) using feature concatenation and three MLP layers, resulting in enhanced features \(F_{\text{A}}^{(\text{g})}\in\mathbb{R}^{N_{a}\times D}\).
By supplying the motion decoder network with additional future context information, this approach effectively improves the model's capability to predict more accurate future trajectories for the focal agent. Experimental results demonstrate that this simple auxiliary task effectively enhances the performance of multimodal motion prediction.
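A minimal sketch of this dense future prediction head and the subsequent feature fusion is given below, reusing the `PolylineEncoder` sketch above; the future horizon and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseFuturePrediction(nn.Module):
    """Eq. (3) plus fusion: regress each agent's future states, re-encode them as a
    polyline, and fuse with the history features via three MLP layers."""

    def __init__(self, d: int = 256, t_f: int = 80):
        super().__init__()
        self.t_f = t_f
        self.reg_head = nn.Linear(d, t_f * 4)       # (x, y, vx, vy) per future frame
        self.traj_encoder = PolylineEncoder(in_dim=4, d=d)
        self.fuse = nn.Sequential(
            nn.Linear(2 * d, d), nn.ReLU(),
            nn.Linear(d, d), nn.ReLU(),
            nn.Linear(d, d),
        )

    def forward(self, hist_feats: torch.Tensor):
        # hist_feats: (N_a, D) encoded agent history features
        future = self.reg_head(hist_feats)          # (N_a, T_f * 4)
        fut_feats = self.traj_encoder(future.view(-1, self.t_f, 4))
        enhanced = self.fuse(torch.cat([hist_feats, fut_feats], dim=-1))
        return enhanced, future                     # enhanced features, auxiliary prediction
```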
Fig. 2: The architecture of the MTR framework. In this framework, we first utilize two polyline encoders to encode the polylines derived from agent history trajectories and road lanes into token features. Next, multiple transformer encoder layers with local self-attention are utilized to model the relationships among different tokens within the global coordinate system centered around the focal agent of interest. This allows for a comprehensive understanding of the scene contextual information. Finally, a small set of learnable intention queries are integrated into the stacked transformer decoder layers to aggregate information from the encoded context features. Each intention query is responsible for predicting future trajectories towards a specific intention point, enabling the generation of multimodal future trajectories for the focal agent.
### _Motion Decoder with Intention Query_
To facilitate multimodal motion prediction, the MTR framework utilizes a transformer-based motion decoder network that incorporates the previously encoded scene context features. We introduce the concept of intention query, which facilitates multimodal motion prediction through the joint optimization of global intention localization and local movement refinement. As depicted in Fig. 2, the motion decoder network consists of stacked transformer decoder layers that iteratively refine predicted trajectories utilizing learnable intention queries. Next, we elaborate on the detailed structure.
**Learnable Intention Query1**. To efficiently and precisely pinpoint an agent's potential motion intentions, we propose the learnable _intention query_ to diminish the uncertainty of future trajectories by employing different intention queries for different motion modes. Specifically, for each category, we generate \(\mathcal{K}\) representative intention points \(I^{(\text{s})}\in\mathbb{R}^{\mathcal{K}\times 2}\) (where "s" indicates a single focal agent) by utilizing the k-means clustering algorithm on the endpoints of ground-truth (GT) trajectories in the training dataset (refer to Fig. 3). Each intention point embodies an implicit motion mode, accounting for both motion direction and velocity. Given the intention points of a single focal agent, we model each intention query as the learnable positional embedding of a specific intention point:
Footnote 1: To streamline the illustration of the motion decoder, we simplify the two components of the motion query pair in our preliminary version [46] by using the new concept of intention query.
\[E_{\text{I}}^{(\text{s})}{}_{[i]}=\text{MLP}\left(\text{PE}(I^{(\text{s})}{}_ {[i]})\right), \tag{4}\]
where \(i\in\{1,\dots,\mathcal{K}\}\) and \(E_{\text{I}}^{(\text{s})}\in\mathbb{R}^{\mathcal{K}\times D}\). \(\text{PE}(\cdot)\) denotes the sinusoidal position encoding. Notably, each intention query is responsible for predicting trajectories for a specific motion mode, which stabilizes the training process and facilitates multimodal trajectory prediction since each motion mode possesses its own learnable embedding. Owing to their learnable and adaptive properties, we require only a minimal number of queries (e.g., 64 queries in our setting) for efficient intention localization, rather than employing densely-placed goal candidates [67, 21] to cover the agents' destinations.
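A sketch of how the intention points and their query embeddings in Eq. (4) can be constructed is shown below; the sinusoidal encoding helper `sine_pe` and the random stand-in endpoints are illustrative, not our exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def build_intention_points(gt_endpoints: np.ndarray, k: int = 64) -> np.ndarray:
    """Cluster ground-truth trajectory endpoints (M, 2) of one category
    into k representative intention points (k, 2)."""
    return KMeans(n_clusters=k, n_init=10).fit(gt_endpoints).cluster_centers_

def sine_pe(xy: torch.Tensor, d: int = 256) -> torch.Tensor:
    """Simple sinusoidal encoding of 2D points into d dims (an illustrative variant)."""
    freqs = 10000 ** (-torch.arange(d // 4) / (d // 4))            # (d/4,)
    args = xy.unsqueeze(-1) * freqs                                # (k, 2, d/4)
    return torch.cat([args.sin(), args.cos()], dim=-1).flatten(1)  # (k, d)

# Eq. (4): intention query = MLP over the sinusoidal encoding of its intention point.
query_mlp = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
endpoints = np.random.randn(10000, 2)  # illustrative random stand-in for GT endpoints
points = torch.as_tensor(build_intention_points(endpoints), dtype=torch.float32)
intention_queries = query_mlp(sine_pe(points))  # (64, 256)
```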
**Scene Context Aggregation with Intention Query.** These intention queries are considered as the learnable query embedding of the transformer decoder layer for aggregating context features from the encoded agent features and map features. Specifically, in each transformer decoder layer, we first apply the self-attention module to propagate information among \(\mathcal{K}\) intention queries as follows:
\[F^{\prime}{}^{(\text{s})}_{\text{I}[i]}=\text{MHSA}\big{(}\text{Q: }F^{(\text{s})}_{\text{I}[i]}+E^{(\text{s})}_{\text{I}[i]},\ \text{K: }\big{\{}F^{(\text{s})}_{\text{I}[j]}+E^{(\text{s})}_{\text{I}[j]}\big{\}}_{j=1}^{\mathcal{K}},\ \text{V: }\big{\{}F^{(\text{s})}_{\text{I}[j]}\big{\}}_{j=1}^{\mathcal{K}}\big{)}, \tag{5}\]
where \(i\in\{1,\dots,\mathcal{K}\}\). \(F^{(\text{s})}_{\text{I}}\in\mathbb{R}^{\mathcal{K}\times D}\) is the query content feature from the previous transformer decoder layer, and it is initialized as zero in the first transformer decoder layer. \(F^{\prime}{}^{(\text{s})}_{\text{I}}\in\mathbb{R}^{\mathcal{K}\times D}\) indicates the updated query content feature. Next, to aggregate scene context features from the encoder network, inspired by [32, 36], we concatenate content features and position embedding for both query and key to decouple their contributions to the attention weights. Thus, the cross-attention module can be formulated as follows:
\[F^{\prime\prime}{}^{(\text{s})}_{\text{I}[i]}=\text{MHCA}\big{(}\text{Q: }[F^{\prime}{}^{(\text{s})}_{\text{I}[i]},E^{(\text{s})}_{\text{I}[i]}],\ \text{K: }[F^{(\text{g})}_{\text{A}},\text{PE}(P^{(\text{g})}_{\text{A}})]\cup[F^{(\text{g})}_{\text{M}},\text{PE}(P^{(\text{g})}_{\text{M}})],\ \text{V: }F^{(\text{g})}_{\text{A}}\cup F^{(\text{g})}_{\text{M}}\big{)}, \tag{6}\]
where \(i\in\{1,\dots,\mathcal{K}\}\). \(\text{MHCA}(\cdot_{\text{query}},\cdot_{\text{key}},\cdot_{\text{value}})\) denotes the multi-head cross-attention layer [51]. The sign "\([\cdot,\cdot]\)" indicates feature concatenation, and "\(\cup\)" combines the agent tokens and map tokens as the key and value of the cross-attention module. Finally, \(F^{\prime\prime}{}^{(\text{s})}_{\text{I}}\in\mathbb{R}^{\mathcal{K}\times D}\) is the final updated query content feature in this transformer decoder layer.
Additionally, for each intention query, we introduce the dynamic map collection strategy to extract fine-grained trajectory features by querying map features from a trajectory-aligned local region. Specifically, by adopting such a module, the key and value of the map tokens in Eq. (6) are restricted to a local region by gathering the polylines whose centers are nearest to the predicted trajectory of the current intention query. As the agent's behavior is largely influenced by road maps, this local movement refinement strategy enables a continuous focus on the most recent local context information for iterative motion refinement.
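A sketch of the dynamic map collection step is given below; the helper is hypothetical and the released implementation may organize this differently.

```python
import torch

def dynamic_map_collection(map_centers, map_feats, pred_traj, num_collect: int = 128):
    """Keep the num_collect map polylines whose centers are closest to any waypoint
    of the current predicted trajectory; only these serve as keys/values in Eq. (6).
    map_centers: (N_m, 2), map_feats: (N_m, D), pred_traj: (T, 2)."""
    dist_to_traj = torch.cdist(map_centers, pred_traj).min(dim=1).values  # (N_m,)
    idx = dist_to_traj.topk(num_collect, largest=False).indices
    return map_feats[idx], map_centers[idx]
```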
**Global Intention Localization.** By considering different motion modes with different learnable queries, the intention queries capture representative features \(F^{\prime\prime}{}^{(\text{s})}_{\text{I}}\in\mathbb{R}^{\mathcal{K}\times D}\) to model the focal agent's future motion. Thus, we propose to coarsely localize the agent's intention by predicting the occurrence probability of each intention point as follows:
\[p=\text{MLP}(F^{\prime\prime}{}^{(\text{s})}_{\text{I}}), \tag{7}\]
where \(p\in\mathbb{R}^{\mathcal{K}}\) is a probability distribution to model the potential future intention of the focal agent.
**Local Movement Refinement.** To complement the coarse global intention localization, we further predict the detailed future trajectory for each intention query as follows:
\[Z=\text{MLP}(F^{\prime\prime}{}^{(\text{s})}_{\text{I}}), \tag{8}\]
where \(Z\in\mathbb{R}^{\mathcal{K}\times(T_{f}\times 5)}\) indicates the \(\mathcal{K}\) predicted future trajectories, each with \(T_{f}\) future frames. "5" indicates that we model the uncertainty of each trajectory waypoint with a Gaussian distribution as \(\mathcal{N}(\mu_{x},\sigma_{x};\mu_{y},\sigma_{y};\rho)\).
As the query content feature \(F^{\prime\prime}{}^{(\text{s})}_{\text{I}}\) will be constantly propagated to the next transformer decoder layer as the new query content feature, the predicted future trajectories can be iteratively refined with multiple stacked transformer decoder layers by continually aggregating scene context features from the encoder network.
Fig. 3: The distribution of intention points for each category, where the intention points are shown as orange stars. The gray dotted lines indicate the distribution of ground-truth trajectories for each category, and note that only 10% ground-truth trajectories in the training dataset are drawn in the figure for better visualization.
### _Multimodal Prediction with Gaussian Mixture Model_
As the behaviors of the agents are highly multimodal, we follow [50, 10] to represent the distribution of predicted trajectories with Gaussian Mixture Model (GMM) at each time step. Specifically, for a specific future time step \(i\), MTR will predict \(\mathcal{K}\) candidate goal positions with distribution \(\mathcal{N}_{1:\mathcal{K}}(\mu_{x},\sigma_{x};\mu_{y},\sigma_{y};\rho)\) and probability distribution \(p\in\mathbb{R}^{\mathcal{K}}\). The predicted distribution of the focal agent's position at time step \(i\) can be formulated as a GMM with \(\mathcal{K}\) components:
\[\mathcal{P}_{i}(o)=\sum_{k=1}^{\mathcal{K}}p_{k}\cdot f_{k}(o_{x}-\mu_{x},o_{y }-\mu_{y}), \tag{9}\]
where \(f_{k}(\cdot,\cdot)\) is the probability density function of the \(k\)-th component of this GMM, and \(\mathcal{P}_{i}(o)\) is the occurrence probability density of the agent at spatial position \(o\in\mathbb{R}^{2}\). The predicted trajectories can be generated by simply extracting the predicted centers of Gaussian components.
**Training Loss.** Given the predicted Gaussian Mixture Models for a specific future time step, we adopt the negative log-likelihood loss to maximize the likelihood of the agent's ground-truth position \((\hat{Y}_{x},\hat{Y}_{y})\) at this time step, and the detailed loss can be formulated as:
\[L_{\text{GMM}}=-\log f_{h}(\hat{Y}_{x}-\mu_{x},\hat{Y}_{y}-\mu_{y})-\log(p_{h}), \tag{10}\]
where \(f_{h}(\hat{Y}_{x}-\mu_{x},\hat{Y}_{y}-\mu_{y})\) is the selected positive Gaussian component for optimization. Here the positive Gaussian component is selected by finding the closest intention query with the endpoint of this GT trajectory. \(p_{h}\) is the predicted probability of this selected positive Gaussian component, and we adopt cross entropy loss in the above equation to maximize the probability of the selected positive Gaussian component. The final loss of our framework is denoted as:
\[L_{\text{SUM}}=L_{\text{GMM}}+L_{\text{DMP}}, \tag{11}\]
where \(L_{\text{DMP}}\) is the \(L1\) regression loss on the outputs of Eq. (3).
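A sketch of the loss in Eq. (10) for a single time step follows; the log-\(\sigma\)/tanh parameterization of the Gaussian components is our own assumption for numerical stability, not necessarily the exact parameterization of the released code.

```python
import torch
import torch.nn.functional as F

def gmm_nll_loss(pred, probs, gt_xy, pos_idx):
    """pred: (K, 5) = (mu_x, mu_y, log_sig_x, log_sig_y, rho_raw) for one time step;
    probs: (K,) component scores, treated as logits here; gt_xy: (2,);
    pos_idx: index of the positive component (the intention query whose
    intention point is closest to the GT trajectory endpoint)."""
    mu_x, mu_y, log_sx, log_sy, rho_raw = pred[pos_idx]
    sx, sy = log_sx.exp(), log_sy.exp()
    rho = torch.tanh(rho_raw)                  # keep correlation in (-1, 1)
    dx, dy = (gt_xy[0] - mu_x) / sx, (gt_xy[1] - mu_y) / sy
    one_minus_r2 = 1.0 - rho ** 2
    # log-density of a bivariate Gaussian with correlation rho
    log_f = (-(dx ** 2 + dy ** 2 - 2 * rho * dx * dy) / (2 * one_minus_r2)
             - torch.log(2 * torch.pi * sx * sy * one_minus_r2.sqrt()))
    # cross entropy maximizes the probability of the positive component
    ce = F.cross_entropy(probs.unsqueeze(0), torch.tensor([pos_idx]))
    return -log_f + ce                          # Eq. (10)
```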
## 4 MTR++: Multi-Agent Motion Prediction
The above MTR framework proposed for multimodal motion prediction has demonstrated state-of-the-art performance. However, its scene context modeling module adopts the focal-agent-centric strategy commonly found in previous works [67, 21], which encodes the scene context separately for each focal agent, leading to computational inefficiencies when predicting motion for multiple agents. Although the Scene Transformer model [39] presents a shared context encoding strategy for predicting trajectories of multiple agents, it still centers the scene around a specific agent, potentially limiting its performance for off-center agents due to uneven distribution of shared context information.
To address the aforementioned challenges, we introduce an enhanced version of the MTR framework, denoted as MTR++. As shown in Fig. 4, the MTR++ framework enables simultaneous motion prediction of multiple agents via shared symmetric scene context modeling and mutually-guided intention querying. We elaborate these two improvements in Sec. 4.1 and Sec. 4.2, respectively.
### _Symmetric Scene Context Modeling for All Agents_
To improve the efficiency of predicting future trajectories of multiple agents simultaneously, we propose a symmetric scene context modeling module that employs a shared context encoder to encode complex multimodal scene context information for all agents. In contrast to most existing methods that center the scene around a particular agent, our approach encodes the entire scene symmetrically for each agent. As a result, the encoded scene context features can be directly utilized for predicting the motion of any agent by attaching a motion decoder network.
Fig. 4: The architecture of the MTR++ framework builds upon the initial MTR framework and introduces several enhancements. In the MTR++ framework, we introduce the symmetric context encoder layer, which facilitates the understanding of relationships among tokens within their respective local coordinate systems. By incorporating these symmetrically encoded token features as input, the MTR++ framework employs a joint motion decoder that leverages multiple sets of intention queries. This enables the simultaneous prediction of future trajectories for multiple agents, with the mutually-guided intention querying module facilitating the interaction of future behaviors among different agents. As a result, the MTR++ framework generates more scene-compliant future trajectories, enhancing the overall predictive capabilities of the MTR framework.
**Input Representation with Polyline-Centric Encoding.** We employ the same vectorized representation as in Sec. 3.1 to encode the input context features. However, instead of normalizing all inputs to the global coordinate system centered on one focal agent, we encode the feature of each polyline in a polyline-centric local coordinate system (see Fig. 5). Specifically, we modify the polyline feature encoding process in Eq. (1) by incorporating the coordinate transformation function, denoted as \(\Gamma(\cdot)\), as follows:
\[F_{\text{A}}^{(l)}=\phi\left(\text{MLP}\big{(}\Gamma(S_{\text{A}}^{(\text{g})})\big{)}\right),\quad F_{\text{M}}^{(l)}=\phi\left(\text{MLP}\big{(}\Gamma(S_{\text{M}}^{(\text{g})})\big{)}\right), \tag{12}\]
where \(\Gamma(\cdot)\) transforms the polyline features from an arbitrary global coordinate system to the polyline-centric local coordinate system. Concretely, we use the latest position and moving direction of each agent to determine the local coordinate system of their corresponding polyline, while for the map polylines, we calculate the geometry center and tangent direction of each polyline to determine their local coordinate system.
The encoded features \(F_{\text{A}}^{(l)}\in\mathbb{R}^{N_{\text{a}}\times D}\) and \(F_{\text{M}}^{(l)}\in\mathbb{R}^{N_{\text{m}}\times D}\) (where "\(l\)" indicates the local reference frame) capture the polyline-wise features for the agent history states and map polylines, respectively. Importantly, these polyline features are encoded in their own local coordinate system, independent of any global coordinate system. This provides input token features that are decoupled from the global coordinate system and enables the symmetric modeling of token relations in the subsequent step.
**Attribute Definition of Polyline Tokens.** The features \(F_{\text{A}}^{(l)}\) and \(F_{\text{M}}^{(l)}\) are considered as input tokens in the subsequent transformer network, and their features are concatenated to form the input token feature matrix \(F_{\text{AM}}^{(l)}=[F_{\text{A}}^{(l)},F_{\text{M}}^{(l)}]\in\mathbb{R}^{(N_ {\text{a}}+N_{\text{m}})\times D}\). As in Sec. 3.1, the global positions of these tokens are denoted as \(P_{\text{AM}}^{(\text{g})}\in\mathbb{R}^{(N_{\text{a}}+N_{\text{m}})\times 2}\), which can be defined in an arbitrary global coordinate system. Additionally, each token is associated with a heading direction attribute \(H_{\text{AM}}^{(\text{g})}\in\mathbb{R}^{(N_{\text{a}}+N_{\text{m}})\times 1}\), which is defined similarly to the direction definition as in the transformation function \(\Gamma(\cdot)\) presented in Eq. (12).
**Symmetric Scene Context Modeling with Query-Centric Self-Attention.** In our previous MTR framework, we model the relationship between the input token features using a self-attention module (Eq. (2)) that depends on a global coordinate system centered on a single focal agent. However, this approach hindered the performance of motion prediction for other agents. To address this limitation, we propose a _query-centric self-attention_ module, which models the relationship between all tokens in a symmetric manner, decoupled from any global coordinate system.
Specifically, to explore the relationship between a query token and other tokens in its specific local coordinate system, we perform the attention mechanism separately for each query token. For instance, let us consider the \(i\)-th token as the query. We convert the coordinates and directions of all tokens into the local coordinate system of the query token:
\[R^{(\text{pos})}_{\text{AM}[i,j]}=(P^{(\text{g})}_{\text{AM}[j]}-P^{(\text{g})}_{\text{AM}[i]})\begin{bmatrix}\cos H^{(\text{g})}_{\text{AM}[i]}&-\sin H^{(\text{g})}_{\text{AM}[i]}\\ \sin H^{(\text{g})}_{\text{AM}[i]}&\cos H^{(\text{g})}_{\text{AM}[i]}\end{bmatrix},\qquad R^{(\text{ang})}_{\text{AM}[i,j]}=H^{(\text{g})}_{\text{AM}[j]}-H^{(\text{g})}_{\text{AM}[i]}, \tag{13}\]
where \(i\in\{1,\ldots,N_{a}+N_{m}\}\), and \(j\in\Omega(i)\) indicating the index of its neighboring tokens. \(R_{\text{AM}}^{(\text{pos})}{}_{[i,j]}\in\mathbb{R}^{2}\) and \(R_{\text{AM}}^{(\text{ang})}{}_{[i,j]}\in\mathbb{R}\) indicate the \(j\)-th token's relative position and direction in the local coordinate system of the \(i\)-th query token. We then perform the query-centric self-attention mechanism as follows:
\[F^{\prime}{}^{(l)}_{\text{AM}[i]}=\text{MHSA}\big{(}\text{Q: }[F^{(l)}_{\text{AM}[i]},\text{PE}(R_{\text{AM}[i,i]})],\ \text{K: }\{[F^{(l)}_{\text{AM}[j]},\text{PE}(R_{\text{AM}[i,j]})]\}_{j\in\Omega(i)},\ \text{V: }\{F^{(l)}_{\text{AM}[j]}+\text{PE}(R_{\text{AM}[i,j]})\}_{j\in\Omega(i)}\big{)}, \tag{14}\]
where \(\text{PE}(R_{\text{AM}}{}_{[i,j]})\) indicates the sinusoidal positional encoding of both \(R_{\text{AM}}^{(\text{pos})}{}_{[i,j]}\) and \(R_{\text{AM}}^{(\text{ang})}{}_{[i,j]}\). This query-centric self-attention mechanism models the token relationship in a symmetric manner by integrating the global-coordinate-decoupled token feature \(F_{\text{AM}}^{(l)}\) and relative coordinate \(R_{\text{AM}}\) based on the query token.
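A vectorized sketch of the relative pose computation in Eq. (13) that feeds the query-centric self-attention:

```python
import torch

def relative_pose(pos: torch.Tensor, heading: torch.Tensor):
    """pos: (N, 2) global token positions; heading: (N,) global headings in radians.
    Returns relative positions (N, N, 2) and relative angles (N, N), where row i
    expresses every token in the local frame of query token i (Eq. (13))."""
    diff = pos[None, :, :] - pos[:, None, :]                  # P_j - P_i
    cos_h, sin_h = heading.cos()[:, None], heading.sin()[:, None]
    rel_x = diff[..., 0] * cos_h + diff[..., 1] * sin_h
    rel_y = -diff[..., 0] * sin_h + diff[..., 1] * cos_h
    rel_ang = heading[None, :] - heading[:, None]             # H_j - H_i
    return torch.stack([rel_x, rel_y], dim=-1), rel_ang
```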
Note that the computational cost of the proposed query-centric self-attention module in Eq. (14) is comparable to that of the self-attention module in Eq. (2). The key advantage of the proposed module is that it enables the symmetric encoding of scene context features for each input token, such as each agent, thus allowing the encoded features to be used for predicting the motion of any input agent. This feature enables a shared scene context encoder for simultaneous prediction of the motion of multiple agents.
### _Joint Motion Decoder with Mutually-Guided Queries_
The MTR++ framework utilizes symmetrically encoded scene context features, which are fed to the motion decoder as described in Sec. 3.2, to enable simultaneous motion prediction for multiple focal agents. This simultaneous motion prediction allows for the exploration of future behavior interactions among the agents, which is crucial for making more accurate and scene-compliant motion predictions.
Fig. 5: The comparison of two different scene context encoding modules in the MTR and MTR++ frameworks. The MTR framework adopts the scene context encoding for a single focal agent, where both the polyline-wise features and tokens’ relationships are encoded in a global coordinate system. In contrast, the MTR++ framework encodes both the polyline-wise features and their relationship in their respective local coordinate system via the novel query-centric self-attention module, thus enabling simultaneous motion prediction of multiple agents.
**Mutually-Guided Intention Querying of Multiple Agents.** To enhance the accuracy of motion prediction by enabling agents to interact and influence each other's behavior, as shown in Fig. 6, we propose a _mutually-guided intention querying_ module. However, building such interaction is nontrivial since the intention queries of different focal agents are encoded in their own local coordinate system as in Eq. (4). To maintain the local-encoded features of intention queries while also establishing the spatial relationship among them, we adopt the previously introduced query-centric self-attention module, similar to the one used in Sec. 4.1, to enable the information interaction among all intention queries.
Specifically, to predict future trajectories for \(N_{o}\) focal agents, the motion decoding process is conducted simultaneously in the local coordinate system centered at each focal agent. The intention queries for the focal agents are represented as \(E_{\text{I}}^{(\text{m})}\in\mathbb{R}^{N_{o}\times\mathcal{K}\times D}\) (where "m" indicates multiple focal agents), wherein the intention queries for different focal agents are encoded using Eq. (4) with the same intention points \(I^{(\text{s})}\in\mathbb{R}^{\mathcal{K}\times 2}\).
However, as the intention points are defined in their respective local coordinate systems, in order to facilitate information propagation among the intention queries of different focal agents, we first transform their intention points into the same global coordinate system based on the global positions \(P_{\text{O}}^{(\text{s})}\in\mathbb{R}^{N_{o}\times 2}\) and moving directions \(H_{\text{O}}^{(\text{s})}\in\mathbb{R}^{N_{o}\times 1}\) of the focal agents, as follows:
\[P_{\text{I}}^{(\text{m})}{}_{[t]}=I^{(\text{s})}\begin{bmatrix}\cos H_{\text{ O}}^{(\text{s})}{}_{[t]}&\sin H_{\text{O}}^{(\text{s})}{}_{[t]}\\ -\sin H_{\text{O}}^{(\text{s})}{}_{[t]}&\cos H_{\text{O}}^{(\text{s})}{}_{[t] }\end{bmatrix}+P_{\text{O}}^{(\text{s})}{}_{[t]}, \tag{15}\]
where \(t\in\{1,\ldots,N_{o}\}\) and \(P_{\text{I}}^{(\text{m})}\in\mathbb{R}^{N_{o}\times\mathcal{K}\times 2}\). To build the information interaction among all intention queries of multiple agents, we re-organize the intention points and intention queries as \(P_{\text{I}}^{(\text{m})}\in\mathbb{R}^{(N_{o}\mathcal{K})\times 2}\) and \(E_{\text{I}}^{(\text{m})}\in\mathbb{R}^{(N_{o}\mathcal{K})\times D}\), respectively. Meanwhile, we also assign the heading direction \(H_{\text{I}}^{(\text{m})}\in\mathbb{R}^{(N_{o}\mathcal{K})\times 1}\) for the intention queries for calculating their relative spatial relationship, where the \(\mathcal{K}\) intention queries of the \(t\)-th focal agent share the same heading direction as its moving direction \(H_{\text{O}}^{(\text{s})}{}_{[t]}\).
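A sketch of the transformation in Eq. (15) for one focal agent, batched over its \(\mathcal{K}\) intention points:

```python
import torch

def intention_points_to_global(points_local, agent_pos, agent_heading):
    """points_local: (K, 2) intention points in the focal agent's local frame;
    agent_pos: (2,) global position; agent_heading: scalar heading in radians.
    Returns the (K, 2) intention points in the shared global frame (Eq. (15))."""
    c, s = agent_heading.cos(), agent_heading.sin()
    rot = torch.stack([torch.stack([c, s]), torch.stack([-s, c])])  # matrix in Eq. (15)
    return points_local @ rot + agent_pos
```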
Thus, following Eq. (13), when considering the \(i\)-th intention query as the query token, we transform the coordinates and directions of all intention queries to the local coordinate system of the \(i\)-th query token, as follows:
\[R_{\text{I}}^{(\text{pos})}{}_{[i,j]} =(P_{\text{I}}^{(\text{m})}{}_{[j]}-P_{\text{I}}^{(\text{m})}{}_ {[i]})\begin{bmatrix}\cos H_{\text{I}}^{(\text{m})}{}_{[i]}&-\sin H_{\text{I} }^{(\text{m})}{}_{[i]}\\ \sin H_{\text{I}}^{(\text{m})}{}_{[i]}&\cos H_{\text{I}}^{(\text{m})}{}_{[i]} \end{bmatrix},\] \[R_{\text{I}}^{(\text{ang})}{}_{[i,j]} =H_{\text{I}}^{(\text{m})}{}_{[j]}-H_{\text{I}}^{(\text{m})}{}_{[ i]}, \tag{16}\]
where \(i\in\{1,\ldots,N_{o}\mathcal{K}\}\), and \(j\in\Omega(i)\) indicating the index of its neighboring tokens. Then, we apply the query-centric self-attention module on all intention queries as follows:
\[F^{\prime}{}^{(\text{m})}_{\text{I}[i]}=\text{MHSA}\big{(}\text{Q: }[F^{(\text{m})}_{\text{I}[i]}+E^{(\text{m})}_{\text{I}[i]},\text{PE}(R_{\text{I}[i,i]})],\ \text{K: }\{[F^{(\text{m})}_{\text{I}[j]}+E^{(\text{m})}_{\text{I}[j]},\text{PE}(R_{\text{I}[i,j]})]\}_{j\in\Omega(i)},\ \text{V: }\{F^{(\text{m})}_{\text{I}[j]}+E^{(\text{m})}_{\text{I}[j]}+\text{PE}(R_{\text{I}[i,j]})\}_{j\in\Omega(i)}\big{)}, \tag{17}\]
where \(i\in\{1,\ldots,N_{o}\mathcal{K}\}\), and \(F_{\text{I}}^{(\text{m})}\in\mathbb{R}^{(N_{o}\mathcal{K})\times D}\) indicates the query content feature from the previous transformer decoder layer and is initialized as zero in the first decoder layer.
Finally, the updated query content feature \(F^{\prime}{}^{(\text{m})}_{\text{I}}\in\mathbb{R}^{N_{o}\times\mathcal{K}\times D}\) will be utilized individually for the subsequent scene context aggregation of each focal agent. This aggregation process is the same as described in Eq. (5) and Eq. (6) in the MTR framework. It is worth noting that the positional encoding for the encoded scene elements from the context encoder is defined in the local coordinate system of each focal agent. These resulting query features are then fed into the prediction head, which generates future trajectories for each focal agent. By establishing this information propagation process, the intention queries of multiple agents are guided by each other during the multimodal motion decoding process, ultimately resulting in more informed and realistic predictions of their future trajectories.
## 5 Experiments
### _Experimental Setup_
**Dataset and metrics.** We mainly evaluate our approach using the Waymo Open Motion Dataset (WOMD) [15], a large-scale dataset that captures diverse traffic scenes with interesting interactions among agents. There are two tasks in WOMD with separate evaluation metrics: (1) The _marginal motion prediction challenge_ that independently evaluates the predicted motion of each agent (up to 8 agents per scene). (2) The _joint motion prediction challenge_ that needs to predict the joint future positions of 2 interacting agents for evaluation. For both tasks, the dataset provides 1 second of history data and aims to predict 6 marginal or joint trajectories of the agents for 8 seconds into the future. The dataset contains \(487k\) training scenes, and approximately \(44k\) validation scenes and \(44k\) testing scenes for each challenge. We utilize the official evaluation tool, which calculates important metrics such as mAP and miss rate, as used in the official WOMD leaderboards [56, 58].
In addition to the WOMD, we also evaluate our approach on the Argoverse 2 Motion Forecasting Dataset [59], another large-scale motion prediction dataset. It contains 250,000 scenarios for training and validation. The model takes the five-second history of each scenario as input and predicts the six-second future trajectory of one interested agent, where an HD map is always available to provide map context information. We also utilize the official evaluation tool to calculate the miss rate as the main metric.
Fig. 6: The illustration of the mutually-guided intention querying module.
**Implementation details.** For both the MTR and MTR++ frameworks, we stack 6 transformer encoder layers for context encoding. The road map is represented as multiple polylines, where each polyline contains up to 20 map points (about \(10m\) in WOMD). We select \(N_{m}=768\) nearest map polylines around the interested agents. The number of neighbors in the encoder's local self-attention is set to 16. The hidden feature dimension is set as \(D=256\). For the decoder modules, we stack 6 decoder layers. For dynamic map collection, we collect the closest 128 map polylines from the context encoder for iterative motion refinement. By default, we utilize 64 motion query pairs where their intention points are generated by conducting the k-means clustering algorithm on the training dataset. The number of neighbors for the query-centric self-attention module is set to 16 for the MTR++ framework. To generate 6 future trajectories for evaluation, we use non-maximum suppression (NMS) to select the top 6 predictions from 64 predicted trajectories by calculating the distances between their endpoints, and the distance threshold is set as \(2.5m\). More implementation details of the initial MTR framework can be found in our open-source codebase: [https://github.com/sshaoshuai/MTR](https://github.com/sshaoshuai/MTR).
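A sketch of the endpoint-distance NMS used to pick 6 of the 64 predictions; this is an illustrative implementation, not necessarily the one in the released code.

```python
import torch

def nms_select(trajs: torch.Tensor, scores: torch.Tensor, k: int = 6, thresh: float = 2.5):
    """trajs: (K, T, 2) predicted trajectories; scores: (K,) confidences.
    Greedily keep the highest-scoring trajectories whose endpoints are farther
    than thresh meters from every already-kept endpoint."""
    keep = []
    for i in scores.argsort(descending=True).tolist():
        if all((trajs[i, -1] - trajs[j, -1]).norm() > thresh for j in keep):
            keep.append(i)
        if len(keep) == k:
            break
    return keep
```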
**Training details.** Our model is trained in an end-to-end manner by AdamW optimizer with a learning rate of 0.0001, a weight decay of 0.01, and a batch size of 80 scenes. We train the model for 30 epochs with 8 GPUs, and the learning rate is decayed by a factor of 0.5 every 2 epochs from epoch 20.
### _Main Results_
**Performance comparison for marginal motion prediction.** We evaluate the marginal motion prediction performance of our MTR frameworks by comparing them with leading-edge research on the WOMD test set. As presented in Table I, our initial MTR framework already surpasses previous state-of-the-art approaches [21, 27, 39] with significant improvements. It achieves an mAP increase of \(+8.48\%\) and reduces the miss rate from \(15.11\%\) to \(13.51\%\). Furthermore, our latest MTR++ framework further enhances the performance compared to MTR on all metrics. Particularly, it achieves a \(+2.00\%\) improvement in mAP, showcasing its ability to generate more confident multimodal future trajectories by jointly considering the future behaviors of multiple agents.
Additionally, we also adopt a simple model ensemble strategy, combining predictions from multiple models and employing non-maximum-suppression (NMS) to remove redundant predictions. Applying this ensemble strategy to diverse variants of our MTR frameworks (_e.g._, more decoder layers, different numbers of queries, larger hidden dimension), our approach significantly outperforms the previous state-of-the-art ensemble result [50], increasing the mAP by \(+5.42\%\) and reducing the miss rate from \(13.40\%\) to \(11.22\%\).
Notably, our MTR and MTR++ frameworks have secured the first-place positions in the highly-competitive Waymo Motion Prediction Challenge in 2022 [57] and 2023 [58], respectively. As of May 30, 2023, our MTR++ framework holds the \(1^{st}\) rank on the motion prediction leaderboard of WOMD [58], outperforming other works by a significant margin. These notable achievements highlight the effectiveness of the MTR frameworks.
**Performance comparison for joint motion prediction.** We also evaluate the proposed MTR frameworks on the joint motion prediction benchmark, merging the marginal predictions of two interacting agents into a joint prediction as explicated in [7, 15, 47]. We select the top 6 joint predictions from 36 potential combinations of these agents, with the confidence of each combination being the product of marginal probabilities. As indicated in Table II, our initial MTR framework already surpasses state-of-the-art approaches [39, 47] by substantial margins on all measures, reducing the miss rate from \(49.42\%\) to \(44.11\%\) and enhancing the mAP from \(12.39\%\) to \(20.37\%\). Furthermore, our advanced MTR++ framework, which allows us to concurrently predict future motion for two interactive agents with shared context encoding, amplifies the robust performance of MTR across all metrics, achieving
\begin{table}
\begin{tabular}{c|l|c||c c c c} \hline & Method & Reference & minADE\(\downarrow\) & minFDE\(\downarrow\) & Miss Rate\(\downarrow\) & **mAP \(\uparrow\)** \\ \hline \multirow{8}{*}{Test} & MotionCNN [28] & CVPRw 2021 & 0.7400 & 1.4936 & 0.2091 & 0.2136 \\ & RecOAt [68] & CVPRw 2021 & 0.7703 & 1.6668 & 0.2437 & 0.2711 \\ & DenseTNT [21] & ICCV 2021 & 1.0387 & 1.5514 & 0.1573 & 0.3281 \\ & SceneTransformer [39] & ICLR 2022 & 0.6117 & 1.2116 & 0.1564 & 0.2788 \\ & HDCGI [27] & Arxiv 2022 & 0.5933 & 1.2055 & 0.1511 & 0.2854 \\ & MTR (Ours) & NeurIPS 2022 & 0.6050 & 1.2207 & 0.1351 & 0.4129 \\ & MTR++ (Ours) & - & **0.5906** & **1.1939** & **0.1298** & **0.4329** \\ \cline{2-7} & \({}^{\dagger}\)MultiPath++ [50] & ICRA 2022 & _0.5557_ & _1.1577_ & _0.1340_ & _0.4092_ \\ & \({}^{\dagger}\)MTR++\_Ens (Ours) & - & _0.5581_ & _1.1166_ & _0.1122_ & _0.4634_ \\ \hline \multirow{2}{*}{Val} & MTR (Ours) & NeurIPS 2022 & 0.6046 & 1.2251 & 0.1366 & 0.4164 \\ & MTR++ (Ours) & - & **0.5912** & **1.1986** & **0.1296** & **0.4351** \\ \hline \end{tabular}
\end{table} TABLE I: Performance comparison of marginal motion prediction on the test and validation set of Waymo Open Motion Dataset. \(\dagger\): The results are shown in italic for reference since their performance is achieved with model ensemble techniques.
\begin{table}
\begin{tabular}{c|l|c||c c c c} \hline & Method & Reference & minADE\(\downarrow\) & minFDE\(\downarrow\) & Miss Rate\(\downarrow\) & **mAP \(\uparrow\)** \\ \hline \multirow{8}{*}{Test} & Waymo LSTM baseline [15] & ICCV 2021 & 1.9056 & 5.0278 & 0.7750 & 0.0524 \\ & HeatIRm4 [38] & CVPRw 2021 & 1.4197 & 3.2595 & 0.7224 & 0.0844 \\ & AIR\({}^{2}\)[60] & CVPRw 2021 & 1.3165 & 2.7138 & 0.6230 & 0.0963 \\ & SceneTransformer [39] & ICLR 2022 & 0.9774 & 2.1892 & 0.4942 & 0.1192 \\ & M2I [47] & CVPR 2022 & 1.3506 & 2.8325 & 0.5538 & 0.1239 \\ & MTR (Ours) & NeurIPS 2022 & 0.9181 & 2.0633 & 0.4411 & 0.2037 \\ & MTR++ (Ours) & - & **0.8795** & **1.9509** & **0.4143** & **0.2326** \\ \hline \multirow{2}{*}{Val} & MTR (Ours) & NeurIPS 2022 & 0.9132 & 2.0536 & 0.4372 & 0.1992 \\ & MTR++ (Ours) & - & **0.8859** & **1.9712** & **0.4106** & **0.2398** \\ \hline \end{tabular}
\end{table} TABLE II: Performance comparison of joint motion prediction on the interactive validation and test set of Waymo Open Motion Dataset.
The extraordinary performance enhancements of the MTR++ framework emphasize that, through the adoption of symmetric scene context encoding and mutually-guided intention querying, our MTR++ framework can accurately predict future trajectories that exhibit scene consistency among highly interacting agents. Additionally, we also provide some qualitative results in Fig. 9 to show our predictions in complicated interacting scenarios. Notably, as of May 30, 2023, our MTR++ framework holds the \(1^{st}\) rank on the joint motion prediction leaderboard of WOMD [56].
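For reference, a sketch of how the marginal predictions of two interacting agents are merged into joint predictions, scoring each of the 36 combinations by the product of marginal probabilities and keeping the top 6:

```python
from itertools import product

def marginal_to_joint(trajs_a, probs_a, trajs_b, probs_b, k: int = 6):
    """trajs_*: list of 6 trajectories per agent; probs_*: their marginal probabilities.
    Returns the k highest-confidence (traj_a, traj_b, score) joint predictions."""
    combos = sorted(
        ((pa * pb, i, j) for (i, pa), (j, pb)
         in product(enumerate(probs_a), enumerate(probs_b))),
        reverse=True,
    )
    return [(trajs_a[i], trajs_b[j], score) for score, i, j in combos[:k]]
```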
**Performance comparison on the Argoverse 2 dataset.** Table III reports the performance of our approach on the Argoverse 2 dataset for reference. Since the MTR++ framework is designed for simultaneous motion prediction of multiple focal agents, we evaluate only the MTR framework on this dataset, following its standard marginal motion prediction setting. We compare against the top-10 submissions on the Argoverse 2 leaderboard [1] at the time of our MTR submission. These submissions, primarily developed for the Argoverse 2 Motion Forecasting Competition 2022, represent highly competitive approaches. Notably, our MTR framework achieves new state-of-the-art performance with remarkable improvements in miss-rate-related metrics, highlighting its strong generalizability and robustness.
### _Ablation Study_
We study the effectiveness of each component in our MTR/MTR++ frameworks. To conduct the ablation experiments efficiently, we uniformly sample 20% of the frames (about 97k scenes) from the WOMD training set according to their default order2; we empirically find that this subset has a distribution similar to that of the full training set. All models are evaluated with the marginal motion prediction metrics on the validation set of WOMD.
Footnote 2: The detailed training data split can be found in our open-source codebase: [https://github.com/sshaoshuai/MTR](https://github.com/sshaoshuai/MTR)
**Effects of the learnable intention query.** We investigate the effectiveness of different strategies for generating future trajectories based on encoded context features. These strategies include the simple MLP head [27, 39], the goal-based head [21], the head with 6 latent anchor embeddings [50], and the head with the learnable intention query. The first four rows of Table IV illustrate the performance comparison of these strategies, where our proposed learnable intention query demonstrates significantly superior results. Specifically, our strategy achieves a much better mAP compared to the previous latent anchor embedding [50] (_i.e.,_ +5.53%) and dense-goal-based methods [21, 67] (_i.e.,_ +4.67%). This improvement can be attributed to our mode-specific querying strategy, where each intention query is associated with an explicit intention point, enabling more accurate and precise multimodal predictions.
**Effects of the distribution of intention points.** As introduced in Sec. 3.2, we utilize the k-means clustering algorithm to generate 64 intention points, which serve as the foundation for our intention queries. To compare this approach with a straightforward uniform sampling strategy, we uniformly sample 8\(\times\)8 = 64 intention points covering the range of the trajectory distribution for each category (see Fig. 3). The results presented in Table V show a significant drop in performance when the k-means clustering algorithm is replaced with uniform sampling. This comparison highlights the superiority of k-means clustering: it produces a more accurate and comprehensive distribution of intention points, and thus captures the diverse future motion intentions of the agent of interest with a small number of intention points.
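For concreteness, generating the intention points amounts to clustering the endpoints of the ground-truth training trajectories. The sketch below is our illustration (the function name is assumed, and scikit-learn stands in for whatever tooling the authors used), applied once per agent category:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_intention_points(gt_trajectories, num_queries=64, seed=0):
    """Cluster ground-truth trajectory endpoints into intention points.

    gt_trajectories: (N, T, 2) training trajectories for one category
    (vehicle / pedestrian / cyclist) in the agent-centric frame.
    Returns (num_queries, 2) cluster centers, one per intention query.
    """
    endpoints = gt_trajectories[:, -1, :]              # (N, 2) final positions
    km = KMeans(n_clusters=num_queries, random_state=seed, n_init=10)
    km.fit(endpoints)
    return km.cluster_centers_
```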
**Effects of the iterative trajectory refinement.** In Sec. 3.1, we introduce stacked transformer decoder layers that iteratively refine the predicted trajectories by continually aggregating fine-grained features via dynamic map collection. As shown in the last two rows of Table IV, this iterative refinement reduces the miss rate by 1.48% and improves mAP by +1.6%. By continually aggregating trajectory-specific features from the context encoder with the proposed intention queries, the refinement process effectively improves the accuracy and quality of the predicted trajectories.
**Effects of local attention for context encoding.** Table VI demonstrates that the utilization of local self-attention in our context encoders leads to slightly superior performance compared to global attention when using the same number of map polylines as input. This finding confirms the significance of incorporating the input's local structure for more effective context encoding, and the inclusion of such prior knowledge through local attention positively impacts performance. Moreover, local attention proves to be more
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline Trajectory Generation & Iterative Refinement & minADE \(\downarrow\) & Miss Rate \(\downarrow\) & **mAP \(\uparrow\)** \\ \hline MLP & & 0.6870 & 0.2103 & 0.2747 \\ Dense Goals & & 1.0544 & 0.1936 & 0.2912 \\ Latent Embedding & & 0.6564 & 0.1882 & 0.2826 \\ Intention Query & & 0.6885 & 0.1723 & 0.3379 \\ Intention Query & ✓ & **0.6557** & **0.1575** & **0.3539** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Effects of different strategies for generating trajectories from encoded context features in the MTR framework.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Method & Miss Rate (K=6) \(\downarrow\) & Miss Rate (K=1) \(\downarrow\) & brier-minFDE (K=6) \(\downarrow\) \\ \hline MTR (Ours) & **0.15** & **0.58** & **1.98** \\ TENET [55] & 0.19 & 0.61 & **1.90** \\ OPPred & 0.19 & 0.60 & 1.92 \\ QML & 0.19 & 0.62 & 1.95 \\ GANet & 0.17 & 0.60 & 1.97 \\ VILaneIter & 0.19 & 0.61 & 2.00 \\ QCNet & 0.21 & 0.60 & 2.14 \\ THOMAS [20] & 0.20 & 0.64 & 2.16 \\ HDGT [27] & 0.21 & 0.66 & 2.24 \\ GNA & 0.29 & 0.71 & 2.45 \\ vilab & 0.29 & 0.71 & 2.47 \\ \hline \hline \end{tabular}
\end{table} TABLE III: The performance comparison on the test set leaderboard of the Argoverse 2 dataset. \(K\) is the number of predicted trajectories for calculating the evaluation metrics.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Distribution of Intention Points & minADE \(\downarrow\) & Miss Rate \(\downarrow\) & mAP \(\uparrow\) \\ \hline uniform grids & 0.7022 & 0.1952 & 0.3205 \\ k-means clustering & **0.6557** & **0.1575** & **0.3539** \\ \hline \hline \end{tabular}
\end{table} TABLE V: Effects of different strategies for generating intention points.
memory-efficient, allowing for performance improvements even when increasing the number of map polylines from 256 to 1,024. In contrast, global attention suffers from memory limitations due to its quadratic complexity.
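Local self-attention of this kind can be implemented by restricting each token to its \(k\) nearest neighbors. The following sketch is ours (the neighborhood size and function name are assumptions) and builds such a mask for a standard attention layer:

```python
import torch

def knn_attention_mask(token_pos, k=16):
    """Boolean (N, N) mask blocking attention outside each token's
    k nearest tokens; True entries are *disallowed*, matching the
    attn_mask convention of torch.nn.MultiheadAttention.

    token_pos: (N, 2) positions of the polyline/agent tokens.
    """
    dist = torch.cdist(token_pos, token_pos)           # (N, N) pairwise distances
    knn_idx = dist.topk(k, largest=False).indices      # each token's k neighbors
    mask = torch.ones_like(dist, dtype=torch.bool)     # start with everything blocked
    mask.scatter_(1, knn_idx, False)                   # unblock the k neighbors
    return mask
```

Each token's nearest neighbor is itself (distance zero), so self-attention remains allowed.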
**Effects of the symmetric scene context modeling module.** In Sec. 4.1, we present the symmetric scene context encoding module, which utilizes a shared context encoder for the motion prediction of multiple interested agents in the same scene. Table VII demonstrates the effectiveness of incorporating our symmetric context encoder into the MTR framework (referred to as MTR+). With MTR+, we achieve comparable performance to the MTR framework while significantly reducing both inference latency and memory cost. Specifically, when the number of interested agents increases from 8 to 32, MTR requires individual scene context encoding for each agent, causing a substantial increase in inference latency and memory cost. In contrast, MTR+ utilizes the query-centric self-attention module to encode the entire scene with a shared symmetric context encoder, leading to a remarkable reduction in inference latency (from 193ms to 98ms for 32 interested agents) and memory cost (from 15.6GB to 4.7GB for 32 interested agents). Furthermore, we provide a breakdown analysis of inference latency in Fig. 7, which shows that as the number of interested agents increases, the latency of MTR's context encoder rises sharply, while the latency of MTR+'s context encoder remains constant: the shared context features enable the prediction of future trajectories for any number of agents within the scene.
**Effects of the mutually-guided intention querying strategy.** Building upon our proposed symmetric scene context encoder for joint motion prediction, we introduce the mutually-guided intention querying strategy in Sec. 4.2. This strategy enables the interaction of future behaviors among multiple agents through the propagation of information among their intention queries. In Table VII, we observe that the mutually-guided intention querying strategy significantly enhances the performance of MTR+ with a remarkable mAP improvement of \(+2.49\%\). This improvement demonstrates the effectiveness of broadcasting the potential future behaviors of each agent to other agents via their intention queries, allowing MTR++ to predict more confident future behaviors by considering the overall development of the scene elements.
Furthermore, as each agent incorporates multiple intention queries (_i.e._, 64 in MTR frameworks), we investigate the interaction among these intention queries within each agent and across different agents. As presented in Table VIII, removing either type of interaction results in a significant decrease in performance by at least \(-2.13\%\) in terms of mAP. Removing both types of interaction leads to a larger performance drop of \(-2.70\%\) in terms of mAP. This analysis highlights the importance of the interaction among an agent's different intention queries, enabling the generation of more accurate multimodal future trajectories. Additionally, the interaction among intention queries across different agents empowers the model to predict informed and scene-compliant future trajectories for multiple agents, thereby yielding additional performance improvement.
**Effects of the query-centric self-attention.** We introduce the query-centric self-attention module in Sec. 4.1, which plays a vital role in modeling the relationship between tokens within their respective local coordinate systems. This module enables both symmetric scene context modeling and mutually-guided intention querying. In Table IX, we examine the effects of different positional encoding strategies in query-centric self-attention. The results in the first three rows indicate that query-centric relative positional encoding is crucial for achieving optimal performance. Removing this encoding or replacing it with global positional encoding significantly decreases performance by \(-2.41\%\) and \(-2.91\%\) in terms of mAP, respectively. This finding demonstrates the importance of modeling the relationship in the local coordinate system of each query token, as it benefits the simultaneous motion prediction for multiple agents by treating all tokens symmetrically. Additionally, comparing the last three rows of Table IX, we observe that adding positional embeddings to both the query/key tokens and value tokens yields the best performance.
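The core of the query-centric encoding is expressing every token's position in every other token's local frame before computing a relative positional embedding. A minimal sketch follows (ours; the function name is assumed and the subsequent sinusoidal embedding step is omitted):

```python
import torch

def relative_positions(pos, heading):
    """out[i, j] = token j's position in token i's local frame.

    pos: (N, 2) global token positions; heading: (N,) global orientations.
    The (N, N, 2) result can be fed to a sinusoidal embedding to obtain
    the query-centric relative positional encodings.
    """
    delta = pos[None, :, :] - pos[:, None, :]          # (N, N, 2) global offsets
    cos, sin = heading.cos(), heading.sin()
    # Rotate row i's offsets by -heading[i] into token i's frame.
    x = cos[:, None] * delta[..., 0] + sin[:, None] * delta[..., 1]
    y = -sin[:, None] * delta[..., 0] + cos[:, None] * delta[..., 1]
    return torch.stack([x, y], dim=-1)
```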
**Effects of the number of intention queries.** We conduct an ablation study to investigate the impact of the number of intention queries on the performance of the MTR++ framework. In Fig. 8, we vary the number of intention queries by generating their intention points using the k-means clustering algorithm on the training dataset. The orange curves in Fig. 8 illustrate that the performance of the MTR++ framework improves significantly as the number of intention queries increases from 6 to 64. However, the performance saturates when the number of intention queries is further increased to 100. This ablation experiment highlights that incorporating 64 intention queries in the MTR frameworks already enables the coverage of diverse
\begin{table}
\begin{tabular}{c c|c c c} \hline Attention & \#Polyline & minADE \(\downarrow\) & Miss Rate \(\downarrow\) & **mAP \(\uparrow\)** \\ \hline Global & 256 & 0.6710 & 0.1623 & 0.3450 \\ Global & 512 & 0.6677 & 0.1610 & 0.3495 \\ Global & 768 & OOM & OOM & OOM \\ \hline Local & 256 & 0.6692 & 0.1633 & 0.3522 \\ Local & 512 & 0.6685 & 0.1599 & 0.3515 \\ Local & 768 & **0.6557** & 0.1575 & 0.3539 \\ Local & 1024 & 0.6601 & **0.1555** & **0.3564** \\ \hline \end{tabular}
\end{table} TABLE VI: Effects of local self-attention in the transformer encoder of the MTR framework. “#polyline” is the number of input map polylines used for context encoding. “OOM” indicates running out of memory.
Fig. 7: The comparison of inference latency across different numbers of focal agents (_i.e._, interested agents) required to predict their future trajectories. The hatched area at the bottom of each pillar indicates the inference latency of the corresponding context encoder. MTR+ indicates the results obtained by only incorporating the symmetric context encoder into the MTR framework, while MTR++ indicates the results by further incorporating the mutually-guided intention querying strategy.
and wide-ranging future trajectories. This achievement is attributed to the design of learnable intention queries, which proves to be more efficient compared to previous goal-based strategies [21, 67] that require a large number of goal candidates to achieve satisfactory performance.
**Discussion of explicit intention queries and implicit latent embeddings.** In contrast to the latent anchor embeddings proposed in the state-of-the-art work MultiPath++ [50], our MTR frameworks establish a direct correspondence between intention queries and motion modes. This explicit mapping leads to faster convergence and improved performance. Fig. 8 supports two findings when varying the number of queries in the motion decoder: (1) Our strategy outperforms latent embeddings in terms of mAP and miss rate as the number of queries increases. Because each intention query is assigned to a particular motion mode, training is more stable; with latent embeddings, a ground-truth trajectory can randomly associate with different anchor embeddings during training due to the lack of explicit correspondence, and this randomness leads to training instability and decreased performance as the number of anchor embeddings grows. (2) The explicit semantic interpretation of each intention query also contributes to its superior mAP: intention queries predict trajectories with more confident scores, which directly benefits the mAP metric. Overall, the explicit correspondence between intention queries and motion modes results in faster convergence, enhanced stability, and improved performance compared to previous latent embeddings.
**Effects of dense future prediction.** We investigate the impact of the dense future prediction module in Table X. Removing this module significantly decreases the performance of the MTR++ framework, with a \(-1.48\%\) drop in mAP. We attribute this to two beneficial effects of the module: it provides dense supervision for the context encoder, enabling it to learn more effective features for the motion prediction of all agents in the scene, and it enhances motion decoding by augmenting agent features with their potential future trajectories, thereby enriching the contextual information available for multimodal motion prediction.
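A minimal sketch of such a module is given below (ours; the hidden sizes, horizon, and fusion layer are assumptions rather than the released implementation): an auxiliary head predicts one future trajectory per scene agent, and the prediction is fused back into that agent's feature.

```python
import torch
from torch import nn

class DenseFuturePrediction(nn.Module):
    """Auxiliary head: predict a single future trajectory for every agent
    and fuse it back into the agent features used by the decoder."""

    def __init__(self, d_model=256, horizon=80):
        super().__init__()
        self.traj_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, horizon * 2),           # (x, y) per future step
        )
        self.fuse = nn.Linear(d_model + horizon * 2, d_model)

    def forward(self, agent_feats):                    # (N_agents, d_model)
        dense_future = self.traj_head(agent_feats)     # supervised with GT futures
        fused = self.fuse(torch.cat([agent_feats, dense_future], dim=-1))
        return fused, dense_future.view(-1, dense_future.shape[-1] // 2, 2)
```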
**Effects of the number of decoder layers**. We investigate the number of transformer decoder layers in the MTR++ framework in Table XI. We observe a consistent improvement in performance as we increase the number of decoder layers from 1 to 6. This improvement can be attributed to the stacked transformer decoder layers with the mutually-guided intention querying module, which facilitates the generation of more scene-compliant future trajectories through iterative trajectory refinement based on the predicted behaviors of other agents. However, increasing the number of decoder
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dense Future Prediction & minADE \(\downarrow\) & Miss Rate \(\downarrow\) & **mAP \(\uparrow\)** \\ \hline & 0.6662 & 0.1639 & 0.3606 \\ ✓ & **0.6490** & **0.1559** & **0.3754** \\ \hline \hline \end{tabular}
\end{table} TABLE X: Effects of the dense future prediction module in the MTR++ framework.
layers to 9 does not yield further improvement, suggesting a diminishing return. As a result, we adopt 6 decoder layers in our MTR++ framework to strike a balance between performance and efficiency.
## 6 Conclusion
In this paper, we have introduced the Motion TRansformer (MTR) frameworks as novel solutions for motion prediction in autonomous driving systems. The MTR frameworks employ a transformer encoder-decoder structure with learnable intention queries, effectively combining global intention localization and local movement refinement processes. This design enables the accurate determination of the agent's intent and adaptive refinement of predicted trajectories, resulting in efficient and precise prediction of multimodal future trajectories. Moreover, the proposed MTR++ framework enhances these capabilities by incorporating symmetric scene context modeling and mutually-guided intention querying modules, enabling the prediction of multimodal motion for multiple agents in a scene-compliant manner. Experimental results on the large-scale WOMD dataset demonstrate the state-of-the-art performance of the MTR frameworks on both marginal and joint motion prediction benchmarks.
|
2301.13599 | An AMM minimizing user-level extractable value and
loss-versus-rebalancing | We present V0LVER, an AMM protocol which solves an incentivization trilemma
between users, passive liquidity providers, and block producers. V0LVER enables
users and passive liquidity providers to interact without paying MEV or
incurring uncontrolled loss-versus-rebalancing to the block producer. V0LVER is
an AMM protocol built on an encrypted transaction mempool, where transactions
are decrypted after being allocated liquidity by the AMM. V0LVER ensures this
liquidity, given some external market price, is provided at that price in
expectancy. This is done by incentivizing the block producer to move the pool
price to the external market price. With this, users transact in expectancy at
the external market price in exchange for a fee, with AMMs providing liquidity
in expectancy at the external market price. Under block producer and liquidity
provider competition, all of the fees in V0LVER approach zero. Without block
producer arbitrage, V0LVER guarantees fall back to those of an AMM, albeit free
from loss-versus-rebalancing and user-level MEV. | Conor McMenamin, Vanesa Daza | 2023-01-31T12:56:18Z | http://arxiv.org/abs/2301.13599v2 | # An AMM minimizing user-level extractable value and loss-versus-rebalancing
###### Abstract
We present V0LVER, an AMM protocol which solves an incentivization trilemma between users, passive liquidity providers, and block producers. V0LVER enables users and passive liquidity providers to interact without paying MEV or incurring uncontrolled loss-versus-rebalancing to the block producer. V0LVER is an AMM protocol built on an encrypted transaction mempool, where transactions are decrypted after being allocated liquidity by the AMM. V0LVER ensures this liquidity, given some external market price, is provided at that price in expectancy. This is done by incentivizing the block producer to move the pool price to the external market price. With this, users transact in expectancy at the external market price in exchange for a fee, with AMMs providing liquidity in expectancy at the external market price. Under block producer and liquidity provider competition, all of the fees in V0LVER approach zero. Without block producer arbitrage, V0LVER guarantees fall back to those of an AMM, albeit free from loss-versus-rebalancing and user-level MEV.
Keywords: Extractable Value · Decentralized Exchange · Incentives · Blockchain
## 1 Introduction
AMMs have emerged as a dominant medium for decentralized token exchange. This is due to several important properties making them ideal for decentralized liquidity provision. AMMs are efficient computationally, have minimal storage needs, matching computations can be done quickly, and liquidity providers (LPs) can be passive. Thus, AMMs are uniquely suited to the severely computation- and storage-constrained environment of blockchains.
Unfortunately, the benefits of AMMs are not without significant costs. For users sending orders to an AMM, these orders can be front-run, sandwiched, back-run, or censored by the block producer in a phenomenon popularized as
MEV [8]. Current estimates for MEV against AMM users on Ethereum are upwards of $600M [17, 9]. By the nature of AMMs and their continuous liquidity curves, the amount of MEV extractable from an order is increasing in order impact (related in large part to order size and slippage tolerance). Thus, MEV effectively caps the trade size allowable on current AMMs when compared to the costs for execution on MEV-protected centralized exchanges. This is a critical barrier for DeFi, and blockchain adoption in general.
Another one of these significant costs for AMMs is definitively formalized in [14] as _loss-versus-rebalancing_ (LVR). It is proved that as the underlying price of a swap moves around in real-time, the discrete-time progression of AMMs leave arbitrage opportunities against the AMM. In centralized finance, market makers (MMs) typically adjust to new price information before trading. This comes at a considerable cost to AMMs (for constant function MMs (CFMMs), [14] derives the cost to be quadratic in realized moves), with similar costs for AMMs derived quantitatively in [15, 6]. These costs are being realized by LPs in current AMM protocols. Furthermore, toxic order flow, of which LVR is a prime example, is consistently profiting against AMM LPs (Figure 1).
These costs are dooming DeFi, and current AMM design is clearly unsatisfactory. In this paper, we provide V0LVER, an AMM protocol which formally protects against both MEV and LVR, beckoning a new era for AMMs, and DeFi as a whole.
Figure 1: Toxicity of Uniswap V3 Order Flow [19]. This graph aggregates the PnL of all trades on the Uniswap V3 WETH/USDC pool, measuring PnL of each order after 5 minutes, 1 hour, and 1 day. These are typical time periods within which arbitrageurs close their positions against external markets. This demonstrates the current losses being suffered by AMM pools are significant, consistent, and unsustainable. As LVR is significant and consistent, a large part of these losses can be prevented by minimizing LVR.
### Our Contribution
In this paper we introduce V0LVER 3, an AMM which provides arbitrarily high protection against user-level MEV and LVR. V0LVER is the first AMM to align the incentives of the three, typically competing, entities in AMMs: the user, the pool, and the block producer. This is done by ensuring that, at all times, a block producer is incentivized to move the pool to the price maximizing LVR. When the block producer chooses a price, the block producer is forced to assert that this price is correct, a technique introduced in [13]. Unfortunately, the protocol in [13] gives the block producer total power to extract value from users, because order information is revealed to the block producer before the order is allocated a trading price on the blockchain. To address this, V0LVER is built on an encrypted mempool. Modern cryptographic tools allow us to encrypt the mempool using zero-knowledge based collateralized commit-reveal protocols [11, 3, 12, 20], delay encryption [5, 7] and/or threshold encryption [2]. We assume the existence of such a mempool within which all sensitive order information is hidden until the order has been allocated a price against the AMM. Given these encrypted orders, we demonstrate that a block producer forced to show liquidity to such an order maximizes her own utility by showing liquidity centred around the external market price (bid below the price and offered above the price).4
Footnote 3: near-**0**E**xtractable **V**alue and **L**oss-**V**ersus-**R**ebalancing \(\leadsto\)**V0LVER
Footnote 4: This holds true in many CFMMs, including the famous Uniswap V2 protocol [1]
As such, the external market price is the price point maximizing the block producer's LVR extraction (due to the replicated LVR protection of [13]), and it is also the price around which her profit is maximized when she is forced to trade against some (varying) percentage of indistinguishable orders. This strictly incentivizes block producers to move the price of a V0LVER pool to the external market price. This provides users with an AMM where the expected trade price in the presence of arbitrageurs is always the external market price, excluding fees, and where the LVR against the pool is minimized when these arbitrageurs are competing. Although batching orders against AMM liquidity has been proposed as a defense against LVR [18], naively batching orders against an AMM still allows a block producer to extract LVR by censoring user orders. In V0LVER, block producers are effectively forced to immediately repay LVR, while being incentivized to include order commitments in the blockchain and allocate liquidity to these orders through the AMM.
## 2 Related Work
As the phenomenon of LVR has only recently been identified, there are only two academic papers on the subject of LVR protection [10, 13] to the best of our knowledge, with no work protecting against both LVR and user-level MEV.
In [10], the AMM must receive the price of a swap from a trusted oracle before users can interact with the pool. Such sub-block time price data requires centralized sources which are prone to manipulation, or require the active participation of AMM representatives, a contradiction of the passive nature of AMMs
and their liquidity providers. We see this as an unsatisfactory dependency for DeFi protocols.
Our work is based on some of the techniques of the Diamond protocol as introduced in [13]. The Diamond protocol requires block producers to effectively attest to the final price of the block given the orders that are to be proposed to the AMM within the block. This technique requires the block producer to know exactly what orders are going to be added to the blockchain. This unfortunately gives the block producer total freedom to extract value from users submitting orders to the AMM. With V0LVER, we address this issue while keeping the LVR protection guarantees of Diamond.
Encrypting the transaction mempool using threshold encryption controlled by a committee has been proposed in [2] and applied in [16]. In [16], a DEX involving an AMM and based on frequent batch auctions [4] is proposed. This DEX does not provide LVR resistance, and incentivizes transaction censorship when a large LVR opportunity arises on the DEX. This is protected against in V0LVER.
## 3 Preliminaries
This section introduces the key terminology and definitions needed to understand LVR, and the proceeding analysis. In this work we are concerned with a single swap between token \(x\) and token \(y\). We use \(x\) and \(y\) subscripts when referring to quantities of the respective tokens. The external market price of a swap is denoted by \(\epsilon\), with the price of a swap quoted as the quantity of token \(x\) per token \(y\).
### Constant Function Market Makers
A CFMM is characterized by _reserves_\((R_{x},R_{y})\in\mathbb{R}_{+}^{2}\) which describes the total amount of each token in the pool. The price of the pool is given by _pool price function_\(P:\mathbb{R}_{+}^{2}\rightarrow\mathbb{R}\) taking as input pool reserves \((R_{x},R_{y})\). \(P()\) has the following properties:
\[\begin{split}&\text{(a) }P()\text{ is everywhere differentiable, with }\frac{\partial P}{\partial R_{x}}>0,\ \frac{\partial P}{\partial R_{y}}<0.\\ &\text{(b) }\lim_{R_{x}\to 0}P=0,\ \lim_{R_{x}\rightarrow\infty}P=\infty,\ \lim_{R_{y}\to 0}P=\infty,\ \lim_{R_{y}\rightarrow\infty}P=0.\\ &\text{(c) If }P(R_{x},R_{y})=p,\text{ then }P(R_{x}+cp,R_{y}+c)=p,\ \forall c>0. \end{split} \tag{1}\]
For a CFMM, the _feasible set of reserves_\(C\) is described by:
\[C=\{(R_{x},R_{y})\in\mathbb{R}_{+}^{2}:f(R_{x},R_{y})=k\} \tag{2}\]
where \(f:\mathbb{R}_{+}^{2}\rightarrow\mathbb{R}\) is the pool invariant and \(k\in\mathbb{R}\) is a constant. The pool is defined by a smart contract which allows any player to move the pool reserves
from the current reserves \((R_{x,0},R_{y,0})\in C\) to any other reserves \((R_{x,1},R_{y,1})\in C\) if and only if the player provides the difference \((R_{x,1}-R_{x,0},R_{y,1}-R_{y,0})\).
Whenever an arbitrageur interacts with an AMM pool, say at time \(t\) with reserves \((R_{x,t},R_{y,t})\), we assume as in [14] that the arbitrageur always moves the pool reserves to a point which maximizes arbitrageur profits, exploiting the difference between \(P(R_{x,t},R_{y,t})\) and the external market price at time \(t\), denoted \(\epsilon_{t}\). Therefore, the LVR between two blocks \(B_{t}\) and \(B_{t+1}\) where the reserves of the AMM at the end of \(B_{t}\) are \((R_{x,t},R_{y,t})\) and the external market price when creating block \(B_{t+1}\) is \(\epsilon_{t+1}\) is:
\[R_{x,t}-R_{x,t+1}+(R_{y,t}-R_{y,t+1})\epsilon_{t+1}. \tag{3}\]
In this paper, we consider only the subset of CFMMs in which, given the LVR extracted in block \(B_{t+1}\) corresponds to reserves \((R_{x,t+1},R_{y,t+1})\), \(P(R_{x,t+1},R_{y,t+1})\)\(=\epsilon_{t+1}\). This holds for Uniswap V2 pools, among others.
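To make Eq. (3) concrete, the sketch below (ours, purely illustrative) computes the LVR of a Uniswap V2 pool, \(f(x,y)=xy\), for which the post-arbitrage reserves at a target price are available in closed form:

```python
import math

def arb_reserves(k, price):
    """Uniswap V2 reserves with invariant k after arbitrage to `price` (= R_x / R_y)."""
    r_y = math.sqrt(k / price)
    return price * r_y, r_y                            # (R_x, R_y)

def lvr(r_x_t, r_y_t, eps_next):
    """Eq. (3): arbitrage profit between blocks, valued at the external price."""
    r_x_next, r_y_next = arb_reserves(r_x_t * r_y_t, eps_next)
    return (r_x_t - r_x_next) + (r_y_t - r_y_next) * eps_next
```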
### LVR-resistant AMM
We provide here an overview of the most important features of Diamond [13], an AMM protocol which is proved to provide arbitrarily high LVR protection under competition to capture LVR among block producers. In V0LVER, we adapt these features for use on an encrypted transaction mempool.
A Diamond pool \(\Phi\) is described by reserves \((R_{x},R_{y})\), a pricing function \(P()\), a pool invariant function \(f()\), an _LVR-rebate parameter_\(\beta\in(0,1)\), and _conversion frequency_\(T\in\mathbb{N}\). The authors also define a _corresponding CFMM pool_ of \(\Phi\), denoted _CFMM\((\Phi)\)_. _CFMM\((\Phi)\)_ is the CFMM pool with reserves \((R_{x},R_{y})\) whose feasible set is described by pool invariant function \(f()\) and pool constant \(k=f(R_{x},R_{y})\). Conversely, \(\Phi\) is the _corresponding V0LVER pool_ of _CFMM\((\Phi)\)_. The authors note that _CFMM\((\Phi)\)_ changes every time the \(\Phi\) pool reserves change. The protocol progresses in blocks, with one reserve update possible per block.
For an arbitrageur wishing to move the price of _CFMM\((\Phi)\)_ to \(p\) from starting reserves \((R_{x,0},R_{y,0})\), let this require \(\Delta_{y}>0\) tokens to be added to _CFMM\((\Phi)\)_, and \(\Delta_{x}>0\) tokens to be removed from _CFMM\((\Phi)\)_. The same price in \(\Phi\) is achieved by the following process:
1. Adding \((1-\beta)\Delta_{y}\) tokens to \(\Phi\) and removing \((1-\beta)\Delta_{x}\) tokens.
2. Removing \(\delta_{x}>0\) tokens such that: \[P(R_{x,0}-(1-\beta)\Delta_{x}-\delta_{x},R_{y,0}+(1-\beta)\Delta_{y})=p.\] (4) These \(\delta_{x}\) tokens are added to the _vault_ of \(\Phi\).
Vault tokens are periodically re-entered into \(\Phi\) through what is effectively an auction process, where the tokens being re-added are in a ratio which approximates the external market price at the time. The main result of [13] is the proof that if a block producer interacts with \(\Phi\) when the LVR parameter is \(\beta\), and there is an LVR opportunity of \(LVR\) in \(CFMM(\Phi)\), then the maximum LVR in \(\Phi\) is \((1-\beta)LVR\). This result is stated formally therein as follows:
Theorem 3.1: _For a CFMM pool \(CFMM(\Phi)\) with LVR of \(L>0\), the LVR of \(\Phi\), the corresponding pool in Diamond, has expectancy of at most \((1-\beta)L\)._
In this paper we use the same base functionality of Diamond to restrict the LVR of block producers. Given a block producer wants to move the price of \(CFMM(\Phi)\) to some price \(p\) to extract maximal LVR \(LVR\), the maximal LVR in \(\Phi\) of \((1-\beta)LVR\) is also achieved by moving the price to \(p\). An important point to note about applying LVR rebates as done in [13] is that directly after tokens are placed in the vault, the pool constant drops. This must be considered when calculating the profitability of an arbitrageur extracting LVR from a Diamond pool. We do this when analyzing the profitability of V0LVER in Section 5. Importantly, tokens are eventually re-added to the pool, and over time the expected value of the pool constant is increasing, as demonstrated in [13].
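A sketch of the Diamond-style update for a Uniswap V2 invariant follows (ours; it assumes the target price is below the current pool price, so token \(y\) is added and token \(x\) removed, and the symmetric case is analogous):

```python
import math

def diamond_update(r_x, r_y, price, beta):
    """Move the pool price to `price`, letting only (1 - beta) of the
    full arbitrage trade touch the reserves, then set delta_x tokens
    aside in the vault so the pool price still lands on `price` (Eq. (4))."""
    assert price <= r_x / r_y, "sketch assumes a downward price move"
    k = r_x * r_y
    r_y_full = math.sqrt(k / price)                    # reserves after a full arbitrage
    r_x_full = price * r_y_full
    big_dx, big_dy = r_x - r_x_full, r_y_full - r_y    # full trade: remove x, add y
    new_y = r_y + (1 - beta) * big_dy
    x_before_vault = r_x - (1 - beta) * big_dx
    vault_x = x_before_vault - price * new_y           # delta_x solving Eq. (4)
    return x_before_vault - vault_x, new_y, vault_x    # (new R_x, new R_y, vault)
```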
## 4 Our Protocol
We now outline the model in which we construct V0LVER, followed by a detailed description of V0LVER.
### Model
In this paper we consider a blockchain in which all transactions are attempting to interact with a single V0LVER pool between tokens \(x\) and \(y\).
1. A transaction submitted by a player for addition to the blockchain while observing blockchain height \(H\), is finalized in a block of height at most \(H+T\), for some known \(T>0\).
2. The token swap has an external market price \(\epsilon\), which follows a Martingale process.
3. There exists a population of arbitrageurs able to frictionlessly trade at external market prices, who continuously monitor and interact with the blockchain.
4. Encrypted orders are equally likely to buy or sell tokens at \(\epsilon\), distributed symmetrically around \(\epsilon\).
### Protocol Framework
This section outlines the terminology and functionalities used in V0LVER. It is intended as a reference point to understand the core V0LVER protocol. Specifically, we describe the possible transactions in V0LVER, the possible states that V0LVER orders/order commitments can be in, and the possible actions of block producers. As in the protocol of Section 3.2, a V0LVER pool \(\Phi\) with reserves \((R_{x},R_{y})\) is defined with respect to a CFMM pool, denoted \(CFMM(\Phi)\), with reserves \((R_{x},R_{y})\), a pricing function \(P()\) under the restrictions of Section 3.1, and a pool invariant function \(f()\).
Allocation Pools.
Orders in V0LVER are intended to interact with the AMM pool with some delay due to the commit-reveal nature of the orders. Therefore, we need to introduce the concept of allocated funds to be used when orders eventually get revealed. To do this, we define an _allocation pool_. For orders of size either \(size_{x}\) or \(size_{y}\) known to be of maximum size \(max_{x}\) or \(max_{y}\), the allocation pool consists of \((\lambda_{x},\lambda_{y})\) tokens such that:
\[f(R_{x},R_{y})=f(R_{x}+max_{x},R_{y}-\lambda_{y})=f(R_{x}-\lambda_{x},R_{y}+max _{y}). \tag{5}\]
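For a Uniswap V2 invariant, Eq. (5) can be solved in closed form for the escrowed amounts; the following sketch (ours, with an assumed function name) computes \(\lambda_{x}\) and \(\lambda_{y}\):

```python
def allocation_pool_sizes(r_x, r_y, max_x, max_y):
    """Solve Eq. (5) for f(x, y) = x * y: the tokens the pool must escrow
    to absorb a worst-case one-sided imbalance of max_x (resp. max_y)."""
    k = r_x * r_y
    lam_y = r_y - k / (r_x + max_x)                    # f(r_x + max_x, r_y - lam_y) = k
    lam_x = r_x - k / (r_y + max_y)                    # f(r_x - lam_x, r_y + max_y) = k
    return lam_x, lam_y
```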
For such an allocation pool, let the total user tokens being sold be \(\delta_{x}\) and \(\delta_{y}\), with \(\delta_{x}>\delta_{y}P(R_{x},R_{y})\). That is, there are more token \(x\) being sold by users than the token \(y\) required to match user orders against each other at the pool price \(P(R_{x},R_{y})\), causing an imbalance. This requires some additional \(\Delta_{y}\) tokens from the allocation pool to satisfy the imbalance. If these orders are market orders5, the execution price \(p_{e}\) of these orders is such that \((\delta_{y}+\Delta_{y})p_{e}=\delta_{x}\), and must satisfy:
Footnote 5: We omit a description of how to batch execute limit orders against allocation pools, leaving it as an implementation exercise. As long as limit orders follow the same size restrictions as specified in this paper, the properties of V0LVER outlined in Section 5 should not change.
\[f(R_{x},R_{y})=f(R_{x}+(\delta_{x}-\delta_{y}p_{e}),R_{y}-\Delta_{y}). \tag{6}\]
With these two restrictions, we can solve for \(\Delta_{y}\) and \(p_{e}\) given our specific pool pricing and invariant functions.6 An example of batch settlement against an allocation pool with a Uniswap V2 pool as the corresponding CFMM pool is provided at the end of Section 4.
Footnote 6: If \(\delta_{x}<\delta_{y}P(R_{x},R_{y})\), we must remove \(\Delta_{x}\) tokens from the allocation pool with \(\delta_{y}p_{e}=\delta_{x}+\Delta_{x}\) satisfying \(f(R_{x},R_{y})=f(R_{x}-\Delta_{x},R_{y}+(\delta_{y}-\frac{\delta_{x}}{p_{e}}))\)
These restrictions for calculating the execution price and tokens to be removed from the allocation pool are not defined with respect to the tokens in the allocation pool. However, by definition of the allocation pool reserves, there are sufficient tokens in the allocation pool to handle any allowable imbalance (anything up to \(max_{x}\) or \(max_{y}\)).
#### 3.2.2 Transaction Specifications.
There are three types of transaction in our protocol. To define these transactions, we need an _LVR rebate function_ \(\beta:\{0,1,\ldots,Z,Z+1\}\rightarrow[0,1]\). It suffices to consider \(\beta()\) as a strictly decreasing function with \(\beta(z)=0\)\(\forall z\geq Z\).
1. **Order**. These are straightforward buy or sell orders indicating a limit price7, size and direction to be traded. Without loss of generality, we assume all orders in our system are executable.
2. **Order commitment transaction (OCT)**. These are encrypted orders known to be collateralized by either \(max_{x}\) or \(max_{y}\) tokens. The exact size, direction, price, and sender of an OCT sent from player \(P_{i}\) is hidden from all other players. This is possible using anonymous ZK proofs of collateral such
as used in [12, 20, 11], which can be implemented on a blockchain in conjunction with a user-led commit-reveal protocol, a delay encryption scheme [5, 7], or a threshold encryption scheme [2, 16]. An OCT must be inserted into the blockchain before that same OCT can be allocated liquidity in V0LVER.
3. **Update transaction**. These transactions are executed in a block before any OCT is allowed to interact with the protocol pool (see Figure 2). Let the current block height be \(H\). Update transactions take as input an _allocation block height_\(H_{a}\leq H\), and pool price \(p\). Given an allocation block height of \(H_{a}^{\prime}\) in the previous update transaction, valid update transactions require \(H_{a}>H_{a}^{\prime}\). All of the inserted OCTs in blocks \([H_{a}^{\prime}+1,...,H_{a}]\) are then considered as allocated. For any update transaction, we denote by \(T_{a}\in\mathbb{Z}_{\geq 0}\) the number of OCTs being allocated. Given inputs \((H_{a},p)\), the block producer moves the price of the pool to \(p\). The producer receives \((1-\beta(H-H_{a}))\) of the implied change in reserves from this price move, as is done in [13]. The producer must then deposit \((T_{a}\beta(H-H_{a})max_{y}p,\ T_{a}\beta(H-H_{a})\frac{max_{x}}{p})\) to an _allocation pool_ denoted \(\Phi_{H_{a}}\), with \((T_{a}(1-\beta(H-H_{a}))max_{y}p,\ T_{a}(1-\beta(H-H_{a}))\frac{max_{x}}{p})\) being added to \(\Phi_{H_{a}}\) from the AMM reserves. As such, the allocation pool contains \((T_{a}max_{y}p,\ T_{a}\frac{max_{x}}{p})\) tokens in total.
In other words, if an allocation pool requires up to \((T_{a}max_{y}p,T_{a}\frac{max_{x}}{p})\) tokens to trade with orders corresponding to the \(T_{a}\) allocated OCTs, the block producer is forced to provide \(\beta(H-H_{a})\) of the tokens in the pool, with starting bid and offer prices equal to the pool price set by the block producer. This is used to incentivize the block producer to always choose a pool price equal to the external market price.
#### 3.2.2 Block Producer Action Set.
Every block, a block producer has four possible actions to perform on OCTs and their orders. Orders in our system are batch-settled with other orders allocated at the same time, and against the liquidity in the respective allocation pool.
1. Insert OCTs to the blockchain.
2. Allocate inserted OCTs. For a block producer adding a block at height \(H\) to allocate any number (including 0) of inserted OCTs with inserted height of at most \(H_{i}\), the block producer must: 1. Submit an update transaction with inputs \((H_{a}=H_{i},p)\), for some \(p>0\). 2. Allocate all unallocated OCTs with inserted height less than or equal to \(H_{i}\).
3. Reveal allocated order. When a decrypted order corresponding to an OCT at height \(H_{a}\) is finalized on the blockchain within \(T\) blocks after the corresponding OCT is allocated, it is considered revealed.
4. Execute revealed orders. \(T\) blocks after OCTs are allocated, any corresponding revealed orders are executed at a single clearing price for orders allocated at the same time. The final tokens in the allocation pool are redistributed proportionally to the allocating block producer and V0LVER reserves.
### Protocol Outline
Our protocol can be considered as two sub-protocols, a _base protocol_ proceeding in rounds corresponding to blocks in the blockchain (see Figure 2), and an _allocation protocol_ (Figure 3). As the blockchain progresses through the base protocol, at all heights \(H>0\), the block producers has two key choices. The first is how many OCTs in the mempool to insert into the blockchain. The second is whether or not to send an update transaction.
There are two scenarios for an update transaction with inputs \((H_{a},p)\) and block height of the previous update transaction \(H_{a}^{\prime}\). Either \(T_{a}=0\) or \(T_{a}>0\). If \(T_{a}=0\), the update transaction is equivalent to an arbitrageur operation on a Diamond pool with LVR-rebate parameter of \(\beta(H-H_{a})\) (see Section 3.2). If \(T_{a}>0\), the arbitrageur must also deposit \((T_{a}\beta(H-H_{a})max_{y}p,\ T_{a}\beta(H-H_{a})\frac{max_{x}}{p})\) to the \(H_{a}\) allocation pool \(\Phi_{H_{a}}\), with \((T_{a}(1-\beta(H-H_{a}))max_{y}p,\ T_{a}(1-\beta(H-H_{a}))\frac{max_{x}}{p})\) being added to \(\Phi_{H_{a}}\) from the AMM reserves.
After an allocation pool is created for allocated OCTs \(\{oct_{1},...,oct_{T_{a}}\}\), the orders corresponding to \(\{oct_{1},...,oct_{T_{a}}\}\) can be revealed for up to \(T\) blocks. This is sufficient time for any user whose OCT is contained in that set to reveal the order corresponding to the OCT. To enforce revelation, tokens corresponding to unrevealed orders are burned. After all orders have been revealed, or \(T\) blocks have passed, any block producer can execute revealed orders against the allocation pool at a clearing price which maximizes volume traded. Specifically, given an array of orders ordered by price, a basic smart-contract can verify that a proposed clearing price maximizes volume traded, as is done in [12].
The final tokens in the allocation pool are redistributed to the allocating block producer and V0LVER reserves. Adding these tokens directly to the pool (and not the vault as in the protocol from Section 3.2) allows the pool to update its price to reflect the information of the revealed orders.
#### 4.3.1 Example: Executing Orders Against the Allocation Pool.
This example details how one would batch execute orders against an allocation pool \(\Phi_{H_{a}}\) replicating liquidity in a corresponding constant function MM, CFMM(\(\Phi\)). Let the total tokens in the V0LVER pool \(\Phi\) before allocation be \((R_{x},R_{y})\), with CFMM(\(\Phi\)) the Uniswap V2 pool. As such, \(P(R_{x},R_{y})=\frac{R_{x}}{R_{y}}=p_{0}\). Let the allocated OCTs be selling \(\delta_{x}\) and \(\delta_{y}\) tokens, with \(\delta_{y}p_{0}<\delta_{x}\). That is, there is an imbalance of tokens at \(p_{0}\), with more token \(x\) being sold than token \(y\) at the price \(p_{0}\). We will now derive the execution price \(p_{e}\) for these orders.
Given \(\delta_{y}p_{0}<\delta_{x}\), this means some \(\Delta_{y}\) tokens from the allocation pool are required to fill the imbalance. Firstly, given the execution price is \(p_{e}\), we know \((\delta_{y}+\Delta_{y})p_{e}=\delta_{x}\). That is, the execution price equals \(\frac{\delta_{x}}{\delta_{y}+\Delta_{y}}\). Secondly, the amount of \(x\) tokens added to the allocation pool is \(\delta_{x}-\delta_{y}p_{e}\). As the allocation pool provides liquidity equivalent to batch executing the orders against the CFMM, this means the pool invariant function would remain constant if those tokens were traded directly with CFMM(\(\Phi\)). Specifically:
\[R_{x}R_{y}=(R_{x}+(\delta_{x}-\delta_{y}p_{e}))(R_{y}-\Delta_{y}). \tag{7}\]
From our first observation, we know \(\Delta_{y}=\frac{\delta_{x}}{p_{e}}-\delta_{y}\), which we can rewrite as \(\frac{1}{p_{e}}(\delta_{x}-\delta_{y}p_{e})\). This gives:
\[R_{x}R_{y}=R_{x}R_{y}+R_{y}(\delta_{x}-\delta_{y}p_{e})-R_{x}\frac{1}{p_{e}}( \delta_{x}-\delta_{y}p_{e})-\frac{1}{p_{e}}(\delta_{x}-\delta_{y}p_{e})^{2}. \tag{8}\]
Cancelling the first term on both sides, and dividing by \((\delta_{x}-\delta_{y}p_{e})>0\) gives:
\[0=R_{y}-R_{x}\frac{1}{p_{e}}-\frac{1}{p_{e}}(\delta_{x}-\delta_{y}p_{e}). \tag{9}\]
Isolating \(p_{e}\), we get:
\[p_{e}=\frac{R_{x}+\delta_{x}}{R_{y}+\delta_{y}}. \tag{10}\]
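The derivation above translates directly into code; the sketch below (ours, with an assumed function name) returns the clearing price of Eq. (10) and the allocation-pool flow for the \(\delta_{x}>\delta_{y}R_{x}/R_{y}\) case (the opposite imbalance is symmetric):

```python
def settle_batch(r_x, r_y, delta_x, delta_y):
    """Batch-settle revealed orders selling delta_x and delta_y against an
    allocation pool replicating a Uniswap V2 pool with reserves (r_x, r_y)."""
    p_e = (r_x + delta_x) / (r_y + delta_y)            # Eq. (10)
    pool_gets_x = delta_x - delta_y * p_e              # net x flowing into the pool
    pool_pays_y = pool_gets_x / p_e                    # Delta_y taken from the pool
    return p_e, pool_gets_x, pool_pays_y
```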
Figure 2: Flow of V0LVER protocol, excluding the allocation protocol (see Figure 3 for the allocation protocol). The double-border rectangle is the initialization state, thin single-border rectangles are state updates on-chain, while thick-bordered rectangles are block producer decisions/computations off-chain. The circle state is controlled by the network. Note that \(In\), the array of inserted but unallocated OCTs, is an ordered array of sets of OCTs. For \(1<a\leq len(In)\), \(In[:a]\) returns an ordered sub-array of \(In\) elements at indices \([1,...,a]\), while \(In[a:]\) returns an ordered sub-array of \(In\) elements at indices \([a,...,len(In)]\).
## 5 Protocol Properties
The goal of this section is to show that the expected execution price of any user order is the external market price when the order is allocated, excluding at most impact and fees. Firstly, note that an update transaction prior to allocation moves the pool reserves of a V0LVER pool identically to an LVR arbitrage transaction in Section 3.2. If \(T_{a}=0\), from [13] we know the block producer moves the pool price to the max LVR price which is the external market price, and the result follows trivially.
Now instead, assume \(T_{a}>0\). Let the reserves of a V0LVER pool \(\Phi\) before the update transaction be \((R_{x,0},R_{y,0})\). Given an external market price of \(\epsilon\), from Section 3.1 we know the max LVR occurs by moving the pool reserves to some \((R_{x,m},R_{y,m})\) with \(\frac{R_{x,m}}{R_{y,m}}=\epsilon\). Without loss of generality, let \(\frac{R_{x,0}}{R_{y,0}}<\frac{R_{x,m}}{R_{y,m}}\). Let the block producer move the pool price to \(p\) corresponding to reserves in the corresponding CFMM pool of \((R_{x,p},R_{y,p})\). For a non-zero \(\beta()\), this means the tokens in \(\Phi\) not in the vault (as per the protocol in Section 3.2) are \((R^{\prime}_{x,p},R^{\prime}_{y,p})=(bR_{x,p},bR_{y,p})\) for some \(b<1\). This is because some tokens in \(\Phi\) are removed from the pool and placed in the vault, while maintaining \(\frac{R^{\prime}_{x,p}}{R^{\prime}_{y,p}}=p\).
There are three payoffs of interest here. For these, recall that by definition of the external market price, the expected imbalance of an encrypted order in our system is \(0\) at the external market price.
Figure 3: Flow of allocation protocol for V0LVER pool \(\phi\), initialized every time the ALLOCATE() function is called in Figure 2. The Reveal Orders state happens by some block after height \(H\). As in the previous figure, the double-border rectangle is the initialization state, thin single-border rectangles are state updates on-chain, while thick-bordered rectangles are block producer decisions/computations off-chain.
1. **Payoff of block producer vs. AMM pool**: \((1-\beta())(R_{x,0}-R_{x,p}+(R_{y,0}-R_{y,p})\epsilon)\).
2. **Payoff of block producer vs. users**: Against a block producer's own orders, the block producer has 0 expectancy. Against other player orders, the block producer strictly maximizes her own expectancy when \((R_{x,p},R_{y,p})=(R_{x,m},R_{y,m})\). Otherwise the block producer is offering below \(\epsilon\) against expected buyers, or bidding above \(\epsilon\) to expected sellers.
3. **Payoff of users vs. AMM pool**: Consider a set of allocated orders executed against the allocation pool, corresponding to the pool receiving \(\delta_{x}\) and paying \(\delta_{y}\) tokens. By definition of the allocation pool, this \((\delta_{x},\delta_{y})\) is the same token vector that would be applied to the CFMM pool with reserves \((bR_{x,p},bR_{y,p})\) if those orders were batch executed directly against the CFMM. Let these new reserves be \((bR_{x,1},bR_{y,1})\). Thus the profit of these orders is \(b(1-\beta())(R_{x,p}-R_{x,1}+(R_{y,p}-R_{y,1})\epsilon)\).
#### 5.1.2 Optimal strategy for block producer
Let the block producer account for \(\alpha\in[0,1]\) of the orders executed against the allocation pool. The maximum payoff of the block producer against the AMM pool is the maximum of the sum of arbitrage profits (Payoff 1) and profits of block producer orders executed against the pool (\(\alpha\) of Payoff 3). Thus, the utility function to be maximized is:
\[(1-\beta())(R_{x,0}-R_{x,p}+(R_{y,0}-R_{y,p})\epsilon)+\alpha\Big{(}b(1-\beta( ))(R_{x,p}-R_{x,1}+(R_{y,p}-R_{y,1})\epsilon)\Big{)}. \tag{11}\]
This is equal to
\[(1-\alpha b)(1-\beta())\big{(}R_{x,0}-R_{x,p}+(R_{y,0}-R_{y,p})\epsilon\big{)} +\alpha b(1-\beta())\big{(}R_{x,0}-R_{x,1}+(R_{y,0}-R_{y,1})\epsilon\big{)}. \tag{12}\]
We know the second term is maximized for \((R_{x,1},R_{y,1})=(R_{x,m},R_{y,m})\), as this corresponds to LVR. Similarly, the first term is also maximized for \((R_{x,p},R_{y,p})=(R_{x,m},R_{y,m})\). Given \((R_{x,p},R_{y,p})=(R_{x,m},R_{y,m})\), block producers have negative expectancy for \(\alpha>0\), as this reduces the probability that \((R_{x,1},R_{y,1})=(R_{x,m},R_{y,m})\) by increasing the likelihood of an imbalance at \(p\). As such, block producers are strictly incentivized to set \(p=\epsilon\), and not submit OCTs to the protocol (\(\alpha=0\)) for Payoffs 1 and 3. Now consider the payoff for the block producer against user orders (Payoff 2). We have already argued in the description of Payoff 2 that this is maximized with \((R_{x,p},R_{y,p})=(R_{x,m},R_{y,m})\).
Therefore, moving the pool price \(p\) to \(\epsilon\) is a dominant strategy for the block producer. Given this, we can see that the expected execution price for a client is \(\epsilon\) excluding impact and fees, with impact decreasing in expectancy in the number of orders allocated. The payoff for the AMM against the block producer via the update transaction is \((1-\beta())LVR\), while the payoff against other orders is at least 0.
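The claim that the LVR term of Eq. (12) is maximized at \(p=\epsilon\) is easy to check numerically for a Uniswap V2 pool; in the sketch below (ours, with arbitrary example numbers) the grid search returns the external market price:

```python
import numpy as np

def lvr_term(p, r_x0, r_y0, eps):
    """First term of Eq. (12), up to its positive prefactor, for f(x, y) = x * y."""
    k = r_x0 * r_y0
    r_y_p = np.sqrt(k / p)
    r_x_p = p * r_y_p
    return (r_x0 - r_x_p) + (r_y0 - r_y_p) * eps

r_x0, r_y0, eps = 1_000_000.0, 1_000.0, 1_100.0        # pool price 1000, market 1100
grid = np.linspace(900.0, 1_300.0, 4001)
print(grid[np.argmax(lvr_term(grid, r_x0, r_y0, eps))])  # -> 1100.0, i.e. p = eps
```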
### Minimal LVR
In the previous section, it is demonstrated that user-level MEV is prevented in V0LVER, with users trading at the external market price in expectancy, excluding fees. However, we have thus far only proved that the LVR of a V0LVER pool is \((1-\beta())\) times that of the corresponding CFMM pool. As in [13], under competition among block producers, the LVR rebate function has a strong Nash equilibrium at \(\beta(0)\), meaning LVR is also minimized.
To see this, we can use a backwards induction argument. Consider the first block producer allowed to send an update transaction with \(\beta(H-H_{a})=0\) for a block at height \(H\) (meaning \(H_{a}=H_{a}^{\prime}+1\)). This block producer can extract all of the LVR, and is required to provide no liquidity to the allocation pool. As LVR is arbitrage, all block producers do this.
A block producer at height \(H-1\) knows this. Furthermore, extracting \((1-\beta((H-1)-H_{a}))>0\) of the LVR has positive utility for all block producers, while trading with \(\beta((H-1)-H_{a})>0\) of allocated OCTs around the external market price also has a positive utility (Payoff 2 in Section 5). As such, sending an update transaction at height \(H-1\) is dominant. Following this argumentation, a block producer at height \(H-i\geq H_{a}\) always sends an update transaction as they know the block producer at height \((H+1)-i\) always sends an update transaction. This means the block producer at height \(H_{a}^{\prime}+1\) always sends an update transaction \(\forall\ H_{a}^{\prime}\), which corresponds to an LVR rebate function value of \(\beta(0)\) in equilibrium.
In reality, frictionless arbitrage against the external market price in blockchain-based protocols is likely not possible, and so LVR extraction has some cost. As such, the expected value for \(\beta()\) may be less than \(\beta(0)\). Deploying V0LVER, and analyzing \(\beta()\) across different token pairs, and under varying costs for block producers makes for interesting future work.
## 6 Discussion
If a V0LVER pool allows an OCT to be allocated with \(\beta()=0\), V0LVER effectively reverts to the corresponding CFMM pool, with MEV-proof batch settlement for all simultaneously allocated OCTs, albeit without LVR protection for the pool. To see this, note that as \(\beta()=0\), the block producer can fully extract any existing LVR opportunity, without requiring a deposit to the allocation pool. As such, the expected price of the allocation pool is the external market price, with orders executed directly against the V0LVER reserves at the external market price, excluding fees and impact. Importantly, there is never any way for the block producer to extract any value from allocated orders. This is because the settlement price for an OCT is effectively set when it allocated, before any price or directional information is revealed about the corresponding order.
Allocation of tokens to the allocation pool has an opportunity cost for both the V0LVER pool and the block producer. Given the informational superiority of the block producer, allocating tokens from the pool requires the upfront payment of a fee to the pool. Doing this anonymously is important to avoid MEV leakage to the block producer. One possibility is providing an on-chain verifiable proof of membership in a set of players who have bought pool credits, where a valid proof releases tokens to cover specific fees, as in [20, 12]. Another possibility is
providing a proof to the block producer that the user has funds to pay the fee, with the block producer paying the fee on behalf of the user. A final option, based on threshold encryption [16], is creating a state directly after allocation, before any more allocations are possible, in which allocated funds are either used or de-allocated. All of these proposals have merits and limitations, but further analysis of them is beyond the scope of this work.
## 7 Conclusion
V0LVER is an AMM based on an encrypted transaction mempool in which LVR and MEV are protected against. V0LVER aligns the incentives of users, passive liquidity providers and block producers. This is done by ensuring the optimal block producer strategy under competition among block producers simultaneously minimizes LVR against passive liquidity providers and MEV against users.
Interestingly, the exact strategy equilibria of V0LVER depend on factors beyond instantaneous token maximization for block producers. This is due to risks associated with liquidity provision and arbitrage costs. On one hand, allocating OCTs after setting the pool price to the external market price, and providing some liquidity to OCTs around this price should be positive expectancy for block producers. Similarly, increasing the number of OCTs should also reduce the variance of block producer payoffs. On the other hand, there are caveats in which all OCTs are informed and uni-directional. Analyzing these trade-offs for various risk profiles and trading scenarios makes for further interesting future work.
|
2309.15912 | Chained Quantile Morphing with Normalizing Flows | Accounting for inaccuracies in Monte Carlo simulations is a crucial step in
any high energy physics analysis. It becomes especially important when training
machine learning models, which can amplify simulation inaccuracies and
introduce large discrepancies and systematic uncertainties when the model is
applied to data. In this paper, we introduce a method to transform simulated
events to better match data using normalizing flows, a class of deep
learning-based density estimation models. Our proposal uses a technique called
chained quantile morphing, which corrects a set of observables by iteratively
shifting each entry according to a conditional cumulative density function. We
demonstrate the technique on a realistic particle physics dataset, and compare
it to a neural network-based reweighting method. We also introduce a new
contrastive learning technique to correct high dimensional particle-level
inputs, which naively cannot be efficiently corrected with morphing strategies. | Samuel Bright-Thonney, Philip Harris, Patrick McCormack, Simon Rothman | 2023-09-27T18:00:03Z | http://arxiv.org/abs/2309.15912v1 | # Chained Quantile Morphing with Normalizing Flows
###### Abstract
Accounting for inaccuracies in Monte Carlo simulations is a crucial step in any high energy physics analysis. It becomes especially important when training machine learning models, which can amplify simulation inaccuracies and introduce large discrepancies and systematic uncertainties when the model is applied to data. In this paper, we introduce a method to transform simulated events to better match data using normalizing flows, a class of deep learning-based density estimation models. Our proposal uses a technique called _chained quantile morphing_, which corrects a set of observables by iteratively shifting each entry according to a conditional cumulative density function. We demonstrate the technique on a realistic particle physics dataset, and compare it to a neural network-based reweighting method. We also introduce a new contrastive learning technique to correct high dimensional particle-level inputs, which naively cannot be efficiently corrected with morphing strategies.
## I Introduction
Searches and measurements using Large Hadron Collider (LHC) data almost always rely on Monte Carlo (MC) simulations to develop analyses, validate tools, and frequently predict backgrounds. These simulations are widely acknowledged to be imperfect, particularly in modeling detector interactions and the non-perturbative physics of hadronization; data-driven strategies are preferred when possible. Despite the limitations of MC, many modern analyses rely heavily on machine learning (ML) to maximize their sensitivity, and these algorithms are typically trained on MC. The use of ML allows for the effective utilization of complex patterns and correlations in high-dimensional data and is thus more powerful, but also particularly susceptible to spurious, unphysical artifacts present in the simulations.
This issue has only grown in recent years, with a community-wide move towards training ML models on extremely granular, particle-level information with architectures such as ParticleNet [1; 2], LundNet [3; 4], and the Dynamic Reduction Network [5]. As the reliance on finer details increases, simulations become less reliable. These inaccuracies can lead to significant discrepancies between a model's performance on MC and real experimental data. This adds additional work for physicists (deriving corrections, scale factors, etc.), introduces new uncertainties, and points to the deeper issue of training our most powerful analysis tools on flawed simulations. It is conceivable that, in the near future, ML-related systematic uncertainties coming from flaws in the simulation will be the primary limitation on the precision of Standard Model measurements or the sensitivity of new physics searches.
In this paper, we introduce a general purpose strategy to transform samples from one probability distribution to match another using a deep learning implementation of _chained quantile morphing_ (CQM) [6; 7]. CQM iteratively transforms a set of observables \(\mathbf{x}=(x_{1},\ldots,x_{N})\) using the conditional cumulative distribution functions (CDFs) \(F_{i}^{\text{MC}}(x_{i}|\mathbf{x}_{1:i-1})\) and \(F_{i}^{\text{Data}}(x_{i}|\mathbf{x}_{1:i-1})\), and was first used for LHC analysis to improve the quality of photon identification in the Compact Muon Solenoid detector Ref. [7]. The authors used _discretized_ approximations of the CDFs to correct MC inaccuracies, which allowed them to reduce an important systematic uncertainty. In this work, we develop a continuous and precise version of their approach using _normalizing flows_[8; 9; 10] - a family of invertible ML models capable of learning complex conditional probability densities. While CQM can be used to transform between _any_ two distributions of the same dimensionality, we focus on the high energy physics context of transforming simulated Monte Carlo observables to better match experimental data.
Monte Carlo correction strategies typically fall into two categories: reweighting [11; 12; 13; 14; 15; 16] and morphing [17; 18; 19; 20; 21; 22]. CQM is a morphing strategy, meaning we _correct_ the values of observables \(\mathbf{x}_{\text{MC}}\rightarrow\mathbf{x}_{\text{MC}}^{\text{corr}}\) in a way that results in the overall distribution agreeing better with data. Reweighting methods learn a per-event _weight_ that improves data/MC agreement without explicitly altering any observables. Both are effective strategies and have been extensively studied. However, the underlying motivation for each of these methodologies is different due to the nature of how they correct MC. Morphing methods will shift full distributions, thereby breaking relations of parameters within the simulation. Such a correction is applicable when considering a recalibration of a detector readout where the full distribution is shifted to correct an mis-modeled relation between a generated effect and a reconstructed effect. An example from LHC
physics could be photon energy corrections, whereby one shifts the mis-reconstructed energy spectrum to match the generated spectrum. Reweighting strategies are often used when there is a need to preserve invariant quantities such as particle mass. An example of such a strategy is reweighting simulated top quark momentum spectra to match the observed data distribution.
While both reweighting and morphing have been shown to be effective, when distributions differ by large amounts, reweighting strategies can lead to large uncertainties in their prediction due to the presence of high weights. As a result, morphing may be more effective, particularly when there are significant differences between data and MC1. Morphing also produces a new dataset which may be easier to use in downstream applications such as training ML models.
Footnote 1: One clear example is a discrepancy in the tails of data and MC distributions, where an event weight cannot make up for lack of events
In the following paper, we develop and demonstrate the effective use of flow-based CQM. We discuss the details of the flow-based implementation of CQM in Sec. II and contrast it with existing approaches. In Sec. III we present results using CQM to morph between a pair of toy 2D distributions and a pair of realistic simulated particle physics datasets. Sec. IV explores the possibility of applying CQM in very high-dimensional (e.g. particle-level) contexts by embedding the physically relevant information into a low-dimensional space using contrastive learning. Finally, in Sec. V we demonstrate that CQM is insensitive to small levels of signal contamination and can be trained in a control region and interpolated accurately into a blinded signal region.
## II Methodology
A normalizing flow is a density estimation model designed to learn a map \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) between an unknown training data distribution \(X\sim p_{D}\) and a known base distribution \(Z\sim p_{B}\) of the same dimension2. The base distribution is typically taken to be a multidimensional standard normal \(\mathcal{N}(\mathbf{0},\mathbf{1})\), and the function \(f\) is constructed from a composition of invertible maps \(f(\mathbf{x})=f_{N}\circ f_{N-1}\circ\cdots f_{1}(\mathbf{x})\) with tractable Jacobian determinants. The invertible structure enables sampling the unknown distribution by sampling the base distribution, and the change of variables formula \(p_{D}(\mathbf{x})=p_{B}(f(\mathbf{x}))\left|\det\frac{\mathrm{d}f}{\mathrm{d} \mathbf{x}}\right|\) enables density estimation. Flows are trained with a log likelihood loss, which for a composite transformation, takes the form
Footnote 2: See Ref. [8] for a review of modern methods
\[\log p_{D}(\mathbf{x})=\log p_{B}(f(\mathbf{x}))+\sum_{i=1}^{N}\log\left|\det \frac{\mathrm{d}f_{i}}{\mathrm{d}\mathbf{z}_{i-1}}\right|\]
where \(\mathbf{x}=\mathbf{z}_{0}\), \(\mathbf{z}=\mathbf{z}_{N}\), and \(\mathbf{z}_{i}=f_{i}(\mathbf{z}_{i-1})\). The transformations \(f_{i}\) are typically drawn from a parametrized family of functions \(f_{\phi}\), with parameters \(\phi\) computed by neural networks.
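As an illustration of this training objective (a minimal sketch, not the code used in this work), the following PyTorch snippet fits a single learnable affine flow \(f(x)=(x-\mu)e^{-s}\) by maximizing the change-of-variables log likelihood; the toy data distribution and hyper-parameters are assumptions for the example:

```python
# Minimal sketch: one affine flow f(x) = (x - mu) * exp(-s), trained with the
# change-of-variables log likelihood  log p_D(x) = log p_B(f(x)) - s.
import torch

mu = torch.zeros(1, requires_grad=True)   # flow parameters (scalar data assumed)
s = torch.zeros(1, requires_grad=True)
base = torch.distributions.Normal(0.0, 1.0)
opt = torch.optim.Adam([mu, s], lr=1e-2)

x = 2.0 + 0.5 * torch.randn(4096)         # toy "data": p_D = N(2, 0.5)
for _ in range(500):
    z = (x - mu) * torch.exp(-s)          # f(x); Jacobian df/dx = exp(-s)
    log_like = base.log_prob(z) - s       # log p_B(f(x)) + log|det df/dx|
    loss = -log_like.mean()               # negative log likelihood
    opt.zero_grad(); loss.backward(); opt.step()
# after training, mu converges to ~2 and exp(s) to ~0.5
```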
### Conditional Flows & Quantile Morphing
Flow models can easily be modified to fit _conditional_ distributions \(p(\mathbf{x}|\mathbf{y})\) by allowing the parameters of the flow transformations \(f_{i,\phi_{i}}\) to depend on the conditioning variables, i.e. \(\phi_{i}=\phi_{i}(\mathbf{x},\mathbf{y})\). A multidimensional joint density estimation task can be decomposed into a series of one-dimensional tasks using the probability chain rule
\[p(\mathbf{x})=p(x_{1})p(x_{2}|x_{1})\cdots p(x_{d}|x_{1},\ldots,x_{d-1}), \tag{1}\]
with each term modeled by a normalizing flow. While modern flow architectures can perform the joint estimation, this decomposition enables the analysis of (conditional) cumulative distribution functions (CDFs) for each dimension of \(\mathbf{x}\) and is essential to the quantile morphing technique described in this paper.
Quantile morphing is a method to correct samples from a reference distribution \(p_{\mathrm{MC}}\) to match those of a target distribution \(p_{D}\)3 by applying a CDF transformation that matches their quantiles. In 1D, the transformation \(x\mapsto y=F_{D}^{-1}(F_{\mathrm{MC}}(x))\) maps \(x\sim p_{\mathrm{MC}}\) to \(y\sim p_{D}\), where \(F_{\mathrm{MC}}\), \(F_{D}\) are the CDFs for \(p_{\mathrm{MC}}\), \(p_{D}\). This is an exact transformation from \(p_{\mathrm{MC}}\) to \(p_{D}\), and guarantees optimal transport for each shifted sample.
Footnote 3: The subscripts MC and D are used here and throughout the paper as shorthand for Monte Carlo and data, respectively.
In higher dimensions, CDFs are not uniquely defined since rotations across dimensions are permissible, and the basic quantile morphing strategy breaks down. Fortunately, we can reconcile this ambiguity to transform \(p_{\mathrm{MC}}\) into \(p_{D}\) by breaking the problem into a series of 1D transformations following Eq. 1 via an iterative procedure called _chained quantile morphing_ (CQM) [6; 7]. At each step, a dimension \(x_{i}\in\mathbf{x}\sim p_{\mathrm{MC}}\) is morphed to \(y_{i}\sim p_{D}(y_{i})\) using the _conditional_ quantile functions \(F_{\mathrm{MC}}(x_{i}|y_{1},y_{2},\ldots,y_{i-1})\) and \(F_{D}(y_{i}|y_{1},y_{2},\ldots,y_{i-1})\). Starting from the first dimension \(x_{1}\), chained quantile morphing proceeds as follows:
1. Transform \(x_{1}\) to \(y_{1}=F_{D}^{-1}(F_{\mathrm{MC}}(x_{1}))\) using CDFs \(F_{\mathrm{MC}}\) and \(F_{D}\).
2. Transform \(x_{2}\) to \(y_{2}=F_{D}^{-1}(F_{\mathrm{MC}}(x_{2}|y_{1})|y_{1})\) using conditional CDFs \(F_{\mathrm{MC}}(\cdot|y_{1})\) and \(F_{D}(\cdot|y_{1})\) and the corrected value \(y_{1}\) from step (1).
3. Continue as in (2) for \(i=3,\ldots,d\) with \(y_{i}=F_{D}^{-1}(F_{\mathrm{MC}}(x_{i}|\mathbf{y}_{1:i-1})|\mathbf{y}_{1:i-1})\) and the previously corrected dimensions \(\mathbf{y}_{1:i-1}\).
If all (conditional) CDFs are known analytically, the CQM procedure guarantees an exact transformation of samples \(\mathbf{x}\sim p_{\mathrm{MC}}\) to \(\mathbf{y}\sim p_{D}\). In any real world application, however, the reference and target datasets will come from complex and/or high-dimensional distributions with intractable densities and CDFs. In these cases, the CDFs can only be approximated, typically by training machine learning (ML) algorithms to perform conditional quantile regression.
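For concreteness, the 1D quantile map can be written in a few lines when both CDFs are analytic; the distribution choices below are purely illustrative:

```python
# Minimal sketch of the 1D quantile map y = F_D^{-1}(F_MC(x)) with known CDFs
# (illustrative choices: p_MC = N(0,1), p_D = Exponential(1)).
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)          # samples from the "MC" distribution
u = norm.cdf(x)                       # F_MC(x): quantile of each sample
y = expon.ppf(u)                      # F_D^{-1}(u): same quantile of the target
# y is now exactly Exponential(1)-distributed (optimal transport in 1D)
```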
CQM was first introduced for LHC physics for photons within the CMS detector in Ref. [7], where it was used to correct mis-modeled MC to better match experimental data through the use of a well defined control region within data. The authors constructed a discretized approximation of each CDF by training boosted decision trees (BDTs) to learn a fixed array of conditional quantiles. This was effective for the analysis but difficult to scale efficiently due to the fixed quantiles and the large number of BDT trainings required. In this work, we propose a streamlined approach to CQM with normalizing flows (NFs). Flows can learn the full conditional density (and thus CDF) at each step of CQM, enabling a more exact version of the transformation.
### CQM with Normalizing Flows
Figure 1 shows a schematic of the CQM correction procedure implemented with normalizing flows. Given datasets \(\{\mathbf{x}_{j}\}_{\mathrm{MC}}\) ("Monte Carlo") and \(\{\mathbf{y}_{j}\}_{D}\) ("data"), the components \(x_{i}\in\mathbf{x}\sim p_{\mathrm{MC}}\) are iteratively corrected to match the distributions of \(y_{i}\in\mathbf{y}\sim p_{D}\) using the conditional probability decomposition shown in Eq. 1. At step \(i\), flows \(f_{i,\mathrm{MC}}\) and \(f_{i,D}\) are trained to fit the conditional distributions \(p_{\mathrm{MC}}(x_{i}|\mathbf{x}_{1:i-1}^{\mathrm{corr}})\) and \(p_{D}(y_{i}|\mathbf{y}_{1:i-1})\), respectively, where \(\mathbf{x}_{1:i-1}^{\mathrm{corr}}\) are the dimensions of \(\mathbf{x}\) corrected in previous steps. Individual points \(x_{i,k}\) are then corrected by the transformation
\[x_{i,k}^{\mathrm{corr}}=f_{i,D}^{-1}(f_{i,\mathrm{MC}}(x_{i,k}|\mathbf{x}_{1:i -1,k}^{\mathrm{corr}})|\mathbf{x}_{1:i-1,k}^{\mathrm{corr}}) \tag{2}\]
This procedure leverages the conditional _flow_ in lieu of the conditional quantile function, but the quantile morphing operation is fundamentally the same as described in Sec. II.1. A given data point \(x_{i,k}\in\mathbf{x}_{k}\) will correspond to some conditional quantile of the distribution \(p_{\mathrm{MC}}(x_{i}|\mathbf{x}_{1:i-1}^{\mathrm{corr}})\), and a faithfully trained conditional flow \(f_{i,\mathrm{MC}}\) will map it to the same quantile \(z_{i,k}\) of a standard normal distribution \(\mathcal{N}(0,1)\). The inverse flow \(f_{i,D}\) will then map \(z_{i,k}\) to the corresponding quantile of \(p_{D}(y_{i}|\mathbf{x}_{1:i-1}^{\mathrm{corr}})\). After the full chain of corrections is applied, the samples \(\{\mathbf{x}_{j}^{\mathrm{corr}}\}\) will follow the target distribution \(p_{D}\).
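As a sanity check of the chained logic (independent of any flow model), the following sketch performs the two-step correction between bivariate Gaussians, for which every conditional CDF is analytic; the correlations are illustrative choices:

```python
# Minimal sketch of two-step chained morphing between bivariate Gaussians.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho_mc, rho_d = 0.2, 0.8              # correlations of p_MC and p_D (assumed)
cov = lambda r: np.array([[1.0, r], [r, 1.0]])
x = rng.multivariate_normal([0, 0], cov(rho_mc), size=100_000)

# Step 1: morph x1 with the marginal CDFs (both N(0,1) here -> identity map).
y1 = norm.ppf(norm.cdf(x[:, 0]))

# Step 2: morph x2 with the conditional CDFs, conditioning on the corrected y1.
# For a standard bivariate normal, x2 | x1 ~ N(rho * x1, 1 - rho^2).
u = norm.cdf(x[:, 1], loc=rho_mc * y1, scale=np.sqrt(1 - rho_mc**2))
y2 = norm.ppf(u, loc=rho_d * y1, scale=np.sqrt(1 - rho_d**2))
# (y1, y2) now follows the target correlation rho_d
```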
### Flow Implementation
We parameterize our flow transformations \(f_{\phi}\) using piecewise rational quadratic splines [23] implemented with PyTorch[24] in the nflows package [25]. The splines are defined on the interval \([-3.2,3.2]\), and all input data are min-max scaled to the range \([-3,3]\) to capture the tails. All flows are trained with the AdamW optimizer [26] on a cosine-annealed learning rate schedule [27]. The spline parameters for each flow transformation are determined from the conditioning inputs using a neural network4.
Footnote 4: When there are no conditioning inputs (i.e. the first flow of the chain), the network is simply passed zeros.
Although we implement CQM with a sequence of _distinct_ flows, we note that it is possible to use an autoregressive architecture such as MADE [28] to train a single flow that simultaneously learns each conditional density of Eq. 1. While this streamlines the process, we found that it was generally quite difficult to achieve simultaneous high-quality fits to all inputs using this approach.
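A minimal sketch of the training setup described above is given below; the placeholder module, learning rate, and schedule length are assumptions, not the values used in this work:

```python
# Minimal sketch of the stated setup: min-max scale inputs to [-3, 3], then
# optimize with AdamW on a cosine-annealed learning rate schedule.
import torch

def minmax_scale(x, lo=-3.0, hi=3.0):
    """Min-max scale each feature column of x to [lo, hi]."""
    xmin, xmax = x.min(dim=0).values, x.max(dim=0).values
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin)

flow = torch.nn.Linear(4, 4)  # placeholder for the rational-quadratic spline flow
opt = torch.optim.AdamW(flow.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)
```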
### Comparison with Existing Strategies
Our approach to chained quantile morphing builds on the previous approach with BDTs [7], and extends the scope of this effort through the use of conditional normalizing flows. Additionally, a large variety of machine learning-based morphing and reweighting strategies have
Figure 1: A schematic demonstrating how chained quantile morphing transforms samples from one \(k\)-dimensional PDF \(p_{\mathrm{MC}}(x_{1},\dots,x_{k})\) to another \(p_{D}(x_{1},\dots,x_{k})\). The shaded regions in blue, green, and orange denote the conditional quantiles of variables \(x_{i}\), \(z_{i}\), \(y_{i}\), and the red X’s mark their values. CQM matches the conditional quantiles of the original distribution \(p_{\mathrm{MC}}\) to those of the target distribution \(p_{\mathrm{D}}\) by mapping \(x_{i}=q_{\mathrm{MC}}(\alpha_{i}|\mathbf{y}_{1:i-1})\mapsto y_{i}=q_{\mathrm{ D}}(\alpha_{i}|\mathbf{y}_{1:i-1})\), preserving the \(p\)-value \(\alpha_{i}\).
emerged in recent years [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22], including several proposals using normalizing flows [17; 18; 19; 20; 21; 22]5. Reweighting methods aim to correct MC to match data by training a supervised learning algorithm to differentiate the two datasets, then reweighting the simulated dataset based on the discriminator output [30; 31]. Reweighting and quantile-morphing methods have strengths for different types of corrections, which we briefly investigate in App. A. Broadly speaking, we find reweighting to be more effective in correcting for theoretical physics generator mis-modeling and quantile morphing more effective for fixing simulated detector mis-modeling.
Footnote 5: For an overview of newer methods, see Ref. [29]
CQM is most similar to the "flows for flows" (FFF) paradigm, proposed in Ref. [17] and rephrased several times in [18; 19; 21; 29], but differs in two key ways. First, FFF transforms \(\mathbf{x}\sim p_{\text{MC}}\) to \(\mathbf{y}\sim p_{D}\) all at once using _joint_ density estimation, rather than transforming the variables \(x_{i}\) iteratively as in CQM. Secondly, FFF operates by using the learned density \(p_{D}\) as a target for training a flow transformation that maps \(p_{\text{MC}}\) directly to \(p_{D}\). CQM trains separate flows for \(p_{\text{MC}}\) and \(p_{D}\), and transforms events \(\mathbf{x}\sim p_{\text{MC}}\) via the shared Gaussian latent space. With FFF, any unsatisfactory fit would require a re-training of the full joint density, whereas CQM requires a single variable re-training. In cases where a very high quality fit is needed - minimizing systematic uncertainties, for example - this level of fine-tuned control may be advantageous. CQM additionally imposes a notion of _local_ transformations, as each 1D quantile map will typically shift a variable by a relatively small distance. Other methods, such as FFF, can only target locality by introducing an ad-hoc modification to the loss function during training [18].
## III Experiments
Chained quantile morphing is a general-purpose technique for mapping between any two densities, and can be applied in a wide variety of contexts. In this section, we explore its performance in two use cases: a toy example mapping between two-dimensional datasets and a high-energy physics application using high-level observables from simulated proton-proton collisions. In the latter example we also explore how the transformed samples from CQM can be used in downstream applications, namely training neural networks.
### Toy 2D Dataset
As a simple example, we first demonstrate CQM on a pair of two-dimensional datasets shown in Fig. 2. The top row shows the reference (half moons) and target (checkerboard) densities, and the bottom row shows each step of the CQM transformation. In the first step the \(x\) distributions are matched, notably thinning the density in the parts of each half moon that overlap in the \(y\) direction. In the second step the \(y\) distributions are matched, faithfully reproducing the sharp boundaries and alternating conditional pattern of the target density.
This example demonstrates the flexibility of flow-based CQM, and highlights the advantage of using a density estimation approach that can readily capture complex single-variable distributions and strong conditional dependencies between inputs. Most realistic applications will have weaker correlations between input dimensions and less exotic marginals.
### LHC Olympics Datasets
We now demonstrate how CQM can be applied in high-energy physics using the LHC Olympics (LHCO) datasets [32]. We focus on the LHCO "R&D" [33] and "Black Box 1" (BB1) [34] datasets, and train CQM to map between them. The R&D dataset contains one million Standard Model quantum chromodynamics (QCD) background dijet events, and 100,000 signal dijet events featuring a heavy resonance decay \(Z^{\prime}\to XY\) (\(X\to q\bar{q}\), \(Y\to q\bar{q}\)) with \(m_{Z^{\prime}}=3.5\) TeV [35]. The BB1 dataset contains a total of one million events, 834 of which are signal events with the same topology as the R&D signal but with different resonance masses. Both datasets are generated with Pythia 8.219[36; 37] and Delphes
Figure 2: Using CQM to morph between a pair of two-dimensional datasets (top row). The bottom row shows the transformed distribution after morphing the \(x\)-axis (left) then the \(y\)-axis (right).
3.4.1[38; 39], but with different parameter configurations that alter the distributions of relevant high-level jet observables.
We cluster the particles in each event into jets using the anti-\(k_{T}\) algorithm [40] with radius \(R=0.8\), and define jet 1 (2) to be the heaviest (second heaviest) jet in the event. For our high-level observables, we compute \(\rho=m_{J}/p_{T,J}\) and the \(N\)-subjettiness ratios \(\tau_{21}\), \(\tau_{32}\), and \(\tau_{43}\)[41] as a measure of their 2, 3, or 4-pronged substructure.
Using only QCD background events from each dataset (i.e. removing the signal events from BB1), we train the CQM procedure to map between them in the eight-dimensional space constructed from \((\rho,\tau_{21},\tau_{32},\tau_{43})\) of each jet.6 We use 500,000 R&D and 500,000 BB1 events to train CQM, and evaluate its performance with the remaining events.
Footnote 6: Quantile morphing can be performed in either direction due to the invertibility of normalizing flows, i.e. R&D \(\rightarrow\) BB1 or BB1 \(\rightarrow\) R&D
Fig. 3 shows the distributions of each input variable from the test sets before and after morphing R&D to match BB1. The morphed distributions are a very good visual match to the target distributions, and the ratio plots indicate good agreement across nearly the entire range of each input. Performance begins to break down in the extreme high and low tails, but this is to be expected due to statistical fluctuations and limited training samples available in these sparse regions. We also show results of a standard neural network-based reweighting scheme, implemented with a six-layer fully connected network with ReLU activations, a Sigmoid output, and 10% dropout to prevent overfitting. The network is trained to discriminate R&D events from BB1 events using a binary cross entropy loss, and the score \(s(\mathbf{x})\) is converted into an event weight \(w(\mathbf{x})=s(\mathbf{x})/(1-s(\mathbf{x}))\)[30; 31]. As expected, the reweighting performs on-par with CQM. Unlike CQM, however, it does not produce a set of transformed samples that can be used for downstream tasks.
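For reference, the likelihood-ratio trick behind this reweighting baseline is a one-liner; the clipping threshold below is an assumption added to guard against saturated scores:

```python
# Minimal sketch: convert a data-vs-MC classifier score s(x) into a
# per-event weight w(x) = s(x) / (1 - s(x)).
import numpy as np

def weights_from_scores(s, eps=1e-6):
    s = np.clip(s, eps, 1.0 - eps)   # guard against scores of exactly 0 or 1
    return s / (1.0 - s)
```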
To better quantify the quality of the morphed distribution, we train neural networks to distinguish BB1 samples from R&D samples before and after corrections are applied. In Table 1 we report the area under the curve (AUC) metrics, each averaged over an ensemble of 10 trainings. The AUCs drop to 0.51 after morphing in either direction, indicating that CQM produces samples that are virtually indistinguishable from the target distribution. We also evaluate the reweighting by computing weighted AUCs for the nominal training (i.e. un-morphed samples), and find that it also succeeds.
### Morphed Distributions for Classification Tasks
As demonstrated in the previous section, chained quantile morphing can transform samples from the reference distribution into samples indistinguishable from the target distribution. This is a powerful tool for correcting simulation inaccuracies in high-level observables, and can find immediate use in reducing systematic uncertainties and easing cut-based analysis workflows. Beyond these standalone applications, however, CQM-transformed simulation can be used in more complex downstream tasks such as training neural networks to separate signal and background.
We demonstrate this use case using the same LHCO samples analyzed in Sec. III.2. To emulate an LHC analysis, we treat the R&D background dataset as "background simulation", the R&D signal as "signal simulation", and the BB1 background as "data". We use CQM to construct a morphed R&D background dataset to match the BB1 ("data") background, and train separate neural network classifiers to distinguish the morphed and un-morphed samples from the R&D signal7. We evaluate each classifier on the four background samples: R&D, BB1, morphed R&D, and morphed BB1. This provides insight into (a) how classifier performance changes when moving from "simulation" to "data", and (b) if training/testing on morphed samples replicates the performance on the target datasets they are meant to match.
Footnote 7: See Sec. V for a similar study including signal contamination in the target “data” sample.
In Fig. 4, we plot significance improvement characteristic (SIC) curves for each training/evaluation, where \(\mathrm{SIC}=\frac{\mathrm{TPR}}{\sqrt{\mathrm{FPR}}}\). The upper plot shows results for trainings using the original R&D background, and colored lines correspond to evaluations on the various background datasets. The lower plot shows the same, but for trainings using the _morphed_ R&D background. For a fixed evaluation dataset, the performance is not significantly different between trainings, but is not necessarily expected to be. Crucially, the performance is extremely consistent between BB1 and morphed R&D, and between R&D and morphed BB1. In the LHC physics context, CQM would allow us to predict a classifier's performance on data (BB1) by morphing MC simulation (R&D) to match data. Curiously, the classification performance is uniformly better on the R&D/morphed BB1 samples
\begin{table}
\begin{tabular}{|c|c|} \hline Comparison & AUC \\ \hline \hline BB1 vs. R\&D & \(0.6257\pm 0.0002\) \\ Morphed BB1 vs. R\&D & \(0.5093\pm 0.0004\) \\ BB1 vs. Morphed R\&D & \(0.5106\pm 0.0004\) \\ BB1 vs. Reweighted R\&D & \(0.5158\pm 0.0001\) \\ \hline \end{tabular}
\end{table}
Table 1: Area under the curve (AUC) metrics for neural network discriminators trained to distinguish pairs of datasets (BB1 and R&D) before and after applying chained quantile morphing to make one match the other. The reported AUCs are computed from the average of 10 separate trainings. We also evaluate re-weighting with a weighted AUC using the BB1 vs. R&D training.
than the BB1/morphed R&D samples. This simply indicates that the R&D background is more separable from signal than the BB1 background 8.
Footnote 8: This is because \(\tau_{21}\) is skewed lower in the BB1 sample, as shown in Fig. 3, and both jets in the \(Z^{\prime}\to XY\) signal are expected to have two prongs (i.e. low \(\tau_{21}\))
Although this example is relatively simplistic (a supervised search for a resonant signal), it demonstrates CQM's potential for transforming imperfect simulations into a readily usable emulation of data. This enables a reliable estimate of an analysis' performance on real data and relies solely on having a suitable reference dataset, such as a signal free control region with similar background kinematics to the signal region. As we will discuss in Sec. V, the technique is robust against modest signal contamination in the reference dataset and can be configured to interpolate from a kinematic sideband into a signal region by conditioning on the relevant variables defining the sideband (e.g. dijet mass \(M_{jj}\)).
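For reference, the SIC metric used in Fig. 4 can be computed directly from ROC quantities; the following sketch uses scikit-learn, an assumed tooling choice rather than the authors' code:

```python
# Minimal sketch of a SIC curve: SIC = TPR / sqrt(FPR) as a function of TPR.
import numpy as np
from sklearn.metrics import roc_curve

def sic_curve(labels, scores):
    fpr, tpr, _ = roc_curve(labels, scores)
    mask = fpr > 0                   # avoid division by zero at the endpoint
    return tpr[mask], tpr[mask] / np.sqrt(fpr[mask])
```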
## IV High-dimensional CQM using contrastive learning
### Handling large input spaces
Chained quantile morphing is ideally suited for correcting \(\mathcal{O}(10)\)-dimensional input spaces constructed from high-level physics observables. The addition of more variables complicates the construction of the conditional probability distributions, making them progressively harder to learn. An input space that is \(\mathcal{O}(10)\)-dimensional is already very useful for many LHC physics applications that rely on high-level observables, such as precision measurements. However, the method does not naturally generalize to high-dimensional inputs. Many of the cutting-edge jet taggers deployed at ATLAS and CMS use low-level particle- or detector-level inputs and therefore have \(\mathcal{O}(1000)\)-dimensional input spaces. Applying CQM to these inputs is infeasible due to the number of trainings required and the fundamental ambiguity of "correcting" a variable-length unordered collection of particle kinematics.
Despite CQM being ineffective here, resolving data/MC discrepancies in low-level taggers is of particular interest since granular event information often amplifies the impact of MC mis-modeling and leads to larger systematic uncertainties. While low-level taggers perform significantly better than high-level taggers, the large mis-modeling can almost completely remove the improvement present.
Figure 3: Distributions of \(\rho,\tau_{21},\tau_{32},\tau_{43}\) (left to right) of the leading (top row) and sub-leading (bottom row) jets in the BB1 (green), R&D (red), and morphed R&D (blue) datasets. The morphed R&D data is a very close match to BB1, as shown by the ratio plots in the bottom panes. We also show the results of a neural network-based reweighting of R&D data, which yields similarly good agreement.
To overcome this, we rely on new approaches in deep metric learning to learn an intermediate low-dimensional space with compelling features that would be effective for CQM. Given such an embedding, data/MC discrepancies could be addressed by applying CQM in the _feature space_, rather than on the level of individual particles.
We propose to perform this with _contrastive learning_ algorithms [42, 43, 44, 45, 46], which learn through self-supervision to embed input data into a structured feature space where different classes of inputs (e.g. light quark jets vs. \(b\) jets) are well separated. Assuming that the feature space captures the relevant classification information from the low-level inputs, an ML tagger trained on the resulting feature space should match the performance of a tagger trained directly on the low-level inputs.
### LHCO implementation and experiments
To demonstrate this approach, we train a contrastive feature space to separate signal and background in the LHCO R&D dataset. As low-level inputs, we use the \((p_{T},z,\Delta\eta,\Delta\phi)\) of the 100 highest-\(p_{T}\) constituents of the leading (heaviest) jet in each event, where \(z=p_{T}/p_{\text{\tiny{jet}}}\), \(\Delta\eta=\eta-\eta_{\text{jet}}\), and \(\Delta\phi=\phi-\phi_{\text{jet}}\). We use the ParticleNet [1] architecture as an embedding function into a four-dimensional feature space, and train it on a mixture of signal and background events using the VICReg ("Variance-Invariance-Covariance Regularization") contrastive loss [47]. VICReg learns a non-trivial representation by rewarding functions that embed "like pairs" near to one another (invariance), while maintaining decorrelated and suitably spread out features (variance/covariance). In typical contrastive learning setups, training pairs correspond to a training input and an "augmented" version of that input (e.g. an image and a randomly rotated version of the same image). In our implementation, we simply use pairs of signal and pairs of background events to train the algorithm. This method effectively constructs a mostly decorrelated feature space that captures the separation between signal and background. The construction of a largely decorrelated feature space is appealing for CQM, and we can use this feature space as an intermediate space to correct data to simulation.
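A minimal sketch of the VICReg objective described above is given below; the loss coefficients are illustrative defaults, not necessarily the values used in this work:

```python
# Minimal sketch of the VICReg loss: invariance (like pairs close),
# variance (keep each feature spread out), covariance (decorrelate features).
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0):
    inv = F.mse_loss(z1, z2)                              # invariance term
    std1 = torch.sqrt(z1.var(dim=0) + 1e-4)
    std2 = torch.sqrt(z2.var(dim=0) + 1e-4)
    var = torch.relu(1 - std1).mean() + torch.relu(1 - std2).mean()
    n, d = z1.shape
    z1c, z2c = z1 - z1.mean(0), z2 - z2.mean(0)
    cov1 = (z1c.T @ z1c) / (n - 1)
    cov2 = (z2c.T @ z2c) / (n - 1)
    off_diag = lambda c: (c - torch.diag(torch.diag(c))).pow(2).sum() / d
    cov = off_diag(cov1) + off_diag(cov2)
    return lam * inv + mu * var + nu * cov
```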
To assess the quality of the contrastive space, we train an MLP classifier to distinguish embedded signal/background events and compare its performance with that of a ParticleNet classifier trained directly on the low-level features. We then use CQM to morph embedded R&D background events to match embedded BB1 background events and evaluate them using the MLP classifier. Finally, we train an MLP tagger using only the high-level variables \((\rho,\tau_{21},\tau_{32},\tau_{43})\) of the leading jet to serve as a benchmark. The results are summarized in Table 2, and the embedding spaces are shown in Fig. 5. The tagger trained on embeddings matches the performance of the low-level tagger, and applying CQM in the contrastive space transforms the R&D events to closely match BB1. By combining contrastive embedding and CQM, we are able to retain the advantages of using low-level inputs while simultaneously being able to correct undesired differences between two samples. In the LHC context, this is a promising new avenue for correcting data/MC discrepancies in low-level jet taggers and reducing the associated systematic uncertainties.
Figure 4: Significance improvement characteristic (SIC) curves for neural networks trained to distinguish R&D background from R&D signal (top), and morphed R&D background from R&D signal (bottom). Each curve is computed by evaluating the classifier on a mixture of R&D signal events with background-only BB1 (red), R&D background (purple), morphed background-only BB1 (green), and morphed R&D background (teal).
## V Signal contamination & sideband studies
The studies presented thus far morph an inclusive "MC" background sample to an inclusive "data" sample with the assumption that the underlying physics in both samples are the same. In most realistic LHC analyses, however, MC samples are used to estimate backgrounds in regions of phase space where the data is expected to contain (in the case of measurements) or may contain (in the case of searches) additional physics unaccounted for by the simulations. It is thus unrealistic to simply morph between two inclusive datasets. Instead, we would like to use a signal-free control region to learn the morphing functions, and then interpolate or extrapolate these functions into the signal region. In this section, we present two studies addressing this question.
First, we consider the case of interpolating the morphing function trained in a control region. As in Refs. [18; 19; 29] and in keeping with the LHCO dataset, we consider the case of a resonant signal decaying to a pair of jets. We take the resonant mass to be \(M_{\rm res}=3.5\) TeV, define our signal region (SR) as \(M_{jj}\in[3.3,3.7]\) TeV, and train CQM in the \(M_{jj}\) sideband (SB). We again treat the R&D dataset as "MC" and signal-free BB1 as "data", and transform the variables \((\rho,\tau_{21},\tau_{32},\tau_{43})\) of the leading jet. All flows are conditioned on \(M_{jj}\) to allow interpolation into the signal region. In Table 3 (top row), we show AUCs for a classifier trained to distinguish signal-free BB1 from morphed R&D in the SB and SR. The AUCs indicate that they are indistinguishable in both the sideband and signal region, demonstrating that CQM remains effective when interpolating into a region of phase space unseen during training. In this case, the SR was defined by a cut on a single variable, \(M_{jj}\), but this could easily be generalized to SRs defined by multiple observables.
As a second test, we consider the case where the signal is small relative to the background. If the signal is sufficiently small, it contributes so negligibly to the single variable "data" density learned by CQM that it is unnecessary to define cuts that remove it. To test this, we train using the R&D background sample as "MC" and the _full_ BB1 dataset as "data" - i.e. without signal events artificially removed - with a signal contamination fraction of 0.08%. Table 3 (bottom) indicates that the morphed R&D sample is still essentially indistinguishable
\begin{table}
\begin{tabular}{|c|c|c|} \hline Classifier & Sample & AUC \\ \hline \hline \multirow{2}{*}{ParticleNet Tagger} & R\&D & 0.954 \\ & BB1 & 0.931 \\ \hline \multirow{2}{*}{Embeddings (MLP)} & R\&D & 0.952 \\ & BB1 & 0.929 \\ & Morphed R\&D & 0.928 \\ \hline \multirow{2}{*}{High-Level (MLP)} & R\&D & 0.910 \\ & BB1 & 0.872 \\ & Morphed R\&D & 0.873 \\ \hline \end{tabular}
\end{table}
Table 2: A table showing the performance of the three taggers described in the text, evaluated by the AUC for separating R&D signal from the indicated background sample. ParticleNet (top) is trained on low-level features, whereas the other two are trained on contrastive embeddings (middle) and high-level jet features (bottom).
Figure 5: (Left) The latent space constructed from self-supervised VICReg training using the R&D background sample and the \(Z^{\prime}\to XY\) signal. The red contours show embeddings of the background-only BB1 sample, which disagree slightly with the R&D backgrounds. (Right) The same plot, but after applying CQM to transform the R&D background to match the BB1 background in the latent space.
from the uncontaminated BB1 background in this case, showing that a small degree of signal contamination has little to no impact on the quality of the morphed samples. CQM is particularly robust to small contamination when no single variable dominates the discrimination and the discriminating power is instead spread over a variety of correlated variables. Since CQM is applied one variable at a time, sensitivity to "signal regions" is particularly reduced, making CQM robust in the regions of high separation between signal and background.
## VI Conclusion
In this paper, we have presented a normalizing flow-based implementation of the "chained quantile morphing" (CQM) technique for correcting Monte Carlo simulations to better match experimental data. CQM matches the performance of reweighting with neural networks and comes with the added benefit of producing a _new set of corrected samples_ rather than simply event weights. Moreover, quantile mapping is a fundamentally different approach to reweighting and can be particularly effective when correcting for detector mis-modeling.
Our approach is unique in that the chained structure makes the implementation and interpretation of the results markedly simple and robust. The iterative, conditional density estimation - as opposed to a simultaneous _joint_ estimation - allows intervention at each stage of the morphing process, where the flow architectures can be modified or re-trained to maximize performance. This means we can ensure a high-quality fit to each variable without re-training the entire morphing process for every small modification to the flow. The structure of CQM also ensures some degree of _locality_ in the morphing transformations since each variable is transformed according to 1D conditional CDFs. This is the optimal transport map for 1D data. When chained together for an \(N\)-dimensional problem, it is intuitive that each variable is not moved "too far" from its initial value. This stands in contrast to the joint density approaches where the training ensures that the _overall_ base density is transformed to match the target but does not guarantee small overall corrections per variable.
In the emerging landscape of morphing and reweighting strategies, most applications have been focused on background estimation for resonant anomaly searches. While this is a worthwhile application, we propose broadening the discussion to consider how CQM - and MC correction strategies in general - can impact the LHC physics program. As seen in this study and experimentally in Ref. [7], CQM is a promising tool for reducing systematic uncertainties by correcting mis-modeled simulations. This is useful for a wide range of physics analyses using \(\mathcal{O}(10)\) high-level inputs, especially those that rely on neural networks or boosted decision trees to build a classifier. We have also demonstrated CQM's potential for problems with very high-dimensional inputs (e.g. jet taggers), where contrastive learning allows us to compress and correct the relevant information in a low-dimensional feature space.
As our machine learning tools become more sophisticated, they will continue to expose and amplify the flaws in our particle physics simulations. The LHC physics program already relies heavily on ML tools, and we will inevitably come to a point where the uncertainties associated with a simulation-trained model significantly limit the sensitivity of a search or the precision of a measurement. CQM is not a perfect solution to this problem, but it represents a promising step towards the ultimate goal of robust and unbiased ML. Moving ahead, we hope to apply a version of CQM to real LHC data and look forward to the ongoing development and refinement of morphing strategies.
###### Acknowledgements.
We thank Gregor Kasieczka for helpful and constructive feedback on the manuscript. PH, PM, and SR are supported in part by the Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) through NSF Grant No. PHY-2019786, and the NSF Institute for AI Accelerated Algorithms for Data-Driven Discovery (NSF Grant #PHY-2117997). Additional support comes from the FAIR Data program of the U.S. DOE, Office of Science, Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, and a DOE early career award.
## Appendix A The limits of morphing & reweighting
Both chained quantile morphing and reweighting are subject to different intrinsic limitations, which must inform the choice of which technique to use in a real-world application. Conceptually, the styles of application differ between the two algorithms. Morphing corresponds to a shift of the simulated distribution to match the data. This corresponds to a detector mis-modeling where the detector response or resolution is off and the distribution requires a shift and scaling to match the observed resolution. An example from LHC physics could be photon energy corrections, whereby one shifts the mis-reconstructed energy spectrum to match the generated spectrum. A reweighting is an adjustment of the simulation to emphasize/de-emphasize regions of the
\begin{table}
\begin{tabular}{|c|c|c|} \hline Training & Selection & AUC \\ \hline \hline Sideband & SB & \(0.514\pm 0.001\) \\ & SR & \(0.510\pm 0.004\) \\ \hline Contaminated & Inclusive & \(0.514\pm 0.001\) \\ \hline \end{tabular}
\end{table}
Table 3: Evaluating the quality of the morphed R&D \(\rightarrow\) BB1 samples when CQM is configured to interpolate into a blinded signal region defined by an \(M_{jj}\) cut (top), or trained using a slightly contaminated “data” sample (bottom). As before, the AUC is computed using 10 neural network trainings to distinguish BB1 from morphed R&D.
generated parameter space. This corresponds to a correction of the simulation that matches the intrinsic generated parameters to reality, and intuitively to generation mis-modeling such as a missing higher-order correction or shower parameter. As a result, each approach has both advantages and disadvantages.
We have constructed two toy examples of situations where one approach may be preferred over the other:
### The limitations of reweighting
One of the major strengths of a morphing scheme is that it is capable of correcting a distribution into a range not well-covered by the original simulated sample. In our toy example, we consider a 5-dimensional dataset where the "data" has a strong correlation between two of the dimensions that is missed in the "MC" sample. In particular, we define the distributions
\[v_{i}^{(MC)}=\mathcal{N}(0,1)\ \forall\ i \tag{11}\]
\[v_{i}^{(data)}=\mathcal{N}(0,1)\ \forall\ i\neq 2 \tag{12}\]
\[v_{2}^{(data)}=\mathcal{N}(0,1)+5v_{3} \tag{13}\]
Note that this creates a situation in which the "MC" sample does not completely overlap with the "data" sample.
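Generating this toy dataset is straightforward; in the sketch below the variables are indexed \(v_{0},\ldots,v_{4}\), an assumed convention:

```python
# Minimal sketch of the toy samples defined above: all coordinates N(0,1),
# except that in "data" v_2 picks up a strong linear dependence on v_3.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
v_mc = rng.normal(size=(n, 5))          # "MC": independent standard normals
v_data = rng.normal(size=(n, 5))
v_data[:, 2] += 5.0 * v_data[:, 3]      # "data": v_2 = N(0,1) + 5 v_3
```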
We then corrected the MC distribution onto the data distribution with both CQM and a simple reweighting scheme. Kernel density estimate contours of the resulting distribution are shown in Figure 6, where it is clear that the reweighting scheme fails to reproduce the data distribution where it fails to overlap the MC distribution. This would likely be improved with a larger dataset where the reweighting has more examples in the MC tails to reweight into the data distribution. In general, we find the CQM scheme to be more robust than reweighting, particularly in matching parts of the distribution that are not well-covered by the source distribution.
### The limitations of morphing
On the other hand, one of the major strengths of reweighting is that it preserves _by construction_ all of the complicated relationships between the variables in any given event. In situations where the variables being considered are related by some invariant or conserved quantity, reweighting guarantees that the relationship is preserved in the transformed distribution. Morphing makes no such guarantee, and only preserves this relationship insofar as it is able to learn it during training.
In order to demonstrate this effect we consider a simplified toy model of the decay of a narrow resonance (suggestively placed at 125 GeV). In this dataset we simulate the decay of this resonance by first pulling an invariant mass from a normal distribution \(\mathcal{N}(125,5)\). One decay daughter is generated by randomly generating a momentum and angle in the 2D plane, and the second daughter is generated by randomly generating a \(\delta\phi\) and then fixing the momentum to perfectly conserve the generated invariant mass. This creates a complex and perfectly fixed relationship between the momentum of the second particle and the momentum of the first particle, the \(\delta\phi\) between them, and the generated invariant mass. In the "MC" sample the momentum distribution and \(\delta\phi\) distribution are perturbed by \(\approx 5\%\) with respect to the "data" sample, but the invariant masses are drawn from exactly the same distribution. We then blind the generated invariant mass and use both reweighting and CQM to correct the "MC" distributions onto the "data" ones. In order to evaluate the performance of the two schemes at preserving the invariant mass we reconstructed it from the two decay daughters and show the resulting distributions in Figure 7. The reweighting scheme perfectly preserves the invariant mass, while CQM isn't able to perfectly learn this relationship and smears out the invariant mass distribution. This could likely be improved with more data or more fine-tuning of the trainings in order to help the CQM model learn the complex relationship between the variables.
Figure 6: Kernel density estimate contours for "MC" and "data" distributions for our toy example, together with the CQM-corrected distribution (**top**) and the reweighting-corrected distribution (**bottom**).
An alternative approach would be to reconstruct the invariant mass distribution in both the data and MC samples first, and then perform a morphing of this distribution from MC to data.
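A minimal sketch of this toy generator (massless daughters in a 2D plane, so \(M^{2}=2p_{1}p_{2}(1-\cos\delta\phi)\); the momentum and angle ranges are assumptions):

```python
# Minimal sketch: given a drawn invariant mass m, leading momentum p1 and
# opening angle dphi, the second momentum is fixed by m^2 = 2 p1 p2 (1 - cos dphi).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
m = rng.normal(125.0, 5.0, size=n)                 # resonance mass
p1 = rng.uniform(20.0, 100.0, size=n)              # assumed momentum range
dphi = rng.uniform(0.5, np.pi, size=n)             # assumed opening-angle range
p2 = m**2 / (2.0 * p1 * (1.0 - np.cos(dphi)))      # conserves m exactly
m_check = np.sqrt(2.0 * p1 * p2 * (1.0 - np.cos(dphi)))  # equals m
```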
|
2301.00283 | Scaling limit of the time averaged distribution for continuous time
quantum walk and Szegedy's walk on the path | In this paper, we consider Szegedy's walk, a type of discrete time quantum
walk, and corresponding continuous time quantum walk related to the birth and
death chain. We show that the scaling limit of time averaged distribution for
the continuous time quantum walk induces that of Szegedy's walk if there exists
a spectral gap on the so-called corresponding Jacobi matrix. | Yusuke Ide | 2022-12-31T20:43:04Z | http://arxiv.org/abs/2301.00283v1 | **Scaling limit of the time averaged distribution for**
**continuous time quantum walk and Szegedy's walk on the path**
Yusuke Ide
Department of Mathematics, College of Humanities and Sciences, Nihon University
3-25-40 Sakura-josui, Setagaya-ku, Tokyo 156-8550, Japan
e-mail: [email protected]
**Abstract**
In this paper, we consider Szegedy's walk, a type of discrete time quantum walk, and the corresponding continuous time quantum walk related to the birth and death chain. We show that the scaling limit of the time averaged distribution for the continuous time quantum walk induces that of Szegedy's walk if there exists a spectral gap on the so-called corresponding Jacobi matrix.
Footnote †: _Keywords: birth and death chain, Szegedy’s walk, continuous time quantum walk, scaling limit, time averaged distribution_
## 1 Introduction
Quantum walks, a quantum counterpart of random walks, have been extensively developed in various fields during the last two decades. Since quantum walks are very simple models, they play fundamental and important roles in both theoretical fields and applications. There are good review articles for these developments, such as Kempe [6], Kendon [7], Venegas-Andraca [14, 15], Konno [8], Manouchehri and Wang [9], and Portugal [11].
We investigate the time averaged distribution of a variant of discrete time quantum walk (DTQW), the so-called Szegedy's walk [13]. On the path graph, the spectral properties of Szegedy's walk are directly connected to the theory of (finite type) orthogonal polynomials. There are studies of the distribution of Szegedy's walk on the path graph, for example [1, 2, 3, 5, 10, 12].
In this paper, we focus on the scaling limit of the time averaged distributions of both Szegedy's walk and the corresponding continuous time quantum walk on the path graph related to the random walk with reflecting walls. According to our main theorem (Theorem 4.1), if there exists a spectral gap, i.e., if the limit superior, as the size of the path graph tends to infinity, of the second largest eigenvalue of the Jacobi matrix is less than one (the largest eigenvalue), then the scaling limit of Szegedy's walk is the same as that of the corresponding continuous time quantum walk. We should note that existence of the spectral gap of the Jacobi matrix is equivalent to that of the transition matrix of the corresponding random walk. A typical example of this case is the space homogeneous random walk with \(p_{j}^{R}=p\) (the second largest eigenvalue is \(2\sqrt{p(1-p)}\cos\pi/n\)) treated in [5], except for the symmetric random walk with \(p_{j}^{R}=1/2\). Unfortunately, we have not covered non-spectral gap cases, including the symmetric random walk and the Ehrenfest model (the second largest eigenvalue is \(1-2/n\)) treated in [3]. Revealing the non-spectral gap case is one of the interesting future problems.
The rest of this paper is organized as follows. In Sec. 2, we define our setting of discrete time random walk, continuous time quantum walk and discrete time quantum walk on the path graph. Sec. 3 is devoted to show relationships between the time averaged distribution of Szegedy's walk and continuous time quantum walk. In the last section, we state our main theorem (Theorem 4.1) and prove it.
## 2 Definition of the models
In this paper, we consider the path graph \(P_{n+1}=(V(P_{n+1}),E(P_{n+1}))\) with the vertex set \(V(P_{n+1})=\{0,1,\ldots,n\}\) and the (undirected) edge set \(E(P_{n+1})=\{(j,j+1):j=0,1,\ldots,n-1\}\). On the path graph
\(P_{n+1}\), we define a discrete time random walk (DTRW) with reflecting walls as follows:
Let \(p_{j}^{L}\) be the transition probability of the random walker at the vertex \(j\in V(P_{n+1})\) to the left (\(j-1\in V(P_{n+1})\)). Also let \(p_{j}^{R}=1-p_{j}^{L}\) be the transition probability of the random walker at the vertex \(j\in V(P_{n+1})\) to the right (\(j+1\in V(P_{n+1})\)). For the sake of simplicity, we assume \(0<p_{j}^{L},p_{j}^{R}<1\) except for \(j=0,n\). We put the reflecting walls at the vertex \(0\in V(P_{n+1})\) and the vertex \(n\in V(P_{n+1})\), i.e., we set \(p_{0}^{R}=p_{n}^{L}=1\). We also call this type of DTRW as the birth and death chain.
Let a positive constant \(C_{\pi}\) be
\[C_{\pi}:=1+\sum_{j=1}^{n}\frac{p_{0}^{R}\cdot p_{1}^{R}\cdots p_{j-1}^{R}}{p_{ 1}^{L}\cdot p_{2}^{L}\cdots p_{j}^{L}}\]
then we can define the stationary distribution \(\{\pi(0),\pi(1),\ldots,\pi(n)\}\) as
\[\pi(j)=\begin{cases}\frac{1}{C_{\pi}}&\text{if }j=0,\\ \frac{1}{C_{\pi}}\cdot\frac{p_{0}^{R}\cdot p_{1}^{R}\cdots p_{j-1}^{R}}{p_{1}^ {L}\cdot p_{2}^{L}\cdots p_{j}^{L}}&\text{if }j=1,2,\ldots,n.\end{cases}\]
Note that \(\pi(j)>0\) for all \(j\in V(P_{n+1})\) and the stationary distribution satisfies the so-called detailed balance condition,
\[\pi(j)\cdot p_{j}^{R}=p_{j+1}^{L}\cdot\pi(j+1),\]
for \(j=0,1,\ldots,n-1\).
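As an illustration, the stationary distribution and the detailed balance condition can be checked numerically; the choice \(p_{j}^{R}=0.6\) below is arbitrary:

```python
# Minimal sketch: stationary distribution of the reflecting birth-and-death
# chain, built from the ratio pi(j)/pi(j-1) = p_{j-1}^R / p_j^L.
import numpy as np

n = 10
pR = np.full(n + 1, 0.6); pR[0], pR[n] = 1.0, 0.0    # reflecting walls
pL = 1.0 - pR

pi = np.ones(n + 1)
for j in range(1, n + 1):
    pi[j] = pi[j - 1] * pR[j - 1] / pL[j]
pi /= pi.sum()                                        # normalize by C_pi

assert np.allclose(pi[:-1] * pR[:-1], pL[1:] * pi[1:])  # detailed balance
```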
In order to define a continuous time quantum walk (CTQW) corresponding to the DTRW, we introduce the normalized Laplacian matrix \(\mathcal{L}\). Let \(P\) be the transition matrix of the DTRW. Also we define diagonal matrices \(D_{\pi}^{1/2}:=\operatorname{diag}\left(\sqrt{\pi(0)},\sqrt{\pi(1)},\ldots, \sqrt{\pi(n)}\right)\) and \(D_{\pi}^{-1/2}=\left(D_{\pi}^{1/2}\right)^{-1}\). Note that \(D_{\pi}^{-1/2}=\operatorname{diag}\left(1/\sqrt{\pi(0)},1/\sqrt{\pi(1)}, \ldots,1/\sqrt{\pi(n)}\right)\) by the definition. The normalized Laplacian matrix \(\mathcal{L}\) is given by
\[\mathcal{L}:=D_{\pi}^{1/2}\left(I_{n+1}-P\right)D_{\pi}^{-1/2}=I_{n+1}-D_{\pi }^{1/2}PD_{\pi}^{-1/2},\]
where \(I_{n+1}\) be the \((n+1)\times(n+1)\) identity matrix. We should remark that the matrix
\[J:=D_{\pi}^{1/2}PD_{\pi}^{-1/2},\]
is referred as the Jacobi matrix. So we can rewrite \(\mathcal{L}\) as \(\mathcal{L}=I_{n+1}-J\).
By using the detailed balance condition, we obtain
\[J_{j,k}=J_{k,j}=\begin{cases}\sqrt{p_{j}^{R}p_{j+1}^{L}},&\text{if }k=j+1,\\ 0,&\text{otherwise}.\end{cases}\]
Thus \(\mathcal{L}=I_{n+1}-J\) is an Hermitian matrix (real symmetric matrix). The CTQW which is discussed in this paper is driven by the time evolution operator (unitary matrix)
\[U_{CTQW}(t):=\exp\left(it\mathcal{L}\right):=\sum_{k=0}^{\infty}\frac{(it)^{k }}{k!}\mathcal{L}^{k},\]
where \(i\) is the imaginary unit. Let \(X_{t}^{C}\) (\(t\geq 0\)) be the random variable representing the position of the CTQWer at time \(t\). The distribution of \(X_{t}^{C}\) is determined by
\[\mathbb{P}\left(X_{t}^{C}=k|X_{0}^{C}=j\right):=|\langle k|U_{CTQW}(t)|j\rangle |^{2}=\left|\left(U_{CTQW}(t)\right)_{k,j}\right|^{2},\]
where \(|j\rangle\) is the \((n+1)\)-dimensional unit vector (column vector) whose \(j\)-th component equals \(1\) and whose other components are \(0\), and \(\langle v|\) is the transpose of \(|v\rangle\), i.e., \(\langle v|={}^{T}|v\rangle\).
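A minimal numerical sketch of this evolution (with an arbitrary choice of transition probabilities) is:

```python
# Minimal sketch of the CTQW on P_{n+1}: build J, L = I - J, and evolve
# from the left wall with U(t) = exp(i t L).
import numpy as np
from scipy.linalg import expm

n = 10
pR = np.full(n + 1, 0.6); pR[0], pR[n] = 1.0, 0.0
pL = 1.0 - pR
J = np.zeros((n + 1, n + 1))
for j in range(n):
    J[j, j + 1] = J[j + 1, j] = np.sqrt(pR[j] * pL[j + 1])
L_mat = np.eye(n + 1) - J                # normalized Laplacian

t = 5.0
U = expm(1j * t * L_mat)                 # U_CTQW(t) = exp(i t L)
prob = np.abs(U[:, 0]) ** 2              # P(X_t^C = k | X_0^C = 0)
assert np.isclose(prob.sum(), 1.0)       # unitarity check
```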
Hereafter we only consider the \(X_{0}^{C}=0\) case, i.e., the CTQWer starts from the leftmost vertex \(0\in V(P_{n+1})\). The time averaged distribution \(\bar{p}_{C}\) of the CTQW is defined by
\[\bar{p}_{C}(j):=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathbb{P}\left(X_{t}^{ C}=j|X_{0}^{C}=0\right)dt,\]
for each vertex \(j\in V(P_{n+1})\). We define a random variable \(\bar{X}_{n}^{C}\) as \(\mathbb{P}\left(\bar{X}_{n}^{C}=j\right)=\bar{p}_{C}(j)\).
In this paper, we also deal with a type of discrete time quantum walk (DTQW) corresponding to the DTRW so-called Szegedy's walk. The time evolution operator for the DTQW is defined by \(U=SC\) with the coin operator \(C\) and the shift operator (flip-flop type shift) \(S\). The coin operator \(C\) is defined by
\[C=|0\rangle\langle 0|\otimes I_{2}+\sum_{j=1}^{n-1}|j\rangle\langle j|\otimes C _{j}+|n\rangle\langle n|\otimes I_{2},\]
where \(I_{2}\) is the \(2\times 2\) identity matrix and \(\otimes\) is the tensor product. The local coin operator \(C_{j}\) is defined by
\[C_{j}=2|\phi_{j}\rangle\langle\phi_{j}|-I_{2},\quad|\phi_{j}\rangle=\sqrt{p_{ j}^{L}}|L\rangle+\sqrt{p_{j}^{R}}|R\rangle,\]
where \(|L\rangle={}^{T}[1\ 0]\) and \(|R\rangle={}^{T}[0\ 1]\). The shift operator \(S\) is given by
\[S\left(|j\rangle\otimes|L\rangle\right)=|j-1\rangle\otimes|R\rangle,\quad S \left(|j\rangle\otimes|R\rangle\right)=|j+1\rangle\otimes|L\rangle.\]
Let \(X_{t}^{D}\) (\(t=0,1,\ldots\)) be the random variable representing the position of the DTQWer at time \(t\). In this paper, we only consider \(X_{0}^{D}=0\) cases. The distribution of \(X_{t}^{D}\) is defined by
\[\mathbb{P}\left(X_{t}^{D}=j|X_{0}^{D}=0\right): =\left\|\left(\langle j|\otimes I_{2}\right)U_{DTQW}(t)\left(|0 \rangle\otimes|R\rangle\right)\right\|^{2}\] \[=|\left(\langle j|\otimes\langle L|\right)U_{DTQW}(t)\left(|0 \rangle\otimes|R\rangle\right)|^{2}+|\left(\langle j|\otimes\langle R|\right) U_{DTQW}(t)\left(|0\rangle\otimes|R\rangle\right)|^{2}\,.\]
We also consider the time averaged distribution \(\bar{p}_{D}\) of the DTQW defined by
\[\bar{p}_{D}(j):=\lim_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{P}\left(X _{t}^{D}=j|X_{0}^{D}=0\right),\]
for each vertex \(j\in V(P_{n+1})\). We define a random variable \(\bar{X}_{n}^{D}\) as \(\mathbb{P}\left(\bar{X}_{n}^{D}=j\right)=\bar{p}_{D}(j)\).
## 3 Relations between \(\bar{X}_{n}^{C}\) and \(\bar{X}_{n}^{D}\)
Since the Jacobi matrix \(J\) is a real symmetric matrix with simple [4] and symmetric [3] eigenvalues, we obtain eigenvalues \(1=\lambda_{0}>\lambda_{1}>\cdots>\lambda_{n-1}>\lambda_{n}=-1\) and corresponding eigenvectors \(\{|v_{\ell}\rangle\}_{\ell=0}^{n}\) as an orthonormal basis of the \((n+1)\)-dimensional complex vector space \(\mathbb{C}^{n+1}\). Thus we have the spectral decomposition
\[J=\sum_{\ell=0}^{n}\lambda_{\ell}|v_{\ell}\rangle\langle v_{\ell}|.\]
Noting that \(\mathcal{L}=I_{n+1}-J\), the spectral decomposition of \(U_{CTQW}(t)\) is given by
\[U_{CTQW}(t)=\sum_{\ell=0}^{n}\exp\left[it\left(1-\lambda_{\ell}\right)\right]|v _{\ell}\rangle\langle v_{\ell}|=e^{it}\sum_{\ell=0}^{n}e^{-it\lambda_{\ell}}|v _{\ell}\rangle\langle v_{\ell}|.\]
Because the eigenvalues of the Jacobi matrix \(J\) are simple, the time averaged distribution \(\bar{p}_{C}\) is expressed as
\[\bar{p}_{C}(j)=\sum_{\ell=0}^{n}|\langle j|v_{\ell}\rangle|^{2}\left|\langle v _{\ell}|0\rangle\right|^{2}=\sum_{\ell=0}^{n}|v_{\ell}(j)|^{2}\left|v_{\ell}(0 )\right|^{2},\]
where \(v_{\ell}(j)\) is the \(j\)th component of \(|v_{\ell}\rangle\).
On the other hand, the spectral decomposition of \(U_{DTQW}(t)\) is given (see e.g. [3, 12, 13, 5]) by
\[U_{DTQW}(t)=\mu_{0}^{t}|u_{0}\rangle\langle u_{0}|+\sum_{\ell=1}^{n-1}\left(\frac{1}{2(1-\lambda_{\ell}^{2})}\sum_{\pm}\mu_{\pm\ell}^{t}|u_{\pm\ell}\rangle\langle u_{\pm\ell}|\right)+\mu_{n}^{t}|u_{n}\rangle\langle u_{n}|,\]
where
\[\begin{cases}\mu_{0}=\lambda_{0}=1,&|u_{0}\rangle=|\overline{v_{0}}\rangle,\\ \mu_{\pm\ell}=\exp\left(\pm i\cos^{-1}\lambda_{\ell}\right),&|u_{\pm\ell}\rangle=|\overline{v_{\ell}}\rangle-\mu_{\pm\ell}\ S|\overline{v_{\ell}}\rangle,\\ \mu_{n}=\lambda_{n}=-1,&|u_{n}\rangle=|\overline{v_{n}}\rangle,\end{cases}\]
with
\[|\overline{v_{\ell}}\rangle=v_{\ell}(0)|0\rangle\otimes|R\rangle+\sum_{j=1}^{ n-1}v_{\ell}(j)|j\rangle\otimes|\phi_{j}\rangle+v_{\ell}(n)|n\rangle\otimes|L\rangle.\]
Since all the eigenvalues of \(U_{DTQW}(t)\) are also simple, the time averaged distribution \(\bar{p}_{D}\) is expressed as
\[\bar{p}_{D}(j) =\left\{|\left(\langle j|\otimes\langle L|\right)|u_{0}\rangle|^{2}+|\left(\langle j|\otimes\langle R|\right)|u_{0}\rangle|^{2}\right\}\left|\langle u_{0}|\left(|0\rangle\otimes|R\rangle\right)\right|^{2}\] \[+\sum_{\ell=1}^{n-1}\left[\frac{1}{2(1-\lambda_{\ell}^{2})}\sum_{\pm}\left\{|\left(\langle j|\otimes\langle L|\right)|u_{\pm\ell}\rangle|^{2}+|\left(\langle j|\otimes\langle R|\right)|u_{\pm\ell}\rangle|^{2}\right\}\left|\langle u_{\pm\ell}|\left(|0\rangle\otimes|R\rangle\right)\right|^{2}\right]\] \[+\left\{|\left(\langle j|\otimes\langle L|\right)|u_{n}\rangle|^{2}+|\left(\langle j|\otimes\langle R|\right)|u_{n}\rangle|^{2}\right\}\left|\langle u_{n}|\left(|0\rangle\otimes|R\rangle\right)\right|^{2}.\]
A more concrete expression of \(\bar{p}_{D}\) in terms of the eigenvalues and eigenvectors of the Jacobi matrix \(J\) is given as follows (a rearrangement of Eq. (10) in [3]):
\[\bar{p}_{D}(j) =\frac{1}{2}\left|v_{0}(j)\right|^{2}\left|v_{0}(0)\right|^{2}+ \frac{1}{2}\left|v_{n}(j)\right|^{2}\left|v_{n}(0)\right|^{2}\] \[+\frac{1}{2}\sum_{\ell=0}^{n}\left|v_{\ell}(j)\right|^{2}\left|v_{ \ell}(0)\right|^{2}\] \[+\frac{1}{2}\sum_{\ell=1}^{n-1}\frac{1}{1-\lambda_{\ell}^{2}} \left\{p_{j-1}^{R}\left|v_{\ell}(j-1)\right|^{2}-\lambda_{\ell}^{2}\left|v_{ \ell}(j)\right|^{2}+p_{j+1}^{L}\left|v_{\ell}(j+1)\right|^{2}\right\}\left|v_{ \ell}(0)\right|^{2},\]
with conventions \(p_{-1}^{R}=v_{\ell}(-1)=p_{n+1}^{L}=v_{\ell}(n+1)=0\).
Now we consider the distribution functions \(\bar{F}_{n}^{C}(x):=\mathbb{P}\left(\bar{X}_{n}^{C}\leq x\right)=\sum_{j\leq x }\bar{p}_{C}(j)\) of \(\bar{X}_{n}^{C}\) and \(\bar{F}_{n}^{D}(x):=\mathbb{P}\left(\bar{X}_{n}^{D}\leq x\right)=\sum_{j\leq x }\bar{p}_{D}(j)\) of \(\bar{X}_{n}^{D}\). For each integer \(0\leq k\leq n-1\), we have
\[\bar{F}_{n}^{C}(k)=\sum_{j=0}^{k}\bar{p}_{C}(j)=\sum_{j=0}^{k}\left\{\sum_{\ell =0}^{n}\left|v_{\ell}(j)\right|^{2}\left|v_{\ell}(0)\right|^{2}\right\}.\]
We also obtain the following expression by using \(p_{j}^{L}+p_{j}^{R}=1,p_{0}^{R}=1\) and \(p_{1}^{L}\left|v_{\ell}(1)\right|^{2}=\lambda_{\ell}^{2}\left|v_{\ell}(0)\right|^ {2}\):
\[\bar{F}_{n}^{D}(k) =\sum_{j=0}^{k}\bar{p}_{D}(j)\] \[=\frac{1}{2}\sum_{j=0}^{k}\left|v_{0}(j)\right|^{2}\left|v_{0}(0) \right|^{2}+\frac{1}{2}\sum_{j=0}^{k}\left|v_{n}(j)\right|^{2}\left|v_{n}(0) \right|^{2}\] \[+\frac{1}{2}\sum_{j=0}^{k}\left\{\sum_{\ell=0}^{n}\left|v_{\ell} (j)\right|^{2}\left|v_{\ell}(0)\right|^{2}\right\}+\frac{1}{2}\sum_{j=1}^{k} \left\{\sum_{\ell=1}^{n-1}\left|v_{\ell}(j)\right|^{2}\left|v_{\ell}(0)\right| ^{2}\right\}\] \[+\frac{1}{2}\sum_{\ell=1}^{n-1}\frac{1}{1-\lambda_{\ell}^{2}} \left\{p_{0}^{R}\left|v_{\ell}(0)\right|^{2}-p_{1}^{L}\left|v_{\ell}(1)\right| ^{2}-p_{k}^{R}\left|v_{\ell}(k)\right|^{2}+p_{k+1}^{L}\left|v_{\ell}(k+1) \right|^{2}\right\}\left|v_{\ell}(0)\right|^{2}\] \[=\sum_{j=0}^{k}\left\{\sum_{\ell=0}^{n}\left|v_{\ell}(j)\right|^ {2}\left|v_{\ell}(0)\right|^{2}\right\}+\frac{1}{2}\sum_{\ell=1}^{n-1}\frac{1} {1-\lambda_{\ell}^{2}}\left\{-p_{k}^{R}\left|v_{\ell}(k)\right|^{2}+p_{k+1}^{L }\left|v_{\ell}(k+1)\right|^{2}\right\}\left|v_{\ell}(0)\right|^{2}\] \[=\bar{F}_{n}^{C}(k)+\frac{1}{2}\sum_{\ell=1}^{n-1}\frac{1}{1- \lambda_{\ell}^{2}}\left\{-p_{k}^{R}\left|v_{\ell}(k)\right|^{2}+p_{k+1}^{L} \left|v_{\ell}(k+1)\right|^{2}\right\}\left|v_{\ell}(0)\right|^{2}.\]
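Under the same assumptions, this identity can be checked numerically with the two earlier sketches (which provide `J`, `p_L`, `p_R`, `n` and `dtqw_time_averaged`); the agreement holds only up to the finite-time-averaging error in \(\bar{p}_{D}\).

```python
import numpy as np

lam, V = np.linalg.eigh(J)
order = np.argsort(lam)[::-1]         # lambda_0 = 1 > ... > lambda_n = -1
lam, V = lam[order], V[:, order]

bar_p_C = (np.abs(V) ** 2) @ (np.abs(V[0, :]) ** 2)
bar_p_D = dtqw_time_averaged(p_L, p_R)

for k in range(n):                    # 0 <= k <= n-1
    corr = 0.5 * sum(
        (-p_R[k] * V[k, l] ** 2 + p_L[k + 1] * V[k + 1, l] ** 2)
        * V[0, l] ** 2 / (1.0 - lam[l] ** 2)
        for l in range(1, n))
    lhs = bar_p_D[:k + 1].sum() - bar_p_C[:k + 1].sum()
    print(k, lhs, corr)               # the two columns should nearly agree
```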
## 4 Scaling limit
In this section, we state our main result and prove it.
Theorem 4.1: _Assume that there exists a spectral gap, i.e., \(\limsup_{n\to\infty}\lambda_{1}<1=\lambda_{0}\). If \(\frac{\bar{X}_{n}^{C}}{n}\) converges weakly to the random variable \(\bar{X}\) as \(n\to\infty\), then \(\frac{\bar{X}_{n}^{D}}{n}\) also converges weakly to the same random variable \(\bar{X}\)._
**Proof of Theorem 4.1**
Let \(\bar{F}\) be the distribution function of the random variable \(\bar{X}\). We assume that
\[\lim_{n\to\infty}\mathbb{P}\left(\frac{\bar{X}_{n}^{C}}{n}\leq x\right)=\bar{ F}(x) \tag{4.1}\]
for all points \(x\) at which \(\bar{F}\) is continuous. Hereafter we assume that \(\bar{F}\) is continuous at \(x\) (\(0\leq x\leq 1\)). Note that, by definition, Eq. (4.1) means that
\[\lim_{n\to\infty}\bar{F}_{n}^{C}\left(nx\right)=\lim_{n\to\infty}\bar{F}_{n}^{ C}\left(\left\lfloor nx\right\rfloor\right)=\lim_{n\to\infty}\sum_{j=0}^{ \lfloor nx\rfloor}\left\{\sum_{\ell=0}^{n}\left|v_{\ell}(j)\right|^{2}\left|v_{ \ell}(0)\right|^{2}\right\}=\bar{F}(x), \tag{4.2}\]
where \(\left\lfloor a\right\rfloor\) denotes the largest integer not greater than \(a\).
From Eq. (4.2) and the relation
\[\mathbb{P}\left(\frac{\bar{X}_{n}^{D}}{n}\leq x\right) =\bar{F}_{n}^{D}(nx)=\bar{F}_{n}^{D}(\left\lfloor nx\right\rfloor)\] \[=\bar{F}_{n}^{C}(\left\lfloor nx\right\rfloor)+\frac{1}{2}\sum_{ \ell=1}^{n-1}\frac{1}{1-\lambda_{\ell}^{2}}\Bigg{\{}-p_{\lfloor nx\rfloor}^{R} \left|v_{\ell}(\left\lfloor nx\right\rfloor)\right|^{2}+p_{\lfloor nx\rfloor+ 1}^{L}\left|v_{\ell}(\left\lfloor nx\right\rfloor+1)\right|^{2}\Bigg{\}} \left|v_{\ell}(0)\right|^{2},\]
if we can prove
\[\lim_{n\to\infty}\sum_{\ell=1}^{n-1}\frac{1}{1-\lambda_{\ell}^{2}}\left|v_{ \ell}(\left\lfloor nx\right\rfloor)\right|^{2}\left|v_{\ell}(0)\right|^{2}= \lim_{n\to\infty}\sum_{\ell=1}^{n-1}\frac{1}{1-\lambda_{\ell}^{2}}\left|v_{ \ell}(\left\lfloor nx\right\rfloor+1)\right|^{2}\left|v_{\ell}(0)\right|^{2}=0, \tag{4.3}\]
then we can conclude
\[\lim_{n\rightarrow\infty}\mathbb{P}\left(\frac{\bar{X}_{n}^{D}}{n}\leq x\right)= \bar{F}(x),\]
for all points at which \(\bar{F}\) is continuous.
From Eq.(4.2), we obtain
\[0\leq\sum_{j=0}^{\lfloor nx\rfloor}\left\{\sum_{\ell=1}^{n-1}\left|v_{\ell}(j) \right|^{2}\left|v_{\ell}(0)\right|^{2}\right\}\leq\bar{F}_{n}^{C}(\lfloor nx \rfloor)\xrightarrow{n\rightarrow\infty}\bar{F}(x).\]
Also we have
\[0\leq\sum_{j=0}^{\lfloor nx\rfloor+1}\left\{\sum_{\ell=1}^{n-1} \left|v_{\ell}(j)\right|^{2}\left|v_{\ell}(0)\right|^{2}\right\}\leq\bar{F}_{ n}^{C}\left(\left\lfloor n\left(x+\frac{1}{n}\right)\right\rfloor\right) \xrightarrow{n\rightarrow\infty}\bar{F}(x),\]
from the continuity of \(\bar{F}\) at \(x\). Moreover, the full sums \(\sum_{\ell=0}^{n}|v_{\ell}(j)|^{2}|v_{\ell}(0)|^{2}\) at \(j=\lfloor nx\rfloor\) and \(j=\lfloor nx\rfloor+1\) are differences of consecutive values of \(\bar{F}_{n}^{C}\), all of which converge to \(\bar{F}(x)\) by Eq. (4.2) and the continuity of \(\bar{F}\) at \(x\); since they dominate the corresponding sums over \(\ell=1,\ldots,n-1\), we conclude that
\[\lim_{n\rightarrow\infty}\sum_{\ell=1}^{n-1}\left|v_{\ell}(\lfloor nx \rfloor)\right|^{2}\left|v_{\ell}(0)\right|^{2}=\lim_{n\rightarrow\infty} \sum_{\ell=1}^{n-1}\left|v_{\ell}(\lfloor nx\rfloor+1)\right|^{2}\left|v_{ \ell}(0)\right|^{2}=0. \tag{4.4}\]
Therefore, combining Eq. (4.4) with the spectral gap assumption, we obtain Eq. (4.3) as follows:
\[\limsup_{n\rightarrow\infty}\sum_{\ell=1}^{n-1}\frac{1}{1-\lambda _{\ell}^{2}}\left|v_{\ell}(\lfloor nx\rfloor)\right|^{2}\left|v_{\ell}(0) \right|^{2} \leq\limsup_{n\rightarrow\infty}\frac{1}{1-\lambda_{1}^{2}} \sum_{\ell=1}^{n-1}\left|v_{\ell}(\lfloor nx\rfloor)\right|^{2}\left|v_{\ell}( 0)\right|^{2}\] \[\leq\frac{1}{1-\limsup_{n\rightarrow\infty}\lambda_{1}^{2}} \times\lim_{n\rightarrow\infty}\sum_{\ell=1}^{n-1}\left|v_{\ell}(\lfloor nx \rfloor)\right|^{2}\left|v_{\ell}(0)\right|^{2}\] \[=0,\]
\[\limsup_{n\rightarrow\infty}\sum_{\ell=1}^{n-1}\frac{1}{1-\lambda_{ \ell}^{2}}\left|v_{\ell}(\lfloor nx\rfloor+1)\right|^{2}\left|v_{\ell}(0) \right|^{2} \leq\limsup_{n\rightarrow\infty}\frac{1}{1-\lambda_{1}^{2}} \sum_{\ell=1}^{n-1}\left|v_{\ell}(\lfloor nx\rfloor+1)\right|^{2}\left|v_{ \ell}(0)\right|^{2}\] \[\leq\frac{1}{1-\limsup_{n\rightarrow\infty}\lambda_{1}^{2}} \times\lim_{n\rightarrow\infty}\sum_{\ell=1}^{n-1}\left|v_{\ell}(\lfloor nx \rfloor+1)\right|^{2}\left|v_{\ell}(0)\right|^{2}\] \[=0.\]
This completes the proof. \(\Box\)
|
2310.20174 | GraphTransformers for Geospatial Forecasting of Hurricane Trajectories | In this paper we introduce a novel framework for trajectory prediction of
geospatial sequences using GraphTransformers. When viewed across several
sequences, we observed that a graph structure automatically emerges between
different geospatial points that is often not taken into account for such
sequence modeling tasks. We show that by leveraging this graph structure
explicitly, geospatial trajectory prediction can be significantly improved. Our
GraphTransformer approach improves upon state-of-the-art Transformer based
baseline significantly on HURDAT, a dataset where we are interested in
predicting the trajectory of a hurricane on a 6 hourly basis. | Pallavi Banerjee, Satyaki Chakraborty | 2023-10-31T04:53:10Z | http://arxiv.org/abs/2310.20174v2 | # GraphTransformers for Geospatial Forecasting of Hurricane Trajectories
###### Abstract
In this paper we introduce a novel framework for trajectory prediction of geospatial sequences using _GraphTransformers_. When viewed across several sequences, we observed that a graph structure automatically emerges between different geospatial points, which is often not taken into account in such sequence modeling tasks. We show that by leveraging this graph structure explicitly, geospatial trajectory prediction can be significantly improved. Our _GraphTransformer_ approach improves significantly upon a state-of-the-art Transformer-based baseline on HURDAT, a dataset where we are interested in predicting the trajectory of a hurricane on a 6-hourly basis. This helps inform evacuation efforts by narrowing down the target location by \(10\) to \(20\) kilometers along both the north-south and east-west directions.
## 1 Introduction
Hurricanes, generally known as tropical cyclones, are storm systems characterized by low pressure and high wind speeds. These storms cause physical and financial damage, which has been on the rise due to climate change [1]. Identifying their trajectories will inform efficient rescue and evacuation methods by targeting the correct locations, reducing the damage to property and life. In this paper we are interested in the task of next-location prediction given the trajectory of a hurricane observed at a 6-hourly interval.
When such geospatial sequences of extreme weather patterns are observed over a significant period of time, an underlying directed graph structure emerges between the different spatial points. While traditional [2] and machine-learning-based studies [3; 4] for trajectory forecasting show promising results for this task, they typically rely on local context about the trajectory's past locations to perform the prediction task. The global context that can be obtained in the form of a knowledge graph by looking across multiple such trajectories is often under-leveraged. In this paper we provide a general approach based on _GraphTransformers_ for modeling such geospatial sequences, where the graph neural network captures the global context from a heuristically constructed knowledge graph and the transformer models the local context from past trajectory points.
## 2 Related Work
For hurricane trajectory prediction, dynamic models, statistical models and statistical-dynamic models have traditionally been used. Dynamic models solve physical atmospheric equations using supercomputers, whereas statistical models focus on the historic relation between storm features and location. Statistical-dynamic models blend both approaches together. Recently, neural network architectures have been employed to address similar trajectory estimation tasks. Previous work [5]-[9] includes sophisticated deep learning architectures such as recurrent neural networks to leverage the sequential storm trajectory, convolutional neural networks to utilize satellite imagery data, and multimodal approaches [10] combining different data sources. There are also several studies
which discuss the application of a hybrid statistical-deep learning method in forecasting tasks [11]. However, none of these approaches have explicitly leveraged the underlying graph structure for hurricane prediction tasks.
Graph neural networks [12]-[24] are neural networks designed to model data that reside in a graph and are extensively used in a wide range of domains such as recommender systems, biomedical sciences, prediction, and social media analysis [25]-[31]. Subsequently, transformers [32] have revolutionized the field of sequence modeling and have been used in many machine learning tasks in and outside of NLP, such as classification, anomaly detection, translation, and time series forecasting [33]-[36]. Graph Transformers are a fairly recent development which pass graph nodes as tokens and have been known to give promising results in prediction tasks [37]-[39]. In this paper we leverage the underlying graph structure of the hurricane trajectory using a GraphTransformer, where the transformer component models the sequential trajectory and the graph neural network generates node embeddings.
## 3 Approach
### Dataset Preparation
The HURDAT dataset contains location and weather information about \(2864\) storm trajectory sequences collected from \(1851\) to \(2015\) in North America. We first preprocess the dataset into training, validation and test splits roughly consisting of \(80\%\), \(10\%\) and \(10\%\) of the trajectory sequences, respectively. We use _stratified sampling_ to do the splits based on the length of the sequences to ensure that the splits cover storm sequences of varying lengths. We then create model input-output pairs (\(X_{i}\), \(Y_{i}\)) from each sequence by randomly sampling a target location \(Y_{i}\) as the model target and using the sequence up to the target token (with a maximum sequence length of \(16\) tokens) as the corresponding model input \(X_{i}\). We do this to ensure there is no data leakage between the training and the test and validation sets.
### Geospatial Graph Construction
We observe by studying several such hurricane trajectory sequences that a natural directed graph structure emerges between different locations, where the weight of an edge (\(u\to v\)) denotes the likelihood of the storm moving from location \(u\) to location \(v\) in the next time step. We construct this graph from the training set using a simple heuristic. We first create nodes corresponding to all latitude-longitude pairs in the training set, with their latitude-longitude values being the node features. The latitude-longitude values are rounded off to the first decimal place. Then, for every \(u_{t}\) in sequence \(X_{i}\) we add directional edges with weights as follows, wherever they are defined (a code sketch of this construction follows the weight updates below).
Figure 1: Storm trajectories over two centuries of data plotted across Northern America. Significant overlap and intersection in this graph suggest correlation between points across different trajectories, which is under-leveraged in prior work.
\[W[u_{t-1}\to u_{t}]\mathrel{+}=1.0,\] \[W[u_{t-2}\to u_{t}]\mathrel{+}=0.5,\qquad W[u_{t-3}\to u_{t}]\mathrel{+}=0.5,\] \[W[u_{t-4}\to u_{t}]\mathrel{+}=0.1,\qquad W[u_{t-5}\to u_{t}]\mathrel{+}=0.1.\]
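A possible implementation of this heuristic with `networkx` is sketched below; the function name and the shape of the input trajectories are our own illustrative choices.

```python
import networkx as nx

# Lag weights taken from the update rules above.
LAG_WEIGHTS = {1: 1.0, 2: 0.5, 3: 0.5, 4: 0.1, 5: 0.1}

def build_geo_graph(trajectories):
    """trajectories: iterable of [(lat, lon), ...] training sequences."""
    G = nx.DiGraph()
    for seq in trajectories:
        nodes = [(round(lat, 1), round(lon, 1)) for lat, lon in seq]
        for t, v in enumerate(nodes):
            G.add_node(v, pos=v)              # lat/lon pair as node feature
            for lag, w in LAG_WEIGHTS.items():
                if t - lag >= 0:
                    u = nodes[t - lag]
                    old = G[u][v]["weight"] if G.has_edge(u, v) else 0.0
                    G.add_edge(u, v, weight=old + w)
    return G
```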
### Feature Engineering
The input features to the transformer at a given time step consist of the current location along with its local graph structure and weather conditions. For the graph features, we simply sample a subgraph from the \(k\)-distance ego-graph of the location node. Empirically, we found that using \(k=1\) achieves good results, although increasing the value of \(k\) can slightly improve the results at the cost of increased memory and training time. All our experiments use \(k=1\) unless otherwise mentioned. During inference, it is not guaranteed that all location nodes will exist within our graph built from the training-set trajectories. For such cases we take the node representation of the nearest location in our training graph if it lies within an \(L2\)-distance of \(0.75\) units; otherwise we only use the latitude-longitude features for the node (see the sketch after this paragraph).
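The sampling step might look as follows; the helper name and the `None` fallback (signaling that only the raw latitude-longitude features are used) are illustrative choices, not the authors' code.

```python
import networkx as nx
import numpy as np

def sample_subgraph(G, lat, lon, k=1, max_dist=0.75):
    """k-distance ego-graph of the location node, with nearest-node fallback."""
    v = (round(lat, 1), round(lon, 1))
    if v not in G:
        v = min(G.nodes, key=lambda u: np.hypot(u[0] - lat, u[1] - lon))
        if np.hypot(v[0] - lat, v[1] - lon) > max_dist:
            return None                       # use raw lat/lon features only
    return nx.ego_graph(G, v, radius=k, undirected=True)
```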
For the weather features, which are numerical features denoting directional wind strengths, we simply standardize the values across the dataset so that they follow the unit normal distribution. For ease of generalization, we transform all the latitude-longitude features (including the graph features and the target location) to the local co-ordinate frame with the starting location as the origin.
### Architecture and Training
Our graph transformer architecture (Fig. 3) uses a GCN with \(2\) graph convolution layers to encode the graph features of the location node. The GCN takes node features which are \(2\)-dimensional, since they denote the latitude and longitude in the local co-ordinate frame. The output and intermediate representations of the GCN are \(16\)-dimensional. The output of the GCN is then concatenated with the standardized directional wind features, and this concatenated input is passed into a transformer encoder. The transformer encoder has \(4\) layers of multihead self-attention, with each self-attention block having \(4\) heads and an embedding dimension of \(64\). At the output of the transformer encoder we take the representation of the final token of the sequence and use a linear layer to solve the regression problem. We use the smooth L1 loss [40] for the regression and the Adam optimizer with an initial learning rate of \(1e-4\). We train for \(30\) epochs on a Tesla T4 GPU for all our experiments with early stopping enabled. A sketch of this architecture follows below.
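The following is a rough PyTorch / PyTorch Geometric sketch of this architecture; the number of wind features (`n_weather`), the batching of node indices, and the omission of edge weights and positional encodings are simplifying assumptions on our part.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class GraphTransformer(nn.Module):
    """Sketch of the model in Fig. 3; dimensions follow the text."""
    def __init__(self, n_weather=4, g_dim=16, d_model=64):
        super().__init__()
        self.gcn1 = GCNConv(2, g_dim)          # node features: local lat/lon
        self.gcn2 = GCNConv(g_dim, g_dim)
        self.proj = nn.Linear(g_dim + n_weather, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, 2)      # regress local (lat, lon)

    def forward(self, node_x, edge_index, token_idx, weather):
        # node_x: [N, 2]; edge_index: [2, E]; token_idx: [B, T] node indices
        # along each trajectory; weather: [B, T, n_weather]
        h = self.gcn2(torch.relu(self.gcn1(node_x, edge_index)), edge_index)
        tokens = self.proj(torch.cat([h[token_idx], weather], dim=-1))
        return self.head(self.encoder(tokens)[:, -1])  # final-token readout

# Training, as described in the text:
#   criterion = nn.SmoothL1Loss()
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```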
Figure 2: Example of the knowledge graph construction and neighborhood sampling. (a) A toy dataset of two trajectories \(u_{0}\to u_{1}\to u_{3}\to u_{4}\to u_{5}\) and \(u_{2}\to u_{3}\to u_{4}\to u_{5}\) (b) Heuristically constructed graph from the two trajectories (c) subgraph with node features sampled during training for node \(u_{4}\)
## 4 Evaluation and Results
For evaluation we perform \(5\)-fold cross-validation and show that graph transformers significantly outperform vanilla transformers in the task of next-location prediction from a given trajectory. For fair evaluation we divide the test set into different buckets based on the length of the trajectory sequence that is provided as input to the model, compute the mean absolute error of the predicted latitude-longitude values, and then average across all the buckets. Table 1 shows that graph transformers outperform vanilla transformers in all such buckets by a significant margin.
As shown in Table 1, by leveraging geospatial graphs we improve upon SOTA sequential-model approaches by \(0.18^{\circ}\) in latitude and \(0.19^{\circ}\) in longitude. This roughly translates to narrowing down the target location by around \(20\) kilometers along the N-S direction and around \(10\)-\(15\) kilometers along the E-W direction. Given that the US Department of Homeland Security recommends [41] evacuation routes \(30\)-\(80\) kilometers inland, a \(10\)-\(20\) kilometer improvement in the precision of a model that predicts the target location on a 6-hourly basis can hopefully save numerous lives.
## 5 Conclusion and Future Work
In this paper we introduced a novel GraphTransformer-based framework to predict the next affected location in a hurricane's trajectory. By improving the trajectory prediction task we hope to better
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Sequence Length & \multicolumn{2}{c}{Transformer} & \multicolumn{2}{c}{GraphTransformer (ours)} \\ \hline & Lat. & Long. & Lat. & Long. \\ \hline
0-5 & \(0.62^{\circ}\) & \(0.73^{\circ}\) & \(0.47^{\circ}\) & \(0.54^{\circ}\) \\
6-10 & \(0.53^{\circ}\) & \(0.56^{\circ}\) & \(0.36^{\circ}\) & \(0.43^{\circ}\) \\
11-15 & \(0.48^{\circ}\) & \(0.52^{\circ}\) & \(0.30^{\circ}\) & \(0.32^{\circ}\) \\ \hline All sequences & \(0.53^{\circ}\) & \(0.60^{\circ}\) & **0.35**\({}^{\circ}\) & **0.41**\({}^{\circ}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean Absolute Error for different models
Figure 3: Our GraphTransformer model, where the graph embedding from the spatial graph is concatenated with the weather features and sent to the transformer for next-location prediction.
inform risk mitigation and damage control efforts. In the future, we can further enhance performance by expanding the model into a multi-modal architecture that adds satellite imagery, temperature, topography and precipitation data. We can also perform extensive hyperparameter tuning and neural architecture search to further improve performance.
|
2308.16854 | Special geometry, quasi-modularity and attractor flow for BPS structures | We study mathematical structures on the moduli spaces of BPS structures of
$\mathcal{N}=2$ theories. Guided by the realization of BPS structures within
type IIB string theory on non-compact Calabi-Yau threefolds, we develop a
notion of BPS variation of Hodge structure which gives rise to special K\"ahler
geometry as well as to Picard-Fuchs equations governing the central charges of
the BPS structure. We focus our study on cases with complex one dimensional
moduli spaces and charge lattices of rank two including Argyres-Douglas $A_2$
as well as Seiberg-Witten $SU(2)$ theories. In these cases the moduli spaces
are identified with modular curves and we determine the expressions of the
central charges in terms of quasi-modular forms of the corresponding duality
groups. We furthermore determine the curves of marginal stability and study the
attractor flow in these examples, showing that it provides another way of
determining the complete BPS spectrum in these cases. | Murad Alim, Florian Beck, Anna Biggs, Daniel Bryan | 2023-08-31T16:55:38Z | http://arxiv.org/abs/2308.16854v1 | # Special geometry, quasi-modularity and attractor flow for BPS structures
###### Abstract
We study mathematical structures on the moduli spaces of BPS structures of \({\cal N}=2\) theories. Guided by the realization of BPS structures within type IIB string theory on non-compact Calabi-Yau threefolds, we develop a notion of BPS variation of Hodge structure which gives rise to special Kahler geometry as well as to Picard-Fuchs equations governing the central charges of the BPS structure. We focus our study on cases with complex one dimensional moduli spaces and charge lattices of rank two including Argyres-Douglas \(A_{2}\) as well as Seiberg-Witten \(SU(2)\) theories. In these cases the moduli spaces are identified with modular curves and we determine the expressions of the central charges in terms of quasi-modular forms of the corresponding duality groups. We furthermore determine the curves of marginal stability and study the attractor flow in these examples, showing that it provides another way of determining the complete BPS spectrum in these cases. |
2308.16840 | Direct measurement of photoinduced transient conducting state in
multilayer 2H-MoTe2 | Ultrafast light-matter interaction has emerged as a powerful tool to control
and probe the macroscopic properties of functional materials, especially
two-dimensional transition metal dichalcogenides which can form different
structural phases with distinct physical properties. However, it is often
difficult to accurately determine the transient optical constants. In this
work, we developed a near-infrared pump - terahertz to midinfrared (12-22 THz)
probe system in transmission geometry to measure the transient optical
conductivity in 2H-MoTe2 layered material. By performing separate measurements
on bulk and thin-film samples, we are able to overcome issues related to
nonuniform substrate thickness and penetration depth mismatch and to extract
the transient optical constants reliably. Our results show that photoexcitation
at 690 nm induces a transient insulator-metal transition, while photoexcitation
at 2 um has a much smaller effect due to the photon energy being smaller than
the band gap of the material. Combining this with a single-color pump-probe
measurement, we show that the transient response evolves towards 1T' phase at
higher fluence. Our work provides a comprehensive understanding of the
photoinduced phase transition in the 2H-MoTe2 system. | XinYu Zhou, H Wang, Q M Liu, S J Zhang, S X Xu, Q Wu, R S Li, L Yue, T C Hu, J Y Yuan, S S Han, T Dong, D Wu, N L Wang | 2023-08-31T16:17:04Z | http://arxiv.org/abs/2308.16840v2 | # Direct measurement of photoinduced transient conducting state in multilayer 2\(H\)-MoTe\({}_{2}\)
###### Abstract
Ultrafast light-matter interaction has emerged as a powerful tool to control and probe the macroscopic properties of functional materials, especially two-dimensional transition metal dichalcogenides which can form different structural phases with distinct physical properties. However, it is often difficult to accurately determine the transient optical constants. In this work, we developed a near-infrared pump - terahertz to midinfrared (12-22 THz) probe system in transmission geometry to measure the transient optical conductivity in 2\(H\)-MoTe\({}_{2}\) layered material. By performing separate measurements on bulk and thin-film samples, we are able to overcome issues related to nonuniform substrate thickness and penetration depth mismatch and to extract the transient optical constants reliably. Our results show that photoexcitation at 690 nm induces a transient insulator-metal transition, while photoexcitation at 2 \(\mu\)m has a much smaller effect due to the photon energy being smaller than the band gap of the material. Combining this with a single-color pump-probe measurement, we show that the transient response evolves towards 1\(T^{{}^{\prime}}\) phase at higher fluence. Our work provides a comprehensive understanding of the photoinduced phase transition in the 2\(H\)-MoTe\({}_{2}\) system.
## I Introduction
Layered transition-metal dichalcogenides (TMDCs) are a class of two-dimensional (2D) materials that exhibit a diverse range of physical phenomena [1; 2; 3; 4]. MoTe\({}_{2}\) is of particular interest due to its ability to crystallize into various structures, including the trigonal prismatic coordinated hexagonal 2\(H\) phase (shown in Fig. 1(a)-(c)), the distorted octahedral coordinated monoclinic 1\(T^{{}^{\prime}}\) phase, and the orthorhombic \(T_{d}\) phase. These different phases possess significantly distinct properties: 2\(H\)-MoTe\({}_{2}\) is insulating with a band gap of approximately 1 eV, while the 1\(T^{{}^{\prime}}\) and \(T_{d}\) phases are semimetallic [5; 6; 7]. The temperature-dependent resistivity of a 2\(H\)-MoTe\({}_{2}\) thin film is shown in Fig. 1(d). The 1\(T^{{}^{\prime}}\)-MoTe\({}_{2}\) phase is centrosymmetric and transforms into the \(T_{d}\)-MoTe\({}_{2}\) phase with broken inversion symmetry upon cooling below \(\sim\)250 K [5; 6]. The \(T_{d}\)-MoTe\({}_{2}\) phase hosts type-II topological Weyl fermions and is also superconducting below 0.1 K [8; 9; 10]. Another interesting aspect is that these 2D materials can be easily exfoliated into flakes or prepared as mono- or multilayered sheets. Intriguingly, bulk MoTe\({}_{2}\) undergoes significant changes in electronic and photoelectric properties when reduced to layered form [11; 12]. The formation of various phases in 2D MoTe\({}_{2}\) thus presents a promising prospect for the deliberate manipulation of phase transitions, and ultrashort laser pulses have proven effective in controlling diverse characteristics of layered materials [13; 14; 15]. For instance, ultrafast laser pulses can induce a nonthermal phase transition between the type-II Weyl semimetal \(T_{d}\) phase and the normal semimetal 1\(T^{{}^{\prime}}\) phase on a sub-picosecond timescale [16].
The insulating 2\(H\) phase is the most stable state of MoTe\({}_{2}\), and its structure differs more strongly from the \(T_{d}\) and 1\(T^{{}^{\prime}}\) phases. It is therefore expected to be difficult to induce a transition from 2\(H\) to the 1\(T^{{}^{\prime}}\) or \(T_{d}\) phases. Nonetheless, recent studies suggest that the phase transition from the 2\(H\) to the 1\(T^{{}^{\prime}}\) phase can be induced by different techniques, including ultrashort laser or terahertz (THz) pulses, in either bulk or single-layer MoTe\({}_{2}\) samples [7; 17; 18; 19; 20; 21]. Such a phase transition is expected to be associated with an insulator-metal transition (IMT). However, up to now, there has been a lack of direct conductivity measurements identifying the IMT. Here we present time-resolved terahertz to midinfrared measurements to investigate the effects of photoexcitation at 690 nm (1.8 eV) and 2 \(\mu\)m (0.62 eV) on 2\(H\)-MoTe\({}_{2}\). By conducting two separate experiments on bulk and thin-film samples, we are able to avoid the problems of nonuniform substrate thickness and penetration-depth mismatch, and to accurately determine the optical constants after excitation. Our results show that photoexcitation at 690 nm induces a transient IMT, while photoexcitation at 2 \(\mu\)m has a minimal effect due to the photon energy being smaller than the band gap of the material. In order to establish a possible link between the transient IMT and a stable 1\(T^{{}^{\prime}}\) state, we performed fluence-dependent single-color pump-probe measurements and found that the transition evolves towards the 1\(T^{{}^{\prime}}\) phase at higher fluences.
## II Experiments and results
We constructed an ultrafast near-infrared pump - terahertz to midinfrared probe system covering the frequency range of 12-22 THz. The system uses a Pharos laser from Light Conversion to produce 1030 nm pulses with an energy of 400 \(\mu\)J at a repetition rate of 50 kHz. 100 \(\mu\)J of this output is directed to an optical parametric amplifier (OPA) to generate 800 nm light with a pulse width of 10 fs. The remaining 300 \(\mu\)J is directed to OPA-twins to produce near-infrared (NIR) or midinfrared (MIR) radiation with tunable wavelength as the pump light. The 800 nm laser pulse is directed to a z-cut GaSe crystal to generate THz radiation in the frequency range of 12-22 THz. The optical path is designed to ensure that the pump pulse and probe pulse reach the sample at the same time, and that the probe pulse reaches the detection crystal at the same time as the sampling pulse. A detailed layout of the optical system is provided in Fig. S1 of the appendix.
Transmission time-domain spectroscopy is a well-established method for obtaining equilibrium and photoexcited optical constants. It directly measures the transient conductivity in the frequency range being studied, providing clear insight into the dynamic evolution with time delay after photoexcitation [16; 22; 23; 24]. For insulating materials, a relatively thick sample can be used in the transmission experiment to determine the optical constants in the equilibrium state [25; 26]. The electric field of a THz or MIR pulse passing through the sample, or through an aperture of the same size as a reference, is recorded as a function of time delay. The recorded time traces are then Fourier transformed to obtain the frequency-dependent complex transmission spectra, which contain both magnitude and phase information. The optical constants can then be determined using the Fresnel formulas. However, for optical pump - THz or MIR probe measurements, it becomes challenging to precisely determine the photoinduced change of the optical constants when using thick samples, since the penetration depth of the NIR pump pulse is usually much shorter than that of the THz or MIR pulses. To avoid the issue of penetration-depth mismatch, a thin-film sample grown on a substrate can be used for the transmission measurement, provided that the pump pulse can completely penetrate the film. By using an identical substrate as a reference, both the equilibrium and photoexcited optical constants can be determined in the transmission experiment. However, in the frequency range of 12-22 THz, commonly used substrates such as sapphire, MgO, and LaAlO\({}_{3}\) are not transparent, owing to the high reflectivity between the TO and LO phonons. Other substrates, such as silicon wafers, can produce a large pump-induced signal, which complicates the analysis of the signal contribution from the sample.
During our exploration of various substrates, we discovered that diamond is an ideal substrate for MIR due to its transparency and lack of NIR pump-induced signal. However, the high hardness of diamond makes it challenging to obtain uniformly thick polished substrates, and the resulting heterogeneity causes errors in determining the equilibrium optical constants. Our purchased diamond substrate has a thickness of 0.5 mm, but with a thickness variation of roughly 1 \(\mu\)m. To overcome these issues, we developed a two-step strategy to determine the pump-induced optical constants. First, we determined the equilibrium optical constants of 2\(H\)-MoTe\({}_{2}\) using a relatively thick (bulk) sample. Second, we worked on a thin film of three-layer 2\(H\)-MoTe\({}_{2}\) (\(\sim\)1.65 nm) transferred onto the diamond substrate. With the known equilibrium optical constants, we were able to obtain the pump-induced optical constants without needing to know the thickness of the diamond substrate.
We first present the measurement of the optical constants in the equilibrium state. The static complex refractive index \(\widetilde{n}\) was determined from a flake of 50 \(\mu\)m thickness, obtained by mechanical exfoliation from a MoTe\({}_{2}\) single crystal. This flake (bulk) sample was suspended in air without a substrate. The value of \(\widetilde{n}\) can be obtained by measuring the probe electric field transmitted through the flake sample (\(E_{sample}(t)\)) and through the air only (\(E_{air}(t)\)) [25; 26]:
\[\frac{\widetilde{E}_{sample}(\omega)}{\widetilde{E}_{air}(\omega)}=\frac{4\widetilde{n}\exp\left(i\frac{(\widetilde{n}-1)\omega d}{c}\right)}{(1+\widetilde{n})^{2}} \tag{1}\]
\(\widetilde{E}_{sample}(\omega)\) is the Fourier transform (FFT) of \(E_{sample}(t)\), and \(\widetilde{E}_{air}(\omega)\) is the FFT of \(E_{air}(t)\). In the formula, \(d\) represents the thickness of the flake sample, \(c\) is the speed of light, and \(\omega\) is the angular frequency of the probe light. The complex refractive index of air is taken to be 1. Knowing the real and imaginary parts of the refractive index, one can obtain all other optical constants. For example, \(\widetilde{\sigma}\) is related to \(\widetilde{n}\) by the following equation.
\[\widetilde{n}^{2}(\omega)=1+i\frac{\widetilde{\sigma}(\omega)}{\epsilon_{0}\omega} \tag{2}\]
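As an illustration, Eqs. (1) and (2) can be inverted frequency by frequency with a standard root finder. The sketch below is our own: the initial guess `n0` is arbitrary and phase unwrapping of the measured transmission is ignored.

```python
import numpy as np
from scipy.optimize import fsolve

EPS0, C = 8.8541878128e-12, 2.99792458e8      # SI units

def model_ratio(n, omega, d):
    """Right-hand side of Eq. (1) for a complex refractive index n."""
    return 4 * n * np.exp(1j * (n - 1) * omega * d / C) / (1 + n) ** 2

def solve_n_static(T_meas, omega, d, n0=3.0 + 0.1j):
    """Numerically invert Eq. (1) for n at one angular frequency omega."""
    def residual(x):
        r = model_ratio(x[0] + 1j * x[1], omega, d) - T_meas
        return [r.real, r.imag]
    x = fsolve(residual, [n0.real, n0.imag])
    return x[0] + 1j * x[1]

def conductivity(n, omega):
    """Eq. (2): sigma = -i * eps0 * omega * (n^2 - 1), in S/m."""
    return -1j * EPS0 * omega * (n ** 2 - 1)

# Placeholder usage for a single frequency point (values are made up):
# n = solve_n_static(T_meas=0.4 + 0.1j, omega=2 * np.pi * 15e12, d=50e-6)
# sigma = conductivity(n, 2 * np.pi * 15e12)
```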
Figure 2(a) shows the electric fields passing through the sample in the equilibrium state and through the air, respectively. The corresponding real and imaginary parts of the conductivity of the sample, \(\widetilde{\sigma}=\sigma_{1}+i\sigma_{2}\), are shown in Fig. 2(b). The very low value of the real part of the conductivity in the THz to MIR range (12-22 THz) reveals that the sample is insulating.
The pump-induced change of the optical constants was measured on a three-layer MoTe\({}_{2}\) film grown by the chemical vapor deposition method and transferred onto a diamond substrate. The temperature-dependent resistivity is shown in Fig. 1(d), consistent with a previous report [27]. The layer thickness is about 1.65 nm, enabling complete penetration by the NIR pulse. \(E(t)\) and \(E^{\prime}(t)\) represent the transmitted electric field before and after pumping, respectively. After pumping, we observe a reduction of the peak electric field, indicating a decrease of transmittance, i.e., a change from the semiconducting ground state to a highly conducting response. Actually, the pump-induced change of the electric field (\(E^{\prime}(t)-E(t)=\Delta E(t)\)) can be measured directly, with much improved signal-to-noise ratio, by chopping the pump beam. We verified the equivalence of the two approaches, as shown in Fig. S3 in the appendix. Obviously, the transmitted electric field \(E(t)\) is related to the optical constants of the film in the equilibrium state and those of the substrate, and \(E^{\prime}(t)\) is related to the optical constants of the film after pumping and, again, those of the substrate. By taking the fast Fourier transform of the transmitted \(E^{\prime}(t)\) and \(E(t)\), we obtain their respective frequency spectra \(\widetilde{E}^{\prime}(\omega)\) and \(\widetilde{E}(\omega)\). Note that the electric field of the THz pulse passing through the film must account for multiple reflections at both the front and back surfaces of the film. From this, it is straightforward to derive the ratio of \(\widetilde{E}^{\prime}(\omega)\) and \(\widetilde{E}(\omega)\) as
\[\frac{\widetilde{E}^{\prime}(\omega)}{\widetilde{E}(\omega)} =\frac{\exp(i\widetilde{n}^{\prime}\frac{d\omega}{c})\,\widetilde{n}^{\prime}}{\exp(i\widetilde{n}\frac{d\omega}{c})\,\widetilde{n}}\cdot\frac{(1+\widetilde{n})(\widetilde{n}+\widetilde{n}_{sub})-(\widetilde{n}-1)(\widetilde{n}-\widetilde{n}_{sub})\exp(i\frac{2\widetilde{n}d\omega}{c})}{(1+\widetilde{n}^{\prime})(\widetilde{n}^{\prime}+\widetilde{n}_{sub})-(\widetilde{n}^{\prime}-1)(\widetilde{n}^{\prime}-\widetilde{n}_{sub})\exp(i\frac{2\widetilde{n}^{\prime}d\omega}{c})} \tag{3}\]
Here, \(\widetilde{n}_{sub}\) is the complex refractive index of the diamond substrate, which is known. Since the complex refractive index of the material in the equilibrium state, \(\widetilde{n}\), is determined from Eq. (1), the complex refractive index of the film after pumping, \(\widetilde{n}^{\prime}\), can be obtained from Eq. (3). Knowing the real and imaginary parts of \(\widetilde{n}^{\prime}\), one can obtain any other optical constant.
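Eq. (3) can be inverted for \(\widetilde{n}^{\prime}\) in the same way; the sketch below reuses the constants of the previous one, and choosing the equilibrium index as the initial guess is our own choice.

```python
import numpy as np
from scipy.optimize import fsolve

def film_factor(n, n_sub, omega, d):
    """n e^{i n d w/c} / [(1+n)(n+n_sub) - (n-1)(n-n_sub) e^{2 i n d w/c}]."""
    ph = np.exp(1j * n * omega * d / C)       # C from the previous sketch
    return n * ph / ((1 + n) * (n + n_sub) - (n - 1) * (n - n_sub) * ph ** 2)

def solve_n_pumped(R_meas, n_eq, n_sub, omega, d):
    """Invert Eq. (3), written as R = film_factor(n') / film_factor(n)."""
    target = R_meas * film_factor(n_eq, n_sub, omega, d)
    def residual(x):
        r = film_factor(x[0] + 1j * x[1], n_sub, omega, d) - target
        return [r.real, r.imag]
    x = fsolve(residual, [n_eq.real, n_eq.imag])
    return x[0] + 1j * x[1]

# For the trilayer film, d = 1.65e-9; n_sub of diamond is roughly 2.4
# in this range (an approximate literature value).
```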
With the above measurement method we do not need to move the sample during the measurement. The thickness of the substrate does not enter the analysis; therefore, the influence of its nonuniformity is avoided. In the meantime, the use of a thin-film sample fully penetrated by the pump pulse eliminates the effect of penetration-depth mismatch. This measurement method greatly improves the credibility of the experimental data, and our results are highly reproducible. It should be noted that the band structure of 2D materials may change with sample thickness. For semiconducting \(2H\)-MoTe\({}_{2}\), only the monolayer has a direct band gap of 1.10 eV, while the multilayer and bulk forms have an indirect band gap of about 1.0 eV [28]. Therefore the electronic structure of the trilayer sample is more similar to that of the bulk sample. Two recent experimental studies [29; 30] discussed the thickness dependence of the optical properties. Jung et al. [30] suggested that there may be as much as a factor of two difference between the 3-layer conductivity and the bulk conductivity at 1.8 eV, while Fang et al. [29] reported a much weaker dependence of \(\epsilon_{2}\) on thickness.
Figure 2: **Static data from a 50 \(\mu\)m thick flake MoTe\({}_{2}\) sample.** (a) The electric field through the air (Black) and through the sample (Red). The shift between the two electric fields in time delay represents the optical path difference causing by the sample. The dashed boxes represent the time window setting during calculation, each window has the same width. (b) Real part of conductivity \(\sigma_{1}\) in the frequency range of 12-22 THz. The inset shows the imaginary part of conductivity \(\sigma_{2}\) in the same frequency range.
We note that both investigations presented the optical constants only at high energies, and their difference becomes insignificant at low frequencies. As can be seen in Fig. 2(a) of Jung et al. [30], the conductivity tends to converge below 0.8 eV. Since we focus on the pump-induced changes in the MIR to THz region, we believe that this effect is minimal.
Figure 3 shows our measurement results on the three-layer thin-film sample at room temperature. Figure 3(a) displays the pump-probe signal excited by 690 nm NIR pulses at several different fluences (right panel). The signal reaches a sharp apex within 0.7 picosecond (ps) after the arrival of the pump pulse and decays to a small, nonzero value which persists for at least a few picoseconds. The left panel shows the relative change in the MIR probe electric field at the peak position. With only three unit cells (1.65 nm), the relative change reaches 1.7% at a fluence of 0.56 mJ/cm\({}^{2}\). This represents a prominent effect. Indeed, the conductivity spectra derived in this frequency range exhibit a significant change, as shown in Fig. 3(b). The bottom curve in this plot is from the measurement on the thin flake sample without external excitation, which has very small conductivity values. We observe that, after pumping, the conductivity increases by more than two orders of magnitude, and it increases continuously with increasing fluence. This dramatic increase provides direct evidence for a photoinduced transient conducting state. The peak value of \(\Delta E(t)/E(t)\) and the extracted conductivity saturate with fluence, but the time dependence looks fairly similar at all fluences shown in the figure. Similar behavior was reported by Sahota et al. for the pump-induced reflectance change at visible energies [31].
Figure 3(c) and (d) show the conductivity spectra at different time delays after excitation at two different fluences, 0.28 mJ/cm\({}^{2}\) and 0.14 mJ/cm\({}^{2}\). Time zero is defined as the point when the pump light just reaches the sample, that is, where the pump-probe signal starts. Then, 0.3 ps and 0.5 ps belong to the rising edge, 0.7 ps corresponds to the peak position of the pump-probe signal, and any later time delay belongs to the falling edge. During the rise of the signal, the value of \(\sigma_{1}\) increases, reaching its maximum at the top of the signal, and then decreases with time delay after the peak. In addition, an intriguing phenomenon is observed: in the rising edge, \(\sigma_{1}\) increases with increasing frequency, while a slightly decreasing behavior is seen in the falling edge. This phenomenon was observed at both pump fluences. The results may suggest the formation of a conductivity peak beyond the measured frequency range in the rising edge, followed by a gradual transfer of spectral weight from high to low frequencies over time. This is related to the fact that, when the pump pulse first arrives, the sample is still mostly in the semiconducting state. We notice that there could be a 0.3 ps time variation across the probe spot due to the non-normal incidence of the pump beam, but that cannot explain the systematic evolution, since the measurements at all pump-probe time delays contain the same time variation across the probe spot.
To see how the photoinduced effect changes with temperature, we measured the spectral change at two different temperatures. Figure 4(a) shows \(\sigma_{1}\) at the peak position after pumping with a fluence of 0.20 mJ/cm\({}^{2}\) at 300 K and 10 K. Note that, due to the added cryostat window, the actual power is difficult to measure accurately. The results indicate a more significant enhancement of the transient conductivity at lower temperature, associated with reduced photocarrier scattering. Additionally, we performed measurements at 10 K using different pump fluences, as shown in Fig. 4(b). The results reveal that the light-induced transient conductivity increases with higher fluence, much like in the room-temperature case.
The measurements discussed above were carried out using a pump pulse with a wavelength of 690 nm, corresponding to an energy of 1.8 eV. This energy is significantly higher than the energy gap of \(2H\)-MoTe\({}_{2}\), which is about 1 eV. To investigate the effect of a pump pulse with energy lower than the band gap, measurements were also performed using a pump pulse with a wavelength of 2 \(\mu\)m (0.62 eV). The results, shown in Fig. S4 in the appendix, reveal that a pump-probe signal can still be resolved even if the pump photon energy is lower than the band gap. This may be due to a multiphoton effect. However, the signal under the 2 \(\mu\)m pump is much lower than that under the 690 nm pump. Furthermore, the signal from the three-layer sample pumped at 2 \(\mu\)m is too weak to allow an accurate calculation of the optical constants.
We also performed measurements on a relatively thick film, about 55 nm, transferred onto the diamond substrate, and show the results in Fig. S5 in the appendix. We found that the extracted optical conductivity has smaller values than for the 1.65 nm (three unit cells) thin-film sample. This is because the pump pulse cannot completely penetrate the thick film, so the detected THz signal also passes through a region that is not excited by the pump pulse.
Our experiments represent a direct probe of the transient conductivity of a three-layer \(2H\)-MoTe\({}_{2}\) in the frequency range of 12-22 THz, demonstrating that ultrafast NIR pulses can effectively trigger a highly conducting state on a sub-picosecond time scale, whereas photoexcitation at 2 \(\mu\)m has a tiny, almost negligible effect because the photon energy is smaller than the band gap of the material. We stress that the transition from the insulating to the conducting state is transient; the system returns to the initial state within a few picoseconds. In the literature, there are a number of reports showing an irreversible transition from the \(2H\) to the \(1T^{\prime}\) phase induced by photoexcitation or even by strong-field THz pulses [19; 21; 32; 33]. We remark that we have used much smaller fluences in the present measurement.
The above time-resolved THz to MIR spectral measurement with a high pulse repetition rate is not suitable for investigating an irreversible transition. In order to establish a link between the transient IMT and a stable \(1T^{\prime}\) state, we also performed single-color pump-probe measurements on \(2H\)-MoTe\({}_{2}\) to look for possible changes of the coherent phonon modes. We used an 800 nm amplified laser system with a pulse duration of 35 fs and a repetition rate of 1 kHz for this experiment, and measured the reflection change of a thin flake sample with a thickness of about 20 nm. Figure 5 shows the reflection change as a function of time delay at various pump fluences. We can clearly see oscillations in the pump-probe signals of Fig. 5(a). After subtracting the background and performing an FFT, we can identify the coherent phonons in the frequency domain, Fig. 5(b). At low fluences, only the phonon representing the \(2H\) phase (about 171.5 cm\({}^{-1}\)) is present [7]. With increasing fluence, the peak broadens and gradually splits into two, representing the \(2H\) and \(1T^{\prime}\) phases (about 167.5 cm\({}^{-1}\) for the latter), respectively [7]. From this observation we expect that, at sufficient fluence, a stable \(1T^{\prime}\) phase could be induced, as reported in the literature. In an earlier report on few-layer 2H-MoTe\({}_{2}\), a linear relationship between the fluence and the photoconductivity was observed [12]. Such behavior is usually observed in semiconductors pumped at small fluences and can be attributed to the change in photoexcited carrier density. The present work clearly shows a sublinear fluence dependence. As the pump fluence is much higher than that used in Ref. [12], the saturation behavior likely results from the depletion of states due to the excitation of electrons from the valence band to the conduction band, leading to reduced absorption for photon energies across the band gap.
ences, only the phonon representing \(2H\) phase (about 171.5 cm\({}^{-1}\)) is present [7]. With increasing fluence, the peak broadens and gradually splits into two, represents \(2H\) and \(1T^{\prime}\) phase (about 167.5 cm\({}^{-1}\)) [7], respectively. With this observation, we expect that, at sufficient fluence, a stable \(1T^{\prime}\) phase could be induced as such reported in literature. In an earlier report on few layer 2H-MoTe\({}_{2}\), a linear relationship between the fluence and the photoconductivity was observed [12]. Such behavior was usually observed in semiconductors pumped at small fluences, which could be attributed to the change in photoexcited carrier density. The present work shows clearly a sublinear fluence dependence. As the pump fluence is much higher than that used in ref. [12], the saturation behavior likely results from the depletion of states due to the excitation of electrons from the valence band to the conduction band, leading to a reduced absorption for photon energies across the band gap.
Our research provides an in-depth examination of the photo-induced phase transition from an insulating to a highly
Figure 4: **The photoinduced conductivity at 300 K and 10 K.** (a) The real part of conductivity \(\sigma_{1}\) at 10 K and 300 K at the pump-probe peak position with the 690 nm pump pulse excitation. (b) The real part of conductivity \(\sigma_{1}\) at 10 K under three different pump fluences.
Figure 3: **Near-infrared pump - terahertz to midinfrared probe trasmission spectra.** (a) Right panel: Pump-probe signal with time delay of a three-layer \(2H\)-MoTe\({}_{2}\) film at several different pump fluences. Left panel: Relative change in terahertz to midinfrared probe electric field. (b) The real part of conductivity at the pump-probe peak position in different fluences. The bottom curve is static conductivity determined from the thin flake sample. (c) and (d) show the time evolution of the real part of conductivity measured at fluences of 0.28 mJ/cm\({}^{2}\) and 0.14 mJ/cm\({}^{2}\), respectively. Dash curves and solid curves represent the spectra in the rising edge, peak and falling edge of pump-probe time delay, respectively.
conducting state in 2\(H\)-MoTe\({}_{2}\). We find that the creation of a sufficient number of excited charge carriers is necessary to trigger the transient phase transition. This is reflected by the fact that the photoexcited state is sensitive to the photon energy of the ultrashort laser pulses: a reversible IMT occurs when the photon energy of the NIR pulses surpasses the band gap, even at low fluences, whereas when the photon energy is lower than the band gap, the effect is minimal. With increasing fluence, we expect that a stable 1\(T^{\prime}\) phase gradually emerges. We can infer that the light-induced conducting states reported in the literature are the result of much higher pulse fluences or electric fields applied to the samples. Our findings offer new insight into the evolution from a transient IMT to a nonthermal, irreversible 1\(T^{\prime}\) phase at sufficiently high fluence.
## III Summary
In this study, we develop a NIR pump - THz to MIR (12-22 THz) probe system in transmission geometry to measure the transient optical conductivity of the layered material 2\(H\)-MoTe\({}_{2}\). With an improved two-step measurement method, we are able to avoid issues with nonuniform substrate thickness and penetration-depth mismatch and to accurately extract the photoinduced optical constants in the measured MIR range at different time delays. Our results show an enormous transient increase in the real part of the conductivity, yielding direct evidence for the photoinduced transient IMT in 2\(H\)-MoTe\({}_{2}\). Additionally, our findings demonstrate that ultrashort laser pulses with photon energy below the band gap produce a significantly smaller effect. Combined with a single-color pump-probe measurement, we demonstrate that ultrafast NIR pulses can effectively trigger highly conducting states in 2\(H\)-MoTe\({}_{2}\) on a sub-picosecond time scale. Our work offers new and deep insight into the photoinduced phase transition in the 2\(H\)-MoTe\({}_{2}\) system.
## IV Appendix
### Fabrication of samples
The 2H-MoTe\({}_{2}\) single crystals were synthesized by the traditional chemical vapor transport technique. A quartz tube loaded with the elements (99.99 % Mo powder and 99.999 % Te pellets) in a stoichiometric ratio, together with additional iodine (6 mg/cm\({}^{3}\)) as the transport agent, was evacuated and flame-sealed. The quartz tube was then heated at 750 \({}^{\circ}\)C (cold zone) and 850 \({}^{\circ}\)C (hot zone) for 2 weeks in a two-zone furnace, followed by a slow cooling process (2 \({}^{\circ}\)C/h) to room temperature. Thin flake samples could be easily obtained by mechanical exfoliation from the bulk 2H-MoTe\({}_{2}\) single crystal. A flake of 50 \(\mu\)m thickness was used to obtain the equilibrium data. The sample of 55 nm thickness was obtained by further mechanical stripping from a MoTe\({}_{2}\) flake. Their thicknesses were measured by a step profiler.
The three-layer 2H-MoTe\({}_{2}\) sample completely covers a 5 mm \(\times\) 5 mm diamond substrate. We provided the diamond substrate to the SixCarbon Technology company, which transferred the film onto it. The three-layer 2H-MoTe\({}_{2}\) film with a thickness of 1.65 nm is grown on a SiO/Si substrate by CVD. A PMMA solution is then applied and heated to cure the PMMA into a film, which adheres to the three-layer MoTe\({}_{2}\) film. After the sample film is bonded to the diamond substrate by the viscous PMMA film, the PMMA is washed off with an acetone solution.
### Experimental system
The schematic diagram of the experimental system is shown in Fig.S1. There are three light beams in the system, namely pump, probe and sampling beams, all of them have a repetition rate of 50 kHz. The pump light pulses (690 nm and 2 \(\mu\)m) have a diameter of 330 \(\mu\)m, generated by an optical
Figure 5: (a) 800 nm pump-800 nm probe signal of 2\(H\)-MoTe\({}_{2}\). (b) FFT of the oscillations after subtracting the background signal. Two coherent phonon frequencies are observable: 171.5 cm\({}^{-1}\) (2\(H\) phase), 167.5 cm\({}^{-1}\) (1\(T^{\prime}\) phase).
parametric amplifier (OPA1). The probe light is produced in a GeSe crystal by a 800 nm laser (generated by OPA2), with the spectrum mainly concentrated in 12-22 THz. Both OPAs are pumped by a laser with a central wavelength of 1030 nm and a repetition rate of 50 kHz. The spot size of probe light was measured by knife edge method. As shown in Fig.S2, the diameter of 12 THz is 240 \(\mu\)m. The pump light is larger in diameter than the probe light to ensure when the two beams coincide, the region measured by the probe beam is fully excited. Probe light is normal incident on the sample, and the transmitted light that passes through the sample and the substrate is incident on the detection GaSe crystal with a sampling light. The sampling beam enters the electro-optical sampling system[34] and picked up by balancing detectors after modulated by the probe light on the detection crystal. There is an angle of 23\({}^{\circ}\) between the pump and the probe beams. The relative change of probe is the ratio of the electric field variation (\(\Delta E(t)\)) after pumping to static electric field (\(E(t)\)). The pump beam and the sampling path each has a translation stage (Delay 1 and Delay 2). The ultrafast time-domain spectral detection is realized by controlling the movement of the two translation stages. Pump-probe signal is the relative change evolution of pump light oscillation time. The electric field decreases after pumping, so the signals are negative (shown in Fig.3 (a) in the main text). The pump-probe signal can be measured by fixing Dealy 2 and moving Delay 1. \(\Delta E(t)\) can be measured by moving two delays simultaneously. Similar experiment scheme were reported in earlier time-resolved THz spectroscopy measurements, such as in Reference[35, 36].
Meanwhile, the electric field variation \(\Delta E(t)\) is the electric field after pumping, \(E^{\prime}(t)\), minus the static electric field \(E(t)\). \(\Delta E(t)\) can be measured directly by chopping the pump beam, or by measuring \(E^{\prime}(t)\) and \(E(t)\) separately while chopping the probe beam. In principle, the amplitude and phase of \(E^{\prime}(t)-E(t)\) and of \(\Delta E(t)\) should be equal; however, in actual measurements the signal-to-noise ratio is better when \(\Delta E(t)\) is measured directly. During the experiment, we use \(E^{\prime}(t)-E(t)\) to judge the phase of \(\Delta E(t)\). The comparison between \(\Delta E(t)\) and \(E^{\prime}(t)-E(t)\) is shown in Fig. S3.
### Comparison of varying pump wavelengths
In addition to the 690 nm pump used to excite the MoTe\({}_{2}\) layers, we also used 2 \(\mu\)m pump light for comparison. As clearly seen in Fig. S4, the signal under the 2 \(\mu\)m pump is far smaller than under the 690 nm pump. The 2 \(\mu\)m signal shown in Fig. S4 is the result of multiple averages; it is too small to yield reliable optical constants. The data were measured at the apex of the relative change of the THz to MIR probe electric field. The effect of the pump wavelength is discussed in the main text.
### Effect of pump light penetration depth
We also measured the 690 nm pump-THz to MIR probe signals of a 55 nm thick 2\(H\)-MoTe\({}_{2}\) sample attached to the diamond. This sample contains around 100 atomic layers and can be considered a bulk material. Even though the pump-probe signal of this sample was much greater than that of the 1.65 nm film, the change in optical constants calculated by Eq. (3) was much smaller. Compared to the photoconductivity shown in Fig. 2(e) of the main text, the response of the three-layer sample was significantly greater. This is due to the penetration depth, which is influenced by the pump light wavelength and the material properties. In this case, the 690 nm pump beam could not completely penetrate the bulk sample, but could easily pass through the three-layer 2\(H\)-MoTe\({}_{2}\). This means that the bulk sample was not fully excited by the pump pulses. The probe beam was able to pass through both samples completely, meaning that the detected region was not completely excited by the pump light if the sample was too thick.
### Lifetime of the transient conducting state
To examine whether the lifetime of the transient conducting state has a clear pump fluence dependence, we fitted the pump-probe data in Fig. 3(a). The fitting results are shown in Fig. S6. We found that the lifetime does not change significantly with pump fluence.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (Grant No. 11888101), the National Key Research and Development Program of China (Grant No. 2022YFA1403901).
|
2310.20606 | One-Way Communication Complexity of Partial XOR Functions | Boolean function $F(x,y)$ for $x,y \in \{0,1\}^n$ is an XOR function if
$F(x,y)=f(x\oplus y)$ for some function $f$ on $n$ input bits, where $\oplus$
is a bit-wise XOR. XOR functions are relevant in communication complexity,
partially for allowing Fourier analytic technique. For total XOR functions it
is known that deterministic communication complexity of $F$ is closely related
to parity decision tree complexity of $f$. Montanaro and Osbourne (2009)
observed that one-sided communication complexity $D_{cc}^{\rightarrow}(F)$ of
$F$ is exactly equal to nonadaptive parity decision tree complexity
$NADT^{\oplus}(f)$ of $f$. Hatami et al. (2018) showed that unrestricted
communication complexity of $F$ is polynomially related to parity decision tree
complexity of $f$.
We initiate the studies of a similar connection for partial functions. We
show that in case of one-sided communication complexity whether these measures
are equal, depends on the number of undefined inputs of $f$. On the one hand,
if $D_{cc}^{\rightarrow}(F)=t$ and $f$ is undefined on at most
$O(\frac{2^{n-t}}{\sqrt{n-t}})$ inputs, then $NADT^{\oplus}(f)=t$.
On the other hand, for a wide range of values of $D_{cc}^{\rightarrow}(F)$
and $NADT^{\oplus}(f)$ (from constant to $n-2$) we provide partial functions
for which $D_{cc}^{\rightarrow}(F) < NADT^{\oplus}(f)$. In particular, we
provide a function with an exponential gap between the two measures. Our
separation results translate to the case of two-sided communication complexity
as well, in particular showing that the result of Hatami et al. (2018) cannot
be generalized to partial functions.
Previous results for total functions heavily rely on Boolean Fourier analysis
and the technique does not translate to partial functions. For the proofs of
our results we build a linear algebraic framework instead. Separation results
are proved through the reduction to covering codes. | Vladimir V. Podolskii, Dmitrii Sluch | 2023-10-31T16:42:10Z | http://arxiv.org/abs/2310.20606v2 | # One-Way Communication Complexity of Partial XOR Functions
###### Abstract
Boolean function \(F(x,y)\) for \(x,y\in\{0,1\}^{n}\) is an XOR function if \(F(x,y)=f(x\oplus y)\) for some function \(f\) on \(n\) input bits, where \(\oplus\) is a bit-wise XOR. XOR functions are relevant in communication complexity, partially for allowing Fourier analytic technique. For total XOR functions it is known that deterministic communication complexity of \(F\) is closely related to parity decision tree complexity of \(f\). Montanaro and Osbourne (2009) observed that one-sided communication complexity \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\) of \(F\) is exactly equal to nonadaptive parity decision tree complexity \(\mathrm{NADT}^{\oplus}(f)\) of \(f\). Hatami et al. (2018) showed that unrestricted communication complexity of \(F\) is polynomially related to parity decision tree complexity of \(f\).
We initiate the study of a similar connection for partial functions. We show that in the case of one-sided communication complexity whether these measures are equal depends on the number of undefined inputs of \(f\). More precisely, if \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=t\) and \(f\) is undefined on at most \(O\left(\frac{2^{n-t}}{\sqrt{n-t}}\right)\) inputs, then \(\mathrm{NADT}^{\oplus}(f)=t\). We provide improved bounds on the number of undefined inputs for \(t=1,2\). On the other end of the spectrum, we observe that the measures are equal for any partial function \(f\) satisfying \(\mathrm{NADT}^{\oplus}(f)\geq n-1\).
We show that the restriction on the number of undefined inputs in these results is unavoidable. That is, for a wide range of values of \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\) and \(\mathrm{NADT}^{\oplus}(f)\) (from constant to \(n-2\)) we provide partial functions (with more than \(\Omega\left(\frac{2^{n-t}}{\sqrt{n-t}}\right)\) undefined inputs) for which \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)<\mathrm{NADT}^{\oplus}(f)\). In particular, we provide a function with an exponential gap between the two measures. Our separation results translate to the case of two-sided communication complexity as well, in particular showing that the result of Hatami et al. (2018) cannot be generalized to partial functions.
Previous results for total functions heavily rely on Boolean Fourier analysis and thus, the technique does not translate to partial functions. For the proofs of our results we build a linear algebraic framework instead. Separation results are proved through the reduction to covering codes.
## 1 Introduction
In the communication complexity model two players, Alice and Bob, compute some fixed function \(F\colon\{0,1\}^{n}\times\{0,1\}^{n}\rightarrow\{0,1\}\) on a given input \((x,y)\). However, Alice knows only \(x\) and Bob knows only \(y\). The main object of study in communication complexity is the amount of communication \(\mathrm{D_{cc}}(F)\) needed between Alice and Bob to compute the function.
Function \(F\) is an XOR-function if for all \(x,y\in\{0,1\}^{n}\) we have \(F(x,y)=f(x\oplus y)\) for some \(f\colon\{0,1\}^{n}\to\{0,1\}\), where \(x\oplus y\) is a bit-wise XOR of Boolean vectors \(x\) and \(y\). XOR-functions are important in communication complexity [28, 19, 26, 27, 3, 13, 15, 1, 24, 22, 5, 2, 8, 11, 9], on one hand, since there are important XOR-functions defined based on Hamming distance between \(x\) and \(y\), and on the other hand, since the structure of XOR-functions allows for Fourier analytic techniques. In particular, this connection suggests an approach for resolving Log-rank Conjecture for XOR-functions [28, 13].
In recent years there was considerable progress in the characterization of communication complexity of a XOR-function \(F\) in terms of the complexity of \(f\) in parity decision tree model. In this model the goal is to compute a fixed function \(f\) on an unknown input \(x\in\{0,1\}^{n}\) and in one step we are allowed to query XOR of any subset of input bits. We want to minimize the number of queries that is enough to compute \(f\) on any input \(x\). The complexity of \(f\) in this model is denoted by \(\mathrm{DT}^{\oplus}(f)\). It was shown by Hatami et al. [13] that for any total \(f\) we have \(\mathrm{D_{cc}}(F)=\mathrm{poly}(\mathrm{DT}^{\oplus}(f))\).
An even stronger connection holds in the one-way communication complexity case. In this setting only a very restricted form of communication is allowed: Alice sends Bob a message based on \(x\) and Bob has to compute the output based on this message and \(y\). We denote the complexity of \(F\) in this model by \(\mathrm{D_{cc}^{\rightarrow}}(F)\). The relevant model of decision trees is the model of non-adaptive parity decision trees. In this model we still want to compute some function \(f\) on an unknown input and we can still query XORs of any subsets of input bits, but now all queries must be provided at once (in other words, a query cannot depend on the answers to the previous queries). The complexity of \(f\) in this model is denoted by \(\mathrm{NADT}^{\oplus}(f)\). It follows from the results of Montanaro, Osbourne [19] and Gopalan et al. [10] that for any total XOR-function \(F(x,y)=f(x\oplus y)\) we have \(\mathrm{D_{cc}^{\rightarrow}}(F)=\mathrm{NADT}^{\oplus}(f)\).
These results on the connection between communication complexity and parity decision trees can be viewed as lifting results. This type of result has seen substantial progress in recent years (see [21]). The usual structure of a lifting result is that we start with a function \(f\) that is hard in some weak computational model (for example, a decision tree type model), compose it with some gadget function \(g\) to obtain \(f\circ g\) (each variable of \(f\) is substituted by a copy of \(g\) defined on fresh variables) and show that \(f\circ g\) is hard in a stronger computational model (for example, a communication complexity type model). The results on XOR-functions can be viewed as lifting results for \(g=\mathrm{XOR}\).
The results on the connection between communication complexity of XOR-functions and parity decision trees discussed above are proved only for total functions \(f\), for the reason that the proofs heavily rely on Fourier techniques. However, in communication complexity and decision tree complexity it is often relevant to consider the more general case of partial functions, and many lifting theorems apply to this type of functions as well, see e.g. [7, 17, 4, 23]. In particular, there are some lifting results for partial functions for gadgets that are stronger than XOR: Mande et al. [18] proved such a result for the one-way case for the inner product gadget (inner product is XOR applied to ANDs of pairs of variables) and Loff, Mukhopadhyay [17] proved a result on lifting with the equality gadget for the general case (note that equality on inputs of length 1 is essentially the XOR function). In [17] a conjecture is mentioned that for partial XOR-functions \(\mathrm{D_{cc}}(F)\) is approximately equal to \(\mathrm{DT}^{\oplus}(f)\) as well.
Our results. In this paper we initiate the study of the connection between communication complexity of partial XOR functions and parity decision trees. It turns out that in the one-way case whether the two measures are equal depends on the number of inputs on which the function is undefined: if the number of undefined inputs is small, then the complexity measures are equal, and if it is too large, they are not.
More specifically, we show that for \(t=\mathrm{D}_{\mathrm{cc}}^{\to}(F)\) the equality \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)=\mathrm{NADT}^{\oplus}(f)\) holds if \(f\) is undefined on at most \(O\left(\frac{2^{n-t}}{\sqrt{n-t}}\right)\) inputs. We prove a stronger bound on the number of undefined inputs for small values of \(t\). More specifically, for \(t=1\) we show that the equality \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)=\mathrm{NADT}^{\oplus}(f)\) is true for all partial \(f\). For \(t=2\) we show that the equality is true for at most \(2^{n-3}-1\) undefined inputs. On the other end of the spectrum we show that for any partial function if \(\mathrm{NADT}^{\oplus}(f)\geq n-1\), then \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)=\mathrm{NADT}^{\oplus}(f)\).
On the other hand, we provide a family of partial functions for which \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)<\mathrm{NADT}^{\oplus}(f)\)1. More specifically, we show that for any constant \(0<c<1\) there is a function \(f\) with \(\mathrm{NADT}^{\oplus}(f)=cn\) and \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)\leq c^{\prime}n\) for some \(c^{\prime}<c\). The number of undefined inputs for the function is \(O\left(\frac{2^{dn}}{\sqrt{n}}\right)\) if \(c>1/2\), \(2^{n-1}\) if \(c=1/2\) and \(2^{n}-O\left(\frac{2^{dn}}{\sqrt{n}}\right)\) if \(c<1/2\), where \(0<d<1\) is some constant (depending on \(c\)).
Footnote 1: Note that the gap in the other direction is impossible: it is easy to see that \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)\leq\mathrm{NADT}^{\oplus}(f)\) for all \(f\) (see Lemma 3 below). Similar inequality (with an extra factor of \(2\)) holds for general communication complexity and parity decision tree complexity.
We provide a function \(f\) for which \(\mathrm{NADT}^{\oplus}(f)=\sqrt{n\log n}\) and \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)\leq O(\log n)\), the number of undefined inputs for \(f\) is \(2^{n}-2^{\Theta(\sqrt{n}\log^{3/2}n)}\). Thus, we provide an exponential gap between the two measures.
The largest value of \(\mathrm{NADT}^{\oplus}\) for which we provide a separation is \(n-2\), this complements the result that starting with \(\mathrm{NADT}^{\oplus}(f)=n-1\) the measures are equal. The smallest values of measures for which we provide a separation are \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)=7\) and \(\mathrm{NADT}^{\oplus}(f)=8\).
All our separation results translate to the setting of two-sided communication complexity vs. parity decision trees. In particular, we provide a partial function \(f\) with exponential gap between \(\mathrm{D}_{\mathrm{cc}}(F)\) and \(\mathrm{DT}^{\oplus}(f)\), which refutes the conjecture mentioned in [17].
The techniques behind the results on the connections between communication complexity of XOR-functions and parity decision tree complexity for total functions heavily rely on Fourier analysis. However, it is not clear how to translate this technique to partial functions. To prove our results we instead translate the Fourier-based approach of [19, 10] into a linear-algebraic language. We design a framework to capture the notion of one-sided communication complexity of partial XOR-functions and use this framework to establish both the equality of \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)\) and \(\mathrm{NADT}^{\oplus}(f)\) for a small number of undefined points and the separation results. Within our framework we prove separation results by a reduction to covering codes.
The rest of the paper is organized as follows. In Section 2 we provide necessary preliminary information and introduce the notations. In Section 3 we introduce our linear-algebraic framework. In Section 4 we prove main results on the equality of complexity measures. In Section 5 we prove separation results. In Section 6 we provide the results for \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)=1\) and \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)=2\). Some of the technical proofs are presented in Appendix.
## 2 Preliminaries
### Boolean cube
A Boolean cube is a graph on the set \(\{0,1\}^{n}\) of Boolean strings of length \(n\). We connect two vertices with an edge if they differ in a single bit only. The set \(\{0,1\}^{n}\) can also be thought of as the vector space \(\mathbb{F}_{2}^{n}\), with the bitwise XOR as the group operation. An inner product over this space can be defined as
\[\langle x,y\rangle=\bigoplus_{i}x_{i}\wedge y_{i}.\]
We define \(\operatorname{dist}(x,y)\) for \(x,y\in\{0,1\}^{n}\) to be the Hamming distance between \(x\) and \(y\), that is, the number of coordinates at which they differ. We denote by \(V(n,r)\) the size of a Hamming ball in \(\{0,1\}^{n}\) of radius \(r\).
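The ball size \(V(n,r)\) is used repeatedly throughout the paper, so the following tiny Python sketch (illustrative, not from the paper) makes the notation concrete.

```python
from math import comb

def V(n, r):
    # size of a Hamming ball of radius r in {0,1}^n
    return sum(comb(n, i) for i in range(r + 1))

assert V(7, 1) == 8           # ball of radius 1 in {0,1}^7
assert V(4, 4) == 2 ** 4      # radius n gives the whole cube
```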
### Isoperimetric inequalities
We will need the vertex isoperimetric inequality for a Boolean cube known as Harper's theorem. To state it we first define Hales order.
**Definition 1** (Hales Order).: Consider two subsets \(x,y\subseteq[m]\). We define \(x\prec y\) if \(|x|<|y|\), or \(|x|=|y|\) and the smallest element of the symmetric difference of \(x\) and \(y\) belongs to \(x\). In other words, there exists an \(i\) such that \(i\in x\), \(i\notin y\), and \(i\) is the smallest element in which \(x\) and \(y\) differ. Here is an example of Hales order for \(m=4\):
\[\varnothing,1,2,3,4,12,13,14,23,24,34,123,124,134,234,1234.\]
We can induce Hales order on the set \(\{0,1\}^{m}\) by identifying subsets of \([m]\) with their characteristic vectors.
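The following brute-force sketch (illustrative; the tie-breaking rule is implemented via lexicographic comparison of the sorted element lists, which for sets of equal size is equivalent to the symmetric-difference rule) reproduces the \(m=4\) example above.

```python
from itertools import combinations

def hales_order(m):
    # Hales order: first by size; among equal sizes, the set owning the
    # smallest element of the symmetric difference comes first, which is
    # exactly lexicographic order on the sorted element lists.
    subsets = [set(c) for r in range(m + 1)
               for c in combinations(range(1, m + 1), r)]
    return sorted(subsets, key=lambda s: (len(s), sorted(s)))

print([''.join(map(str, sorted(s))) or 'empty' for s in hales_order(4)])
# -> ['empty', '1', '2', '3', '4', '12', '13', '14', '23', '24', '34',
#     '123', '124', '134', '234', '1234']
```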
For \(A\subseteq\{0,1\}^{m}\), we denote by \(\Gamma A\) the set of all vertices at Hamming distance at most 1 from \(A\), and by \(\Gamma^{\prime}A=\Gamma A\setminus A\) its outer vertex boundary.

**Theorem 2** (Harper's theorem [12, Theorem 4.2]).: _Let \(A\subseteq\{0,1\}^{m}\) be a subset of vertices of the \(m\)-dimensional Boolean cube and denote \(a=|A|\). Define \(I_{a}^{m}\) to be the set of the first \(a\) elements of \(\{0,1\}^{m}\) in Hales order. Then \(|\Gamma A|\geq|\Gamma I_{a}^{m}|\)._
### Communication Complexity and Decision Trees
Throughout this paper, \(f\) denotes a partial function \(\{0,1\}^{n}\to\{0,1,\bot\}\), we let \(\operatorname{Dom}(f)=f^{-1}(\{0,1\})\). We define an XOR-function \(F:\{0,1\}^{n}\times\{0,1\}^{n}\to\{0,1,\bot\}\) as
\[F(x,y)=f(x\oplus y).\]
In the communication complexity model two players, Alice and Bob, compute some fixed function \(F\colon\{0,1\}^{n}\times\{0,1\}^{n}\to\{0,1\}\) on a given input \((x,y)\). However, Alice knows only \(x\) and Bob knows only \(y\). The main subject of study in communication complexity is the amount of communication \(\operatorname{D_{cc}}(F)\) needed between Alice and Bob to compute the function. A formal definition of the model can be found in [16].
We will be mostly interested in the one-way communication model. This is a substantially restricted setting, in which only Alice is permitted to send bits to Bob. Formally, the one-way communication complexity \(D_{cc}^{\rightarrow}(F)\) is defined to be the smallest integer \(t\) for which there is a protocol where Alice, knowing her input \(x\), sends \(t\) bits to Bob, and these bits together with Bob's input \(y\) enable Bob to compute the value of \(F\).
The bits communicated by Alice depend only on \(x\), that is Alice's message to Bob is \(h(x)\) for some fixed total function \(h\colon\{0,1\}^{n}\rightarrow\{0,1\}^{t}\). Bob computes the output \(F(x,y)\) based on \(h(x)\) and his input \(y\). That is, Bob outputs \(\varphi(h(x),y)\) for some fixed total function \(\varphi\colon\{0,1\}^{t}\times\{0,1\}^{n}\rightarrow\{0,1\}\). If \((x,y)\) is within the domain of \(F\), then the equality \(\varphi(h(x),y)=F(x,y)\) must be true.
The notion of parity decision tree complexity is a generalization of the well-known decision tree complexity model. In this model, to evaluate a function \(f\) on a given input \(x\) the protocol queries the parities of some subsets of the bits of \(x\). The cost of the protocol is the maximum, over all inputs, of the number of queries the protocol makes, and our goal is to minimize it.
We consider the non-adaptive parity decision tree complexity \(\mathrm{NADT}^{\oplus}(f)\). This version differs from its adaptive counterpart in that all the queries must be fixed at once. In other words, a query cannot depend on the answers to the previous queries. Next we give a more formal definition of \(\mathrm{NADT}^{\oplus}(f)\).
The protocol of complexity \(p\) is defined by \(n\)-bit strings \(s_{1},\ldots,s_{p}\) and a total function \(l\colon\{0,1\}^{p}\rightarrow\{0,1\}\). On input \(x\) the protocol queries the values of
\[\langle s_{1},x\rangle,\ldots,\langle s_{p},x\rangle\]
and outputs
\[l(\langle s_{1},x\rangle,\ldots,\langle s_{p},x\rangle).\]
The protocol computes partial function \(f\), if for any \(x\in\mathrm{Dom}(f)\) we have
\[l(\langle s_{1},x\rangle,\ldots,\langle s_{p},x\rangle)=f(x).\]
Throughout the paper \(t,h,\varphi,p,s_{1},\ldots,s_{p},l\) have the same meaning as defined above.
It is easy to see that there is a simple relation between \(\mathrm{NADT}^{\oplus}(f)\) and \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\).
**Lemma 3**.: _For any \(f\) we have \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\leq\mathrm{NADT}^{\oplus}(f)\)._
Proof.: Alice and Bob can compute \(F(x,y)\) by a simple simulation of the \(\mathrm{NADT}^{\oplus}\) protocol for \(f\). The idea is that they privately calculate the parities of their respective inputs according to the \(\mathrm{NADT}^{\oplus}\) protocol, then Alice sends the computed values to Bob, who XORs them with his own parities and then computes the value of \(F\).
More formally, assume that \(\mathrm{NADT}^{\oplus}(f)=p\) and the corresponding protocol is given by \(s_{1},\ldots,s_{p}\in\{0,1\}^{n}\) and a function \(l\), that is
\[\forall x\in\mathrm{Dom}(f),f(x)=l(\langle s_{1},x\rangle,\ldots,\langle s_{p},x\rangle).\]
For \(i\in[p]\), we let
\[h_{i}(x):=\langle s_{i},x\rangle.\]
For the communication protocol of complexity \(p\) we let
\[h(x)=(h_{1}(x),\ldots,h_{p}(x)),\]
\[\varphi(a,y):=l(a_{1}\oplus\langle s_{1},y\rangle,\ldots,a_{p}\oplus\langle s_{p },y\rangle).\]
Then for any \((x,y)\) such that \(x\oplus y\in\operatorname{Dom}(f)\) we have
\[\varphi(h(x),y) =l(h_{1}(x)\oplus\langle s_{1},y\rangle,\ldots,h_{p}(x)\oplus \langle s_{p},y\rangle)=\] \[l(\langle s_{1},x\rangle\oplus\langle s_{1},y\rangle,\ldots, \langle s_{p},x\rangle\oplus\langle s_{p},y\rangle)=\] \[l(\langle s_{1},x\oplus y\rangle,\ldots,\langle s_{p},x\oplus y \rangle)=f(x\oplus y)=F(x,y).\]
We constructed a \(p\)-bit communication protocol for \(F\), and thus
\[\operatorname{D}_{\operatorname{cc}}^{\rightarrow}(F)\leq p=\operatorname{ NADT}^{\oplus}(f).\]
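The simulation in the proof is straightforward to implement; the following minimal Python sketch (the query set and the toy function are our own choices for illustration) shows Alice and Bob computing \(f(x\oplus y)\) from one round of communication.

```python
def parity(s, x):
    # inner product <s, x> over F_2
    return sum(si & xi for si, xi in zip(s, x)) % 2

def make_one_way_protocol(queries, l):
    # Alice's message: parities of her input under every query vector
    def alice(x):
        return tuple(parity(s, x) for s in queries)
    # Bob uses <s, x xor y> = <s, x> xor <s, y> and applies l
    def bob(message, y):
        return l(tuple(a ^ parity(s, y) for a, s in zip(message, queries)))
    return alice, bob

# toy example: f(z) = z_1 xor z_3 on {0,1}^4, a single parity query
queries = [(1, 0, 1, 0)]
alice, bob = make_one_way_protocol(queries, lambda ans: ans[0])

x, y = (1, 0, 1, 1), (0, 1, 1, 0)
z = tuple(a ^ b for a, b in zip(x, y))      # z = (1, 1, 0, 1)
assert bob(alice(x), y) == z[0] ^ z[2] == 1
```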
In this paper we are mainly interested in whether the inequality in the opposite direction is true.
### Covering Codes
**Definition 4**.: A subset \(\mathcal{C}\subseteq\{0,1\}^{n}\) is a \((n,K,R)\) covering code if \(|\mathcal{C}|\leq K\) and for any \(x\in\{0,1\}^{n}\) there is \(c\in\mathcal{C}\) such that \(\operatorname{dist}(x,c)\leq R\). In other words, all points in \(\{0,1\}^{n}\) are covered by balls of radius \(R\) with centers in \(\mathcal{C}\).
The following general bounds on \(K\) are known for covering codes.
**Theorem 5** ([6, Theorem 12.1.2]).: _For any \((n,K,R)\) covering code we have_
\[\log K\geq n-\log V(n,R).\]
_For any \(n\) and any \(R\leq n\) there is a \((n,K,R)\) covering code with_
\[\log K\leq n-\log V(n,R)+\log n.\]
We will use the following well known fact.
**Theorem 6** ([6, Section 2.6]).: _If \(n=2^{m}-1\) for some \(m\), then the Boolean cube \(\{0,1\}^{n}\) can be split into disjoint balls of radius 1._
This construction is known as a Hamming error correcting code. Note that it is a \((n=2^{m}-1,\frac{2^{n}}{n+1},1)\) covering code.
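The following Python sketch (a standard construction; the parity-check matrix has the numbers \(1,\ldots,7\) written in binary as its columns) verifies Theorem 6 for \(n=7\).

```python
from itertools import product

n, m = 7, 3
cube = list(product([0, 1], repeat=n))
cols = [[(j >> i) & 1 for i in range(m)] for j in range(1, n + 1)]

def syndrome(x):
    return tuple(sum(cols[j][i] & x[j] for j in range(n)) % 2
                 for i in range(m))

code = [x for x in cube if syndrome(x) == (0,) * m]
assert len(code) == 2 ** n // (n + 1)        # 16 codewords

dist = lambda x, y: sum(a != b for a, b in zip(x, y))

# every point of the cube lies in exactly one ball of radius 1
for x in cube:
    assert sum(dist(x, c) <= 1 for c in code) == 1
```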
**Definition 7**.: For two covering codes \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) their direct sum is
\[\mathcal{C}_{1}\oplus\mathcal{C}_{2}=\{(c_{1},c_{2})\mid c_{1}\in\mathcal{C}_ {1},c_{2}\in\mathcal{C}_{2}\}.\]
**Lemma 8** ([6, Theorem 12.1.2]).: _If \(\mathcal{C}_{1}\) is a \((n_{1},K_{1},R_{1})\) covering code and \(\mathcal{C}_{2}\) is a \((n_{2},K_{2},R_{2})\) covering code, then \(\mathcal{C}_{1}\oplus\mathcal{C}_{2}\) has parameters \((n_{1}+n_{2},K_{1}K_{2},R_{1}+R_{2})\)._
We need the following bounds on the sizes of Hamming balls (see, e.g. [14, Appendix A]).
**Lemma 9**.: _For any \(n\) and \(k\leq n\) we have_
\[\left(\frac{n}{k}\right)^{k}\leq V(n,k)\leq\left(\frac{en}{k}\right)^{k}.\]
**Lemma 10**.: _For any constant \(0<c<1\) we have_
\[\binom{n}{cn}=O\left(\frac{1}{\sqrt{n}}2^{H(c)n}\right).\]
_For any constant \(0<c<1/2\) we have_
\[V(n,cn)=O\left(\frac{1}{\sqrt{n}}2^{H(c)n}\right),\]
_where \(H\) is the binary entropy function._
**Lemma 11** ([25, Section 5.4]).: \[V\left(n,\frac{n}{2}-\Theta(\sqrt{n\log n})\right)=\frac{2^{n}}{\text{poly}(n )}.\]
For the binary entropy function \(H(x)\) we will use the following simple fact.
**Lemma 12**.: _For any constant \(c\in(0,1)\) and for any \(\alpha_{n}\xrightarrow[n\to\infty]{}0\) we have_
\[H(c+\alpha_{n})=H(c)+O(\alpha_{n}),\]
_where the constant in \(O\)-notation might depend on \(c\), but not on \(n\)._
This is true since the derivative of \(H\) is upper bounded by a constant in any small enough neighborhood of \(c\).
## 3 Linear-algebraic framework
### Description of \(\mathrm{D}_{\mathrm{cc}}^{\to}(F)\) in terms of a graph
Recall that in a one-way communication protocol of complexity \(t\) for \(F(x,y)=f(x\oplus y)\), Alice, on input \(x\in\{0,1\}^{n}\), first sends \(h(x)\) to Bob for some fixed \(h\colon\{0,1\}^{n}\to\{0,1\}^{t}\). After that Bob computes the output \(\varphi(h(x),y)\), where \(y\in\{0,1\}^{n}\) is Bob's input and \(\varphi\colon\{0,1\}^{t}\times\{0,1\}^{n}\to\{0,1\}\).
Let's consider the partition \(\mathcal{H}=\{H_{a}\mid a\in\{0,1\}^{t}\}\), where for any \(a\in\{0,1\}^{t}\)
\[H_{a}=h^{-1}(a).\]
We refer to \(\mathcal{H}\) as the \(h\)_-induced partition_. A class \(H_{a}\) of this partition is the set of inputs for which Alice sends Bob the same message.
Consider two arbitrary inputs \(x,y\in\{0,1\}^{n}\) and view the vector \(\Delta=x\oplus y\) as a _shift_ between \(x\) and \(y\), in the sense that \(y=x\oplus\Delta\) (and vice versa). That is, \(y\) is obtained from \(x\) by a shift by \(\Delta\). We say that \(\Delta\in\{0,1\}^{n}\) is a _good shift_ if there is a pair \(x,y\in\{0,1\}^{n}\) such that \(x\oplus y=\Delta\) and \(h(x)=h(y)\), or equivalently, such that \(x\) and \(y\) belong to the same class of \(\mathcal{H}\). Note that \(f\) need not be defined on the inputs \(x\) and \(y\). However, it turns out that on the domain of \(f\) the value of \(f\) is invariant under good shifts.
**Lemma 13**.: _Assume that \(\Delta\) is a good shift. Consider any \(v,u\in\operatorname{Dom}(f)\) such that \(v\oplus u=\Delta\). Then, \(f(v)=f(u)\)._
Proof.: Since \(\Delta\) is good, there are \(x\) and \(y\) such that \(h(x)=h(y)\) and \(x\oplus y=\Delta\). Then
\[f(v)=\varphi(h(x),x\oplus v)=\varphi(h(y),x\oplus v)=f(v\oplus x\oplus y)=f(v \oplus\Delta)=f(u).\]
This leads us to the following notion.
**Definition 14**.: Consider a graph with vertices \(\{0,1\}^{n}\) and edges drawn between vertices \(x\) and \(y\) if \(x\oplus y\) is a good shift. We call this graph a _total \(h\)-induced graph_. Now remove vertices where the function \(f\) is undefined. We refer to the resulting graph as a _partial \(h\)-induced graph_.
There is an alternative way of thinking about the total \(h\)-induced graph. Consider a graph in which we connect two vertices if the value of \(h\) on these vertices is the same. Clearly, it is a subgraph of the total \(h\)-induced graph. Now consider a shift of this graph, that is, a graph in which all vertices are shifted by some fixed vector. This graph is also a subgraph of the total \(h\)-induced graph. By considering all possible shifts and taking the union of all these graphs, we obtain the total \(h\)-induced graph.
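Both views are easy to explore by brute force. The sketch below (with a toy message function \(h(x)=x_{1}\), chosen only for illustration) computes the good shifts and the connected components of the total \(h\)-induced graph.

```python
from itertools import product

n = 3
cube = [tuple(p) for p in product([0, 1], repeat=n)]
xor = lambda x, y: tuple(a ^ b for a, b in zip(x, y))

h = lambda x: x[0]     # 1-bit message: classes {x : x_1 = 0}, {x : x_1 = 1}

# a shift is good if two inputs in the same class differ by it
good = {xor(x, y) for x in cube for y in cube if h(x) == h(y)}

# union-find: join every vertex with its shift by every good shift
parent = {x: x for x in cube}
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
for x in cube:
    for d in good:
        parent[find(x)] = find(xor(x, d))

print("good shifts:", sorted(good))   # exactly the shifts with first bit 0
print("components:", len({find(x) for x in cube}))   # the two cosets
```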
By transitivity, if \(h,\varphi\) form a valid communication protocol, then \(f\) assigns identical values to each connected component in the partial \(h\)-induced graph. The converse is also true.
**Theorem 15**.: _For a function \(h:\{0,1\}^{n}\to\{0,1\}^{t}\) there is a function \(\varphi:\{0,1\}^{t}\times\{0,1\}^{n}\to\{0,1\}\) such that \(h,\varphi\) form a valid communication protocol if and only if \(f\) assigns the same value to each connected component in the partial \(h\)-induced graph._
Proof.: As discussed above, if \(h,\varphi\) form a valid communication protocol, then \(f\) assigns the same value to each connected component of the partial \(h\)-induced graph.
It remains to prove the converse statement. We assume that \(f\) assigns the same value to each connected component and we need to show that there is such \(\varphi\) that
\[\forall(x,y)\in\operatorname{Dom}(F),\ \ F(x,y)=\varphi(h(x),y).\]
We define \(\varphi\) as follows. For each \(\alpha\) and \(y\), consider \(x^{\prime}\) such that \(h(x^{\prime})=\alpha\) and \((x^{\prime},y)\in\operatorname{Dom}(F)\). If there is no such \(x^{\prime}\) we define \(\varphi(\alpha,y)\) arbitrarily. If there is such an \(x^{\prime}\), let
\[\varphi(\alpha,y):=F(x^{\prime},y).\]
Now we show that the resulting protocol computes \(F(x,y)\) correctly for any \((x,y)\in\mathrm{Dom}(F)\). Consider an arbitrary \((x,y)\in\mathrm{Dom}(F)\), and consider the \(x^{\prime}\) chosen for \(\alpha=h(x)\) and \(y\) (it exists, since \(x\) itself satisfies all the necessary conditions).
Thus, we have
\[\varphi(h(x),y)=F(x^{\prime},y).\]
It remains to prove that
\[F(x^{\prime},y)=F(x,y)\]
or equivalently,
\[f(x^{\prime}\oplus y)=f(x\oplus y).\]
For XOR of these two inputs of \(f\) we have
\[(x^{\prime}\oplus y)\oplus(x\oplus y)=x^{\prime}\oplus x.\]
Since \(h(x)=h(x^{\prime})\), we have that \(x^{\prime}\oplus x\) is a good shift. And since
\[(x,y),(x^{\prime},y)\in\mathrm{Dom}(F),\]
we have
\[x\oplus y,x^{\prime}\oplus y\in\mathrm{Dom}(f).\]
We have that vertices \(x\oplus y\) and \(x^{\prime}\oplus y\) are connected in the partial \(h\)-induced graph, and by Lemma 13, \(f\) assigns the same value to them. Hence, the function \(\varphi\), together with \(h\), forms a communication protocol for \(F\).
### Description of \(\mathrm{NADT}^{\oplus}(f)\) in terms of cosets
We consider the vertices of the Boolean cube as a vector space \(\mathbb{F}_{2}^{n}\). We show that a \(\mathrm{NADT}^{\oplus}\) protocol corresponds to a linear subspace such that \(f\) is constant on each of its cosets.
**Theorem 16**.: _A \(p\)-bit \(\mathrm{NADT}^{\oplus}\) protocol exists if and only if there exists an \(n-p\) dimensional subspace such that for each coset of that subspace, \(f\) assigns the same value to all inputs of the coset where \(f\) is defined._
Proof.: Suppose \(s_{1},\ldots,s_{p},l\) form a valid \(\mathrm{NADT}^{\oplus}\) protocol for \(f\). We construct a matrix \(S\) with rows \(s_{1},\ldots,s_{p}\). If some of the rows are linearly dependent, we add rows arbitrarily to make the rank of \(S\) equal to \(p\). When \(S\) is multiplied on the right by some vector \(x\), we obtain all inner products of \(x\) with vectors \(s_{1},\ldots,s_{p}\) (and possibly other bits if we added rows).
Consider the vector subspace \(\{x\mid Sx=0\}\). This is an \(n-p\) dimensional space. For all points in the same coset of this subspace, the values of the inner products \(\langle s_{1},x\rangle,\ldots,\langle s_{p},x\rangle\) are the same, and so is the value of \(l(\langle s_{1},x\rangle,\ldots,\langle s_{p},x\rangle)\). For all points in the same coset where \(f\) is defined, the value of \(f\) must be equal to the value of \(l\), and is thus the same for all such points.
In the reverse direction, let \(\langle e_{1},\ldots,e_{n-p}\rangle\) be an \(n-p\) dimensional subspace such that for each of its cosets \(f\) is constant on all points on which it is defined. We can represent this subspace in the form \(\{x\mid Sx=0\}\) for some matrix \(S\) of size \(p\times n\).
Vectors \(x\) and \(y\) are in the same coset of \(\langle e_{1},\ldots,e_{n-p}\rangle\) iff \(Sx=Sy\). Thus, to compute \(f(x)\) it is enough to compute the inner products of \(x\) with the rows of \(S\).
**Corollary 17**.: _If there exists an \(n-p\) dimensional subspace \(L\), such that any subgraph \(G\) of the partial \(h\)-induced graph, such that \(G\) is induced by a coset of \(L\), is connected then \(\operatorname{NADT}^{\oplus}(f)\leq p\)._
Proof.: By Theorem 15, \(f\) is constant on each coset. By Theorem 16 it follows that \(\operatorname{NADT}^{\oplus}(f)\leq p\).
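Theorem 16 and Corollary 17 suggest a direct brute-force test; the sketch below (with a toy partial function of our own choosing, where `None` marks undefined inputs) groups inputs by their answer vector, i.e., by coset of \(\{x\mid Sx=0\}\), and checks that \(f\) is constant on the defined part of each group.

```python
from itertools import product

n = 4
cube = [tuple(p) for p in product([0, 1], repeat=n)]
parity = lambda s, x: sum(a & b for a, b in zip(s, x)) % 2

def constant_on_cosets(f, queries):
    values = {}
    for x in cube:
        if f(x) is None:
            continue
        key = tuple(parity(s, x) for s in queries)   # identifies the coset
        if values.setdefault(key, f(x)) != f(x):
            return False
    return True

# f(x) = x_1 xor x_2 when the last two bits agree, undefined otherwise
f = lambda x: (x[0] ^ x[1]) if x[2] == x[3] else None

assert constant_on_cosets(f, [(1, 1, 0, 0)])        # one query suffices
assert not constant_on_cosets(f, [(0, 0, 1, 1)])    # this query does not
```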
## 4 Equality between \(\operatorname{D}_{\operatorname{cc}}^{\rightarrow}(F)\) and \(\operatorname{NADT}^{\oplus}(f)\)
In this section we will show that if \(\operatorname{D}_{\operatorname{cc}}^{\rightarrow}(F)=t\) and the number of undefined inputs is small, then \(\operatorname{NADT}^{\oplus}(f)=t\) as well. More specifically, we prove the following theorem.
**Theorem 18**.: _If for the function \(f\) we have \(\operatorname{D}_{\operatorname{cc}}^{\rightarrow}(F)=t\) and \(f\) is undefined on less than \(\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\) inputs, then \(\operatorname{NADT}^{\oplus}(f)=t\)._
By Lemma 10 we have that \(\binom{n-t+1}{\left\lfloor\frac{n-t+1}{2}\right\rfloor}=O(\frac{2^{n-t}}{\sqrt{n-t}})\) and since \(\lfloor\frac{n-t}{2}\rfloor-1\) differs from \(\left\lfloor\frac{n-t+1}{2}\right\rfloor\) by only a constant, it is easy to see that the same estimate applies to \(\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\) as well. Thus, the number of undefined inputs is \(O(\frac{2^{n-t}}{\sqrt{n-t}})\).
The rest of the section is devoted to the proof of Theorem 18. The idea of the proof is as follows. Consider \(h\)-induced partition \(\mathcal{H}\) corresponding to the communication protocol of complexity \(t\). We show that either the partition \(\mathcal{H}\) corresponds to the cosets of an \(n-t\) dimensional subspace, which allows us to construct an \(\operatorname{NADT}^{\oplus}\) protocol, or there exist _many_ good shifts. The structure of these good shifts imposes restrictions on \(f\) that again allow us to construct an \(\operatorname{NADT}^{\oplus}\) protocol.
We start with a simple case.
**Lemma 19**.: _If the partition \(\mathcal{H}\) corresponds to cosets of an \(n-t\) dimensional subspace \(L\), then \(\operatorname{NADT}^{\oplus}(f)\leq t\)._
Proof.: Since the partition \(\mathcal{H}\) corresponds to the cosets of \(L\), we have that for any inputs \(x\) and \(y\), if \(h(x)=h(y)\), then \(x\oplus y\in L\) and vice versa. In other words, all good shifts are in \(L\) and any shift in \(L\) is good. Thus, connected components of the total \(h\)-induced graph are cosets of \(L\) and are fully connected. By Corollary 17 we have that \(\operatorname{NADT}^{\oplus}(f)\leq t\).
The structure of the proof for the other case is the following. We show that the total \(h\)-induced graph is structured into connected components, each of which is a coset of a \(k\)-dimensional subspace for \(k\geq n-t\). We show that there is a bijective graph homomorphism of the \(k\)-dimensional Boolean cube onto each component. Furthermore, each vertex in the total \(h\)-induced graph has degree at least \(\frac{2^{n}}{2^{t}}-1\). We show that if we remove fewer than \(\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\) vertices, each coset still contains a single connected component. By way of contradiction, suppose this is not the case and some coset contains more than one connected component. We consider the smallest of these components and denote the set of its nodes by \(S\). We show that the number of neighboring vertices of \(S\) in the total \(h\)-induced graph (excluding \(S\) itself) is at least \(\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\). This implies that after removing the undefined inputs of \(f\), the set \(S\) cannot be separated from the other nodes in the coset. To show this we treat the cases of large and small \(|S|\) separately. For small \(|S|\) we use the fact that vertices have high degree. For large \(|S|\) we use the vertex-isoperimetric inequality for the Boolean cube.
**Lemma 20**.: _Given a partition \(\mathcal{H}\) whose classes do not correspond to cosets of an \(n-t\)-dimensional subspace, let \(D\) be the set of good shifts. Then \(D\) contains a minimum of \(n-t+1\) linearly independent vectors._
Proof.: Suppose there are at most \(n-t\) linearly independent good shifts \(e_{1},\ldots,e_{n-t}\). Consider the linear subspace spanned by these shifts and, if needed, add vectors to make it exactly \(n-t\) dimensional. Denote the resulting subspace by \(L\). We claim that there exist two elements belonging to the same class of \(\mathcal{H}\) and to different cosets of \(L\). Indeed, otherwise every class would be contained in a single coset; since the at most \(2^{t}\) classes cover all \(2^{t}\) cosets, each coset would then consist of exactly one class, that is, the classes of \(\mathcal{H}\) would correspond to the cosets of \(L\), contrary to the assumption. The XOR of these two elements is a good shift that does not belong to \(L\) and is therefore linearly independent from \(e_{1},\ldots,e_{n-t}\). This contradiction implies the lemma.
**Lemma 21**.: _Let \(D\) be the set of all good shifts and let \(e_{1},\ldots,e_{k}\) be a largest linearly independent subset of \(D\). Then the total \(h\)-induced graph has the following properties._
* _Cosets of the subspace_ \(\langle e_{1},\ldots,e_{k}\rangle\) _are connected components of the total_ \(h\)_-induced graph._
* _There is a bijective graph homomorphism of the_ \(k\)_-dimensional Boolean cube onto each coset._
Proof.: It is easy to see that all vertices in any coset are connected to each other. Let's show that no edges exist between vertices of different cosets. Assume by contradiction that there is an edge between vertices \(v\) and \(u\) from different cosets. Note that \(u\oplus v\notin\langle e_{1},\ldots,e_{k}\rangle\). Thus, vectors \(e_{1},\ldots,e_{k},u\oplus v\) form a linearly independent system of size \(k+1\), which is a contradiction.
Now, let's construct a homomorphism \(q\) from the Boolean cube \(\{0,1\}^{k}\) onto the coset \(v+\langle e_{1},\ldots,e_{k}\rangle\) for an arbitrary vertex \(v\). Consider a matrix \(B\) that has the vectors \(e_{1},\ldots,e_{k}\) as its columns and let \(q(x)=v\oplus Bx\). The image of \(q\) is within the coset \(v+\langle e_{1},\ldots,e_{k}\rangle\), as the columns of \(B\) belong to the subspace \(\langle e_{1},\ldots,e_{k}\rangle\). The mapping is a bijection onto \(v+\langle e_{1},\ldots,e_{k}\rangle\), as \(B\)'s columns are linearly independent. Finally, consider a pair of vertices \(x,y\) adjacent in the Boolean cube. Since the vertices are adjacent, they differ in exactly one bit \(i\). Thus,
\[q(x)\oplus q(y)=(v\oplus Bx)\oplus(v\oplus By)=B(x\oplus y)=e_{i}.\]
Since \(e_{i}\in D\), an edge exists between \(q(x)\) and \(q(y)\), implying that \(q\) is a graph homomorphism.
**Lemma 22**.: _In the total \(h\)-induced graph, the degree of any vertex is not less than \(\frac{2^{n}}{2^{t}}-1\)._
Proof.: Let's consider the largest class in the \(h\)-induced partition \(\mathcal{H}\). Since the number of classes is at most \(2^{t}\), the largest class contains at least \(\frac{2^{n}}{2^{t}}\) elements. Fix an element of this class and compute its XOR with all elements of the class. We have \(\frac{2^{n}}{2^{t}}\) XORs in total, \(\frac{2^{n}}{2^{t}}-1\) of which are non-zero. Since each XOR is computed between elements of the same class, these XORs are good shifts. For every vertex of the \(h\)-induced graph and every good shift, there is an edge between the vertex and its shift. Therefore, the degree of any vertex is at least \(\frac{2^{n}}{2^{t}}-1\).
**Lemma 23**.: _If \(A\) is a subset of the \(k\)-dimensional Boolean cube and \(m\leq k\) is such that \(V\left(m,\left\lfloor\frac{m-1}{2}\right\rfloor-2\right)\leq|A|\leq 2^{k-1}\), then \(|\Gamma^{\prime}A|\geq\binom{m}{\left\lfloor\frac{m-1}{2}\right\rfloor-1}\)._
The proof of the lemma is moved to Appendix A. Finally, we are ready to prove Theorem 18.
Proof of Theorem 18.: By Lemma 20, the partition \(\mathcal{H}\) either corresponds to cosets of an \(n-t\) dimensional subspace (and then by Lemma 19 we have \(\mathrm{NADT}^{\oplus}(f)\leq t\)), or the set of good shifts \(D\) contains at least \(n-t+1\) linearly independent vectors. Let \(e_{1},\ldots,e_{k}\), where \(k\geq n-t+1\), be a largest set of linearly independent vectors in \(D\). Consider the cosets of the subspace \(\langle e_{1},\ldots,e_{k}\rangle\). We will show that if we remove fewer than \(\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\) vertices from the total \(h\)-induced graph, each coset will contain at most one connected component. Assume by contradiction that after removing the vertices, some coset splits into several connected components. Let \(A\) be the smallest of these components. If there are at most \(V(n-t+1,\lfloor\frac{n-t}{2}\rfloor-2)-1\) vertices in \(A\), consider a vertex \(a\) in \(A\). Given that the degree of \(a\) is at least \(2^{n-t}-1\), \(a\) has at least
\[2^{n-t}-V\left(n-t+1,\left\lfloor\frac{n-t}{2}\right\rfloor-2\right) \geq V\left(n-t+1,\left\lfloor\frac{n-t}{2}\right\rfloor\right)-V \left(n-t+1,\left\lfloor\frac{n-t}{2}\right\rfloor-2\right)\] \[\geq\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\]
neighbors outside \(A\).
On the other hand, suppose \(A\) has at least \(V(n-t+1,\lfloor\frac{n-t}{2}\rfloor-2)\) vertices. Since \(A\) is the smallest connected component in its coset it also follows that \(A\) has no more than \(2^{k-1}\) vertices. By Lemma 23 we have \(|\Gamma^{\prime}A|\geq\binom{n-t+1}{\lfloor\frac{n-t}{2}\rfloor-1}\), which is more than the number of removed vertices, a contradiction. Thus, cosets cannot be split into several components and by Corollary 17 we have \(\mathrm{NADT}^{\oplus}(f)\leq n-k\leq t-1\), which is a contradiction.
### Large Values of \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\) and \(\mathrm{NADT}^{\oplus}(f)\)
On the other end of the spectrum, we show that if \(\mathrm{NADT}^{\oplus}(f)\) is large enough, then the two measures are equal for all partial functions.
**Theorem 24**.: _For any partial function \(f\colon\{0,1\}^{n}\rightarrow\{0,1,\bot\}\), if \(\mathrm{NADT}^{\oplus}(f)\geq n-1\), then \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=\mathrm{NADT}^{\oplus}(f)\)._
Proof.: First consider the case \(\mathrm{NADT}^{\oplus}(f)=n\) and assume that \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\leq n-1\). Consider the corresponding function \(h\). One of its equivalence classes \(H\) is of size at least \(2\); denote two of its elements by \(u\) and \(v\). We have that \(\Delta=u\oplus v\) is a good shift. Thus, for any \(x\), if \(f(x)\) and \(f(x\oplus\Delta)\) are defined, then \(f(x)=f(x\oplus\Delta)\). But this exactly means that there is a \(1\)-dimensional subspace such that \(f\) is constant on each of its cosets. Thus, \(\mathrm{NADT}^{\oplus}(f)\leq n-1\), which is a contradiction.
Now consider the case \(\mathrm{NADT}^{\oplus}(f)=n-1\) and again assume that \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\leq n-2\). Consider the corresponding function \(h\). Now one of its equivalence classes \(H\) is of size at least \(4\). Consider any three points \(u\), \(v\), \(w\) in this class. Then the vectors \(u\oplus v\), \(u\oplus w\) and
\(v\oplus w\) are good shifts. Note that together with the \(0\)-vector they form a \(2\)-dimensional linear subspace of good shifts. As a result, \(f\) is constant on every coset of this subspace and \(\operatorname{NADT}^{\oplus}(f)\leq n-2\), which is a contradiction.
## 5 Separations between \(\operatorname{D}_{\operatorname{cc}}^{\to}(F)\) and \(\operatorname{NADT}^{\oplus}(f)\)
In this section we show that if the number of undefined inputs is large, there is a gap between \(\operatorname{D}_{\operatorname{cc}}^{\to}(F)\) and \(\operatorname{NADT}^{\oplus}(f)\). That is, we aim to come up with a function \(f\) such that \(\operatorname{D}_{\operatorname{cc}}^{\to}(F)\) is small and \(\operatorname{NADT}^{\oplus}(f)\) is large.
The key idea in our construction is that in the \(h\)-induced graph of the intended communication protocol the edges connect only vertices that are close to each other in Hamming distance. Then, if the function \(f\) has its \(0\)-inputs and \(1\)-inputs far away from each other, they are not connected and \(h\) corresponds to a valid protocol. We will ensure that at the same time \(f\) has large \(\operatorname{NADT}^{\oplus}\) complexity.
We start with the construction of the functions, then investigate their \(\operatorname{NADT}^{\oplus}\) complexity and then prove upper bounds on \(\operatorname{D}_{\operatorname{cc}}^{\to}\) complexity of the corresponding XOR functions. The latter part is through the reduction to covering codes.
**Definition 25**.: For a parameter \(k\) define \(f_{k}\colon\{0,1\}^{n}\to\{0,1,\bot\}\) in the following way.
\[f_{k}(x)=\begin{cases}0&\text{for }|x|\leq k,\\ \bot&\text{for }k+1\leq|x|\leq n-1,\\ 1&\text{for }|x|=n.\end{cases}\]
We denote the corresponding XOR function by \(F_{k}\).
Note that the number of undefined inputs of \(f_{k}\) is \(V(n,n-k-1)-1\).
It turns out that \(f_{k}\) has reasonably large \(\operatorname{NADT}^{\oplus}\) and \(\operatorname{DT}^{\oplus}\) complexities.
**Theorem 26**.: \(\operatorname{NADT}^{\oplus}(f_{k})=\operatorname{DT}^{\oplus}(f_{k})=k+1\)_._
Proof.: Since \(\operatorname{DT}^{\oplus}(f)\leq\operatorname{NADT}^{\oplus}(f)\) for any \(f\), it is enough to prove that \(\operatorname{NADT}^{\oplus}(f_{k})\leq k+1\) and \(\operatorname{DT}^{\oplus}(f_{k})\geq k+1\).
For the upper bound, observe that it is enough to query variables \(x_{1},\ldots,x_{k+1}\). If all of them are equal to \(1\), we output \(1\), otherwise we output \(0\). It is easy to see that this protocol computes \(f_{k}\) correctly.
For the lower bound suppose, for the sake of contradiction, that an adaptive parity decision tree exists that computes the function \(f_{k}\) with \(k\) or fewer queries. Consider the branch corresponding to the input \(e=(1,\ldots,1)\). Let's assume that on this branch the decision tree queried the parities \(\langle s_{i},x\rangle\) for \(s_{1},\ldots,s_{k}\). The answers to the queries are equal to \(\langle s_{1},e\rangle,\ldots,\langle s_{k},e\rangle\). Consider a matrix \(B\in\mathbb{F}_{2}^{k\times n}\) consisting of rows \(s_{1},\ldots,s_{k}\).
Denote \(a=Be\). In particular, we have that \(a\) lies in the subspace generated by columns of \(B\). Since the rank of \(B\) is at most \(k\) (the matrix has \(k\) rows), there is a subset of at most \(k\) columns generating this subspace. In particular, there is \(x\in\{0,1\}^{n}\) with \(|x|\leq k\), such that \(a=Bx\). That is, \(Be=Bx\) and the protocol behaves the same way on \(e\) and \(x\), which is a contradiction, since \(f_{k}(e)=1\) and \(f_{k}(x)=0\)
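The upper-bound protocol is easy to check exhaustively for small parameters; a minimal sketch follows (the parameters \(n=6\), \(k=2\) are arbitrary choices for illustration).

```python
from itertools import product

def f_k(x, k, n):
    # the partial function from Definition 25; None marks undefined inputs
    w = sum(x)
    return 0 if w <= k else (1 if w == n else None)

n, k = 6, 2
for x in product([0, 1], repeat=n):
    target = f_k(x, k, n)
    if target is None:
        continue
    # query x_1, ..., x_{k+1}; output 1 iff all answers are 1
    answer = 1 if all(x[i] == 1 for i in range(k + 1)) else 0
    assert answer == target
```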
_Remark 27_.: Since \(f_{k}\) has large (adaptive) parity decision tree complexity and for any \(F\) we have \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)\geq\mathrm{D}_{\mathrm{cc}}(F)\), all separations provided by functions \(f_{k}\) translate into the same separations between \(\mathrm{DT}^{\oplus}\) and \(\mathrm{D}_{\mathrm{cc}}\).
Next, we proceed to the upper bound on \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})\).
**Theorem 28**.: _Suppose for some \(n\), \(k\) and \(t\) there is a \((n,2^{t},R)\) covering code \(\mathcal{C}\) for \(R=\left\lfloor\frac{n-k-1}{2}\right\rfloor\). Then, \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})\leq t\)._
Proof.: Split the points of \(\{0,1\}^{n}\) into balls with radius \(R\) with centers in the points of \(\mathcal{C}\) (if some point belongs to several balls, attribute it to one of them arbitrarily). This results in a partition of the cube into \(2^{t}\) subsets with the diameter of each subset at most \(n-k-1\). Consider a function \(h\) with this \(\mathcal{H}\)-partition.
Edges in the total \(h\)-induced graph connect only vertices at distance at most \(n-k-1\). Since the distance between \(0\)-inputs and \(1\)-inputs of \(f_{k}\) is at least \(n-k\), the \(0\)-inputs and \(1\)-inputs belong to different connected components of the partial \(h\)-induced graph. By Theorem 15 we have \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})\leq t\).
**Theorem 29**.: _For any \(n\) and \(k\) we have_
\[\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})\leq n-\log V(n,R)+\log n\]
_for \(R=\left\lfloor\frac{n-k-1}{2}\right\rfloor\)._
Proof.: By Theorem 5 there exists a \((n,2^{t},R)\) covering code for
\[\log 2^{t}=t\leq n-\log V(n,R)+\log n.\]
The statement now follows from Theorem 28.
From this we can get a separation for a wide range of parameters.
**Corollary 30**.: _Suppose \(k=cn\) for some constant \(0<c<1\). Then \(\mathrm{NADT}^{\oplus}(f_{k})=cn+1\) and_
\[\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})\leq\left(1-H\left(\frac{1-c}{2}\right)\right)n+O(\log n).\]
_In particular, \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})<\mathrm{NADT}^{\oplus}(f_{k})\). The number of undefined inputs for \(f_{k}\) is \(2^{n}-O(\frac{2^{H(c)n}}{\sqrt{n}})\) if \(c<1/2\), \((1+o(1))2^{n-1}\) if \(c=1/2\) and \(O(\frac{2^{H(1-c)n}}{\sqrt{n}})\) if \(c>1/2\)._
Proof.: The equality for \(\mathrm{NADT}^{\oplus}\) is proved in Theorem 26.
For communication complexity bound we apply Theorem 29. We have \(R=\left\lfloor\frac{(1-c)n-1}{2}\right\rfloor=\frac{(1-c)n}{2}+O(1)\) and by Lemmas 10 and 12 we have
\[\log V(n,R)=H\left(\frac{1-c}{2}\right)n-O(\log n).\]
By Theorem 29 we have
\[\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{k})\leq n-\log V(n,R)+\log n=\left( 1-H\left(\frac{1-c}{2}\right)\right)n+O(\log n).\]
To show that \(\mathrm{D}_{\mathrm{cc}}^{\to}(F_{k})<\mathrm{NADT}^{\oplus}(f_{k})\) we need to compare \(k=cn\) with the bound on communication complexity. It is easy to see that

\[1-H\left(\frac{1-c}{2}\right)<c\]

for all \(0<c<1\): the two sides are equal for \(c=0\) and \(c=1\), and the left-hand side is strictly convex in \(c\), so it lies strictly below the chord \(y=c\) in between.
The bounds on the number of undefined inputs follow easily from Lemma 10.
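The key inequality from the proof above admits a quick numeric spot-check (illustrative):

```python
from math import log2

H = lambda p: -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy

for c in [0.1, 0.25, 0.5, 0.75, 0.9]:
    assert 1 - H((1 - c) / 2) < c
```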
The largest gap we can get is the following.
**Corollary 31**.: _For \(k=\Theta(\sqrt{n\log n})\) we have that \(\mathrm{NADT}^{\oplus}(f_{k})=\Theta(\sqrt{n\log n})\) and \(\mathrm{D}_{\mathrm{cc}}^{\to}(F_{k})=O(\log n)\). The number of undefined inputs for \(f_{k}\) is \(2^{n}-2^{\Theta(\sqrt{n}\log^{3/2}n)}\)._
Proof.: For \(k=\Theta(\sqrt{n\log n})\) we have \(R=\frac{n}{2}-\Theta(\sqrt{n\log n})\) in Theorem 29. By Lemma 11 we have \(V(n,R)=\Theta\left(\frac{2^{n}}{\mathrm{poly}(n)}\right)\) and as a result \(\mathrm{D}_{\mathrm{cc}}^{\to}(F_{k})=O(\log n)\).
For the number of undefined inputs, we apply Lemma 9:
\[\left(\frac{n}{k}\right)^{k}\leq V(n,k)\leq\left(\frac{en}{k}\right)^{k}.\]
For \(k=\Theta(\sqrt{n\log n})\) it is easy to see that both sides are \(2^{\Theta(\sqrt{n}\log^{3/2}n)}\). From this the estimate on the number of undefined inputs follows.
The largest value of \(\mathrm{NADT}^{\oplus}\) for which we provide a separation is \(n-2\).
**Theorem 32**.: \(\mathrm{D}_{\mathrm{cc}}^{\to}(F_{n-3})\leq n-\Theta(\log n)\)_, whereas \(\mathrm{NADT}^{\oplus}(f_{n-3})=n-2\). The number of undefined inputs for \(f_{n-3}\) is \(\frac{n(n+1)}{2}\)._
Proof.: We have already proved equality for \(\mathrm{NADT}^{\oplus}(f_{n-3})\) and it remains to bound \(\mathrm{D}_{\mathrm{cc}}^{\to}(F_{n-3})\).
For this we use Theorem 28. Note that in our case \(R=\lfloor\frac{n-k-1}{2}\rfloor=1\).
If \(n=2^{m}-1\) for some integer \(m\), then we can just use Theorem 6. Each ball of radius \(1\) is of size \(n+1\) and thus in total we have \(2^{n}/(n+1)\) balls. As a result,
\[\mathrm{D}_{\mathrm{cc}}^{\to}(F_{n-3})\leq\log\frac{2^{n}}{n+1}=n-\log(n+1).\]
For general \(n\) consider maximal integer \(m\) such that \(2^{m}-1\leq n\). Denote \(n_{1}=2^{m}-1\) and \(n_{2}=n-n_{1}\). Consider Hamming code \(\mathcal{C}_{1}\) on \(\{0,1\}^{n_{1}}\) and consider the code \(\mathcal{C}_{2}=\{0,1\}^{n_{2}}\). The latter code has parameters \((n_{2},2^{n_{2}},0)\). By Lemma 8 we have that \(\mathcal{C}_{1}\oplus\mathcal{C}_{2}\) has parameters \((n,\frac{2^{n_{1}}}{n_{1}+1}\cdot 2^{n_{2}},1)\). Since \(n_{1}\) is at least half of \(n\), we have
\[\mathrm{D}_{\mathrm{cc}}^{\to}(F_{n-3})\leq\log\left(\frac{2^{n_{1}}}{n_{1}+1 }\cdot 2^{n_{2}}\right)=n-\Theta(\log n).\]
The undefined inputs of \(f_{n-3}\) are just inputs \(x\in\{0,1\}^{n}\) with weight \(n-1\) and \(n-2\). It is easy to see that there are \(\frac{n(n+1)}{2}\) of them.
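The whole construction can be verified exhaustively at \(n=7\) (where \(n=2^{3}-1\), so Theorem 6 applies directly). The sketch below builds the 16-class partition from the Hamming code and checks the validity condition of Theorem 15: for every message class and every \(y\), all defined values of \(f(x^{\prime}\oplus y)\) with \(x^{\prime}\) in the class coincide. This exhibits a concrete gap: a 4-bit one-way protocol for \(F_{4}\), while \(\mathrm{NADT}^{\oplus}(f_{4})=5\) by Theorem 26.

```python
from itertools import product

n, m, k = 7, 3, 4                         # k = n - 3, so R = 1
cube = [tuple(p) for p in product([0, 1], repeat=n)]
cols = [[(j >> i) & 1 for i in range(m)] for j in range(1, n + 1)]
syndrome = lambda x: tuple(sum(cols[j][i] & x[j] for j in range(n)) % 2
                           for i in range(m))
code = [x for x in cube if syndrome(x) == (0,) * m]   # 16 codewords

dist = lambda x, y: sum(a != b for a, b in zip(x, y))

# Alice's 4-bit message: index of the unique codeword within distance 1
def h(x):
    return next(i for i, c in enumerate(code) if dist(x, c) <= 1)

classes = {}
for x in cube:
    classes.setdefault(h(x), []).append(x)

def f(z):                                 # f_k from Definition 25
    w = sum(z)
    return 0 if w <= k else (1 if w == n else None)

for y in cube:
    for cls in classes.values():
        vals = {f(tuple(a ^ b for a, b in zip(x, y))) for x in cls} - {None}
        assert len(vals) <= 1             # Bob's answer is well defined
```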
The smallest value of \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}\) for which we get a separation is \(7\).
**Theorem 33**.: _For any \(n\geq 32\) we have \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F_{7})\leq 7\), whereas \(\mathrm{NADT}^{\oplus}(f_{7})=8\)._
Proof.: Again, we already found \(\mathrm{NADT}^{\oplus}(f_{7})\).
For the bound on \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}\) we start with Reed-Muller code \(\mathcal{RM}(1,5)\)[6, Chapter 9]. This code has parameters \((2^{5},2^{6},12)\) (as a covering code), that is, it has \(32\) input bits, the number of covering balls is \(2^{6}\) and their radius is \(R=12\). In terms of Theorem 28 we have \(R=\frac{32-7-1}{2}\) and thus the code gives us the protocol for \(F_{7}\) of size \(\log 2^{6}=6\) on \(n=32\) inputs (that is, for the particular case of \(n=32\) we have an even better upper bound on communication complexity).
For general \(n\geq 32\) denote \(n_{1}=32\) and \(n_{2}=n-n_{1}\). Let \(\mathcal{C}_{1}\) be Reed-Muller code introduced above and \(\mathcal{C}_{2}\) consist of two vectors: all zeros and all ones. The code \(\mathcal{C}_{2}\) has parameters \((n_{2},2,\left\lfloor\frac{n_{2}}{2}\right\rfloor)\). Then \(\mathcal{C}_{1}\oplus\mathcal{C}_{2}\) has parameters \((n,2^{7},\left\lfloor\frac{n_{2}}{2}\right\rfloor+12)\). Note that its radius \(R\) can be bounded as
\[R=\left\lfloor\frac{n_{2}}{2}\right\rfloor+12\leq\frac{n_{2}}{2}+12=\frac{n}{2 }+12-\frac{32}{2}=\frac{n-7-1}{2}.\]
Thus, the code gives a protocol for \(F_{7}\) of size \(7\).
## 6 The Case of Small Communication Complexity
### Case \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=1\)
A function \(h\) is called _balanced_ if all classes in the \(h\)-induced partition are of equal size. We say that \(h\) is _balanced on a subset_ when its restriction to the inputs in this subset is balanced. We analyze two distinct scenarios separately: when \(h\) is balanced and when it is not.
For the scenario where \(h\) is unbalanced, we will demonstrate that all shifts are good, leading to the conclusion that \(f\) is a constant function. Conversely, when \(h\) is balanced, we identify a specific \(n-1\)-dimensional subspace on which \(h\) is unbalanced. We then show that every shift in this subspace is good. This observation implies that the value of \(f\) depends solely on which coset of this subspace \(x\) belongs to, and this can be determined with a single parity query.
**Lemma 34**.: _Assume \(F\) satisfies \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=1\). If \(h\) is unbalanced, then every shift is good._
Proof.: Consider an arbitrary shift \(\Delta\). Consider the cosets corresponding to the subspace \(\langle\Delta\rangle\). The \(h\)-induced partition consists of two classes; since they are not of equal size, one class contains more than \(2^{n-1}\) elements. Applying the Pigeonhole principle we get that some coset of the subspace \(\langle\Delta\rangle\) contains two elements with the same \(h\) value. Given that a coset has only two points and those differ by the shift \(\Delta\), we conclude that \(\Delta\) is indeed a good shift.
**Lemma 35**.: _Assume \(F\) satisfies \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=1\). If \(h\) is unbalanced on a given subspace, then every shift in this subspace is good._
Proof.: The proof is completely analogous to the proof of Lemma 34. Let \(U\) be the subspace and let \(\Delta\in U\) be a non-zero shift. The cosets of \(\langle\Delta\rangle\) lying inside \(U\) split \(U\) into \(|U|/2\) pairs. Since \(h\) is unbalanced on \(U\), one of its classes contains more than \(|U|/2\) elements of \(U\), so by the Pigeonhole principle two of them, say \(x\) and \(y\), form one of these pairs. Then \(h(x)=h(y)\) and \(x\oplus y=\Delta\), so \(\Delta\) is a good shift.
**Lemma 36**.: _For a balanced function \(h\), there is an \(n-1\)-dimensional subspace over which \(h\) is unbalanced._
Proof.: The proof is based on Fourier analysis. For completeness, we provide the basic definitions in Appendix B.
Consider Fourier decomposition of \(h\). Since \(h\) is balanced and thus not constant, there must be a non-zero coefficient \(\hat{h}(S)\) in its Fourier decomposition associated with a non-empty subset \(S\). We show that \(h\) is unbalanced on the \(n-1\)-dimensional linear subspace \(X=\{x|\chi_{S}(x)=1\}\). Assume, for the sake of contradiction, that \(h\) is balanced on \(X\). The Fourier coefficient \(\hat{h}(S)\) can be computed as follows:
\[\hat{h}(S)=\frac{1}{2^{n}}\sum_{x}(-1)^{h(x)}\chi_{S}(x)=\]
\[\frac{1}{2^{n}}\Big{(}|\{h(x)=0,x\in X\}|-|\{h(x)=1,x\in X\}|-|\{h(x)=0,x\notin X \}|+|\{h(x)=1,x\notin X\}|\Big{)}.\]
Denote the quantity \(|\{h(x)=0,x\in X\}|\) as \(a\). As \(h\) is balanced on \(X\), it follows that \(|\{h(x)=1,x\in X\}|=a\). The set \(X\) contains \(2^{n-1}\) elements so \(a=2^{n-2}\). Given that \(h\) is balanced across \(\{0,1\}^{n}\), both the sets \(\{h(x)=0,x\in\{0,1\}^{n}\}\) and \(\{h(x)=1,x\in\{0,1\}^{n}\}\) each have \(2^{n-1}\) elements. Therefore:
\[|\{h(x)=0,x\notin X\}|=|\{h(x)=0,x\in\{0,1\}^{n}\}|-|\{h(x)=0,x\in X\}|=2^{n-2},\]
\[|\{h(x)=1,x\notin X\}|=|\{h(x)=1,x\in\{0,1\}^{n}\}|-|\{h(x)=1,x\in X\}|=2^{n-2}.\]
We can see that \(\hat{h}(S)=0\) which leads us to the required contradiction.
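The lemma is easy to observe numerically; the sketch below (with a toy balanced \(h\), our own choice) finds a non-empty \(S\) with \(\hat{h}(S)\neq 0\) and confirms that \(h\) is unbalanced on \(X=\{x\mid\chi_{S}(x)=1\}\).

```python
from itertools import product

n = 3
cube = [tuple(p) for p in product([0, 1], repeat=n)]
h = lambda x: x[0] ^ (x[1] & x[2])                 # a balanced function
assert sum(h(x) for x in cube) == 2 ** (n - 1)

def chi(S, x):                                     # character (-1)^{<S,x>}
    return (-1) ** (sum(s & b for s, b in zip(S, x)) % 2)

def coeff(S):                                      # Fourier coefficient of (-1)^h
    return sum((-1) ** h(x) * chi(S, x) for x in cube) / 2 ** n

S = next(S for S in cube if S != (0,) * n and coeff(S) != 0)
X = [x for x in cube if chi(S, x) == 1]            # an (n-1)-dim subspace
ones = sum(h(x) for x in X)
assert ones != len(X) // 2                         # h is unbalanced on X
```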
**Theorem 37**.: _Suppose \(F\) satisfies \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=1\). It then follows that \(\mathrm{NADT}^{\oplus}(f)=1\)._
Proof.: Consider the total \(h\)-induced graph. If \(h\) is unbalanced, then by Lemma 34 all shifts are good, so the graph is complete. A complete graph cannot be split into several connected components by vertex removal, so the partial \(h\)-induced graph has a single connected component, and by Corollary 17 we get \(\mathrm{NADT}^{\oplus}(f)=0\). This is impossible: by Lemma 3 it would give \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=0\), contradicting \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=1\). Hence \(h\) is balanced.

For a balanced function, we use Lemma 36 to choose an \(n-1\)-dimensional subspace \(U\) on which \(h\) is unbalanced. By Lemma 35, all the shifts in \(U\) are good. Select two arbitrary vertices \(x\) and \(y\) from the same coset of \(U\). Vertices \(x\) and \(y\) are connected in the total \(h\)-induced graph because their XOR belongs to \(U\). Therefore the cosets of \(U\) are cliques and they remain connected in the partial \(h\)-induced graph. By Corollary 17 we conclude that \(\mathrm{NADT}^{\oplus}(f)\leq 1\), and by Lemma 3 we have \(\mathrm{NADT}^{\oplus}(f)\geq\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=1\), so \(\mathrm{NADT}^{\oplus}(f)=1\).
### Case \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=2\)
We handle the cases when \(h\) is unbalanced and balanced separately. In the first case, we observe that the XOR of two bad shifts results in a good shift. We then use a known bound on sumset cardinality to show that the good shifts either contain a coset of an \(n-1\)-dimensional subspace or there exists a _large enough_ number of such shifts. Either of these cases imposes a certain structure on the total \(h\)-induced graph, which allows us to get the desired lower bound. When \(h\) is balanced, we again consider the subspace on which it is unbalanced and, analogously to the prior scenario, deduce a specific structure on the subspace, allowing us to conclude the proof.
**Lemma 38**.: _Assume \(F\) satisfies \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=2\) and the function \(h\) is unbalanced. Then the XOR of two bad shifts is a good shift._
Proof.: Assume \(\Delta_{1}\) and \(\Delta_{2}\) are bad shifts. Consider the cosets of the subspace \(\langle\Delta_{1},\Delta_{2}\rangle\). There are \(2^{n-2}\) such cosets in total. As the function \(h\) is unbalanced, the \(h\)-induced partition has a class, denoted \(H_{1}\), which contains strictly more than \(2^{n-2}\) elements. By the Pigeonhole principle, there exists a coset of \(\langle\Delta_{1},\Delta_{2}\rangle\) that contains two elements, say \(x\) and \(y\), both of which belong to \(H_{1}\). As \(h(x)=h(y)\), the XOR of \(x\) and \(y\) is a good shift. Additionally, \(x\) and \(y\) lie in the same coset, thus the shift \(x\oplus y\) is a member of \(\langle\Delta_{1},\Delta_{2}\rangle\). Within the subspace \(\langle\Delta_{1},\Delta_{2}\rangle\), there are only three distinct non-zero shifts: \(\Delta_{1}\), \(\Delta_{2}\), and \(\Delta_{1}\oplus\Delta_{2}\). Given that both \(\Delta_{1}\) and \(\Delta_{2}\) are bad shifts, the only possibility is that \(x\oplus y=\Delta_{1}\oplus\Delta_{2}\), so \(\Delta_{1}\oplus\Delta_{2}\) is a good shift.
**Theorem 39**.: _Let \(A\) and \(B\) be non-empty subsets of \(\{0,1\}^{n}\). Define the sumset of \(A\) and \(B\) as \(A+B=\{a+b|a\in A,b\in B\}\). Assume that \(A\) is not contained in a coset of any proper subspace of \(\{0,1\}^{n}\). Then_
\[|A+B|\geq\min\{|A|+|B|-2^{n-3},3\cdot 2^{n-2}\}.\]
The proof of this theorem is deferred to Appendix C.
**Lemma 40**.: _Assume that \(\mathrm{D}_{\mathrm{cc}}^{\rightarrow}(F)=2\) and \(h\) is unbalanced. Then either there exist at least \(5\cdot 2^{n-3}-1\) good shifts (not counting 0), or the set of good shifts contains a coset of an \(n-1\)-dimensional subspace._
Proof.: Let \(B\) be the set of bad shifts and \(\overline{B}\) be the set of good shifts; these are complementary, so \(|B|+|\overline{B}|=2^{n}\). There are two cases to consider: either \(B\) is a subset of a coset of a proper subspace or it is not. In the first case, let \(Q\) be a subspace and \(q\) be a vector in \(\{0,1\}^{n}\) such that \(B\subseteq Q+q\). We extend the coset \(Q+q\) to a coset \(Q^{\prime}+q\) of some \(n-1\)-dimensional subspace \(Q^{\prime}\). Observe that since \(B\) is fully contained in \(Q^{\prime}+q\), the other coset of \(Q^{\prime}\) is fully contained in \(\overline{B}\).
In the second case, first observe that by Lemma 38 the XOR of two bad shifts is a good shift, thus we have \(B+B\subseteq\overline{B}\). By Theorem 39 we have
\[|\overline{B}|\geq|B+B|\geq\min\{2|B|-2^{n-3},3\cdot 2^{n-2}\}.\]
We also know that \(|B|+|\overline{B}|=2^{n}\). As a result, either

\[|B|+2|B|-2^{n-3}\leq 2^{n}\]

or

\[|\overline{B}|\geq 3\cdot 2^{n-2}.\]
In the first case, \(3|B|\leq 2^{n}+2^{n-3}=9\cdot 2^{n-3}\) gives \(|B|\leq 3\cdot 2^{n-3}\) and hence \(|\overline{B}|=2^{n}-|B|\geq 5\cdot 2^{n-3}\); in the second case the same bound holds since \(3\cdot 2^{n-2}=6\cdot 2^{n-3}\). Thus in both cases
\[|\overline{B}|\geq 5\cdot 2^{n-3}.\]
If we exclude the zero shift, we have at least \(5\cdot 2^{n-3}-1\) good shifts.
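For small \(n\), the dichotomy of Lemma 40 can be verified exhaustively. The sketch below (ours) reuses `good_shifts` from the earlier snippet and tests both branches, enumerating hyperplane cosets \(\{x:\langle s,x\rangle=b\}\) directly.

```python
def check_lemma_40(h, n):
    good = good_shifts(h, n)
    if len(good - {0}) >= 5 * 2 ** (n - 3) - 1:
        return True
    # Cosets of (n-1)-dimensional subspaces are {x : <s, x> = b}, s != 0.
    for s in range(1, 2 ** n):
        for b in (0, 1):
            coset = {x for x in range(2 ** n)
                     if bin(s & x).count("1") % 2 == b}
            if coset <= good:
                return True
    return False
```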
**Lemma 41**.: _Assume \(F\) satisfies \(\mathrm{D}^{\rightarrow}_{\mathrm{cc}}(F)=2\). If \(h\) is unbalanced, then one of the following two conditions is true for the total \(h\)-induced graph:_
* _The total \(h\)-induced graph consists of two cliques, each being a coset of an \(n-1\)-dimensional subspace._
* _The total \(h\)-induced graph is \(2^{n-2}\)-vertex connected._
Proof.: We consider three cases.
**Case 1:** In this case, we assume that the set of good shifts contains a subspace \(Q\) of dimension \(n-1\). Take two arbitrary points \(x\) and \(y\) from the same coset \(Q+q\), where \(q\) is a specific vector in \(\{0,1\}^{n}\). Then, \(x\) and \(y\) can be expressed as \(x=x^{\prime}\oplus q\) and \(y=y^{\prime}\oplus q\) for \(x^{\prime},y^{\prime}\in Q\). Consequently, \(x\oplus y=x^{\prime}\oplus y^{\prime}\in Q\). This shows that any two points in the coset of \(Q\) are connected by an edge in the total \(h\)-induced graph, forming cliques.
**Case 2:** Assume that the set of good shifts contains a coset \(Q+q\) of an \(n-1\)-dimensional subspace \(Q\), where \(q\) is a vector not in \(Q\). Consider two arbitrary points \(x\) and \(y\) from different cosets of \(Q\). Without loss of generality, let \(x\in Q\) and \(y\in Q+q\). There exists \(y^{\prime}\in Q\) such that \(y=y^{\prime}\oplus q\). Then, \(x\oplus y=(x\oplus y^{\prime})\oplus q\in Q+q\). Thus, an edge exists between \(x\) and \(y\) in the total \(h\)-induced graph, and, as a result, the graph contains a complete bipartite graph whose parts are the cosets of \(Q\). To disconnect such a graph one has to delete an entire part, thus the graph is \(2^{n-1}\)-vertex connected.

**Case 3:** Assume the set of good shifts satisfies neither of the first two conditions. Then, by Lemma 40, there must be at least \(5\cdot 2^{n-3}-1\) good shifts. Take two arbitrary non-neighboring vertices \(x\) and \(y\); the sizes of their neighbor sets are at least \(5\cdot 2^{n-3}-1\). Given that the total number of vertices excluding \(x\) and \(y\) is \(2^{n}-2\), the intersection of these neighbor sets must contain at least \(2^{n-2}\) vertices. Hence, removing fewer than \(2^{n-2}\) vertices cannot disconnect the graph.
**Lemma 42**.: _Assume \(F\) satisfies \(\mathrm{D}^{\rightarrow}_{\mathrm{cc}}(F)=2\). If \(h\) is unbalanced on a subspace \(Q\) of dimension \(n-1\), then one of the following conditions must hold:_
* _The total_ \(h\)_-induced graph consists of four distinct cliques, each of which corresponds to a coset of an_ \(n-2\)_-dimensional subspace that is itself a subspace of_ \(Q\)_._
* _The subgraphs of the total \(h\)-induced graph induced on the cosets of \(Q\) are at least \(2^{n-3}\)-vertex connected._
Proof.: For the proof we apply Lemma 41 to the subspace \(Q\). Formally, let \(B\) be a matrix whose columns form a basis for \(Q\). We define a new function \(h^{\prime}:x\mapsto h(Bx)\) (where \(x\) has length \(n-1\)). Applying Lemma 41, we conclude that the total \(h^{\prime}\)-induced graph either consists of cliques corresponding to cosets of an \(n-2\)-dimensional subspace \(Q^{\prime}\), or is \(2^{n-3}\)-vertex connected.

To relate \(h^{\prime}\) back to \(h\), we consider a vector \(q\) not in \(Q\) and define two graph embeddings \(\psi_{1}:x\mapsto Bx\) and \(\psi_{2}:x\mapsto Bx\oplus q\) of the total \(h^{\prime}\)-induced graph into the total \(h\)-induced graph. The images of these mappings are \(Q\) and \(Q+q\). To see that they are indeed graph embeddings, notice that if \(x\) and \(y\) are connected in the total \(h^{\prime}\)-induced graph, then \(x\oplus y\) is a good shift for \(h^{\prime}\), so \(B(x\oplus y)\) is a good shift for \(h\); this implies that the images of \(x\) and \(y\) under \(\psi_{1}\), as well as their images under \(\psi_{2}\), are connected in the total \(h\)-induced graph. The bound on the vertex connectivity of the cosets follows from these embeddings. Note that these mappings are affine transformations that differ only by a shift. Therefore, the images of cosets in \(\{0,1\}^{n-1}\) under these mappings are cosets of the same subspace in \(\{0,1\}^{n}\), which finishes the proof.
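A minimal sketch (ours) of the change of variables used in this proof: restricting \(h\) to \(Q\) via \(h^{\prime}(x)=h(Bx)\) over GF(2), with points encoded as integers whose \(i\)-th bit is the \(i\)-th coordinate.

```python
import numpy as np

def restrict(h, B):
    # B: n x (n-1) 0/1 matrix whose columns form a basis of the subspace Q.
    m = B.shape[1]
    restricted = {}
    for x in range(2 ** m):
        coords = np.array([(x >> i) & 1 for i in range(m)])
        image = (B @ coords) % 2                 # Bx over GF(2)
        y = sum(int(bit) << i for i, bit in enumerate(image))
        restricted[x] = h[y]
    return restricted
```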
**Theorem 43**.: _If the function \(f\) is undefined on fewer than \(2^{n-3}\) inputs and \(\mathrm{D}^{\rightarrow}_{\mathrm{cc}}(F)=2\), then \(\mathrm{NADT}^{\oplus}(f)=2\)._

Proof.: We have two main cases to consider, depending on whether \(h\) is balanced or unbalanced. If \(h\) is unbalanced, we apply Lemma 41. As a result, either the \(h\)-induced graph consists of cliques corresponding to \(n-1\)-dimensional cosets, or the \(h\)-induced graph is \(2^{n-2}\)-vertex connected. In the first case, by Corollary 17, we conclude that \(\mathrm{NADT}^{\oplus}(f)\leq 1\), which is a contradiction. In the second case, since the function \(f\) is undefined on fewer than \(2^{n-2}\) inputs, removing the corresponding vertices cannot disconnect the \(2^{n-2}\)-vertex-connected graph; by Corollary 17 we then find \(\mathrm{NADT}^{\oplus}(f)=0\), which is again a contradiction.

If \(h\) is balanced, we use Lemma 36 to find a subspace \(Q\) on which \(h\) becomes unbalanced. Then by Lemma 42 the graph splits either into four fully connected cosets, or into two \(2^{n-3}\)-vertex-connected cosets. As \(f\) is undefined on fewer than \(2^{n-3}\) inputs, we again use Corollary 17 and conclude that \(\mathrm{NADT}^{\oplus}(f)\leq 2\).
|
2301.00297 | Deterministic and Nondeterministic Particle Motion with Interaction
Mechanisms | Studying systems where many individual bodies in motion interact with one
another is a complex and interesting area. Simple mechanisms that may be
determined for biological, chemical, or physical reasons can lead to
astonishingly complex results that require a further understanding of the
moving bodies. With the increasing interaction between computation and various
scientific areas, it has become more useful, feasible, and important to create
models for these systems. Here, we present two families of models,
deterministic and nondeterministic, along with three distinct and realistic
interaction mechanisms. These are combined in a unique way to provide the
groundwork for particle system models across multiple disciplines. This work
has applications that range from biology to chemistry and physics. In addition
to describing the motivations and math behind all the models, software is
provided that allows researchers to quickly adjust and implement what is
described here. | Cameron McNamee, Renee Reijo Pera | 2022-12-31T21:52:49Z | http://arxiv.org/abs/2301.00297v1 | **Deterministic and Nondeterministic Particle Motion with Interaction Mechanisms**
## Abstract
Studying systems where many individual bodies in motion interact with one another is a complex and interesting area. Simple mechanisms that may be determined for biological, chemical, or physical reasons can lead to astonishingly complex results that require a further understanding of the moving bodies. With the increasing interaction between computation and various scientific areas, it has become more useful, feasible, and important to create models for these systems. Here, we present two families of models, deterministic and nondeterministic, along with three distinct and realistic interaction mechanisms. These are combined in a unique way to provide the groundwork for particle system models across multiple disciplines. This work has applications that range from biology to chemistry and physics. In addition to describing the motivations and math behind all the models, software is provided that allows researchers to quickly adjust and implement what is described here.
## 1 Introduction
This work was inspired by consideration of quantum biology (QB) and cell migration and tissue formation as previously described [20], with the goal of identifying quantitative measures of particle clustering in a scaffold. Identifying such measures involves observing the locations of the particles using image processing and applying these locations to a novel clustering algorithm. The results were some surprising numbers that led to a search for other, more robust methods of cluster analysis and an environment in which to test resulting algorithms. This proved not to be a simple task, prompting the development of potentially new models to understand how particles may interact under different conditions. From a biological perspective, and in view of the eventual application of this work, this paper explores the coalescing of particles in a space towards one another. It is also important to understand that in the simulation process, the goal is to replicate what is seen in realistic settings. In addition to biology, many systems wherein multiple autonomous particles are driven in a migratory pattern can be seen in physics and chemistry, allowing this work to be expanded to multiple fields [13, 20]. Nanoparticle and ion motion are two distinct areas to which this type of computational research can be readily applied, leading to interesting features of such systems being uncovered [13, 20]. With the diverse opportunities in mind, this paper seeks to abstract the motion and interaction of individual bodies in such systems to create general models. These general models may then be taken by a researcher and applied with the appropriate parameters to match a particular system, with |
2309.11553 | Hybrid Quantum-Classical Stochastic Approach to Spin-Boson Models | Interacting spin-boson models encompass a large class of physical systems,
spanning models with a single spin interacting with a bosonic bath -- a
paradigm of quantum impurity problems -- to models with many spins interacting
with a cavity mode -- a paradigm of quantum optics. Such models have emerged in
various quantum simulation platforms which are further subject to noise and
lossy dynamics. As generic many-body systems, dynamics of spin-boson models
constitutes a challenging problem. In this paper, we present an exact hybrid
quantum-classical stochastic approach to different spin-boson models which are
typically treated using distinct techniques. In this approach, the solution of
a classical stochastic equation (mimicking the bosonic modes) is input into a
quantum stochastic equation for the spins. Furthermore, the spins are
effectively decoupled for each stochastic realization, but this comes at the
expense of sampling over unphysical states. Remarkably, the dynamics remains
Markovian in our approach even in the strong coupling regime. Moreover, we
utilize Markovian dissipation to make \textit{causality} manifest, thus
ensuring hermiticity (though not positivity) of the density matrix for each
realization. Finally, in contrast with many existing methods, we place no
restriction on the initial state, and further argue that an intrinsic
nonlinearity of the bosonic modes can be tackled within this framework. We
benchmark and showcase the utility of our approach in several examples,
specifically in cases where an exact numerical calculation is far from reach. | Naushad A. Kamar, Mohammad Maghrebi | 2023-09-20T18:00:05Z | http://arxiv.org/abs/2309.11553v1 | # Hybrid Quantum-Classical Stochastic Approach to Spin-Boson Models
###### Abstract
Interacting spin-boson models encompass a large class of physical systems, spanning models with a single spin interacting with a bosonic bath--a paradigm of quantum impurity problems--to models with many spins interacting with a cavity mode--a paradigm of quantum optics. Such models have emerged in various quantum simulation platforms which are further subject to noise and lossy dynamics. As generic many-body systems, dynamics of spin-boson models constitutes a challenging problem. In this paper, we present an exact hybrid quantum-classical stochastic approach to different spin-boson models which are typically treated using distinct techniques. In this approach, the solution of a classical stochastic equation (mimicking the bosonic modes) is input into a quantum stochastic equation for the spins. Furthermore, the spins are effectively decoupled for each stochastic realization, but this comes at the expense of sampling over unphysical states. Remarkably, the dynamics remains Markovian in our approach even in the strong coupling regime. Moreover, we utilize Markovian dissipation to make _causality_ manifest, thus ensuring hermiticity (though not positivity) of the density matrix for each realization. Finally, in contrast with many existing methods, we place no restriction on the initial state, and further argue that an intrinsic nonlinearity of the bosonic modes can be tackled within this framework. We benchmark and showcase the utility of our approach in several examples, specifically in cases where an exact numerical calculation is far from reach.
## I Introduction
Spin-boson models where one or many spins interact with bosonic modes encompass a large class of physical models. On one hand, the paradigmatic spin-boson model describing a single two-level system coupled to an (infinite) bosonic bath [1; 2] defines a paradigm of quantum impurity problems, with applications to physical, chemical and even biological problems [1; 2; 3; 4; 5]. On the other hand, the Dicke model describing many spins coupled to a cavity mode defines a paradigm of quantum optics and gives rise to a superradiant phase transition at strong coupling [6; 7]. Such models are particularly relevant in the setting of quantum simulation platforms: the paradigmatic spin-boson model has been recently realized in superconducting qubits [8; 9], while the Dicke model has been implemented in multiple platforms [10; 11; 12; 13; 14]; quantum Rabi models, describing a single spin coupled to a boson, have also been demonstrated recently in trapped ions [15; 16; 17]. Furthermore, generalized spin-boson models involving many interacting spins and bosons can be implemented in various platforms ranging from trapped ions [18; 19; 20; 21; 22], to cavities via cold atoms [23; 24; 25], to superconducting qubits [26; 27], and optomechanics [28]. Quantum simulation platforms are further subject to incoherent dynamics, for example, due to the noise in lasers or the cavity loss. A full description then accounts for dissipative dynamics, and in many settings (specifically, those involving cavities) takes the form of a quantum Markovian master equation. The resulting dynamics gives rise to driven-dissipative systems defined by the competition between a coherent drive and incoherent loss [29; 30]. Driven-dissipative dynamics of interacting spin-boson models emerges in various settings such as trapped ions [31; 32; 33], Rydberg gases [34; 35], circuit QED [36; 37], and cavity QED platforms [38; 39; 40].
Dynamics of many-body systems out of equilibrium constitutes a notoriously challenging problem, as the size of the Hilbert space is exponentially large, and traditional Monte Carlo methods suffer from the dynamical sign problem. Spin-boson models are further complicated due to the unbounded local Hilbert space of bosons [41; 42]. Moreover, memory effects become important at strong coupling, leading to non-Markovian dynamics of spins. Distinct techniques have been devised for different types of spin-boson models: for the paradigmatic spin-boson model, these techniques range from perturbative analytical methods such as NIBA [1; 2], to various stochastic methods [43; 44; 45; 46], Monte Carlo simulations [47; 48], matrix product state (MPS) based methods [49; 50; 51], and more recently non-Gaussian variational ansatze [52; 53]. On the other hand, many-body spin systems coupled to a cavity mode have been treated using mean field theory and cumulant expansion [54; 55], (discrete) truncated Wigner approximation [56; 57], (Keldysh) field theory methods [58; 59; 60], quantum trajectories [61], and tensor networks [62; 63]. Finally, exact solutions via Bethe ansatz [64] or alternative methods [65] are available in special cases of these models. Simple descriptions via a (convolutionless, or Redfield, e.g.) master equation involving only the spins can be obtained in certain regimes [66; 67]; however, they are generally unavailable specifically in the strong coupling regime, and, even when they are, the resulting model remains challenging for a many-body spin system.
In this work, we consider a generalized spin-boson model and develop a hybrid stochastic quantum-classical
approach to the evolution of the spins. In this approach, we first solve a classical stochastic equation (mimicking the bosonic modes) whose solution is then fed into a quantum stochastic equation for the spins. Furthermore, the spins are effectively decoupled for each stochastic realization, but this comes at the expense of sampling over unphysical (e.g., non-positive) states. Our work provides a uniform approach to different spin-boson models, and is distinct from previous stochastic approaches in several characteristic ways: i) our hybrid quantum-classical approach remains Markovian even in the strong coupling regime; ii) causality is made manifest (thanks to Markovian loss), preserving the hermiticity of the density matrix for each realization; iii) no restriction is placed on the initial state. Interestingly, the second feature indicates that Markovian dissipation can be used as a resource for numerical simulation. Furthermore, we argue that our approach can tackle intrinsic nonlinearities of the bosonic modes. We benchmark our method and showcase its utility in several examples where an exact numerical computation is far from reach.
The structure of this paper is as follows: In Section II, we introduce the generalized spin-boson model, and further discuss a first stochastic approach to decoupling the spins while pointing to its limitations. In Section III, we summarize the main results of the paper. In Section IV, we present our main approach in application to a model comprising a single spin coupled to a cavity mode. In Section V, we generalize this treatment to many spins. In Section VI, we consider a single spin coupled to many bosonic modes, and finally in Section VII, we consider the generalized model with many interacting spins and bosons. We summarize and outline future directions in Section VIII. The technical derivation of the main results is provided in Appendices A to D.
## II Model
We consider a generalized model where spins are coupled to one or many bosonic modes, described by the Hamiltonian
\[H_{N,M}=\frac{\Delta}{2}\sum_{i=1}^{N}\sigma_{i}^{z}+\sum_{\alpha=1}^{M}\omega _{\alpha}a_{\alpha}^{\dagger}a_{\alpha}+\frac{1}{\sqrt{N}}\sum_{\alpha i}\frac {g_{\alpha i}}{2}\sigma_{i}^{x}(a_{\alpha}+a_{\alpha}^{\dagger}) \tag{1}\]
where the spin on site \(i\) is coupled linearly to the bosonic operators \(a_{\alpha}\) via the coupling \(g_{\alpha i}\). The overall normalization factor in front of the last term in the Hamiltonian is included for convenience and renders the Hamiltonian extensive in the number of spins. We shall denote the three terms on the rhs as \(H_{S}\), \(H_{B}\), and \(H_{SB}\), respectively. A prototypical example is a cavity QED system where atoms are placed inside a multimode cavity [25]. Additionally, we assume that the system is lossy; in the example of cavity QED, this could be the cavity loss. Under the Born-Markov approximation, the dynamics is governed by a quantum Markovian master equation given by [68; 69]
\[\frac{d\rho}{dt}=\mathcal{L}(\rho)=-i[H_{N,M},\rho]+\sum_{\alpha}\Big(2L_{\alpha}\rho L_{\alpha}^{\dagger}-\rho L_{\alpha}^{\dagger}L_{\alpha}-L_{\alpha}^{\dagger}L_{\alpha}\rho\Big) \tag{2}\]
Here, \(\rho\) is the density matrix of the full system and \(\mathcal{L}\) defines the Liouvillian comprising both the Hamiltonian and the dissipative dynamics. The operators \(L_{\alpha}\) are the so-called Lindblad operators, and characterize dissipation. We assume that the bosonic modes are subject to loss at the rate \(\kappa_{\alpha}\) with the corresponding Lindblad operator \(L_{\alpha}=\sqrt{\kappa_{\alpha}}a_{\alpha}\). Additionally, spins could be subject to loss, for example, via atomic spontaneous emission. We emphasize that \(H_{N,M}\) in the above equation should be interpreted as the Hamiltonian in the rotating frame, and the driven nature of the model is thus disguised in the rotating frame. We are particularly interested in the reduced density matrix of the spins \(\rho_{S}=\mathrm{Tr}_{B}(\rho)\). In the absence of the coupling to the bosonic modes, the spin dynamics is generated by the spin-only part of the full Liouvillian denoted by \(\mathcal{L}_{S}\). Since spins do not interact directly, the latter is a sum of local terms, \(\mathcal{L}_{S}=\sum_{i}\mathcal{L}_{i}\).
### Decoupling spins: A first approach
Many studies of the spin-boson models assume that the initial state is factorized (i.e., spins and bosons are initially uncorrelated) and furthermore the bosonic modes are initially thermal (hence, Gaussian). With this assumption, one can exactly trace out the bosonic modes thanks to the path-integral formulation by Feynman and Vernon [1; 2; 70]. This approach makes use of the quantum to classical mapping where quantum spin operators \(\sigma_{i}^{x}\) are mapped to classical spin variables \(\sigma_{i}=\pm 1\) with \(\sigma_{i}^{x}|\sigma_{i}\rangle=\sigma_{i}|\sigma_{i}\rangle\). The path integral for the spins' density matrix then involves a sum over all configurations of the ket and bra states, which we denote by \(\boldsymbol{\sigma}=\{\sigma_{i}(t)\}\) and \(\boldsymbol{\sigma}^{\prime}=\{\sigma_{i}^{\prime}(t)\}\), respectively. Performing the Gaussian integral over the bosonic modes, one obtains the Feynman-Vernon influence functional
\[\begin{split}\mathscr{F}[\boldsymbol{\sigma},\boldsymbol{\sigma}^{\prime}]=\exp\Big{[}-\frac{1}{N}\sum_{ijtt^{\prime}}&\frac{1}{2}\,C^{r}_{ij}(t,t^{\prime})\,\tilde{\eta}_{i}(t)\tilde{\eta}_{j}(t^{\prime})\\ &+\,i\chi^{r}_{ij}(t,t^{\prime})\,\tilde{\eta}_{i}(t)\eta_{j}(t^{\prime})\Big{]}\end{split} \tag{3}\]
where \(\eta=(\sigma+\sigma^{\prime})/2\) and \(\tilde{\eta}=(\sigma-\sigma^{\prime})/2\) and the time and spin indices are implicit. The kernels \(C^{r}\) and \(\chi^{r}\) denote the real part of the correlators \(C\) and \(\chi\) defined as
\[\begin{split} C_{ij}(t,t^{\prime})&=\sum_{\alpha}g_{ \alpha i}g_{\alpha j}\left\langle[a_{\alpha}(t),a_{\alpha}^{\dagger}(t^{\prime})] _{+}\right\rangle_{B}\\ i\chi_{ij}(t,t^{\prime})&=\Theta(t-t^{\prime}) \sum_{\alpha}g_{\alpha i}g_{\alpha j}\left\langle[a_{\alpha}(t),a_{\alpha}^{ \dagger}(t^{\prime})]_{-}\right\rangle_{B}\end{split} \tag{4}\]
These expressions are given for _free_ bosons in the absence of the coupling to spins (indicated by subscript
\(B\)), and can be explicitly computed. They have very different interpretations though: while \(C_{ij}\) involves the anticommutator and characterizes symmetrized correlations, the function \(\chi_{ij}\) encodes the causal response of the bath. Moreover, \(C_{ij}^{r}(t,t^{\prime})\), viewed as a matrix with the rows \(it\) and columns \(jt^{\prime}\), is both positive and symmetric. Using this fact, one can make a Hubbard-Stratonovich transformation as
\[\overline{\exp\Big{[}\sum_{it}ik_{i}(t)\tilde{\eta}_{i}(t)\Big{]}}=\exp\Big{[} -\frac{1}{2N}\sum_{ijtt^{\prime}}C_{ij}^{r}(t,t^{\prime})\tilde{\eta}_{i}(t) \tilde{\eta}_{j}(t^{\prime})\Big{]} \tag{5}\]
where we have introduced a Gaussian distributed (real-valued) field \(k_{i}\) whose correlations are given by \(\overline{k_{i}(t)k_{j}(t^{\prime})}=\frac{1}{N}C_{ij}^{r}(t,t^{\prime})\); the overline indicates an average with respect to the Gaussian distribution. \(k_{i}\) can be then viewed as a longitudinal, though stochastic, field acting on spin \(i\). Remarkably, the spin-spin coupling via \(C_{ij}\) can be traded for a stochastic sampling while crucially maintaining a unitary evolution for each realization of the stochastic field.
The causal response function however poses a significant challenge, and is indeed the root of the dynamical sign problem in Monte-Carlo type simulations [47; 48]. Causality implies that \(\chi_{ij}(t,t^{\prime})\) is not symmetric or positive (as a matrix). A special case arises for a single spin coupled to an Ohmic bath where \(\chi(t,t^{\prime})\), approximated as a delta function, can be integrated into the dynamics generator [71; 72; 73; 45]; however, this approach is further limited to weak coupling and a large bath; see also related work [44; 75]. More generally, one can take a similar approach to Eq. (5) by applying another Hubbard-Stratonovich transformation to the term involving \(\chi\). With suitable correlations between the two stochastic fields, the spins are decoupled at the expense of stochastic sampling. But, this comes at the cost of complex-valued fields and non-unitary dynamics. The stochastic evolution then leads to unphysical states that are, among other things, not hermitian. Different variations of this approach have been applied, with some success, to models ranging from the prototypical spin-boson model (single spin coupled to an infinite bath) [76; 77; 78; 79], to quantum spin chains [80; 81; 82; 83; 84]. Note that the latter emerge in a limit of Eq. (1) just as the spin-motion coupling gives rise to Ising interactions in trapped ions [33]. Such stochastic approaches however suffer from unstable solutions or slow convergence although different sampling improves their behavior [77; 78]. At a fundamental level, it is not entirely satisfactory that the response function \(\chi_{ij}(t,t^{\prime})\) is treated in a similar fashion as \(C_{ij}(t,t^{\prime})\) via a Hubbard-Stratonovich transformation, hiding the fact that \(\chi_{ij}\) is inherently a causal object. Furthermore, this approach as well as alternative stochastic methods typically assume that the initial state is factorized and bosons (or, the _bath_) are initially thermal [46].
## III Main results
In this paper, we take a fundamentally different approach to decoupling the spins (both in time \(t\) and the spin index \(i\)) that is manifestly causal. A first hint is that the response function contains information about dissipation. Indeed, Markovian dissipation plays an important role in our approach. Departing from the Feynman-Vernon influence functional, we develop a hybrid quantum-classical approach to the spin-boson model where bosonic operators are treated using functional integral methods while spins are evolved quantum mechanically. Furthermore, our approach allows us to consider a general initial state that is not necessarily factorized or thermal. Lastly, we present a uniform approach to models that are typically treated with different techniques.
We first start with a model where \(N\) spins are coupled to a single bosonic mode (\(M=1\)); we denote the bosonic parameters by \(\omega,\kappa\), and the coupling to each spin by \(g_{i}\). We show that each spin can be evolved independently while the coupling to the bosonic mode can be mimicked by a classical stochastic field that linearly depends on white noise. To this end, we first introduce for each spin a complex-valued white noise \(\xi_{i}(t)\) satisfying
\[\overline{\xi_{i}(t)\bar{\xi}_{j}(t^{\prime})}=\kappa\delta_{ij}\delta(t-t^{ \prime}) \tag{6}\]
We further define the associated field \(\underline{\psi}\) via
\[(i\partial_{t}-\omega+i\kappa)\underline{\psi}=\frac{1}{\sqrt{N}}\sum_{i}\xi_ {i}(t) \tag{7}\]
together with the initial value \(\underline{\psi}(t=0)\) that is sampled from the Wigner distribution function describing the initial (not necessarily thermal) state of the bosonic mode. For simplicity, we assume that the spins and bosons are initially uncorrelated, but it is straightforward to relax this assumption. We now state our main result (for \(M=1\)): for a given realization, the evolution of each spin is decoupled from the rest, and is governed by
\[\frac{d}{dt}\varrho_{i}=\mathcal{L}_{i}(\varrho_{i})-i\left[h_{i}(t)\sigma_{i} ^{x}\varrho_{i}-\bar{h}_{i}(t)\varrho_{i}\sigma_{i}^{x}\right] \tag{8}\]
where \(\varrho_{i}\) describes the (un-averaged) density matrix for spin \(i\), and the longitudinal field \(h_{i}\) is defined from white noise \(\xi_{i}\) and the associated classical stochastic field \(\underline{\psi}\) as
\[h_{i}=\frac{g_{i}}{2}(\underline{\psi}+\underline{\bar{\psi}})+i\frac{g_{i}}{ 4\kappa}(\xi_{i}+\bar{\xi}_{i}) \tag{9}\]
Finally, summing over many realizations, one obtains the physical density matrix of the spins as
\[\rho_{S}(t)=\overline{\otimes_{i}\varrho_{i}(t)} \tag{10}\]
where the overline indicates an average over both noise and initial conditions. This constitutes a hybrid quantum-classical approach where the solution of a classical stochastic equation feeds into the quantum stochastic evolution of the spin. Remarkably, both classical
and quantum stochastic equations are Markovian. This should be contrasted with a Hubbard-Stratonovich transformation similar to Eq. (5) that would introduce colored noise. More importantly, causality is manifest through the dependence of the classical stochastic field on noise [\(\delta\underline{\psi}(t)/\delta\xi_{i}(t^{\prime})\) gives the response function]. As a result, the density matrix remains hermitian even for a single noise realization. Again, these features are to be contrasted with a naive Hubbard-Stratonovich transformation of the term involving the response function. Still, the imaginary part of \(h_{i}\) leads to an unphysical evolution, and thus \(\varrho_{i}\) is not guaranteed to be trace-1 or positive for single trajectories. We also remark that Eq. (8) involves multiplicative noise and is given in the Ito sense. Notably, the stochastic solutions to Eq. (8) always exist, unlike stochastic methods based on the positive \(P\) representation [85].
A generalization to \(M\) bosonic modes is rather straightforward. A first approach is to generalize our main results above by introducing \(M\) classical fields that satisfy linear stochastic equations with \(M\times N\) noise variables. Each spin should be then evolved under a stochastic longitudinal field that depends linearly on these fields and noise variables. This approach becomes more expensive for large (or infinite) \(M\). Instead, we can use a trick that reduces \(M\times N\) white noise variables to just \(N\) (again denoted by \(\xi_{i}\)), but we must introduce colored noise (dubbed \(x_{i}\)) that captures the remaining redundancy. In essence, this is similar to the Hubbard-Stratonovich transformation in Eq. (5), while preserving causality explicitly. We further assume that the initial state is factorized and bosonic modes are initially in their vacuum state, but we later show that these assumptions can be easily relaxed. More precisely, we evolve each spin under a longitudinal field \(h_{i}\) given by
\[\begin{split} h_{i}(t)=&\frac{1}{2N}\sum_{j=1}^{N} \frac{1}{\gamma_{j}}\int_{0}^{\infty}dt^{\prime}\chi_{ij}(t,t^{\prime})\xi_{j} (t^{\prime})+c.c.\\ &+\frac{1}{2}(x_{i}(t)+\bar{x}_{i}(t))+\frac{i}{2}\big{(}\xi_{i}( t)+\bar{\xi}_{i}(t)\big{)}\end{split} \tag{11}\]
with the complex-valued noise variables \(\xi_{i}(t)\) and \(x_{i}(t)\): \(\xi_{i}\) is white noise with the correlations
\[\overline{\xi_{i}(t)\bar{\xi}_{j}(t^{\prime})}=\gamma_{i}\delta_{ij}\delta(t -t^{\prime}),\quad\gamma_{i}=\sum_{\alpha}\frac{g_{\alpha i}^{2}}{4\kappa_{ \alpha}} \tag{12}\]
and \(x_{i}\) is Gaussian-distributed colored noise whose correlations are given by
\[\overline{x_{i}(t)\bar{x}_{j}(t^{\prime})}=\frac{1}{N}\check{C}_{ij}(t,t^{ \prime}) \tag{13}\]
where the _modified_ correlations \(\check{C}_{ij}(t,t^{\prime})\) is given by
\[\begin{split}\check{C}_{ij}(t,t^{\prime})&=C_{ij}(t,t^{\prime})\\ &-\frac{1}{N}\sum_{l=1}^{N}\frac{1}{\gamma_{l}}\int_{0}^{\infty} \!\!dt^{\prime\prime}\chi_{il}(t,t^{\prime\prime})\bar{\chi}_{jl}(t^{\prime}, t^{\prime\prime})\end{split} \tag{14}\]
The kernels \(C\) and \(\chi\) are defined according to Eq. (4). One can show that \(\check{C}_{ij}(t,t^{\prime})\) is positive as a matrix (in the basis \(it,jt^{\prime}\)). Again, the longitudinal field finds an imaginary component proportional to white noise, hence a non-unitary evolution; however, the density matrix remains hermitian for each trajectory. Finally, summing over many realizations, one obtains the physical density matrix. While each noise realization gives rise to decoupled, and possibly unphysical, states, the noise average should yield the physical state of the spins and produce their nontrivial correlations. For a schematic representation of the model and our stochastic approach, see Fig. 1.
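As an illustrative aside (ours, not from the paper), colored noise with the covariance of Eq. (13) can be sampled on a discrete time grid by a Cholesky factorization of the positive matrix \(\check{C}\); the flattened index below runs over spin and time-step labels, and the small diagonal jitter is a numerical-stability assumption.

```python
import numpy as np

def sample_colored_noise(C_check, rng):
    # C_check: positive-semidefinite covariance matrix, flattened over
    # (spin index, time step); returns x with <x x^dagger> = C_check.
    dim = len(C_check)
    L = np.linalg.cholesky(C_check + 1e-12 * np.eye(dim))  # stability jitter
    z = (rng.normal(size=dim) + 1j * rng.normal(size=dim)) / np.sqrt(2)
    return L @ z
```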
Figure 1: (**Left**) The schematic representation of the spin-boson model where each spin (an open circle) interacts with one or many bosonic modes (characterized by infinite towers of states). The bosonic modes are subject to Markovian loss (wavy arrow). The main object of interest is \(\rho_{S}(t)\) representing the time-dependent reduced density matrix of the spins. (**Right**) In a hybrid quantum-classical stochastic approach, the solution to a classical stochastic equation mimicking the bosonic modes feeds into the quantum stochastic evolution of the spins. Specifically, the coupling to bosons is captured by a local longitudinal field \(h_{i}(t)\) that is linearly dependent on both white and colored noise (collectively represented by solid circles), and under which each spin evolves independently. The field \(h_{i}(t)\) is however complex valued: the real part of \(h_{i}\) can be viewed as a physical, though time-dependent, field comprising a term that is causally dependent on white noise via the response function \(\chi_{ij}(t,t^{\prime})\) plus colored noise whose correlations are given by the _modified_ correlation function \(\check{C}_{ij}(t,t^{\prime})\); on the other hand, the imaginary part of \(h_{i}\) is directly proportional to white noise, and leads to an unphysical evolution for a given noise realization. Still, our causal approach ensures the density matrix remains hermitian for each realization. The average over many realizations gives the physical state of the spins and produces its nontrivial correlations. See the text for details.
## IV Spin coupled to cavity mode \(N=M=1\)
In this section, we consider the Rabi model as the simplest nontrivial case of Eqs. (1) and (2) with \(N=M=1\); to simplify notation, we take \(\omega_{\alpha}\rightarrow\omega\), \(\kappa_{\alpha}\rightarrow\kappa\), and \(g_{\alpha i}\to g\):
\[H_{1,1}=\frac{\Delta}{2}\sigma^{z}+\omega a^{\dagger}a+\frac{g}{2}(a+a^{ \dagger})\sigma^{x} \tag{15}\]
We assume that the system is initially in a factorized state, \(\rho(t=0)=\rho_{S}(0)\otimes\rho_{B}(0)\); we shall relax this assumption later. Conveniently, we can first "vectorize" the density matrix \(\rho\rightarrow|\rho\rangle\!\rangle\) such that the Liouvillian \(\mathcal{L}\) becomes a (non-Hermitian) matrix \(\mathbb{L}\) acting on the vectorized state. More explicitly, we map \(O|i\rangle\langle j|O^{\prime}\to O|i\rangle\otimes{O^{\prime}}^{ \mathcal{T}}|j\rangle\) where the element \(|i\rangle\langle j|\) of the density matrix is mapped to the vector \(|i\rangle\otimes|j\rangle=|i\rangle|j\rangle\equiv|i\rangle\!\rangle\). In the absence of the spin-boson coupling (\(g=0\)) and starting from the initial state \(|\rho_{S}(0)\rangle\!\rangle\), the spin simply evolves under a Schrodinger-like equation as
\[\frac{d}{dt}|\rho_{S}\rangle\!\rangle=\mathbb{L}_{S}|\rho_{S}\rangle\!\rangle\]
where the matrix \(\mathbb{L}_{S}\) denotes the spin-only dynamics generator corresponding to \(\mathcal{L}_{S}\). For example, the spin-only Hamiltonian \(H_{S}\) corresponds to \(\mathbb{L}_{S}=-i(H_{S}\otimes I_{S}-I_{S}\otimes H_{S})\) with \(I_{S}\) the identity matrix for the spin; spin spontaneous emission can be included in \(\mathbb{L}_{S}\) as well. For a time-independent \(\mathbb{L}_{S}\), the state at time \(t\) is given by \(|\rho_{S}(t)\rangle\!\rangle=\exp(\mathbb{L}_{S}t)|\rho_{S}(0)\rangle\!\rangle\).
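As a concrete sketch of the vectorization just described (our illustration, with row-major `reshape` playing the role of the \(|i\rangle\otimes|j\rangle\) ordering, and scipy's matrix exponential standing in for \(\exp(\mathbb{L}_{S}t)\)):

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)
Delta = 0.4

# O rho O' -> (O kron O'^T)|rho>>, so -i[H_S, rho] becomes:
H_S = 0.5 * Delta * sz
L_S = -1j * (np.kron(H_S, I2) - np.kron(I2, H_S.T))

rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # spin along +x
rho_vec = rho0.reshape(-1)            # row-major vectorization |rho>>
rho_t = expm(L_S * 1.0) @ rho_vec     # free spin evolution up to t = 1
```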
More generally, the spin-boson interaction entangles the spin and the bosonic mode, and the evolved state can no longer be written in a factorized fashion. To tackle this problem, we adopt a hybrid quantum-classical approach, as explained below. The bosonic part of the dynamics, absent the coupling, is simply that of a (damped) quantum harmonic oscillator. The coupling too is linear in the bosonic variable, although a strong coupling renders the dynamics highly nonlinear. However, from a formal perspective, it is straightforward to capture the bosonic part via a functional integral formalism where bosonic operators are mapped to classical phase-space variables and a sum is performed over different classical configurations weighted by a classical action ('classical' refers to a representation of quantum operators as \(c\) numbers). In principle, this approach can be extended to the spin, using various representations of spin in terms of coherent states [86] or Majorana fermions [87]; however, we shall instead treat the spin quantum mechanically. We keep track of the spin dynamics via the reduced density matrix \(\rho_{S}(t)=\mathrm{Tr}_{B}(\rho(t))\); we stress that the density matrix is not factorizable. Utilizing a combination of path-integral techniques for the bosonic mode [88] together with the quantum-to-classical mapping for the spin [89; 60], we find
\[\begin{split}&|\rho_{S}(t)\rangle\!\rangle=\int\mathscr{D}[\psi, \phi]e^{i\mathscr{S}_{B}}\mathscr{W}_{0}(\psi_{0})\times\\ &\times\mathrm{T}_{t}\,e^{\int_{0}^{t}dt^{\prime}\,(\mathbb{L}_{S }+\mathbb{L}_{\mathrm{int}}(t^{\prime}))}|\rho_{S}(0)\rangle\!\rangle\end{split} \tag{16}\]
We shall leave the details of the derivation to Appendix A, and just explain the different terms in this expression: the fields \(\psi,\phi\) are the phase-space variables used to map the bosonic operators to \(c\) numbers, and the corresponding action \(\mathscr{S}_{B}\) is given by
\[\mathscr{S}_{B}=\int_{0}^{t}dt^{\prime}\left[2\bar{\phi}\big{(}i\dot{\psi}- \omega\psi+i\kappa\psi\big{)}+c.c.+4i\kappa\bar{\phi}\phi\right] \tag{17}\]
The function \(\mathscr{W}_{0}(\psi_{0})\) denotes the Wigner function corresponding to the initial state of the cavity mode, \(\rho_{B}(0)\). The first line of Eq. (16) then involves a (functional) integration over both \(\psi,\phi\) weighted by the exponential of the action \(\mathscr{S}_{B}\) as well as the Wigner function \(\mathscr{W}_{0}(\psi_{0})\) corresponding to the initial state. Finally, the second line of Eq. (16) is the time-ordered product (enforced by the time ordering operator \(\mathrm{T}_{t}\)) of the evolution operator that involves, besides the spin-only part, \(\mathbb{L}_{S}\), an interaction-induced matrix \(\mathbb{L}_{\mathrm{int}}(t)\) due to the coupling to the bosonic mode. The latter matrix takes the form
\[\mathbb{L}_{\mathrm{int}}(t)=ig\left(\psi+\bar{\psi}\right)\mathbb{S}+ig\left( \phi+\bar{\phi}\right)\mathbb{T} \tag{18}\]
where the time dependence of the fields \(\psi(t),\phi(t)\) are implicit, and the matrices \(\mathbb{S},\mathbb{T}\) are defined as
\[\begin{split}\mathbb{S}&=-\frac{1}{2}\left(\sigma^{x} \otimes I_{S}-I_{S}\otimes\sigma^{x}\right)\\ \mathbb{T}&=-\frac{1}{2}\left(\sigma^{x}\otimes I_{S} +I_{S}\otimes\sigma^{x}\right)\end{split} \tag{19}\]
One can see that the term proportional to \(\mathbb{S}\) in Eq. (18) can be interpreted as a _classical_, though time-dependent, longitudinal magnetic field, while the term proportional to \(\mathbb{T}\) does not admit such interpretation.
It is particularly convenient to work in a basis that diagonalizes the spin-boson interaction. To this end, we first define \(|\sigma\rangle=|\pm\rangle\) as eigenstates of the operator \(\sigma^{x}\), that is, \(\sigma^{x}|\sigma\rangle=\sigma|\sigma\rangle\). A matrix (such as \(\mathbb{L}_{S}\)) can be then represented in a basis spanned by \(|\sigma\sigma^{\prime}\rangle\!\rangle\in\{|{+}{+}\rangle\!\rangle,|{+}{-}\rangle\!\rangle,|{-}{+}\rangle\!\rangle,|{-}{-}\rangle\!\rangle\}\). In this basis, the matrices \(\mathbb{S},\mathbb{T}\) become diagonal and take a simple form,
\[\mathbb{S}=\,\mathrm{diag}\{0,-1,1,0\}\,,\quad\mathbb{T}=\,\mathrm{diag}\{-1,0,0,1\} \tag{20}\]
and the interaction matrix becomes
\[\mathbb{L}_{\mathrm{int}}(t)=2ig\begin{pmatrix}-(\phi+\bar{\phi})&0&0&0\\ 0&-(\psi+\bar{\psi})&0&0\\ 0&0&\psi+\bar{\psi}&0\\ 0&0&0&\phi+\bar{\phi}\end{pmatrix} \tag{21}\]
In short, Eq. (16) defines a hybrid approach where the spin is still explicitly quantum mechanical while the
bosonic operators are traded in for the phase-space variables \(\psi(t),\phi(t)\). The resulting functional integral is however rather formal and is of little practical use because of the sign problem, that is, it involves complex-valued weights and is not amenable to sampling via a Monte-Carlo type of approach.
Here, we take a different approach utilizing Markovian dissipation (\(\kappa\neq 0\)). As a first step, we use a standard trick that converts stochastic Langevin equations to a path integral and vice versa [90]. To this end, we write the last term in the action in Eq. (17), sometimes referred to as the "quantum noise", in terms of a noise field \(\xi(t)\) using a Hubbard-Stratonovich transformation:
\[e^{-4\kappa\int_{t}\bar{\phi}\phi}=\int\mathscr{D}[\xi]e^{-\int_{t}\bar{\xi}\xi/\kappa-2i\int_{t}(\bar{\xi}\phi+\xi\bar{\phi})} \tag{22}\]
As a warm up, let us first consider \(g=0\), so that there is no spin-boson interaction (specifically, \(\mathbb{L}_{\text{int}}=0\)). This is a trivial exercise (the first line of Eq. (16) just yields 1), but it sets the stage for later. With the above transformation and in the absence of the spin-boson coupling, the field \(\phi\) only appears linearly in the action, and its (path) integral yields a delta function which enforces a stochastic equation of motion,
\[i\dot{\underline{\psi}}-\omega\underline{\psi}+i\kappa\underline{\psi}=\xi(t) \tag{23}\]
where \(\xi(t)\) can be viewed as white noise with the correlations
\[\overline{\xi(t)\bar{\xi}(t^{\prime})}=\kappa\delta(t-t^{\prime}) \tag{24}\]
and \(\overline{\xi(t)\xi(t^{\prime})}=0\). The above equations should be supplemented with the initial condition \(\underline{\psi}(t=0)=\psi_{0}\), which is drawn from the Wigner distribution function \(\mathscr{W}_{0}(\psi_{0})\). The underline emphasizes that \(\underline{\psi}\) is not a free field, and is completely fixed by \(\xi(t)\) and \(\psi_{0}\).
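For concreteness, a minimal Euler-Maruyama sketch (ours; parameter values illustrative) integrating Eq. (23) with the white-noise correlations of Eq. (24) and a vacuum-Wigner initial condition:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, kappa, dt, steps = 1.0, 1.0, 0.01, 1000

# Vacuum Wigner function W_0 ~ exp(-2|psi|^2): quadratures of std 1/2.
psi = 0.5 * (rng.normal() + 1j * rng.normal())
for _ in range(steps):
    # complex Wiener increment with <|d_xi|^2> = kappa * dt
    d_xi = np.sqrt(kappa * dt / 2) * (rng.normal() + 1j * rng.normal())
    psi += (-1j * omega - kappa) * psi * dt - 1j * d_xi
```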
Turning on the spin-boson interaction (\(g\neq 0\)), the field \(\phi\) also appears in the time-ordered product in the second line of Eq. (16) through \(\mathbb{L}_{\text{int}}\) which is explicitly defined in Eq. (18). Therefore, one cannot immediately integrate over \(\phi\) in the same fashion as described above. The trick is to instead write \(\phi\) in the time-ordered product as a (functional) derivative with respect to \(\xi\),
\[\text{T}_{t}e^{ig\int_{t}(\phi+\bar{\phi})\mathbb{T}+\cdots}\longrightarrow \text{T}_{t}e^{-\frac{g}{2}\int_{t}\mathbb{T}(\bar{\delta}/\bar{\xi}+\delta/ \delta\xi)+\cdots}\]
acting on \(\exp(-2i\int_{t}\bar{\xi}\phi+\bar{\xi}\bar{\phi})\) introduced in the Hubbard-Stratonovich transformation in Eq. (22); the dots represent the remaining terms in \(\mathbb{L}+\mathbb{L}_{\text{int}}\) which are dropped for ease of notation. Notice that the above operation simply induces a shift in the noise variable \(\xi\to\xi-\frac{g}{2}\mathbb{T}\) where the first term is now understood to be proportional to the identity matrix \(\mathbb{I}=\text{diag}\{1,1,1,1\}\) in the vectorized space. An integration by parts allows us to put the partial derivatives on the noise Gaussian distribution (functional), \(\exp(-\int_{t}\bar{\xi}\xi/\kappa)\), which can be then explicitly evaluated by applying the inverse shift. The net effect of this procedure is to replace the last term in Eq. (18) as
\[ig(\phi+\bar{\phi})\mathbb{T}\longrightarrow-\frac{g}{2\kappa}(\xi+\bar{\xi}) \mathbb{T}-\frac{g^{2}}{4\kappa}\mathbb{T}^{2} \tag{25}\]
With this transformation, the field \(\phi\) now appears only in the action \(\mathscr{S}_{B}\), and the path integral over this field can be explicitly done, which in turn constrains \(\psi\) via Eq. (23). These steps can be rigorously justified by discretizing time in the functional integral and Trotterizing the evolution operator that appears in the time-ordered product. We leave the details to Appendix B, and just report the final result:
\[\begin{split}|\rho_{S}(t)\rangle\!\rangle=&\int\mathscr{D}[\xi]e^{-\int_{t}\bar{\xi}\xi/\kappa}\int d^{2}\psi_{0}\,\mathscr{W}_{0}(\psi_{0})\times\\ &\times\text{T}_{t}\,e^{\int_{0}^{t}dt^{\prime}\,\mathbb{K}(t^{\prime})}|\rho_{S}(0)\rangle\!\rangle\end{split} \tag{26}\]
where the matrix \(\mathbb{K}(t)\) is given by
\[\mathbb{K}(t)=\mathbb{L}_{S}+ig(\underline{\psi}+\underline{\bar{\psi}}) \mathbb{S}-\frac{g}{2\kappa}(\xi+\bar{\xi})\mathbb{T}-\frac{g^{2}}{4\kappa} \mathbb{T}^{2} \tag{27}\]
and depends on time implicitly through the noise \(\xi\) and the associated field \(\underline{\psi}\). Notice that the functional integral over \(\psi,\phi\) is now replaced by an integral over noise and the initial Wigner distribution function in Eq. (26). Most importantly, the weight of the functional integral is now positive (at least when the Wigner function is positive).
Next, we define \(\ket{\rho_{S}(t)}_{\xi}\) as the time-ordered product in the second line of Eq. (26):
\[\ket{\rho_{S}(t)}_{\xi}\equiv\text{T}_{t}\,e^{\int_{0}^{t}dt^{\prime}\, \mathbb{K}(t^{\prime})}|\rho_{S}(0)\rangle\!\rangle \tag{28}\]
for a given noise realization \(\xi(t)\) and the initial value \(\psi_{0}\); the dependence of \(\ket{\rho_{S}(t)}_{\xi}\) on \(\psi_{0}\) is made implicit for ease of notation. The full density matrix is obtained by averaging over different realizations, which is again free of the sign problem. It follows from Eq. (28) that the dynamics can be written in terms of a generator, i.e., via an equation that is local in time:
\[\frac{d}{dt}\ket{\rho_{S}}_{\xi}=\mathbb{K}^{\text{I}}(t)\ket{\rho_{S}}_{\xi} \tag{29}\]
where \(\mathbb{K}^{\text{I}}(t)=\mathbb{K}(t)+\frac{g^{2}}{4\kappa}\mathbb{T}^{2}\). Equation (29) is a stochastic equation with multiplicative white noise, and is given in the sense of Ito. In fact, the difference between \(\mathbb{K}\) and \(\mathbb{K}^{\text{I}}\) follows from the Ito rule; a careful derivation is provided in Appendix B. Notice that the extra term in the definition of \(\mathbb{K}^{\text{I}}\) just cancels out against the last term in Eq. (27), hence the resulting simple equation \(\mathbb{K}^{\text{I}}=\mathbb{L}_{S}+ig(\underline{\psi}+\underline{\bar{\psi} })\mathbb{S}-\frac{g}{2\kappa}(\xi+\bar{\xi})\mathbb{T}\). Interestingly, this equation can be identified simply by substituting \(\phi\to-i\xi/\kappa\) rather than Eq. (25). Adopting the basis defined by Eq. (20), the dynamics generator is explicitly given
\[\mathbb{K}^{\mathrm{I}}(t)=\mathbb{L}_{S}+\begin{pmatrix}\frac{g}{2\kappa}(\xi+ \bar{\xi})&0&0&0\\ 0&-ig(\underline{\psi}+\underline{\bar{\psi}})&0&0\\ 0&0&ig(\underline{\psi}+\underline{\bar{\psi}})&0\\ 0&0&0&-\frac{g}{2\kappa}(\xi+\bar{\xi})\end{pmatrix} \tag{30}\]
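In code, the diagonal part of Eq. (30) is straightforward to assemble; in this sketch of ours, `L_S_x` is assumed to be the \(4\times 4\) array for \(\mathbb{L}_{S}\) written in the \(\sigma^{x}\) basis of Eq. (20).

```python
import numpy as np

def K_ito(L_S_x, g, kappa, psi, xi):
    # Diagonal of Eq. (30) in the basis {|++>>, |+->>, |-+>>, |-->>}.
    re_psi = psi + np.conj(psi)
    re_xi = xi + np.conj(xi)
    diag = np.array([g * re_xi / (2 * kappa),
                     -1j * g * re_psi,
                     1j * g * re_psi,
                     -g * re_xi / (2 * kappa)])
    return L_S_x + np.diag(diag)
```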
Calculating the state \(\ket{\rho_{S}(t)}_{\xi}\) for a given realization from Eqs. (29) and (30), we can then find the full time-dependent state by averaging over all realizations,
\[\ket{\rho_{S}(t)}=\overline{\ket{\rho_{S}(t)}_{\xi}} \tag{31}\]
where, in a slight abuse of notation, the overline denotes the average over both the noise as well as the initial conditions,
\[\overline{\cdots}=\int\mathscr{D}[\xi]e^{-\int_{t}\bar{\xi}\xi/\kappa}\int d^ {2}\psi_{0}\mathscr{W}_{0}(\psi_{0})\cdots \tag{32}\]
We remark in passing that the integration over the Wigner function representing the initial state bears resemblance to the truncated Wigner approximation [91]; however, the stochastic approach presented here is _exact_.
The above equations are among the main results of this paper. These equations feature several important, and immediately useful, properties. Here, we shall summarize and further highlight these points, and furthermore list other important features of these equations.
\(\bullet\)**Non-factorized initial state.** Our treatment can be readily generalized to a non-factorized initial state \(\rho(0)\) via the substitution
\[\mathscr{W}_{0}(\psi_{0})\rho_{S}(0)\rightarrow\widetilde{\rho_{S}}(\psi_{0} )\equiv\mathrm{Tr}_{B}[\delta_{\mathrm{W}}(\psi_{0}-a)\rho(0)]\]
where the Weyl-ordered delta function is defined as \(\delta_{\mathrm{W}}(\psi-a)=\int\frac{d^{2}\phi}{\pi^{2}}\exp[\bar{\phi}(\psi- a)-\phi(\bar{\psi}-a^{\dagger})]\). For a given \(\psi_{0}\), we should then evolve the spin starting from the initial state given by \(\widetilde{\rho}_{S}(\psi_{0})\). Equation (32) is then replaced by an average over \(\psi_{0}\) that is effectively sampled by \(\widetilde{\rho_{S}}(\psi_{0})\). For a factorized initial state, we recover the Wigner function \(\mathscr{W}_{0}(\psi_{0})\equiv\mathrm{Tr}_{B}[\delta_{\mathrm{W}}(\psi_{0} -a)\rho_{B}(0)]\)[88; 91].
\(\bullet\)**Feynman-Vernon influence functional.** For an initial state where the boson is in its vacuum state, the Wigner function is a Gaussian function, \(\mathscr{W}_{0}(\psi_{0})=\frac{2}{\pi}\exp(-2|\psi_{0}|^{2})\). In this case, one can show that Eq. (26) directly leads to the influence functional in Eq. (3); see Appendix C.1. The perspective afforded by the Feynman-Vernon influence functional is particularly useful in our treatment of \(N\) spins coupled to boson(s); see Section V.
\(\bullet\)**Sign-problem free at expense of unphysical trajectories.** The original path-integral formulation in Eq. (16), or the Feynman-Vernon formalism, are exact, but they suffer from the dynamical sign problem. On the other hand, the stochastic formulation in Eq. (26) is sign problem free. (A negative value, if any, of the Wigner function corresponding to the initial state is only a mild exception.) However, this comes at the expense of unphysical states for each trajectory where the density matrix is not trace-1 or positive. Still, the causal structure of our approach keeps the density matrix hermitian even for a single trajectory; see Section IV.1 for details.
\(\bullet\)**Ito vs Stratonovich.** The stochastic differential equation in Eq. (29) follows the Ito convention [92] and involves multiplicative noise, \(d\rho_{m}=A_{m}(\rho)dt+b_{m}(\rho)dW\) where \(dW\) is the Wiener increment, and \(A_{m},b_{m}\) are linear functions of the (vectorized) density matrix components \(\rho_{m}\) with \(m=1,2,3,4\). Specifically, \(b_{m}(\rho)=-(g/\sqrt{2\kappa})T_{m}\rho_{m}\) with \(T_{m}\) the diagonal element of \(\mathbb{T}\) in the basis defined in Eq. (20). Interestingly, one can see that the dynamics in the Stratonovich sense [92] is governed by the matrix \(\mathbb{K}\) that appears in the functional integral.
\(\bullet\)**Markovian in form; non-Markovian in essence.** In our approach, we only keep track of the spin dynamics, while we trade in the cavity mode for a classical stochastic field. This approach is nonperturbative, but remarkably the dynamics remains purely local in time. Put differently, an exact elimination of the bosonic operator is possible (well beyond the domain of adiabatic elimination) while maintaining locality in time.
\(\bullet\)**Existence of stochastic solutions.** Stochastic differential equations could often lead to unstable solutions (for example, in methods based on the positive \(P\)-representation [85]). Our approach does not suffer from this since a certain _growth_ condition (see Ch. 6 of Ref. [85]) is satisfied, which thus guarantees that stochastic solutions exist at all times. An immediate question though is how many trajectories are required for convergence. We study this question in several examples in this work; see for example Section IV.2. In general, the convergence improves for larger dissipation and/or smaller coupling. A systematic study of convergence with the number of trajectories is left to future work.
\(\bullet\)**Dissipation as computational resource.** Here, we have used Markovian dissipation to trade in the coupling to the bosonic mode for a stochastic sampling that respects causality and ensures hermiticy for each trajectory. In principle, Markovian dissipation could be taken infinitesimal to simulate unitary dynamics; however, the convergence worsens with decreasing dissipation. This
is a satisfactory feature as one expects dissipation, responsible for the quantum-classical crossover, renders the dynamics more amenable to a numerical simulation. In contrast, MPS-based methods, for example, become more complex when dealing with dissipation [93; 94; 95; 62]. Our approach thus provides a concrete framework where Markovian dissipation can be used as a computational resource; see also [96].
### Stochastic master equation
We can gain further insight by writing the dynamics explicitly for the density matrix, effectively undoing vectorization. From Eqs. (26) and (27), we find that the dynamics in the Ito convention takes the form
\[\frac{d}{dt}\rho_{\xi}\equiv\mathcal{K}^{\mathrm{I}}(\rho_{\xi}) =\mathcal{L}_{S}(\rho_{\xi})-i\left[h(t)\sigma^{x}\rho_{\xi}-\bar{h}(t)\rho_{ \xi}\sigma^{x}\right]\] \[\quad\mathrm{with}\quad h(t)=\frac{g}{2}(\underline{\psi}+ \underline{\bar{\psi}})+i\frac{g}{4\kappa}(\xi+\bar{\xi}) \tag{33}\]
for a given noise realization \(\xi(t)\) and an initial condition \(\underline{\psi}(t=0)=\psi_{0}\); for brevity, we have not explicitly shown the dependence of \(\rho_{\xi}\) on \(\psi_{0}\) and have dropped the subscript \(S\). Note that the generator of the full dynamics, \(\mathcal{K}^{\mathrm{I}}\), implicitly depends on time through its dependence on \(h(t)\). The field \(h\) can be viewed as a complex-valued longitudinal field: the real part of \(h\) mimics a physical (though stochastic and time-dependent) longitudinal field given by \((g/2)(\underline{\psi}+\underline{\bar{\psi}})\). On the other hand, the imaginary part of \(h\) is proportional to the noise, and the corresponding term in the master equation is not of a Hamiltonian form. Still, we can interpret the corresponding dynamics as that of a non-hermitian Hamiltonian, \(H_{h}\rho_{\xi}-\rho_{\xi}H_{h}^{\dagger}\), with \(H_{h}\equiv h(t)\sigma^{x}\). But the evolved state is no longer trace \(1\) or positive. The lack of positivity becomes more manifest when writing the master equation in the Stratonovich sense1 by adding the term \(-\frac{g^{2}}{8\kappa}(\sigma^{x}\rho_{\xi}\sigma^{x}+\rho_{\xi})\) to the rhs of the first line in Eq. (33); this term looks like dephasing but with the wrong sign for the _jump_ term. It is this sign difference that can lead to negative eigenvalues of the density matrix \(\rho_{\xi}\) for a fixed noise realization.
Footnote 1: Even with \(\mathcal{L}_{S}=0\),
\[\rho_{\xi}(t)\neq U(t)\rho_{0}U^{\dagger}(t)\]
which would otherwise imply positivity; here, \(U(t)=\mathrm{T}e^{-i\int_{0}^{t}dt^{\prime}H_{h}(t^{\prime})}\) with \(\mathrm{T}\) the time-ordering operator (and \(U\) is not unitary since \(h\) is complex valued). This is because, according to the Ito rule, the evolution due to the non-hermitian Hamiltonian cannot be broken into the evolution of ket and bra states independently. The proper exponentiated evolution superoperator is then \(\mathcal{K}\), corresponding to the matrix \(\mathbb{K}\), which is incidentally the dynamics generator in the Stratonovich sense.
While the density matrix for each realization is not physical, it remains hermitian even for a single realization since
\[(\mathcal{K}^{\mathrm{I}}(\rho_{\xi}))^{\dagger}=\mathcal{K}^{ \mathrm{I}}(\rho_{\xi}^{\dagger}) \tag{34}\]
This feature is a direct consequence of our causal treatment, and is particularly convenient for numerical computations as it puts a strong constraint on the form of the density matrix \(\rho_{\xi}\). Finally, we note that the average over many realizations must yield a physical density matrix that is trace \(1\) and positive. In fact, averaging the master equation in Eq. (33) over noise and using \(\overline{\xi(t)\rho_{\xi}(t)}=0\) from the Ito convention shows that the trace is conserved, \(d\overline{\mathrm{Tr}(\rho_{\xi})}/dt=0\), on average (even before averaging over initial conditions). The above properties can be explicitly verified numerically, as we discuss in the next section.
### Numerical results
Here, we provide numerical results for different examples of the model considered in this section. Specifically, we plot \(\langle\sigma^{x}(t)\rangle\) as a function of time starting from an initial state where the spin is fully polarized along the positive \(x\) direction, and the boson is in its vacuum state. As a representative example, we take \(\omega=\kappa=1,\Delta=0.4\) and consider two characteristic values for the spin-boson coupling, \(g=0.6,1.2\); all parameters are comparable in order to avoid any fine tuning. To simulate the stochastic dynamics, we have adopted the Euler method [92], which provides a simple scheme for solving stochastic equations; more accurate techniques would improve the convergence properties. Finally, we choose the time step \(dt=0.01\).
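To make the scheme concrete, below is a minimal sketch of this procedure in Python/NumPy (our own illustration, not the code used for the figures): it integrates Eq. (24) for \(\underline{\psi}\) together with the Ito master equation (33) for \(\rho_{\xi}\), assuming a lossless spin, \(\mathcal{L}_{S}(\rho)=-i\frac{\Delta}{2}[\sigma^{z},\rho]\), and sampling \(\psi_{0}\) from the vacuum Wigner function. All function and variable names are ours.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def trajectory(omega=1.0, kappa=1.0, Delta=0.4, g=0.6, T=40.0, dt=0.01, rng=None):
    """One Euler (Ito) trajectory of Eq. (33); returns Tr(sigma^x rho_xi) vs time."""
    rng = rng or np.random.default_rng()
    n = int(T / dt)
    # vacuum Wigner function (2/pi) exp(-2|psi_0|^2): Re, Im ~ N(0, 1/4)
    psi = 0.5 * (rng.standard_normal() + 1j * rng.standard_normal())
    rho = 0.5 * (np.eye(2) + sx)                # spin polarized along +x
    out = np.empty(n)
    for k in range(n):
        out[k] = np.trace(sx @ rho).real
        # complex white noise: <xi(t) conj(xi(t'))> = kappa delta(t - t')
        xi = np.sqrt(kappa / (2 * dt)) * (rng.standard_normal()
                                          + 1j * rng.standard_normal())
        h = g * psi.real + 1j * (g / (2 * kappa)) * xi.real   # h(t) of Eq. (33)
        drho = -0.5j * Delta * (sz @ rho - rho @ sz) \
               - 1j * (h * sx @ rho - np.conj(h) * rho @ sx)
        rho = rho + dt * drho                   # Ito update: rho_k uses xi_k
        psi = psi + dt * (-(1j * omega + kappa) * psi - 1j * xi)  # Eq. (24)
    return out

# the physical <sigma^x(t)> is the linear average over (unphysical) trajectories
avg_sx = sum(trajectory() for _ in range(10_000)) / 10_000
```

Note that individual trajectories are not trace-preserving; only the trajectory-averaged trace is constrained to \(1\), which is the diagnostic plotted in Fig. 2.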
In Fig. 2(a), we plot the dynamics up to \(t=40\) for \(g=0.6\), and observe underdamped dynamics. Even with a moderate number of trajectories (\(10^{4}\)), the dynamics is captured to relatively long times. For a larger number of trajectories (\(10^{7}\)), our results perfectly match the exact numerical computation. Additionally, we plot the averaged trace, and observe that this quantity approaches \(1\) as the number of trajectories increases.
In Fig. 2(b), we plot the dynamics up to \(t=20\) for \(g=1.2\), and instead observe overdamped dynamics. A stronger coupling requires a larger number of trajectories for convergence in a given time interval. While a relatively small number of trajectories (\(10^{5}\)) captures the main features of the dynamics, full convergence up to \(t=20\) requires a larger number (the thick curves are obtained for \(10^{9}\) trajectories). Again, the trace acts as a proxy for convergence and approaches \(1\) as the number of trajectories increases.
## V \(N\) spins coupled to a boson (\(M=1\))
In this section, we consider several (or many) spins coupled to a single bosonic mode. The Hamiltonian is
now given by
\[H_{N,1}=\omega a^{\dagger}a+\frac{\Delta}{2}\sum_{i=1}^{N}\sigma_{i}^{z}+\frac{1} {\sqrt{N}}(a+a^{\dagger})\sum_{i}\frac{g_{i}}{2}\sigma_{i}^{x} \tag{35}\]
Again, the bosonic mode is subject to loss characterized by the Lindblad operator \(L=\sqrt{\kappa}a\); the spins could be lossy as well. A generalization of the main result of the previous section to many spins is straightforward. Let us denote by \(|\rho^{(N)}\rangle\!\rangle\) the (vectorized) reduced density matrix of the \(N\) spins upon tracing out the bosonic mode. The dynamics can then be written as a quantum stochastic evolution with each trajectory evolving as
\[\frac{d}{dt}|\rho^{(N)}\rangle\!\rangle_{\xi}=\mathbb{K}^{\rm I}(t)|\rho^{(N)} \rangle\!\rangle_{\xi} \tag{36}\]
where the dynamics generator \(\mathbb{K}^{\rm I}\), now acting on \(N\) spins, is given by
\[\mathbb{K}^{\rm I}(t)=\sum_{i=1}^{N}\left[\mathbb{L}_{i}+i\frac{g_{i}}{\sqrt{N }}(\underline{\psi}+\underline{\bar{\psi}})\mathbb{S}_{i}-\frac{g_{i}}{2 \sqrt{N}\kappa}(\xi+\bar{\xi})\mathbb{T}_{i}\right] \tag{37}\]
Here, \(\mathbb{L}_{i}\) describes the single-body dynamics of the spin \(i\), which more precisely should be understood as a tensor product with the identity matrix for the other spins, \(\mathbb{I}\otimes\cdots\otimes\mathbb{L}_{i}\otimes\cdots\otimes\mathbb{I}\). Finally, the white noise \(\xi\) and the associated field \(\underline{\psi}\) are defined exactly as before via Eqs. (23) and (24).
In principle, for a given realization of the noise and a given initial state, Eq. (37) can be used directly to evolve the vectorized density matrix of \(N\) spins, i.e., a vector of size \(4^{N}\). The average over many realizations then gives the physical density matrix. In practice, however, even for moderate values of \(N\), and certainly in a many-body system, the size of the state becomes prohibitively large for any numerical simulation. On the other hand, we notice that \(\mathbb{K}^{\rm I}\) in Eq. (37) is decoupled among different spins. Naively, each spin could then be evolved individually (at least, if the initial state is factorized) for a given noise realization. However, this argument is flawed! As a first observation, note that \(\mathbb{K}\) appearing in the time-ordered product (also, the generator of the dynamics in the Stratonovich sense) is given by \(\mathbb{K}=\mathbb{K}^{\rm I}-\frac{1}{4N\kappa}(\sum_{i}g_{i}\mathbb{T}_{i})^{2}\), which directly couples the spins. The resolution lies in the nontrivial form of the Ito chain rule. As a simple example, consider \(N=2\) spins and let us assume (for the sake of contradiction) that the dynamics is decoupled:
\[d\rho_{im}=A_{im}dt-\frac{g}{2\sqrt{\kappa}}T_{m}\rho_{im}dW\]
where \(i=1,2\) denote the spins and \(m=1,2,3,4\) refer to the component of the corresponding vectorized density matrix. The resulting equation for the total density matrix, \(\rho_{mn}^{(2)}=\rho_{1m}\rho_{2n}\), is then obtained by applying the Ito chain rule,
\[d\rho_{mn}^{(2)}=\cdots+\frac{\partial^{2}\big{(}\rho_{mn}^{(2)}\big{)}}{ \partial\rho_{1m}\partial\rho_{2n}}d\rho_{1m}d\rho_{2n}=\cdots+\frac{g^{2}}{4 \kappa}T_{m}T_{n}\rho_{mn}^{(2)}dt\]
where the dots refer to single-body terms. Notice that the new term on the rhs couples the two spins, while there is no such coupling in Eq. (37). More generally, for \(N\) spins, one finds a coupling between all pairs of spins. (Interestingly, the Stratonovich dynamics involves a term of the same form but with an opposite sign.) Therefore, regardless of the stochastic rules we adopt, the spins do not seem to be decoupled.2 Paradoxically, the absence of
Figure 2: Stochastic evolution of \(\langle\sigma^{x}(t)\rangle\) starting from an initial state where the spin points along the positive \(x\) direction and the boson is in its vacuum state; here, \(\omega=\kappa=1,\Delta=0.4\) while \(g=0.6,1.2\) in the left/right panels, respectively. The decay of the overall amplitude is purely due to the coupling to the bosonic mode. (a) Underdamped dynamics at \(g=0.6\) for \(\#10^{4},10^{7}\) trajectories. The stochastic dynamics with \(10^{7}\) trajectories is in excellent agreement with an exact numerical computation up to \(t=40\). (b) Overdamped dynamics at \(g=1.2\) for \(\#10^{5},10^{9}\) trajectories. Full agreement with an exact numerical computation up to \(t=20\) is achieved for \(10^{9}\) different trajectories. In both cases, the averaged trace (see the blue curves) approaches \(1\) as the number of trajectories increases.
any explicit coupling in Eq. (37), which evolves the full (tensor-product) state, implies that the spins are effectively coupled. Nonetheless, we can bring the dynamics into a form where individual spins are evolved independently by introducing a different noise variable for each spin. We explain this procedure in the following subsection.
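As an aside, the hidden coupling generated by the Ito chain rule can be verified in a toy scalar version of the above argument. The snippet below (our illustration, with arbitrary toy numbers unrelated to the model parameters) evolves two formally decoupled multiplicative-noise variables driven by one shared Wiener process: each mean separately stays at its initial value, while the mean of the product grows as \(e^{b^{2}t}\), which is exactly the cross term \(dx\,dy=b^{2}xy\,dt\).

```python
import numpy as np

rng = np.random.default_rng(1)
b, dt, nsteps, ntraj = 0.3, 1e-3, 2000, 100_000  # toy stand-ins, not model parameters
x = np.ones(ntraj)
y = np.ones(ntraj)
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal(ntraj)  # shared Wiener increment
    x, y = x + b * x * dW, y + b * y * dW          # Ito updates, formally decoupled
t = nsteps * dt
print(np.mean(x), np.mean(y))            # ~1: each factor alone is a martingale
print(np.mean(x * y), np.exp(b**2 * t))  # these agree: the product is coupled
```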
### Decoupling spins
In this subsection, we provide a recipe for decoupling the spins. For convenience, we assume that the initial state of the spins is a product state (in the vectorized space), i.e., \(|\rho^{(N)}(0)\rangle\!\rangle=\otimes_{i}|\rho_{i}(0)\rangle\!\rangle\); this assumption can be relaxed simply by writing the initial state of the spins as a superposition (again, in the vectorized sense) of product states. The proof follows from a variation of the Feynman-Vernon influence functional, which we leave to Appendix C.2. The main trick is to introduce uncorrelated fictitious noise variables \(\xi_{i}\) with \(i=1,\cdots,N\), i.e.,
\[\langle\xi_{i}(t)\bar{\xi}_{j}(t^{\prime})\rangle=\kappa\delta(t-t^{\prime}) \delta_{ij} \tag{38}\]
and define the field \(\underline{\psi}(t)\) as
\[(i\partial_{t}-\omega+i\kappa)\underline{\psi}(t)=\frac{1}{\sqrt{N}}\sum_{j} \xi_{j}(t) \tag{39}\]
For a given noise realization \(\boldsymbol{\xi}=\{\xi_{i}\}\) and a given initial state, we can write the state in a factorized form at all times,
\[|\rho^{(N)}(t)\rangle\!\rangle_{\boldsymbol{\xi}}=\otimes_{i=1}^{N}|\rho_{i}(t)\rangle\!\rangle_{\boldsymbol{\xi}} \tag{40}\]
where each spin is evolved as
\[\frac{d}{dt}|\rho_{i}\rangle\!\rangle_{\boldsymbol{\xi}}=\mathbb{K}_{i}^{ \mathrm{I}}(t)|\rho_{i}\rangle\!\rangle_{\boldsymbol{\xi}} \tag{41}\]
and the generator of the dynamics is given by
\[\mathbb{K}_{i}^{\mathrm{I}}(t)=\mathbb{L}_{i}+i\frac{g_{i}}{\sqrt{N}}( \underline{\psi}+\underline{\bar{\psi}})\mathbb{S}_{i}-\frac{g_{i}}{2\kappa} (\xi_{i}+\bar{\xi}_{i})\mathbb{T}_{i} \tag{42}\]
Notice that while the same field \(\underline{\psi}(t)\) is coupled to all the spins, each spin \(i\) is subject to its own noise \(\xi_{i}(t)\); of course, the dynamics of a given spin still depends on all the noise variables through its dependence on \(\underline{\psi}\). We thus find that the evolution of each spin is given in the same fashion as that of a single spin in Eq. (30), only with the modifications \(\underline{\psi}\to\underline{\psi}/\sqrt{N}\), \(\xi\to\xi_{i}\), and \(g\to g_{i}\) for spin \(i\), together with the noise correlations and the stochastic equation of motion in Eqs. (38) and (39). Finally, the physical state is given by the average over noise and the initial state
\[|\rho^{(N)}\rangle\!\rangle=\overline{|\rho^{(N)}\rangle\!\rangle_{\boldsymbol {\xi}}} \tag{43}\]
where, again in a slight abuse of notation, the overline indicates the average over all the noise variables \(\boldsymbol{\xi}\) as well as the initial conditions,
\[\overline{\cdots}=\int\mathscr{D}[\boldsymbol{\xi}]e^{-\sum_{i}\int_{t}\bar{ \xi}_{i}\xi_{i}/\kappa}\int d^{2}\psi_{0}\mathscr{W}_{0}(\psi_{0})\cdots \tag{44}\]
As they evolve, the spins form nontrivial correlations or become entangled. In our approach, the sum over different realizations effectively mimics the quantum (and statistical) correlations between the spins. In principle, this leads to an exponential reduction from a state of size \(4^{N}\) to \(N\) vectors of size \(4\). Of course, the number of trajectories required for convergence could limit the applicability of this method. In practice, we do not need to keep track of the full state in Eq. (43) if we are only interested in expectation values of local operators or in two- and higher \(n\)-point correlation functions. For example, the expectation value of an operator \(O_{i}\) acting on spin \(i\) is simply given by
\[\langle O_{i}(t)\rangle=\overline{\langle\!\langle O_{i}^{T}|\rho_{i}(t)\rangle\!\rangle_{\boldsymbol{\xi}}\prod_{l\neq i}\langle\!\langle I_{l}|\rho_{l}(t)\rangle\!\rangle_{\boldsymbol{\xi}}} \tag{45}\]
and the correlation function between two spins is given by
\[\langle O_{i}(t)O_{j}(t)\rangle=\overline{\langle\!\langle O_{i}^{T}|\rho_{i}(t)\rangle\!\rangle_{\boldsymbol{\xi}}\,\langle\!\langle O_{j}^{T}|\rho_{j}(t)\rangle\!\rangle_{\boldsymbol{\xi}}\prod_{l\neq i,j}\langle\!\langle I_{l}|\rho_{l}(t)\rangle\!\rangle_{\boldsymbol{\xi}}} \tag{46}\]
and similarly for higher \(n\)-point correlation functions. Likewise, we can find a simple expression for the reduced density matrix of a subset of spins. For example, the reduced density matrix of spin \(i\), \(\rho_{i}=\mathrm{Tr}_{l\neq i}(\rho^{(N)})\), is given by
\[|\rho_{i}(t)\rangle\!\rangle=\overline{|\rho_{i}(t)\rangle\!\rangle_{\boldsymbol{\xi}}\prod_{l\neq i}\langle\!\langle I_{l}|\rho_{l}(t)\rangle\!\rangle_{\boldsymbol{\xi}}} \tag{47}\]
Note that \(|\rho_{i}\rangle\!\rangle\neq\overline{|\rho_{i}\rangle\!\rangle_{\boldsymbol{ \xi}}}\), that is, the average over different realizations still involves all the spins. One can find similar expressions for any subset of spins.
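For illustration, a minimal sketch of this decoupled evolution (our own names and conventions; lossless spins, uniform couplings \(g_{i}=g\), and the density-matrix form of the single-spin generator used in Section IV with the substitutions \(\underline{\psi}\to\underline{\psi}/\sqrt{N}\), \(\xi\to\xi_{i}\)) reads:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dicke_trajectory(N=3, g=0.3, omega=1.0, kappa=1.0, Delta=0.4,
                     T=20.0, dt=0.01, rng=None):
    """One trajectory of Eqs. (38)-(42): N independent 2x2 density matrices
    driven by the shared field psi and per-spin noises xi_i.
    Returns the Eq. (46) estimator of <sx_1 sx_2>(t)."""
    rng = rng or np.random.default_rng()
    n = int(T / dt)
    psi = 0.5 * (rng.standard_normal() + 1j * rng.standard_normal())  # vacuum Wigner
    rhos = [np.diag([1.0 + 0j, 0.0]) for _ in range(N)]  # polarized along +z
    out = np.empty(n)
    for k in range(n):
        traces = [np.trace(r) for r in rhos]
        out[k] = (np.trace(sx @ rhos[0]) * np.trace(sx @ rhos[1])
                  * np.prod(traces[2:])).real             # Eq. (46) with O = sx
        xi = np.sqrt(kappa / (2 * dt)) * (rng.standard_normal(N)
                                          + 1j * rng.standard_normal(N))  # Eq. (38)
        for i in range(N):
            h = (g / np.sqrt(N)) * psi.real + 1j * (g / (2 * kappa)) * xi[i].real
            drho = -0.5j * Delta * (sz @ rhos[i] - rhos[i] @ sz) \
                   - 1j * (h * sx @ rhos[i] - np.conj(h) * rhos[i] @ sx)
            rhos[i] = rhos[i] + dt * drho                 # Ito step, Eq. (41)
        psi = psi + dt * (-(1j * omega + kappa) * psi
                          - 1j * xi.sum() / np.sqrt(N))   # Eq. (39)
    return out
```

A single trajectory thus stores \(N\) matrices of size \(2\times 2\) rather than a vector of size \(4^{N}\); the correlations between spins emerge only after averaging over trajectories (and over \(\psi_{0}\)).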
Similar considerations about the quantum stochastic evolution of a single spin also apply to our treatment here: the stochastic quantum evolution is sign-problem free but at the expense of sampling over unphysical trajectories, yet the density matrix remains hermitian for each trajectory; additionally, stochastic solutions exist at all times. In practice, the efficiency of our stochastic approach depends on the number of trajectories required for convergence. In the next subsection, we present numerical simulations showcasing the utility of our approach when an exact numerical computation is unavailable.
### Numerical results
In this section, we consider the dynamics of the Dicke model where all spins are coupled to a single "cavity mode" with the same coupling \(g_{i}=g\); a uniform coupling is not necessary in our approach but allows a comparison against the exact numerical simulation.
We first consider \(N=3\) spins and take the parameters \(\omega=\kappa=1,\Delta=0.4,g=0.3\). In Fig. 3, we plot the two-point correlation function \(\langle\sigma_{1}^{x}\sigma_{2}^{x}\rangle\) as a function of time up to \(t=20\) starting from an initial state where the spins are
along the positive \(z\) direction (no correlations at \(t=0\)) and the cavity mode is in its vacuum state. The exact dynamics is well captured by the stochastic average over \(10^{8}\) trajectories; the slight deviation at long times is likely due to the time step \(dt=0.01\). As time evolves, non-trivial correlations are formed between the spins (while \(\langle\sigma_{1}^{x}\rangle=\langle\sigma_{2}^{x}\rangle=0\)), yet the decoupled stochastic dynamics evolves the spins in a factorized form.
As a second example, we consider the dynamics of a system of size \(N=30\) starting from an initial state where the spins are fully polarized along the positive \(x\) direction and the cavity mode is in its vacuum state. We take the parameters \(\omega=\kappa=\Delta=1,g=0.4\) and choose a time step of \(dt=0.005\). In Fig. 4, we can simulate the dynamics up to \(t\lesssim 10\) by averaging over a moderate number of trajectories (\(10^{8}\)) and find an excellent agreement with the exact result; the latter is obtained by taking advantage of the permutation symmetry and working in the Dicke manifold [97]. Without the permutation symmetry, a system size of \(N=30\) is well outside the domain of exact diagonalization, especially taking into account the coupling to the bosonic mode and the open quantum system dynamics. Our method thus provides a worthwhile alternative when there is no such symmetry. Convergence with the number of trajectories worsens as the total time or the coupling to the bosonic mode increases. Higher-order techniques for solving stochastic equations and further numerical optimizations should improve the convergence.
## VI Spin coupled to (\(\infty\)-)many bosons
In this section, we consider a single spin coupled to several, many, or possibly infinitely many bosonic modes. We quote the Hamiltonian for completeness:
\[H_{1,M}=\frac{\Delta}{2}\sigma^{z}+\sum_{\alpha=1}^{M}\omega_{\alpha}a_{ \alpha}^{\dagger}a_{\alpha}+\sigma^{x}\sum_{\alpha}\frac{g_{\alpha}}{2}(a_{ \alpha}+a_{\alpha}^{\dagger}) \tag{48}\]
Again, we assume that each bosonic mode is subject to Markovian loss characterized by the Lindblad operator \(L_{\alpha}=\sqrt{\kappa_{\alpha}}a_{\alpha}\). Analogously to Section IV, we trade in the bosonic operator for stochastic fields, only now we must include \(M\) such variables to represent all the bosonic modes. The full stochastic evolution is then given by
\[\frac{d}{dt}|\rho_{S}\rangle\!\rangle_{\{\xi_{\alpha}\}}=\mathbb{K}^{\rm I}(t)|\rho_{S}\rangle\!\rangle_{\{\xi_{\alpha}\}} \tag{49}\]
where
\[\mathbb{K}^{\rm I}(t)=\mathbb{L}_{S}+\mathbb{S}\sum_{\alpha}ig_{\alpha}( \underline{\psi}_{\alpha}+\underline{\bar{\psi}}_{\alpha})-\mathbb{T}\sum_{ \alpha}\frac{g_{\alpha}}{2\kappa_{\alpha}}(\xi_{\alpha}+\bar{\xi}_{\alpha}) \tag{50}\]
Again, the first term on the rhs of the above equation denotes the single-spin terms. The noise variables \(\xi_{\alpha}\) and the associated field \(\underline{\psi}_{\alpha}\) are exactly determined in the same fashion as Eqs. (23) and (24) with the substitution \(\xi,\underline{\psi}\to\xi_{\alpha},\underline{\psi}_{\alpha}\) and \(\omega,\kappa\to\omega_{\alpha},\kappa_{\alpha}\). To be more concrete, we have
\[\underline{\psi}_{\alpha}(t)=iG_{\alpha}(t)\underline{\psi}_{\alpha}(0)+\int_{0}^{t}dt^{\prime}G_{\alpha}(t-t^{\prime})\xi_{\alpha}(t^{\prime}) \tag{51}\]
where \(G_{\alpha}(t)\) is the free (causal) Green's function corresponding to the bosonic mode \(\alpha\),
\[G_{\alpha}(t)=\frac{1}{i\partial_{t}-\omega_{\alpha}+i\kappa_{\alpha}}=-i \Theta(t)e^{-i\omega_{\alpha}t-\kappa_{\alpha}t} \tag{52}\]
The quantum stochastic evolution can then be solved for a given realization; a single trajectory now comprises all
Figure 3: Stochastic evolution of the correlator \(\langle\sigma_{1}^{x}\sigma_{2}^{x}\rangle\) for the Dicke model with \(N=3\) spins starting from an initial state where the spins are fully polarized along the \(z\) direction and the cavity mode is in its vacuum state; here, \(\omega=\kappa=1,\Delta=0.4\) and \(g=0.3\). The stochastic average of \(10^{8}\) trajectories is well in agreement with an exact numerical computation up to \(t=20\). The slight deviation at longer times is likely due to the moderate time step \(dt=0.01\).
Figure 4: Stochastic evolution of \(\langle\sigma_{1}^{z}(t)\rangle\) for the Dicke model with \(N=30\) spins starting from an initial state where spins are fully polarized along the \(x\) direction and the cavity mode is in its vacuum state; here, \(\omega=\kappa=\Delta=1\) and \(g=0.4\). The stochastic average of \(10^{8}\) trajectories is in good agreement with an exact numerical computation using the permutation symmetry of the Dicke model. The convergence at longer times can be improved by averaging over more trajectories.
the noise variables \(\xi_{\alpha}\) and the initial values for the corresponding field \(\underline{\psi}_{\alpha}\). Finally, the physical density matrix is obtained by averaging over many trajectories.
While the above strategy is in principle feasible, it would be rather demanding if there are many, or even a continuum of, bosonic modes, an example of which is the paradigmatic spin-boson model where a two-level system is coupled to an infinite bath [1; 2]. A more efficient route is desired in this case. To this end, we first assume that the bosonic modes are initially in their vacuum state; later in this section, we extend our results to a general initial state. Now, taking advantage of the linear (stochastic) equation for the classical fields as well as the initial Gaussian state, one can combine all the noise variables \(\xi_{\alpha}\) into a single variable (similarly for the associated fields \(\underline{\psi}_{\alpha}\)):
\[\underline{\Psi}(t)\equiv\sum_{\alpha}g_{\alpha}\underline{\psi}_{\alpha}, \quad\Xi(t)\equiv\sum_{\alpha}\frac{g_{\alpha}}{2\kappa_{\alpha}}\xi_{\alpha} \tag{53}\]
The collective noise variable \(\Xi(t)\), being a sum of white noise terms, is itself white noise with the correlations
\[\overline{\Xi(t)\bar{\Xi}(t^{\prime})}=\gamma\delta(t-t^{\prime}),\quad\gamma=\sum_{\alpha}\frac{g_{\alpha}^{2}}{4\kappa_{\alpha}} \tag{54}\]
where we have defined the dissipation rate \(\gamma\). Some algebra shows that, for \(t>t^{\prime}\),
\[\begin{split}\chi(t,t^{\prime})&\equiv\overline{\underline{\Psi}(t)\bar{\Xi}(t^{\prime})}=\sum_{\alpha}\frac{g_{\alpha}^{2}}{2}G_{\alpha}(t-t^{\prime})\\ C(t,t^{\prime})&\equiv\overline{\underline{\Psi}(t)\underline{\bar{\Psi}}(t^{\prime})}=i\sum_{\alpha}\frac{g_{\alpha}^{2}}{2}G_{\alpha}(t-t^{\prime})\end{split} \tag{55}\]
while, for \(t<t^{\prime}\), we have \(\chi(t,t^{\prime})=0\) and \(C(t,t^{\prime})=\bar{C}(t^{\prime},t)\). The functions \(\chi\) and \(C\) define the (causal) response and correlation functions, respectively. Notice that they are both translation invariant as they only depend on \(t-t^{\prime}\). While this is always the case for the response function in a linear system, the correlation function becomes translation invariant as we have assumed that the bosonic modes are initially in their vacuum state. While \(\underline{\psi}_{\alpha}\) is fully specified by \(\xi_{\alpha}\) together with its initial value, the collective field \(\underline{\Psi}\) is not completely determined by \(\Xi\) even for fixed initial conditions: this is because there are many noise realizations \(\{\xi_{\alpha}(t)\}\) for a given \(\Xi(t)\). This redundancy, and consequently the uncertainty in \(\underline{\Psi}\), can be encoded into another Gaussian distributed noise variable. More precisely, we can capture the correlators in Eq. (55) by writing
\[\underline{\Psi}(t)=\frac{1}{\gamma}\int_{0}^{\infty}dt^{\prime}\chi(t,t^{ \prime})\Xi(t^{\prime})+X(t) \tag{56}\]
where we have introduced the noise variable \(X(t)\) with the correlations
\[\overline{X(t)\bar{X}(t^{\prime})}=\check{C}(t,t^{\prime}) \tag{57}\]
while all the other (self- or cross-) noise correlations are vanishing, \(\overline{X(t)X(t^{\prime})}=\overline{X(t)\Xi(t^{\prime})}=\overline{X(t)\bar{\Xi}(t^{\prime})}=0\), and the function \(\check{C}(t,t^{\prime})\) is defined as
\[\check{C}(t,t^{\prime})=C(t,t^{\prime})-\frac{1}{\gamma}\int_{0}^{\infty}dt^ {\prime\prime}\chi(t-t^{\prime\prime})\bar{\chi}(t^{\prime}-t^{\prime\prime}) \tag{58}\]
One can easily verify that Eq. (55) follows from the definition in Eq. (56) and the noise correlations in Eqs. (54) and (57). Moreover, one can show that \(\check{C}(t,t^{\prime})\) considered as a matrix (in the basis \(t,t^{\prime}\)) is positive, in harmony with an interpretation of \(X\) as Gaussian distributed colored noise (contrasted with the white noise \(\Xi\)); see Appendix D. If the system consists of one mode only, the function \(\check{C}\) only captures the initial conditions which alternatively can be treated as before by sampling over the initial conditions. In general, \(\check{C}(t,t^{\prime})\) takes a nontrivial form and is not translation invariant (i.e., not just a function of \(t-t^{\prime}\)). Utilizing the hermiticity of the \(\check{C}(t,t^{\prime})\) matrix (in the \(t,t^{\prime}\) basis), we can diagonalize it as
\[\check{C}(t,t^{\prime})=\sum_{a}\check{c}_{a}\bar{\theta}_{a}(t)\theta_{a}(t^{ \prime}) \tag{59}\]
where the functions \(\theta_{a}(t)\) define a complete basis and \(\check{c}_{a}\geq 0\) denote the diagonal elements. We can then write \(X(t)=\sum_{a}\sqrt{\check{c}_{a}}X_{a}\theta_{a}(t)/\sqrt{2}\) for the complex variables \(X_{a}\) whose real and imaginary parts are drawn from a normal distribution. The average over the field \(X(t)\) can be conveniently replaced by sampling \(\mathbf{X}=\{X_{a}\}\).
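In discretized time this prescription reduces to an eigendecomposition of the matrix \(\check{C}(t_{k},t_{l})\). A minimal sketch (our notation, assuming \(\check{C}\) has been tabulated on a time grid) is:

```python
import numpy as np

def sample_colored_noise(C_check, n_samples, rng=None):
    """Draw realizations of X(t) on a time grid from the hermitian, positive
    matrix C_check[k, l] ~ C_check(t_k, t_l), following Eq. (59): diagonalize,
    then weight complex standard normals by the square root of the eigenvalues."""
    rng = rng or np.random.default_rng()
    c, V = np.linalg.eigh(C_check)     # C_check = V @ diag(c) @ V^dagger
    c = np.clip(c, 0.0, None)          # discard tiny negative roundoff eigenvalues
    z = (rng.standard_normal((len(c), n_samples))
         + 1j * rng.standard_normal((len(c), n_samples))) / np.sqrt(2)
    return V @ (np.sqrt(c)[:, None] * z)   # each column is one realization X(t_k)
```

By construction, the sample average of \(X(t_{k})\bar{X}(t_{l})\) converges to \(\check{C}(t_{k},t_{l})\).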
We can now write the full dynamics as
\[\frac{d}{dt}|\rho_{S}\rangle\!\rangle_{\Xi,\mathbf{X}}=\mathbb{K}^{\text{I}}(t)|\rho_{S}\rangle\!\rangle_{\Xi,\mathbf{X}} \tag{60}\]
where the generator now takes the simple form
\[\mathbb{K}^{\text{I}}(t)=\mathbb{L}_{S}+i(\underline{\Psi}+\bar{\underline{ \Psi}})\mathbb{S}-(\Xi+\bar{\Xi})\mathbb{T} \tag{61}\]
and the function \(\underline{\Psi}\) is given by Eq. (56) together with \(X(t)=\sum_{a}\sqrt{\check{c}_{a}}X_{a}\theta_{a}(t)/\sqrt{2}\). Again, we emphasize that the initial conditions are captured directly via the function \(X\) and do not require additional averaging. Finally, to obtain the physical density matrix, we should sum over different realizations.
Before closing this section, we consider an arbitrary initial state of bosons described by the joint Wigner function of all the bosonic variables, \(\mathscr{W}_{0}(\{\psi_{\alpha 0}\})\), where \(\psi_{\alpha 0}\) denotes the initial value of the field \(\underline{\psi}_{\alpha}\). To account for the initial state, we should substitute Eqs. (56) and (58) by
\[\underline{\Psi}(t)=\frac{1}{\gamma}\int_{0}^{\infty}dt^{\prime} \chi(t,t^{\prime})\Xi(t^{\prime})+X(t)+i\sum_{\alpha}g_{\alpha}G_{\alpha}(t) \psi_{\alpha 0}\] \[\overline{X(t)\bar{X}(t^{\prime})}=\check{C}(t,t^{\prime})-\sum_ {\alpha}\frac{g_{\alpha}^{2}}{2}G_{\alpha}(t)\bar{G}_{\alpha}(t^{\prime}) \tag{62}\]
In the last equation, we have removed the contribution of the initial state to the modified correlations \(\check{C}(t,t^{\prime})\)
and instead captured it directly in the definition of \(\underline{\Psi}(t)\), which now depends explicitly on the initial values \(\{\psi_{\alpha 0}\}\) which in turn are sampled according to the Wigner distribution function. One can also show that the rhs of the second line of Eq. (62) is positive as a matrix; see Appendix D. For a Gaussian initial state (e.g., when bosons are initially in their vacuum state), one can absorb the Gaussian fluctuations due to the initial state in colored noise correlations as before; however, the above equations allow us to consider a general initial state. In the end, the sampling over the original \(M\) white noise variables \(\{\xi_{\alpha}(t)\}\) is reduced to sampling a single white noise \(\Xi(t)\) and the colored noise \(X(t)\), plus sampling \(M\) initial values \(\{\psi_{\alpha 0}\}\).
### Spectral function vs Markovian dissipation
For a continuum of bosonic modes, it is convenient to introduce the spectral function of the bath. Let us first assume no Markovian dissipation, i.e., \(\kappa_{\alpha}=0\). The bath can be characterized by the spectral function defined as \(J(\omega)=\pi\sum_{\alpha}g_{\alpha}^{2}\delta(\omega-\omega_{\alpha})\). In the continuum limit, the sum over modes \(\alpha\) becomes an integral. The behavior of the spectral function, especially at low frequencies, determines the nature of the quantum bath [1; 2]. We shall consider an Ohmic bath later in this section.
While Markovian dissipation is not typically considered in the discussion of spin-boson models [1; 2], quantum simulations of these models often come with Markovian dissipation [20; 98; 99; 100], which is the focus of this work. We further emphasize that such models are inherently driven-dissipative as they involve both a coherent drive and incoherent loss. Now turning on Markovian dissipation, we can still define the spectral function \(J(\omega)\) as above, but we must incorporate the Markovian dissipation in the correlation and response functions. Let us first denote by \(\kappa(\omega)\) the dissipation rate corresponding to a mode with the natural frequency \(\omega\) (assuming that there is at most a single mode corresponding to a given frequency). Next, we can write the constant \(\gamma\) as well as the functions \(\chi(t-t^{\prime})\equiv\chi(t,t^{\prime})\) and \(C(t-t^{\prime})\equiv C(t,t^{\prime})\) in terms of the spectral function \(J(\omega)\) and the function \(\kappa(\omega)\) via the substitution \(\sum_{\alpha}\to\int\frac{d\omega}{\pi}\), \(\kappa_{\alpha}\to\kappa(\omega)\) and \(g_{\alpha}^{2}\to J(\omega)\):
\[\gamma =\frac{1}{4}\int\frac{d\omega}{\pi}\frac{J(\omega)}{\kappa(\omega)} \tag{63}\] \[\chi(t) =-\frac{i}{2}\Theta(t)\int\frac{d\omega}{\pi}J(\omega)e^{-i \omega t-\kappa(\omega)t}\]
We also define \(C(t)=\bar{C}(-t)=i\chi(t)\) for \(t>0\). The spectral function \(J(\omega)\) together with \(\kappa(\omega)\) uniquely define the spin-boson model that is further subject to Markovian loss.
### Numerical results: Lossy Ohmic bath
In this section, we consider the paradigmatic spin-boson model where a spin is coupled to an Ohmic bath whose spectral function [1; 2] is given by
\[J(\omega)=2\pi\alpha\omega e^{-\omega/\omega_{c}}, \tag{64}\]
Here, the parameter \(\alpha\) defines the strength of the spin-boson coupling, and \(\omega_{c}\) defines a soft cutoff for the bath. To fully characterize the bath, we must specify the Markovian dissipation as well. To this end, we define a one-parameter family of baths characterized by the function \(\kappa(\omega)=r\omega\), where the dissipation rate for a given mode is proportional to its frequency with the constant of proportionality \(r>0\). A forthcoming paper will be dedicated to studying this model and its rich behavior [101]; here, we just take it as a testbed for our stochastic method. The bath introduced here is identified by
\[\gamma=\frac{\alpha\omega_{c}}{2r} \tag{65}\]
and the response function
\[\chi(t)=\frac{-i\alpha\Theta(t)}{\left[\omega_{c}^{-1}+(i+r)t\right]^{2}} \tag{66}\]
Note specifically that this function smoothly interpolates to the standard Ohmic bath as \(r\to 0\).
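For this bath, the quantities entering the colored-noise sampling of Section VI are available in closed form. The sketch below (our notation; a simple Riemann-sum discretization of the \(t^{\prime\prime}\) integral in Eq. (58)) tabulates the matrix \(\check{C}(t,t^{\prime})\) from \(\gamma\) and \(\chi(t)\) in Eqs. (65) and (66), using \(C(t,t^{\prime})=i\chi(t-t^{\prime})\) for \(t>t^{\prime}\):

```python
import numpy as np

def C_check_ohmic(alpha, r, omega_c, tgrid):
    """Matrix C_check(t, t') of Eq. (58) for the lossy Ohmic bath on a uniform
    time grid; chi from Eq. (66), gamma from Eq. (65)."""
    dt = tgrid[1] - tgrid[0]
    gamma = alpha * omega_c / (2 * r)
    def chi(t):  # causal response function, Eq. (66)
        return np.where(t >= 0, -1j * alpha / (1 / omega_c + (1j + r) * t) ** 2, 0)
    T1, T2 = np.meshgrid(tgrid, tgrid, indexing="ij")
    chi_mat = chi(T1 - T2)
    C = np.where(T1 >= T2, 1j * chi_mat, np.conj((1j * chi_mat).T))
    return C - (dt / gamma) * chi_mat @ np.conj(chi_mat.T)
```

The result can then be passed to a colored-noise sampler like the one sketched in Section VI, with \(\Xi(t)\) drawn independently as white noise of strength \(\gamma\).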
Here, we choose \(r=1/2\) while considering the coupling strengths \(\alpha=0.45,0.5,0.55\) and \(5\times 10^{7},10^{8},2\times 10^{8}\) trajectories, respectively. We take the parameters \(\Delta=0.4,\omega_{c}=1\) and choose the time step \(dt=0.01\). We further consider an initial state where the spin is fully polarized along the \(x\) direction and all the bosonic modes are initially in their vacuum state. The numerical results
Figure 5: Stochastic evolution of \(\langle\sigma^{x}(t)\rangle\) for a spin coupled to a lossy Ohmic bath starting from an initial state where the spin is fully polarized along the \(x\) direction; here, \(r=1/2\) characterizes the ratio of loss to frequency of the bosons in the bath, \(\omega_{c}=1\) is the bath cutoff, \(\Delta=0.4\), and \(\alpha=0.45,0.5,0.55\) is the spin-bath coupling strength (see the text for details). The numerical results are in agreement with the TEBD simulation.
using the stochastic equation are shown in Fig. 5 and are in very good agreement with a time-evolving block decimation (TEBD) simulation [102]. For the TEBD simulation, we use the semi-infinite mapping of the spin-boson model [103] combined with a vectorization scheme of the density matrix [93]. Within the time window considered in the stochastic simulation, the dynamics for \(\alpha<0.5\) appears to be qualitatively different from that for \(\alpha>0.5\). Indeed, the standard spin-boson model (with no Markovian dissipation) is known to exhibit a transition from underdamped to overdamped dynamics exactly at \(\alpha=0.5\) when \(\Delta\ll\omega_{c}\) [104, 42, 45, 47]. Whether or not the lossy model considered here exhibits a similar transition is beyond the scope of this work, as it requires a larger number of trajectories. However, our stochastic approach, despite its simplicity, allows for an exact simulation at intermediate coupling strengths and time scales. In contrast, a numerical simulation of the spin-boson model at moderate coupling strengths typically requires sophisticated computational methods such as NRG [105], MPS-based methods [106, 107, 42, 104], among others [108].
## VII \(N\) spins coupled to \(M\) bosons
Finally, in this section, we consider a general spin-boson model where \(N\) spins are coupled to \(M\) bosons, described by the Hamiltonian in Eq. (1) which we quote for completeness:
\[H_{N,M}=\frac{\Delta}{2}\sum_{i=1}^{N}\sigma_{i}^{z}+\sum_{\alpha=1}^{M}\omega _{\alpha}a_{\alpha}^{\dagger}a_{\alpha}+\frac{1}{\sqrt{N}}\sum_{\alpha i} \frac{g_{\alpha i}}{2}\sigma_{i}^{x}(a_{\alpha}+a_{\alpha}^{\dagger})\]
Again, we assume that each bosonic mode is subject to Markovian loss of bosons at the rate \(\kappa_{\alpha}\).
In order to treat the many-body problem, we combine the approaches presented in Sections V and VI:
1. First, we consider the bosonic modes separately. For a given mode \(\alpha\), we follow the steps in Section V to eliminate it in terms of the classical field \(\underline{\psi}_{\alpha}\) and white noise variables \(\xi_{\alpha}\), which satisfy
\[G_{\alpha}^{-1}\underline{\psi}_{\alpha}=\xi_{\alpha}(t),\qquad\overline{\xi_{\alpha}(t)\bar{\xi}_{\alpha}(t^{\prime})}=\kappa_{\alpha}\delta(t-t^{\prime}), \tag{67}\]
with \(G_{\alpha}\) the corresponding (causal) Green's function. Again, this generates an effective coupling between all the spins, and would be demanding for a large spin system. To decouple the spins, we adopt the trick in Section V.1: we introduce \(N\) fictitious noise fields \(\xi_{\alpha i}\) (per mode \(\alpha\)) such that
\[G_{\alpha}^{-1}\underline{\psi}_{\alpha} =\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\xi_{\alpha i}(t) \tag{68}\] \[\overline{\xi_{\alpha i}(t)\bar{\xi}_{\beta j}(t^{\prime})} =\kappa_{\alpha}\delta_{\alpha\beta}\delta_{ij}\delta(t-t^{\prime}),\]
The dynamics is then factorized and each spin is evolved as (in the Ito convention)
\[\frac{d}{dt}|\rho_{i}\rangle\!\rangle_{\{\xi_{\alpha i}\}}=\mathbb{K}_{i}^{ \text{I}}(t)|\rho_{i}\rangle\!\rangle_{\{\xi_{\alpha i}\}} \tag{69}\]
where
\[\mathbb{K}_{i}^{\text{I}}(t)=\mathbb{L}_{i}+\sum_{\alpha}\left[\frac{ig_{\alpha i}}{\sqrt{N}}(\underline{\psi}_{\alpha}+\underline{\bar{\psi}}_{\alpha})\mathbb{S}_{i}-\frac{g_{\alpha i}}{2\kappa_{\alpha}}(\xi_{\alpha i}+\bar{\xi}_{\alpha i})\mathbb{T}_{i}\right] \tag{70}\]
The full density matrix is then obtained by averaging over different realizations of the white noise variables \(\{\xi_{\alpha i}(t)\}\) and the initial values \(\{\underline{\psi}_{\alpha}(0)\}\). For simplicity, we assume that the initial state is a factorized state of spins and bosons, and moreover that the bosons are initially in their vacuum state; an extension to a general initial state is straightforward and follows the prescription in the previous sections.
2. In the above equations, each spin is still coupled to \(M\) noise variables. Following the steps in Section VI, we can make another transformation where the \(M\times N\) noise variables \(\xi_{\alpha i}\) are reduced to just \(N\) variables, one for each spin. To this end, we first define
\[\underline{\Psi}_{i}(t) =\frac{1}{\sqrt{N}}\sum_{\alpha}g_{\alpha i}\underline{\psi}_{ \alpha}(t), \tag{71}\] \[\Xi_{i}(t) =\sum_{\alpha}\frac{g_{\alpha i}}{2\kappa_{\alpha}}\xi_{\alpha i }(t)\]
The generator \(\mathbb{K}^{\text{I}}\) now takes the simple form
\[\mathbb{K}_{i}^{\text{I}}(t)=\mathbb{L}_{i}+i(\underline{\Psi}_{i}+ \underline{\bar{\Psi}}_{i})\mathbb{S}_{i}-(\Xi_{i}+\bar{\Xi}_{i})\mathbb{T}_ {i} \tag{72}\]
where \(\Xi_{i}(t)\) are the new white noise variables defined by the correlations
\[\overline{\Xi_{i}(t)\bar{\Xi}_{j}(t^{\prime})}=\gamma_{i}\delta_{ij}\delta(t-t^{\prime}),\quad\gamma_{i}\equiv\sum_{\alpha}\frac{g_{\alpha i}^{2}}{4\kappa_{\alpha}} \tag{73}\]
Furthermore, the function \(\underline{\Psi}_{i}\) is now determined by
\[\underline{\Psi}_{i}=\frac{1}{N}\sum_{j}\int dt^{\prime}\chi_{ij}(t,t^{ \prime})\Xi_{j}(t^{\prime})/\gamma_{j}+X_{i}(t) \tag{74}\]
where the variable \(X_{i}(t)\) is a Gaussian-distributed random variable with the correlations
\[\overline{X_{i}(t)\bar{X}_{j}(t^{\prime})}=\frac{1}{N}\check{C}_{ij}(t,t^{ \prime}) \tag{75}\]
hence, colored noise. Assuming that initially the bosonic modes are in their vacuum state, the kernels \(\chi_{ij}(t,t^{\prime})=\chi_{ij}(t-t^{\prime})\), \(C_{ij}(t,t^{\prime})=C_{ij}(t-t^{\prime})\) and \(\check{C}_{ij}(t,t^{\prime})\) are given by
\[\chi_{ij}(t) =\frac{1}{2}\sum_{\alpha}g_{\alpha i}g_{\alpha j}G_{\alpha}(t) \tag{76}\] \[C_{ij}(t) =\bar{C}_{ij}(-t)=i\chi_{ij}(t),\qquad t>0\] \[\check{C}_{ij}(t,t^{\prime}) =C_{ij}(t)-\frac{1}{N}\sum_{l}\frac{1}{\gamma_{l}}\int_{t^{ \prime\prime}}\chi_{il}(t-t^{\prime\prime})\bar{\chi}_{jl}(t^{\prime}-t^{ \prime\prime})\]
Again, one can show that \(\check{C}_{ij}(t,t^{\prime})\geq 0\) as a matrix whose rows and columns are indexed by \((i,t)\) and \((j,t^{\prime})\), respectively. We can then proceed as before to decompose \(\check{C}_{ij}(t,t^{\prime})\) in terms of a complete basis and write \(X_{i}\) as a sum of different terms, each with a complex-valued coefficient that is drawn from a normal distribution. The quantities of interest, such as expectation values of local operators or correlation functions, can be computed by first evolving each spin for a given realization, and then averaging over the white noise \(\{\Xi_{i}(t)\}\) as well as the colored noise \(\{X_{i}(t)\}\). Finally, a generalization to an arbitrary initial state follows analogously to the previous sections.
## VIII Summary and outlook
In this paper, we have considered a generalized spin-boson model and developed a hybrid stochastic quantum-classical approach to the evolution of the spins. To this end, we have traded off the bosonic modes for classical stochastic variables, which are then used as an input for a stochastic quantum evolution of the spins. In this approach, the spins are effectively decoupled for each stochastic realization, but this comes at the expense of sampling over unphysical states. Specifically, the density matrix is not trace 1 or positive, a fact that could hinder convergence at late times. However, we crucially utilize Markovian dissipation to treat the response function in a causal fashion, and to preserve hermiticity of the density matrix, a convenient feature for numerical simulations. Our work thus provides an example where Markovian dissipation can be used as a computational resource for simulating quantum systems [96]. We have showcased the utility of our approach in scenarios where an exact numerical computation is not available. Our work is relevant to emerging quantum simulation platforms including trapped ions [18; 19; 20; 21; 22], cavities via cold atoms [23; 24; 25], superconducting qubits [26; 27], and optomechanics [28].
Our approach extends beyond the existing methods rooted in the Feynman-Vernon influence functional and their stochastic variants [71; 72; 73; 74; 75; 76; 77; 78; 79], collectively referred to as the FV approach. First, as remarked above, our approach keeps causality, and thus hermiticity, manifest. Second, we do not place a restriction on the initial state, while the FV approach generically assumes a factorized initial state and an initially thermal state of bosons. Third, the FV approach gives rise to non-local kernels (which should be sampled by colored noise) while our hybrid quantum-classical approach remains Markovian (unless we lump bosonic variables into one or several variables; see Sections VI and VII). Fourth, while the FV approach is typically used in scenarios where the coupling is diagonal in a given basis (e.g., the Rabi model with \(H_{SB}\sim g\sigma^{x}(a+a^{\dagger})\)), our approach can be easily generalized to non-diagonal coupling (e.g., the Jaynes-Cummings model with \(H_{SB}\sim\sigma^{+}a+h.c.\)). Fifth, our approach in principle works even when the bosonic modes are intrinsically nonlinear, in which case the bosonic modes cannot be integrated out via a Gaussian integral and the FV approach is no longer applicable. As an example, we may consider dephasing for bosons, \(L=\sqrt{\kappa_{dph}}a^{\dagger}a\), which would lead to nonlinear dynamics at the level of the Liouvillian; such dephasing can be mimicked by adding a term to \(H_{B}\),
\[H_{B}\to H_{B}+k(t)a^{\dagger}a \tag{77}\]
where \(k\) is (real-valued) white noise with the correlations \(\overline{k(t)k(t^{\prime})}=\kappa_{dph}\delta(t-t^{\prime})\). One can then carry out the stochastic analysis presented in Section IV to obtain a modified (Ito) stochastic equation for \(\underline{\psi}\):
\[[i\partial_{t}-\omega-k(t)+i\kappa]\underline{\psi}=\xi(t) \tag{78}\]
involving both additive \([\xi(t)]\) and multiplicative \([k(t)]\) white noise. To obtain the physical density matrix, we must average over both noise variables.
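For instance, a minimal sketch of the corresponding Euler (Ito) update for \(\underline{\psi}\) (our names; the spin update is unchanged) is:

```python
import numpy as np

def psi_step_dephasing(psi, omega, kappa, kappa_dph, dt, rng):
    """One Euler (Ito) step of Eq. (78): additive complex white noise xi as
    before, plus dephasing as a real multiplicative white noise k(t)."""
    xi = np.sqrt(kappa / (2 * dt)) * (rng.standard_normal()
                                      + 1j * rng.standard_normal())
    k = np.sqrt(kappa_dph / dt) * rng.standard_normal()  # <k k'> = kappa_dph delta
    return psi + dt * (-1j * (omega + k) * psi - kappa * psi - 1j * xi)
```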
Spin-boson models are particularly challenging at strong coupling where the dynamics is highly nonlinear and the system may even undergo a phase transition. Indeed, the paradigmatic spin-boson model (a spin coupled to an infinite bath) exhibits a localization phase transition [1; 2] while the Rabi/Dicke models (one/many spins coupled to a cavity mode) undergo a superradiant phase transition at strong coupling [6; 7]. It is desirable to apply our stochastic approach to study such regimes; however, sampling over unphysical states could lead to poor convergence with the number of trajectories at strong coupling. Our approach provides an immediate advantage by ensuring hermiticity of the density matrix, which strongly constrains single realizations. One can further improve the convergence by making the stochastic equation trace preserving [76; 43; 77]; however, the resulting nonlinear stochastic equation could lead to unstable solutions, although different routes have been proposed to alleviate this behavior [78]. In principle, our approach only requires averaging over well-defined stochastic solutions corresponding to many trajectories, which can be parallelized on classical machines; this should be contrasted with the dynamics in the full Hilbert space, where an exponentially large space is required even to store the quantum state of the system. Nevertheless, the usefulness of our approach in the challenging regime of strong coupling requires a careful analysis of the scaling of the number of trajectories with the coupling strength and the Markovian dissipation, a direction that constitutes an important avenue for future research.
###### Acknowledgements.
We thank Aash Clerk for useful discussions. This work is supported by the Air Force Office of Scientific Research (AFOSR) under the award number FA9550-20-1-0073. We also acknowledge support from the National Science Foundation under the NSF CAREER Award (DMR2142866) as well as the NSF grant PHY2112893.
## Appendix A Derivation of Eq. (16)
In this section, we derive Eq. (16) for a single spin coupled to a cavity mode. The generalization of our approach to many spins is straightforward. We follow a hybrid approach to the functional integral: we first apply the quantum-to-classical mapping utilized in Refs. [89, 60] to the spin operators and map them to classical discrete variables, and then use the phase-space approach in Ref. [88] to turn the cavity operators to \(c\) numbers using the Weyl ordering [91]. To this end, we break the Liouvillian as
\[\mathcal{L}=\mathcal{L}_{S}+\mathcal{L}_{B}+\mathcal{L}_{SB} \tag{A1}\]
with the first two terms on the rhs including only spin and bosonic terms, respectively. The last term denotes the interaction between the two and is given by
\[\mathcal{L}_{SB}=-i[H_{SB},\bullet],\qquad H_{SB}=\frac{g}{2}\sigma^{x}(a+a^{\dagger}) \tag{A2}\]
Different interactions (e.g., the Jaynes-Cummings model) can be treated in a similar fashion. In the above equation and throughout this Appendix, we denote the action of a superoperator (such as \(\mathcal{L}\)) on an arbitrary operator by the location and ordering of the symbol \(\bullet\). Next, we Trotterize the evolution as
\[\rho(t)=e^{t\mathcal{L}}(\rho_{0})=\underbrace{e^{\delta t\mathcal{L}}(e^{\delta t\mathcal{L}}(\cdots(e^{\delta t\mathcal{L}}(\rho_{0}))\cdots))}_{n\text{ times}} \tag{A3}\]
with \(\rho_{0}\) the initial state at time \(t=0\) and \(n=t/\delta t\). Note that the Liouvillian is a superoperator, and the above expression is not a matrix multiplication. To carry out the quantum-to-classical mapping for the spin, we must insert a complete basis for spin operators at each time slice. We introduce the identity superoperator
\[\mathcal{I}_{S}=\sum_{\sigma^{u}=\pm 1}|\sigma^{u}\rangle\langle\sigma^{u}|\bullet\sum_{\sigma^{l}=\pm 1}|\sigma^{l}\rangle\langle\sigma^{l}| \tag{A4}\]
where \(\sigma^{x}|\sigma\rangle=\sigma|\sigma\rangle\) with \(\sigma\) representing either \(\sigma^{u}\) or \(\sigma^{l}\); the notation for the superscript is inspired by the upper/lower branches of the Keldysh contour. Inserting the identity superoperator at each time slice in Eq. (A3), we have
\[\langle\sigma|\rho(t)|\sigma^{\prime}\rangle=\sum_{\boldsymbol{\sigma}}\left\{\Big{[}\prod_{k=0}^{n-1}\langle\sigma^{u}_{k+1}|\left(e^{\delta t\mathcal{L}}(|\sigma^{u}_{k}\rangle\langle\sigma^{l}_{k}|)\right)|\sigma^{l}_{k+1}\rangle\Big{]}\times\langle\sigma^{u}_{0}|\rho_{0}|\sigma^{l}_{0}\rangle\right\} \tag{A5}\]
Here, we have introduced the complete basis \(\sigma^{u/l}_{k}\) at the Trotter time \(k\), summed over all \(\mathbf{\sigma}=(\sigma^{u/l}_{0},\sigma^{u/l}_{1},\cdots,\sigma^{u/l}_{n-1})\), and further identified \(\sigma^{u/l}_{n}=\sigma,\sigma^{\prime}\), respectively. We emphasize that \(\langle\sigma|\rho(t)|\sigma^{\prime}\rangle\) is still a matrix in the Hilbert space of the bosonic operator. At this point, we carry out the standard Trotter-Suzuki expansion and write
\[e^{\delta t\mathcal{L}}=e^{\delta t(\mathcal{L}_{B}+\mathcal{L}_{S})}e^{\delta t\mathcal{L}_{SB}}+\mathcal{O}(\delta t^{2}) \tag{A6}\]
The treatment of the spin-boson coupling is now straightforward within the quantum-to-classical mapping since this coupling is diagonal in the basis adopted in Eq. (A4):
\[e^{\delta t\mathcal{L}_{SB}}(|\sigma^{u}_{k}\rangle\langle\sigma^{l}_{k}|)=|\sigma^{u}_{k}\rangle\langle\sigma^{l}_{k}|\,e^{-i\delta t\frac{g}{2}\sigma^{u}_{k}(a+a^{\dagger})\bullet+i\delta t\frac{g}{2}\sigma^{l}_{k}\bullet(a+a^{\dagger})} \tag{A7}\]
where the notation in the exponent should be understood as follows: defining the exponential in a series expansion, we find a nested product of (bosonic) superoperators, and the location of \(\bullet\) guides the action of subsequent superoperators on an input operator. We can then write
\[\langle\sigma|\rho(t)|\sigma^{\prime}\rangle=\sum_{\mathbf{\sigma}}\bigg{\{}\Big{[}\prod_{k=0}^{n-1}\langle\sigma^{u}_{k+1}|\left(e^{\delta t\mathcal{L}_{S}}(|\sigma^{u}_{k}\rangle\langle\sigma^{l}_{k}|)\right)|\sigma^{l}_{k+1}\rangle\Big{]}\langle\sigma^{u}_{0}|\rho_{S}(0)|\sigma^{l}_{0}\rangle\times\text{T}_{k}\prod_{k=0}^{n-1}e^{\delta t\widetilde{\mathcal{L}}_{B}(t_{k})}(\rho_{B}(0))\bigg{\}} \tag{A8}\]
where \(\text{T}_{k}\) is the time-ordering operator (in discretized time indexed by \(k\)), and we have defined the 'tilde' superoperator
\[\widetilde{\mathcal{L}}_{B}(t_{k})=\mathcal{L}_{B}-\frac{ig}{2}\left[\sigma^{u}_{k}(a+a^{\dagger})\bullet-\sigma^{l}_{k}\bullet(a+a^{\dagger})\right] \tag{A9}\]
with \(t_{k}=k\delta t\). We have additionally assumed that the initial state is factorized, \(\rho_{0}=\rho_{S}(0)\otimes\rho_{B}(0)\). At this stage, \(\sigma^{u/l}_{k}\) are just numbers and their ordering in the above expression is unimportant. Notice that the first product on the rhs of Eq. (A8) only involves the spin variables, while the information about the bosonic mode as well as the spin-boson coupling is included in the second product that involves the Liouvillian \(\widetilde{\mathcal{L}}_{B}(t_{k})\).
Next, we map the bosonic operators into classical variables in order to construct the functional integral. Specifically, we utilize a quantum-optics based approach by working in the Weyl representation where the state is represented in terms of the Wigner function (in contrast with the Keldysh functional integral where the operators are replaced by \(Q\) symbols). The advantage of this method is that an unambiguous continuous-time limit emerges
naturally from a properly defined (i.e., discretized) functional integral. To this end, we follow [88] to establish the mapping to a functional integral for the bosonic mode. We shall not repeat these steps here, and just quote the resulting functional integral; an interested reader is referred to Appendix A of Ref. [88]. In this mapping, bosonic operators are mapped to phase-space variables \(\phi_{k},\psi_{k}\) and the superoperator to a classical action. The evolution of the bosonic mode can be written as a functional integral over the phase-space variables weighted by the exponential of the corresponding action (similar to the Feynman path integral). The Liouvillian \(\mathcal{L}_{B}\) describing the free bosonic mode is then mapped to the classical action
\[\mathscr{S}_{B}=\sum_{k=0}^{n-1}\delta t\mathscr{L}_{B}(\psi_{k+1},\psi_{k},\phi_{k}) \tag{A10}\]
where we have defined the Lagrangian \(\mathscr{L}_{B}\)
\[\mathscr{L}_{B}(\psi_{1},\psi_{0},\phi_{0}) =2i\bar{\phi}_{0}(\psi_{1}-\psi_{0})/\delta t-2i\phi_{0}(\bar{\psi}_{1}-\bar{\psi}_{0})/\delta t-\mathscr{H}_{B}(\psi_{0}+\phi_{0})+\mathscr{H}_{B}(\psi_{0}-\phi_{0})\] \[\quad+2i\kappa(2\bar{\phi}_{0}\phi_{0}-\phi_{0}\bar{\psi}_{0}+\bar{\phi}_{0}\psi_{0})\] \[=2\bar{\phi}_{0}[i(\psi_{1}-\psi_{0})/\delta t-\omega\psi_{0}+i\kappa\psi_{0}]+c.c.+4i\kappa\bar{\phi}_{0}\phi_{0}, \tag{A11}\]
and \(\mathscr{H}_{B}\) denotes the Weyl symbol corresponding to the Hamiltonian \(H_{B}\). Notice that the sign difference between the two terms involving \(\mathscr{H}_{B}\) is due to the same sign in the commutator \([H_{B},\bullet]\). In the last equality, we have used the transformation \(H_{B}=\omega a^{\dagger}a\to\mathscr{H}_{B}(\alpha)=\omega(|\alpha|^{2}-\frac{1}{2})\), and have dropped the constant term. Next, we consider the interaction, i.e., the expression in the bracket in Eq. (A9). A similar treatment leads to the substitution \((a+a^{\dagger})\bullet\to\psi_{k}+\phi_{k}+c.c.\) and \(\bullet(a+a^{\dagger})\to\psi_{k}-\phi_{k}+c.c.\) at time \(t_{k}\) in the functional integral. Now, the action involving both the free bosonic part as well as the spin-boson coupling takes the form
\[\widetilde{\mathscr{S}_{B}}=\mathscr{S}_{B}+\sum_{k=0}^{n-1}\delta t\Big{[}-\frac{g}{2}\sigma_{k}^{u}\left(\psi_{k}+\phi_{k}+c.c.\right)+\frac{g}{2}\sigma_{k}^{l}\left(\psi_{k}-\phi_{k}+c.c.\right)\Big{]} \tag{A12}\]
Having mapped the bosonic operator to phase-space variables, we can now write the spin reduced density matrix \(\rho_{S}(t)=\mathrm{Tr}_{B}(\rho(t))\) as
\[\begin{split}\langle\sigma|\rho_{S}(t)|\sigma^{\prime}\rangle&=\int d^{2}\psi_{n}\prod_{k=0}^{n-1}\frac{4d^{2}\psi_{k}d^{2}\phi_{k}}{\pi^{2}}e^{i\widetilde{\mathscr{S}_{B}}}\mathscr{W}_{0}(\psi_{0})\\ &\times\sum_{\boldsymbol{\sigma}}\Bigg{\{}\left[\prod_{k=0}^{n-1}\langle\sigma_{k+1}^{u}|\left(e^{\delta t\mathcal{L}_{S}}(|\sigma_{k}^{u}\rangle\langle\sigma_{k}^{l}|)\right)|\sigma_{k+1}^{l}\rangle\right]\times\langle\sigma_{0}^{u}|\rho_{S}(0)|\sigma_{0}^{l}\rangle\Bigg{\}}\\ &=\int d^{2}\psi_{n}\prod_{k=0}^{n-1}\frac{4d^{2}\psi_{k}d^{2}\phi_{k}}{\pi^{2}}e^{i\mathscr{S}_{B}}\mathscr{W}_{0}(\psi_{0})\\ &\times\sum_{\boldsymbol{\sigma}}\Bigg{\{}\left[\prod_{k=0}^{n-1}\langle\sigma_{k+1}^{u}|\left(e^{\delta t\widetilde{\mathcal{L}}_{S}(t_{k})}(|\sigma_{k}^{u}\rangle\langle\sigma_{k}^{l}|)\right)|\sigma_{k+1}^{l}\rangle\right]\times\langle\sigma_{0}^{u}|\rho_{S}(0)|\sigma_{0}^{l}\rangle\Bigg{\}}\end{split} \tag{A13}\]
with \(\mathscr{W}_{0}\) the Wigner function of the initial state; we have used the notation \(d^{2}z=d\mathrm{Re}\left(z\right)d\mathrm{Im}\left(z\right)\). In the last equality, we have restored the free bosonic action, \(\widetilde{\mathscr{S}_{B}}\to\mathscr{S}_{B}\), and absorbed the difference in \(\mathcal{L}_{S}\to\widetilde{\mathcal{L}}_{S}(t_{k})\) where we have defined the 'tilde' spin superoperator
\[\widetilde{\mathcal{L}}_{S}(t_{k})=\mathcal{L}_{S}-\frac{ig}{2}\left[(\psi_{k}+\phi_{k}+c.c.)\sigma^{x}\bullet-(\psi_{k}-\phi_{k}+c.c.)\bullet\sigma^{x}\right] \tag{A14}\]
Notice that this expression does not involve the indices \(\sigma_{k}^{u/l}\) explicitly, but the spin operator \(\sigma^{x}\) acting on a ket or a bra state at time \(k\) (as prescribed by the ordering with respect to \(\bullet\)) reproduces the expression in Eq. (A12). We can then explicitly sum over the spin indices \(\boldsymbol{\sigma}\) as they sum to an identity superoperator at all time slices before time \(t\).
The resulting density matrix is given by
\[\langle\sigma|\rho_{S}(t)|\sigma^{\prime}\rangle=\int\mathscr{D}[\psi,\phi]e^{i\mathscr{S}_{B}}\mathscr{W}_{0}(\psi_{0})\langle\sigma|\mathrm{T}_{k}e^{\sum_{k=0}^{n-1}\delta t\widetilde{\mathcal{L}}_{S}(t_{k})}(\rho_{S}(0))|\sigma^{\prime}\rangle \tag{A15}\]
where we have defined the shorthand
\[\mathscr{D}[\psi,\phi]=\int d^{2}\psi_{n}\prod_{k=0}^{n-1}\frac{4d^{2}\psi_{k}d^{2}\phi_{k}}{\pi^{2}} \tag{A16}\]
as the proper measure of the path integral in discretized time. Finally, vectorizing the density matrix, we obtain
\[|\rho_{S}(t)\rangle\!\rangle=\int\mathscr{D}[\psi,\phi]e^{i\mathscr{S}_{B}}\mathscr{W}_{0}(\psi_{0})\mathrm{T}_{k}e^{\sum_{k=0}^{n-1}\delta t\widetilde{\mathbb{L}}_{S}(t_{k})}|\rho_{S}(0)\rangle\!\rangle \tag{A17}\]
where the matrix \(\widetilde{\mathbb{L}}_{S}(t_{k})\) corresponds to the superoperator \(\widetilde{\mathcal{L}}_{S}(t_{k})\) in the vectorized notation, and is explicitly given by
\[\widetilde{\mathbb{L}}_{S}(t_{k})=\mathbb{L}_{S}-\frac{ig}{2}\left[(\psi_{k}+\phi_{k}+c.c.)\sigma^{x}\otimes I-(\psi_{k}-\phi_{k}+c.c.)I\otimes\sigma^{x}\right] \tag{A18}\]
Taking the limit of continuous time, one recovers Eq. (16) and can explicitly verify that \(\widetilde{\mathbb{L}}_{S}(t_{k})\to\mathbb{L}_{S}+\mathbb{L}_{\mathrm{int}}(t)\) where \(\mathbb{L}_{\mathrm{int}}\) is defined in Eq. (18).
## Appendix B Derivation of Eq. (26)
In this section, we provide the derivation of Eq. (26). To this end, we first use the Hubbard-Stratonovich transformation given by Eq. (22) but in discretized time:
\[e^{-4\kappa\delta t\bar{\phi}_{k}\phi_{k}}=\frac{\delta t}{\kappa\pi}\int d^{2}\xi_{k}e^{-\delta t\bar{\xi}_{k}\xi_{k}/\kappa-2i\delta t(\bar{\xi}_{k}\phi_{k}+\xi_{k}\bar{\phi}_{k})} \tag{B1}\]
where we have introduced the Gaussian distributed noise \(\xi_{k}\). Ultimately, we would like to trade in the field \(\phi_{k}\) for the noise \(\xi_{k}\). But, as remarked before, the field \(\phi_{k}\) also appears in the \(\widetilde{\mathbb{L}}_{S}(t_{k})\) matrix. To deal with the latter, we use the Suzuki-Trotter expansion to write:
\[\begin{split} e^{\delta t\widetilde{\mathbb{L}}_{S}(t_{k})}&=e^{\delta t\mathbb{L}_{1}(t_{k})+ig\delta t(\phi_{k}+\bar{\phi}_{k})\mathbb{T}}\\ &=e^{\delta t\mathbb{L}_{1}(t_{k})}e^{ig\delta t(\phi_{k}+\bar{\phi}_{k})\mathbb{T}}+\mathcal{O}(\delta t^{3/2})\end{split} \tag{B2}\]
where in the first equality we have defined \(\mathbb{L}_{1}(t_{k})=\mathbb{L}_{S}+ig(\psi_{k}+\bar{\psi}_{k})\mathbb{S}\) for notational convenience. We note that the error is of the order of \(\delta t^{3/2}\), and not \(\delta t^{2}\) as one might naively expect. This is because \(\phi_{k}\sim 1/\delta t^{1/2}\) given the Gaussian fluctuations of the field \(\phi_{k}\) in Eq. (B1). In fact, one should be careful when expanding the exponential to the lowest order. Specifically, expanding \(\exp[i\delta t(\phi_{k}+\bar{\phi}_{k})\cdots]\approx 1+i\delta t(\phi_{k}+\bar{\phi}_{k})\cdots\) captures terms up to \(\delta t^{1/2}\) but misses the term of the order \(\delta t\). In using the Suzuki-Trotter expansion, we have ensured that the error is smaller than \(\delta t\). We then have
\[|\rho_{S}(t)\rangle\!\rangle=\int\mathscr{D}[\psi,\phi,\xi]\mathscr{W}_{0}( \psi_{0})e^{i\mathscr{S}^{\prime}_{B}-\sum_{k=0}^{n-1}[\delta t\bar{\xi }_{k}\xi_{k}/\kappa+2i\delta t(\bar{\xi}_{k}\phi_{k}+\xi_{k}\bar {\phi}_{k})]}\mathrm{T}_{k}\;\prod_{k=0}^{n-1}e^{\delta t\mathbb{L}_{1}(t_{k}) }e^{ig\delta t\mathbb{T}(\phi_{k}+\bar{\phi}_{k})}|\rho_{S}(0)\rangle\!\rangle \tag{106}\]
where \(\mathscr{S}^{\prime}_{B}\) refers to the action of the free bosonic mode excluding the term \(4i\kappa\bar{\phi}\phi\) in the corresponding Lagrangian (103). We have also defined the shorthand
\[\mathscr{D}[\xi]=\prod_{k=0}^{n-1}\frac{\delta t}{\kappa\pi}d^{2}\xi_{k} \tag{107}\]
as the measure of the functional integral over noise.
Next, we eliminate \(\phi_{k}\) in the time-ordered product by writing it as a derivative with respect to \(\xi_{k}\) as
\[|\rho_{S}(t)\rangle\!\rangle=\int\mathscr{D}[\psi,\phi,\xi]\mathscr{W}_{0}( \psi_{0})e^{i\mathscr{S}^{\prime}_{B}-\sum_{k}\delta t\bar{\xi}_{k} \xi_{k}/\kappa}\;\mathrm{T}_{k}\;\prod_{k=0}^{n-1}e^{\delta t\mathbb{L}_{1}(t_ {k})}e^{-\frac{g}{2}\mathbb{T}(\partial_{\bar{\xi}_{k}}+\partial_{\xi_{k}})}e^{-2i \delta t(\bar{\xi}_{k}\phi_{k}+\xi_{k}\bar{\phi}_{k})}|\rho_{S}(0)\rangle\!\rangle\]
An integration by parts with respect to the derivatives just introduced yields
\[\begin{split}|\rho_{S}(t)\rangle\!\rangle&=\int \mathscr{D}[\psi,\phi,\xi]\mathscr{W}_{0}(\psi_{0})e^{i\mathscr{S}^{\prime}_{B}- \sum_{k}2i\delta t(\bar{\xi}_{k}\phi_{k}+\xi_{k}\bar{\phi}_{k})}\ \mathrm{T}_{k}\prod_{k=0}^{n-1}e^{\delta t \mathbb{L}_{1}(t_{k})}e^{\frac{g}{2}\mathbb{T}(\partial_{\bar{\xi}_{k}}+\partial_{\xi _{k}})}e^{-\delta t\bar{\xi}_{k}\xi_{k}\mathbb{I}/\kappa}|\rho_{S}(0)\rangle\! \rangle\\ &=\int\mathscr{D}[\psi,\phi,\xi]\mathscr{W}_{0}(\psi_{0})e^{i \mathscr{S}^{\prime}_{B}-\sum_{k}2i\delta t(\bar{\xi}_{k}\phi_{k}+\xi_{k}\bar{\phi }_{k})}\ \mathrm{T}_{k}\prod_{k=0}^{n-1}e^{\delta t\mathbb{L}_{1}(t_{k})}e^{-\delta t (\bar{\xi}_{k}\mathbb{I}+\frac{g}{2}\mathbb{T})(\xi_{k}\mathbb{I}+\frac{g}{2} \mathbb{T})/\kappa}|\rho_{S}(0)\rangle\!\rangle\\ &=\int\mathscr{D}[\psi,\phi,\xi]\mathscr{W}_{0}(\psi_{0})e^{i \mathscr{S}^{\prime}_{B}-\sum_{k}2i\delta t(\bar{\xi}_{k}\phi_{k}+\xi_{k}\bar{ \phi}_{k})-\delta t\bar{\xi}_{k}\xi_{k}/\kappa}\ \mathrm{T}_{k}\ \prod_{k=0}^{n-1}e^{\delta t \mathbb{L}_{1}(t_{k})-\delta t(g/2\kappa)(\xi_{k}+\bar{\xi}_{k})\mathbb{T}- \delta t(g^{2}/4\kappa)\mathbb{T}^{2}}|\rho_{S}(0)\rangle\!\rangle\end{split} \tag{100}\]
In the second line, we used the fact that \(e^{a\partial_{\xi}}\) is a translation operator, i.e., \(e^{a\partial_{\xi}}f(\xi)=f(\xi+a)\) for a function \(f(\xi)\). In the third line, we have used the Suzuki-Trotter expansion again, this time to combine the exponents, and ignored an error of the order \(\delta t^{3/2}\). Finally, we can integrate over \(\phi_{k}\) for all \(k=0,1,\cdots,n-1\) to find the delta functions
\[\begin{split}&\int\frac{4d^{2}\phi_{k}}{\pi^{2}}e^{2i(\bar{ \phi}_{k}f+\phi_{k}\bar{f})}=\delta^{2}(f)\\ &\text{with }f=i(\psi_{k+1}-\psi_{k})+\delta t(-\omega\psi_{k}+i \kappa\psi_{k}-\xi_{k})\end{split} \tag{101}\]
where we have defined \(\delta^{2}(z)=\delta(\operatorname{Re}z)\delta(\operatorname{Im}z)\). Integrating over \(\psi_{k}\) for \(k=1,\cdots,n\) (excluding \(k=0\)) and \(\phi_{k}\) for \(k=0,\cdots,n-1\), the field \(\phi_{k}\) completely drops out while the field \(\psi_{k}\) (for \(k>0\)) becomes constrained by the equation
\[i(\underline{\psi}_{k+1}-\underline{\psi}_{k})/\delta t-\omega\underline{\psi }_{k}+i\kappa\underline{\psi}_{k}=\xi_{k} \tag{102}\]
and the initial condition \(\psi_{0}\) that is drawn from the Wigner function \(\mathscr{W}_{0}(\psi_{0})\). We have denoted the solution to the above equation by \(\underline{\psi}_{k}\). Together with the fact that the Jacobian is \(1\), we finally find
\[\begin{split}|\rho_{S}(t)\rangle\!\rangle=\!\int\mathscr{D}[\xi] \int d^{2}\psi_{0}\mathscr{W}_{0}(\psi_{0})e^{-\sum_{k}\delta t\bar{\xi}_{k} \xi_{k}/\kappa}\text{T}_{k}\ \prod_{k=0}^{n-1}e^{\delta t(\mathbb{L}_{S}+ig(\underline{\psi}_{k}+ \underline{\bar{\psi}}_{k})\mathbb{S}-(g/2\kappa)(\xi_{k}+\bar{\xi}_{k}) \mathbb{T}-(g^{2}/4\kappa)\mathbb{T}^{2})}|\rho_{S}(0)\rangle\!\rangle\end{split} \tag{103}\]
The exponent in this equation gives the kernel \(\mathbb{K}(t_{k})\) defined in Eq. (27) albeit at a discrete time step \(k\). This completes our derivation of Eq. (26).
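As a cross-check, the constrained trajectory \(\underline{\psi}_{k}\) is straightforward to generate numerically. The following minimal sketch (all parameter values and variable names are illustrative assumptions, not taken from the text) draws \(\psi_{0}\) from the vacuum Wigner function and iterates the discretized equation of motion driven by the complex Gaussian noise \(\xi_{k}\), whose variance \(\kappa/\delta t\) follows from the Gaussian weight \(e^{-\sum_{k}\delta t\bar{\xi}_{k}\xi_{k}/\kappa}\) in the functional integral:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the text)
omega, kappa = 1.0, 0.5
dt, n = 1e-3, 10_000

# Complex Gaussian noise with correlator <xi_k conj(xi_k')> = (kappa/dt) delta_{kk'}
xi = np.sqrt(kappa / (2 * dt)) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Initial condition drawn from the vacuum Wigner function W_0 ~ (2/pi) exp(-2|psi|^2)
psi = np.empty(n + 1, dtype=complex)
psi[0] = 0.5 * (rng.standard_normal() + 1j * rng.standard_normal())

# Discrete update equivalent to i(psi_{k+1} - psi_k)/dt - omega*psi_k + i*kappa*psi_k = xi_k
for k in range(n):
    psi[k + 1] = psi[k] - 1j * dt * ((omega - 1j * kappa) * psi[k] + xi[k])
```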
Next, we derive the local generator of the dynamics. First, we define the time ordered product in the above expression as \(|\rho_{S}(t)\rangle\!\rangle_{\xi}\) such that
\[\begin{split}|\rho_{S}(t)\rangle\!\rangle=\int\mathscr{D}[\xi] \int d^{2}\psi_{0}\mathscr{W}_{0}(\psi_{0})e^{-\sum_{k}\delta t\bar{\xi}_{k} \xi_{k}/\kappa}|\rho_{S}(t)\rangle\!\rangle_{\xi}\end{split} \tag{104}\]
It follows from this definition that
\[\begin{split}|\rho_{S}(t_{k+1})\rangle\!\rangle_{\xi}=\\ e^{\delta t[\mathbb{L}_{S}+ig(\underline{\psi}_{k}+\underline{ \bar{\psi}}_{k})\mathbb{S}-(g/2\kappa)(\xi_{k}+\bar{\xi}_{k})\mathbb{T}-(g^{2 }/4\kappa)\mathbb{T}^{2}]}|\rho_{S}(t_{k})\rangle\!\rangle_{\xi}\end{split} \tag{105}\]
We then expand the exponential in powers of \(\delta t\) and take the limit \(\delta t\to 0\). Specifically, we must expand the term proportional to noise \((\xi_{k}+\bar{\xi}_{k})\) to the second order since \(\overline{\xi_{k}\bar{\xi}_{k}}=\kappa/\delta t\). The term generated at this order is given by
\[\frac{1}{2}\left(\frac{g}{2\kappa}\right)^{2}\delta t^{2}(\xi_{k}+\bar{\xi}_{ k})^{2}\mathbb{T}^{2}\xrightarrow[\delta t\to 0]{}\delta t\frac{g^{2}}{4\kappa}\mathbb{T}^{2} \tag{106}\]
The limit is obtained in the sense that \(\delta t^{2}\,\overline{\xi_{k}\bar{\xi}_{k}}=\kappa\delta t\) while all the higher _cumulants_ are proportional to higher powers of \(\delta t\). In essence, this statement is the same as the proof that a Wiener process is described by \(dW^{2}=dt\) and \(dW^{2+N}=0\) for \(N>0\)[92]. Next, we note that the above expression cancels out against the last term of the exponent in Eq. (104) once expanded to the linear order in \(\delta t\). It follows that
\[\begin{split}|\rho_{S}(t_{k+1})\rangle\!\rangle_{\xi}=|\rho_{S}(t _{k})\rangle\!\rangle_{\xi}+\delta t\left[\mathbb{L}_{S}+ig(\underline{\psi}_{ k}+\underline{\bar{\psi}}_{k})\mathbb{S}-(g/2\kappa)(\xi_{k}+\bar{\xi}_{k}) \mathbb{T}\right]|\rho_{S}(t_{k})\rangle\!\rangle_{\xi}\end{split} \tag{107}\]
In the continuum limit, this expression gives the dynamics with the generator \(\mathbb{K}^{\rm I}\) defined in Eq. (30). The discretized version of this equation makes it clear that the dynamics is given in the Ito sense. As a final remark, it is straightforward to see that, using the transformation between the Ito and the Stratonovich rules, the kernel \(\mathbb{K}\) describes the stochastic dynamics in the Stratonovich sense.
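A quick numerical sanity check of this moment counting is possible. In the sketch below (parameter values assumed), the combination \(\delta t^{2}\,\overline{(\xi_{k}+\bar{\xi}_{k})^{2}}=2\kappa\delta t\) survives at linear order in \(\delta t\), while the fourth-moment contribution scales as \(\delta t^{2}\) and drops out in the continuum limit:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, samples = 0.5, 1_000_000   # assumed values

for dt in (1e-2, 1e-3):
    xi = np.sqrt(kappa / (2 * dt)) * (rng.standard_normal(samples)
                                      + 1j * rng.standard_normal(samples))
    u = 2 * xi.real                    # u = xi + conj(xi)
    print(dt,
          dt**2 * np.mean(u**2),       # ~ 2*kappa*dt: survives at linear order
          dt**4 * np.mean(u**4))       # ~ 12*kappa^2*dt^2: higher order, negligible
```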
## Appendix C Influence functional and its variants
In this section, we show that our stochastic approach gives rise to the expected Feynman-Vernon influence functional when the initial state is factorized and the bosonic mode(s) are initially in their vacuum state. Furthermore, we utilize the Feynman-Vernon framework to verify our stochastic formulation in cases where the initial state is arbitrary.
### The case \(N=M=1\)
We start by 'integrating out' the noise as well as the initial state from Eq. (38). We first introduce the notation (also see Section II)
\[\eta_{k}=(\sigma_{k}^{u}+\sigma_{k}^{l})/2,\qquad\tilde{\eta}_{k}=(\sigma_{k}^ {u}-\sigma_{k}^{l})/2 \tag{40}\]
Analogously to Appendix A, we can insert a complete basis to perform a quantum-to-classical mapping, albeit in the basis defined above. The matrices \(\mathbb{S},\mathbb{T}\) become diagonal in this basis with the elements \(\langle\!\langle\eta\tilde{\eta}|\mathbb{S}|\eta\tilde{\eta}\rangle\!\rangle =-\tilde{\eta}\) and \(\langle\!\langle\eta\tilde{\eta}|\mathbb{T}|\eta\tilde{\eta}\rangle\!\rangle =-\eta\). It is easy to verify that the functional integral in Eq. (38) can be cast as
\[\begin{split}\langle\sigma|\rho_{S}(t)|\sigma^{\prime}\rangle=& \sum_{\mathbf{\eta},\tilde{\mathbf{\eta}}}\int d^{2}\psi_{0}\mathscr{W}_{0}( \psi_{0})\int\mathscr{D}[\xi]e^{-\sum_{k}\delta t\bar{\xi}_{k}\xi_{k}/\kappa} \langle\!\langle\sigma\sigma^{\prime}|\eta_{n}\tilde{\eta}_{n} \rangle\!\rangle\langle\!\langle\eta_{0}\tilde{\eta}_{0}|\rho_{S}(0)\rangle\! \rangle\times\\ &\times\prod_{k=0}^{n-1}\langle\!\langle\eta_{k+1}\tilde{\eta}_{ k+1}|e^{\delta t\mathbb{L}_{S}}|\eta_{k}\tilde{\eta}_{k}\rangle\!\rangle e^{-i\delta tg \tilde{\eta}_{k}(\underline{\psi}_{k}+\underline{\bar{\psi}}_{k})+\frac{g}{2 \kappa}\delta t\eta_{k}(\xi_{k}+\bar{\xi}_{k})-\frac{g^{2}}{4\kappa}\delta t \eta_{k}^{2}}\end{split} \tag{41}\]
where \(\mathbf{\eta}=\{\eta_{0},\eta_{1},\cdots,\eta_{n}\}\). Here, we assume that the bosonic mode is initially in the vacuum state described by the Wigner function \(\mathscr{W}_{0}(\psi_{0})=\frac{2}{\pi}e^{-2|\psi_{0}|^{2}}\). Now integrating over the noise \(\{\xi_{k}\}\) as well as the Wigner distribution function, we find
\[\begin{split}\langle\sigma|\rho_{S}(t)|\sigma^{\prime}\rangle&=\sum_{\mathbf{\eta}, \tilde{\mathbf{\eta}}}\langle\!\langle\sigma\sigma^{\prime}|\eta_{n} \tilde{\eta}_{n}\rangle\!\rangle\langle\!\langle\eta_{0}\tilde{\eta}_{0}|\rho_ {S}(0)\rangle\!\rangle\times\\ &\times\prod_{k=0}^{n-1}\langle\!\langle\eta_{k+1}\tilde{\eta}_{k +1}|e^{\delta t\mathbb{L}_{S}}|\eta_{k}\tilde{\eta}_{k}\rangle\!\rangle e^{-\frac{g^{2}}{4 \kappa}\delta t\eta_{k}^{2}}e^{\sum_{k,k^{\prime}}\delta t^{2}[-\frac{g^{2}}{2 }(\overline{\underline{\psi}_{k}\underline{\bar{\psi}}_{k^{\prime}}}+c.c.)\tilde{\eta}_{k}\tilde{\eta}_{k^{\prime}}-i\frac{g^{2}}{2\kappa}( \overline{\underline{\psi}_{k}\bar{\xi}_{k^{\prime}}}+c.c.) \tilde{\eta}_{k}\eta_{k^{\prime}}+\frac{1}{2}\frac{g^{2}}{4\kappa^{2}}( \overline{\xi_{k}\bar{\xi}_{k^{\prime}}}+c.c.)\eta_{k}\eta_{k^{ \prime}}]}\\ &=\sum_{\mathbf{\eta},\tilde{\mathbf{\eta}}}\langle\!\langle\sigma\sigma^{ \prime}|\eta_{n}\tilde{\eta}_{n}\rangle\!\rangle\langle\!\langle \eta_{0}\tilde{\eta}_{0}|\rho_{S}(0)\rangle\!\rangle\prod_{k=0}^{n-1}\langle\!\langle\eta _{k+1}\tilde{\eta}_{k+1}|e^{\delta t\mathbb{L}_{S}}|\eta_{k}\tilde{\eta}_{k} \rangle\!\rangle e^{-\frac{g^{2}}{2}\int_{t,t^{\prime}}\tilde{\eta}(t)\tilde{ \eta}(t^{\prime})\,i\mathcal{G}^{K}(t,t^{\prime})}e^{-ig^{2}\int_{t,t^{\prime}}\tilde{ \eta}(t)\eta(t^{\prime})\mathcal{G}(t,t^{\prime})}\end{split} \tag{42}\]
where the overline in the first equality indicates the Gaussian average over both the noise \(\xi_{k}\) and the initial value \(\psi_{0}\), and the continuous time in the last line is a shorthand for the sum over discrete time. Notice that the term proportional to \(\eta_{k}^{2}\) cancels out against the contribution from the noise average. Moreover, we have defined the Green's functions
\[\begin{split}&\mathcal{G}(t,t^{\prime})=\frac{1}{2}G(t-t^{\prime})+c.c. =-\sin[\omega(t-t^{\prime})]e^{-\kappa(t-t^{\prime})}\Theta(t-t^{\prime})\\ & i\mathcal{G}^{K}(t,t^{\prime})=\kappa\int_{t^{\prime\prime}}G(t-t ^{\prime\prime})\tilde{G}(t^{\prime}-t^{\prime\prime})+c.c.+\frac{1}{2}G(t) \bar{G}(t^{\prime})+c.c.=\cos[\omega(t-t^{\prime})]e^{-\kappa|t-t^{\prime}|} \end{split} \tag{43}\]
These Green's functions correspond to the correlation and response functions of the first quadrature of the cavity mode. While \(i\mathcal{G}^{K}\) is purely real, we have included a factor of \(i\) in harmony with the Keldysh convention. The above expressions are consistent with the influence functional upon identifying \(g^{2}\mathcal{G}\to-L_{1}\) and \(ig^{2}\mathcal{G}^{K}\to L_{2}\); see Eq. (41). Notice that the factor of \(1/2\) in front of \(\mathcal{G}^{K}\) in the exponent of Eq. (42) is due to the symmetrization with respect to \(t,t^{\prime}\).
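The closed forms above can also be verified numerically. The sketch below (with arbitrary parameter values, and with \(G(t)=-ie^{-(i\omega+\kappa)t}\Theta(t)\), the retarded Green's function of \(G^{-1}=i\partial_{t}-\omega+i\kappa\) used later in Appendix C.2) evaluates the defining expression for \(i\mathcal{G}^{K}\) by quadrature and compares it with \(\cos[\omega(t-t^{\prime})]e^{-\kappa|t-t^{\prime}|}\):

```python
import numpy as np

omega, kappa = 1.3, 0.4            # arbitrary illustrative values
t, tp = 2.0, 0.7

def G(t):
    # Retarded Green's function of G^{-1} = i d/dt - omega + i kappa
    return -1j * np.exp(-(1j * omega + kappa) * t) * (t >= 0)

s = np.linspace(0.0, 60.0, 600_001)   # integration variable t''
ds = s[1] - s[0]

conv = np.sum(kappa * G(t - s) * np.conj(G(tp - s))) * ds
boundary = 0.5 * G(t) * np.conj(G(tp))
iGK = 2.0 * np.real(conv + boundary)  # the "+ c.c." terms double the real part

print(iGK, np.cos(omega * (t - tp)) * np.exp(-kappa * abs(t - tp)))
```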
### The case \(N>1\) and \(M=1\)
Next, we consider \(N\) spins coupled to one bosonic mode. We shall assume a factorized initial state (an extension thereof is straightforward), but consider a general initial state for the bosonic mode that is not necessarily Gaussian. We can derive a functional integral expression similar to Eq. (100), only with the matrix \(\mathbb{K}\) identified as
\[\mathbb{K}=\sum_{i=1}^{N}\Big{[}\mathbb{L}_{i}+i\frac{g_{i}}{\sqrt{N}}(\underline {\psi}+\underline{\bar{\psi}})\mathbb{S}_{i}-\frac{g_{i}}{2\sqrt{N}\kappa}( \xi+\bar{\xi})\mathbb{T}_{i}\Big{]}-\frac{1}{4N\kappa}\Big{(}\sum_{i}g_{i} \mathbb{T}_{i}\Big{)}^{2} \tag{101}\]
Inserting a complete basis as in the last subsection and performing a quantum-to-classical mapping, we find
\[\begin{split}\langle\vec{\sigma}|\rho_{S}(t)|\vec{\sigma}^{ \prime}\rangle=&\int d^{2}\psi_{0}\mathscr{W}_{0}(\psi_{0})\int \mathscr{D}[\xi]e^{-\sum_{k}\delta t\bar{\xi}_{k}\xi_{k}/\kappa}\sum_{\mathbf{\eta},\mathbf{ \tilde{\eta}}}\langle\!\langle\vec{\sigma},\vec{\sigma}^{\prime}|\prod_{i}| \eta_{i,n}\tilde{\eta}_{i,n}\rangle\!\rangle\langle\!\langle\eta_{i,0}\tilde{ \eta}_{i,0}|\rho_{i}(0)\rangle\!\rangle\times\\ &\times\Big{[}\prod_{i,k}\langle\!\langle\eta_{i,k+1}\tilde{\eta }_{i,k+1}|e^{\delta t\mathbb{L}_{S}}|\eta_{i,k}\tilde{\eta}_{i,k}\rangle\! \rangle\Big{]}e^{-i\delta t\sum_{i}\frac{g_{i}}{\sqrt{N}}\tilde{\eta}_{i,k}( \underline{\psi}_{k}+\underline{\bar{\psi}}_{k})+\delta t\sum_{i}\frac{g_{i }}{2\sqrt{N}\kappa}\eta_{i,k}(\xi_{k}+\bar{\xi}_{k})-\frac{\delta t}{4N\kappa} (\sum_{i}g_{i}\eta_{i,k})^{2}}\end{split} \tag{102}\]
where we have defined \(\vec{\sigma}=\{\sigma_{1},\sigma_{2},\cdots,\sigma_{N}\}\) and \(\mathbf{\eta}=\{\eta_{i,k}\}\), with \(\eta_{i,k}\) defined analogously to Eq. (100) with an additional index \(i\) for each spin. We can repeat the same steps as in the previous subsection to integrate over the noise \(\xi\), but this time we assume a general Wigner function \(\mathscr{W}_{0}(\psi_{0})\) and do not integrate over \(\psi_{0}\). More explicitly, we separate out the dependence of the classical field \(\underline{\psi}\) on the noise and the initial conditions as \(\underline{\psi}(t)=\psi[\xi](t)+iG(t)\psi_{0}\), where \(\psi[\xi](t)=\int_{t^{\prime}}G(t-t^{\prime})\xi(t^{\prime})\) is solely the contribution due to noise. Now integrating over noise \(\xi\), we obtain
\[\begin{split}\langle\vec{\sigma}|\rho_{S}(t)|\vec{\sigma}^{\prime }\rangle=&\int d^{2}\psi_{0}\mathscr{W}_{0}(\psi_{0})\sum_{\mathbf{ \eta},\mathbf{\tilde{\eta}}}\langle\!\langle\vec{\sigma},\vec{\sigma}^{\prime}| \prod_{i}\eta_{i,n}\tilde{\eta}_{i,n}\rangle\!\rangle\langle\!\langle\eta_{i, 0}\tilde{\eta}_{i,0}|\rho_{i}(0)\rangle\!\rangle\prod_{i,k}\langle\!\langle\eta _{i,k+1}\tilde{\eta}_{i,k+1}|e^{\delta t\mathbb{L}_{S}}|\eta_{i,k}\tilde{\eta}_ {i,k}\rangle\!\rangle\\ &\times e^{-i\sum_{i}\frac{g_{i}}{\sqrt{N}}\int_{t}\tilde{\eta}_{ i}(t)(iG(t)\psi_{0}+c.c.)-\sum_{ij}\frac{g_{i}g_{j}}{2N}\int_{t,t^{\prime}}\tilde{ \eta}_{i}(t)\tilde{\eta}_{j}(t^{\prime})i\Delta\mathcal{G}^{K}(t,t^{\prime})- i\sum_{ij}\frac{g_{i}g_{j}}{N}\int_{t,t^{\prime}}\tilde{\eta}_{i}(t)\eta_{j}(t^{ \prime})\mathcal{G}(t,t^{\prime})}\end{split} \tag{103}\]
where we have used the continuous time as a shorthand for discrete time, and defined
\[i\Delta\mathcal{G}^{K}(t,t^{\prime})=\overline{\psi[\xi](t)\bar{\psi}[\xi](t^{ \prime})}+c.c.=i\mathcal{G}^{K}(t,t^{\prime})-[\frac{1}{2}G(t)\bar{G}(t^{ \prime})+c.c.] \tag{104}\]
where the last equality follows by subtracting the contribution due to the initial state. We also remark that the all-to-all coupling (albeit with different coefficients) in Eq. (103) is simply because the same bosonic mode is coupled to all the spins.
Next, we provide an alternative representation that yields the same functional integral as Eq. (103), but it has the advantage that different spins become decoupled. We first introduce the independent noise variables \(\mathbf{\xi}=\{\xi_{i,k}\}\):
\[\overline{\xi_{i,k}\tilde{\xi}_{j,k^{\prime}}}=\frac{\kappa}{\delta t}\delta_{ kk^{\prime}}\delta_{ij} \tag{105}\]
We then separate out the initial conditions as \(\underline{\psi}(t)=\psi[\mathbf{\xi}](t)+iG(t)\psi_{0}\) where the term \(\psi[\mathbf{\xi}]\), still to be defined, is linearly dependent on the new noise variables. We are then tasked to define \(\psi[\mathbf{\xi}]\) such that \((\overline{\psi[\mathbf{\xi}](t)\xi_{i}(t^{\prime})}+c.c.)/2\kappa=\mathcal{G}(t-t ^{\prime})\) while \(\overline{\psi[\mathbf{\xi}](t)\bar{\psi}[\mathbf{\xi}](t^{\prime})}+c.c.=i\Delta \mathcal{G}^{K}(t,t^{\prime})\); if possible, this would allow us to couple the spin \(i\) only to \(\xi_{i}\) and still recover Eq. (103). Indeed, this can be achieved by defining (in continuous time)
\[G^{-1}\psi[\mathbf{\xi}](t)=\frac{1}{\sqrt{N}}\sum_{j}\xi_{j}(t)\qquad\text{or} \qquad\psi[\mathbf{\xi}](t)=\frac{1}{\sqrt{N}}\sum_{j}\int_{t^{\prime}}G(t-t^{ \prime})\xi_{j}(t^{\prime}) \tag{106}\]
where \(G^{-1}=i\partial_{t}-\omega+i\kappa\) is the inverse of the Green's function \(G\). We can then construct the functional integral in discrete time as
\[\begin{split}\langle\vec{\sigma}|\rho_{S}(t)|\vec{\sigma}^{\prime }\rangle=&\int d^{2}\psi_{0}\mathscr{W}_{0}(\psi_{0})\int \mathscr{D}[\mathbf{\xi}]e^{-\sum_{i,k}\delta t\bar{\xi}_{i,k}\xi_{i,k}/\kappa}\sum_{\mathbf{ \eta},\mathbf{\tilde{\eta}}}\langle\!\langle\vec{\sigma}\vec{\sigma}^{\prime}|\prod_{i }\eta_{i,n}\tilde{\eta}_{i,n}\rangle\!\rangle\langle\!\langle\eta_{i,0}\tilde{ \eta}_{i,0}|\rho_{i}(0)\rangle\!\rangle\times\\ &\times\prod_{i,k}\langle\!\langle\eta_{i,k+1}\tilde{\eta}_{i,k+1}|e^ {\delta t\mathbb{L}_{S}}|\eta_{i,k}\tilde{\eta}_{i,k}\rangle\!\rangle e^{-i \delta t\frac{g_{i}}{\sqrt{N}}\tilde{\eta}_{i,k}(\underline{\psi}_{k}+ \underline{\bar{\psi}}_{k})+\frac{g_{i}}{2\kappa}\delta t\eta_{i,k}(\xi_{i,k }+\bar{\xi}_{i,k})-\frac{g_{i}^{2}}{4\kappa}\delta t\eta_{i,k}^{2}}\end{split} \tag{107}\]
where we have further assumed that the spins are initially factorized. One can explicitly check that the integral over noise \(\xi_{i,k}\) exactly yields Eq. (103). But in this process, we have substituted the square of the sum \(\big{(}\sum_{j}g_{j}\eta_{j}(t)\big{)}^{2}\) with the sum of squares \(\sum_{j}g_{j}^{2}\eta_{j}(t)^{2}\) where different spins are uncoupled. Indeed, undoing the insertion of the identity matrices, we obtain the stochastic quantum evolution with the dynamics generator given by Eq. (37). Therefore, we can evolve each spin independently before averaging their product over noise.
### The case \(N=1\) and \(M>1\)
Here, we consider a spin coupled to \(M\) bosons. We first consider a factorized state and assume that bosons are initially in their ground state; we shall later relax the latter assumption. Our starting point is Eq. (100) upon substituting \(\xi\to\Xi\) and \(\underline{\psi}\to\underline{\Psi}\) whose correlations are given by Eq. (55). Following the same steps as Appendix C.1, we find the expression in Eq. (101) for the Feynman-Vernon influence functional, only with a different identification of the Green's functions as
\[\begin{split}&\mathcal{G}(t,t^{\prime})=-\sum_{\alpha} g_{\alpha}^{2}\sin[ \omega_{\alpha}(t-t^{\prime})]e^{-\kappa_{\alpha}(t-t^{\prime})}\Theta(t-t^{ \prime})\\ & i\mathcal{G}^{K}(t,t^{\prime})=\sum_{\alpha} g_{\alpha}^{2}\cos[\omega_{ \alpha}(t-t^{\prime})]e^{-\kappa_{\alpha}|t-t^{\prime}|}\end{split} \tag{102}\]
In the absence of Markovian dissipation, the standard Feynman-Vernon influence functional is given by
\[F[\eta,\tilde{\eta}]=\exp\left\{-\frac{1}{\pi}\int_{0}^{\infty}dt\int_{0}^{t} dt^{\prime}\left[-iL_{1}(t-t^{\prime})\eta(t)\tilde{\eta}(t^{\prime})+L_{2}(t-t^{ \prime})\tilde{\eta}(t)\tilde{\eta}(t^{\prime})\right]\right\} \tag{103}\]
where the kernels \(L_{1,2}(t)\) are defined in terms of the bath spectral function \(J(\omega)\) as (at zero temperature)
\[L_{1}(t)=\int_{0}^{\infty}d\omega J(\omega)\sin(\omega t),\qquad L_{2}(t)=\int _{0}^{\infty}d\omega J(\omega)\cos(\omega t) \tag{104}\]
Setting \(\kappa\to 0\), we recover the above expressions from Eq. (102) by identifying
\[\sum_{\alpha}\to\int\frac{d\omega}{\pi},\qquad g_{\alpha}^{2}\to J(\omega) \tag{105}\]
Next, we consider a general initial state for bosons. We derive Eq. (62) by combining the steps in this and the previous subsections. As a first step, we define \(\Psi[\Xi,\mathbf{X}]\) from Eq. (62) by excluding the dependence on \(\psi_{\alpha 0}\), that is, \(\underline{\Psi}(t)=\Psi[\Xi,\mathbf{X}](t)+i\sum_{\alpha}G_{\alpha}(t)\psi_{ \alpha 0}\). With the correlations defined in Eq. (62), one can show that
\[\begin{split}\overline{\Psi[\Xi,\mathbf{X}](t)\bar{\Psi}[\Xi, \mathbf{X}](t^{\prime})}&=\sum_{\alpha}g_{\alpha}^{2}\kappa_{ \alpha}\int_{t^{\prime\prime}}G_{\alpha}(t-t^{\prime\prime})\bar{G}_{\alpha}(t ^{\prime}-t^{\prime\prime})=\sum_{\alpha}g_{\alpha}^{2}\,\overline{\psi_{\alpha }[\xi_{\alpha}](t)\bar{\psi}_{\alpha}[\xi_{\alpha}](t^{\prime})}\\ \overline{\Psi[\Xi,\mathbf{X}](t)\bar{\Xi}(t^{\prime})}& =\sum_{\alpha}\frac{g_{\alpha}^{2}}{2}G_{\alpha}(t-t^{\prime})= \sum_{\alpha}g_{\alpha}^{2}\,\overline{\psi_{\alpha}[\xi_{\alpha}](t)\bar{\xi}_ {\alpha}(t^{\prime})}\end{split} \tag{106}\]
where \(\psi_{\alpha}[\xi_{\alpha}](t)=\int_{t^{\prime}}G_{\alpha}(t-t^{\prime})\xi_{ \alpha}(t^{\prime})\). One can then follow the same steps as in Appendix C.2 to show that explicitly integrating over the noise \(\Xi\) yields the same result as the original functional integral over \(M\) different noise variables \(\{\xi_{\alpha}(t)\}\). In both cases, the initial state is represented by the integral over \(\{\psi_{\alpha 0}\}\) weighted by the Wigner function \(\mathscr{W}_{0}(\{\psi_{\alpha 0}\})\).
## Appendix D \(\check{C}\) matrix
In this section, we first prove that \(\check{C}(t,t^{\prime})\) defined in Eq. (58), viewed as a matrix in time, is positive. We then provide a simple scheme for diagonalizing this matrix.
To show the positivity, we first write \(\check{C}\) as
\[\check{C}(t,t^{\prime})=C_{\text{st}}(t-t^{\prime})+D(t,t^{\prime}) \tag{107}\]
where we have defined
\[\begin{split} C_{\text{st}}(t-t^{\prime})&=C(t-t^{ \prime})-\frac{1}{\gamma}\int_{-\infty}^{\infty}ds\chi(t-t^{\prime}-s)\bar{ \chi}(s)\\ D(t,t^{\prime})&=\frac{1}{\gamma}\int_{-\infty}^{0} dt^{\prime\prime}\chi(t-t^{\prime\prime})\bar{\chi}(t^{\prime}-t^{\prime \prime})\end{split} \tag{108}\]
Notice the bounds of the integrals in these expressions. The subscript st stands for the stationary state, since \(\check{C}(t,t^{\prime})\to C_{\text{st}}(t-t^{\prime})\) at long times \(t,t^{\prime}\to\infty\). It is easy to see that \(D\) is manifestly positive: \(\langle f|D|f\rangle\equiv\int_{t,t^{\prime}}\bar{f}(t)D(t,t^{\prime})f(t^{\prime})=\)
\((1/\gamma)\int dt^{\prime\prime}\left|\int_{t}f(t)\chi(t-t^{\prime\prime})\right|^{2}\geq 0\) for an arbitrary function \(f(t)\). To prove \(\check{C}>0\), it is sufficient to show that \(C_{\rm st}\geq 0\). Since the latter is time translation invariant, we can consider its Fourier transform
\[C_{\rm st}(\omega)=\sum_{\alpha}g_{\alpha}^{2}\kappa_{\alpha}G_{\alpha}( \omega)\overline{G_{\alpha}(\omega)}-\frac{1}{\sum g_{\alpha}^{2}/\kappa_{ \alpha}}\sum_{\alpha}g_{\alpha}^{2}G_{\alpha}(\omega)\sum_{\beta}g_{\beta}^{2 }\overline{G_{\beta}(\omega)} \tag{103}\]
This quantity is positive because
\[\begin{split}\gamma C_{\rm st}(\omega)&=\frac{1}{2} \sum_{\alpha,\beta}g_{\alpha}^{2}g_{\beta}^{2}\left[\frac{\kappa_{\alpha}}{ \kappa_{\beta}}|G_{\alpha}(\omega)|^{2}+\frac{\kappa_{\beta}}{\kappa_{\alpha }}|G_{\beta}(\omega)|^{2}-G_{\alpha}(\omega)\overline{G_{\beta}(\omega)}- \overline{G_{\alpha}(\omega)}G_{\beta}(\omega)\right]\\ &=\frac{1}{2}\sum_{\alpha,\beta}g_{\alpha}^{2}g_{\beta}^{2} \left|\sqrt{\frac{\kappa_{\alpha}}{\kappa_{\beta}}}G_{\alpha}(\omega)-\sqrt{ \frac{\kappa_{\beta}}{\kappa_{\alpha}}}G_{\beta}(\omega)\right|^{2}\geq 0 \end{split} \tag{104}\]
We therefore conclude that \(\check{C}(t,t^{\prime})\) is positive as a matrix.
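This conclusion is easy to confirm numerically for randomly drawn mode parameters; a minimal sketch (assuming \(G_{\alpha}(\omega)=1/(\omega-\omega_{\alpha}+i\kappa_{\alpha})\) and \(\gamma=\sum_{\alpha}g_{\alpha}^{2}/\kappa_{\alpha}\), consistent with the structure of Eq. (103)):

```python
import numpy as np

rng = np.random.default_rng(8)
M = 5
g2 = rng.uniform(0.1, 1.0, M)          # g_alpha^2
kap = rng.uniform(0.1, 2.0, M)         # kappa_alpha
om = rng.uniform(-3.0, 3.0, M)         # omega_alpha

w = np.linspace(-10.0, 10.0, 2001)
G = 1.0 / (w[:, None] - om + 1j * kap)           # G_alpha(omega), shape (len(w), M)

gamma = np.sum(g2 / kap)
Cst = (g2 * kap * np.abs(G)**2).sum(axis=1) \
      - np.abs((g2 * G).sum(axis=1))**2 / gamma

print(Cst.min() >= -1e-12)             # True: C_st(omega) >= 0 for any draw
```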
In fact, a similar argument leads to the stronger result that \(\Delta C=\check{C}(t,t^{\prime})-\frac{1}{2}\sum g_{\alpha}^{2}G_{\alpha}(t) \bar{G}_{\alpha}(t^{\prime})\) is positive, that is,
\[\Delta C(t,t^{\prime})=\sum_{\alpha}g_{\alpha}^{2}\kappa_{\alpha}\int_{t^{ \prime\prime}}G_{\alpha}(t-t^{\prime\prime})\bar{G}_{\alpha}(t^{\prime}-t^{ \prime\prime})-\frac{1}{\gamma}\int_{t^{\prime\prime}}\chi(t-t^{\prime\prime}) \bar{\chi}(t^{\prime}-t^{\prime\prime})\geq 0 \tag{105}\]
in the sense of a matrix. The proof follows by sandwiching the above equation between \(\langle f|\) and \(|f\rangle\) for an arbitrary function \(f(t)\) and following a similar argument as above. Note that \(\Delta C\) appears in Eq. (62) as the colored noise correlator upon excluding the contribution from the initial state.
Next, we diagonalize \(\check{C}(t,t^{\prime})\) viewed as a matrix in the time domain. It is more convenient to work in the Fourier basis. We first Fourier transform the function \(C_{\rm st}(t-t^{\prime})\). Defining the maximum time \(t_{max}\), the argument of \(C_{\rm st}\) is in the range \([-t_{max},t_{max}]\); see also [74; 45]. Expanding in Fourier series, we have
\[C_{\rm st}(t-t^{\prime})=\sum_{n=-\infty}^{\infty}c_{n}e^{i\pi n(t-t^{\prime}) /t_{max}} \tag{106}\]
where the coefficients \(c_{n}\) are obtained as
\[c_{n}=\frac{1}{2t_{max}}\int_{-t_{max}}^{t_{max}}dse^{-i\pi ns/t_{max}}C_{\rm st }(s) \tag{107}\]
Next, we consider the function \(D(t,t^{\prime})\); the time variables \(t,t^{\prime}\) are defined in the range \([0,t_{max}]\). However, we extend this function to the domain \([-t_{max},t_{max}]\) in order to expand it in the same harmonic basis used for \(C_{\rm st}(t-t^{\prime})\). There are different ways to define the extension. A first choice is to set \(\check{C}(t,t^{\prime})=0\) if \(t<0\) or \(t^{\prime}<0\), although this may generate pronounced oscillations when the sum over harmonics is truncated at a finite order \(n_{max}\). Other choices are possible in which \(\check{C}(t,t^{\prime})\) is nonzero for \(t<0\) or \(t^{\prime}<0\). We expand \(D(t,t^{\prime})\) in terms of these harmonics as
\[D(t,t^{\prime})=\sum_{n,n^{\prime}=-\infty}^{\infty}d_{nn^{\prime}}e^{i\pi nt /t_{max}-i\pi n^{\prime}t^{\prime}/t_{max}} \tag{108}\]
where
\[d_{nn^{\prime}}=\iint_{-t_{max}}^{t_{max}}\frac{dsds^{\prime}}{(2t_{max})^{2}} e^{-i\pi(ns-n^{\prime}s^{\prime})/t_{max}}D(s,s^{\prime}) \tag{109}\]
The full matrix can be then written as
\[\check{C}(t,t^{\prime})=\sum_{n,n^{\prime}}e^{i\pi nt/t_{max}-i\pi n^{\prime}t ^{\prime}/t_{max}}(c_{n}\delta_{n,n^{\prime}}+d_{n,n^{\prime}}) \tag{110}\]
We can now diagonalize the matrix \([\check{\mathbf{C}}]_{n,n^{\prime}}=c_{n}\delta_{n,n^{\prime}}+d_{nn^{\prime}}\) as
\[\check{\mathbf{C}}=\mathbf{U}^{-1}\check{\mathbf{C}}_{\text{diag}}\mathbf{U} \tag{111}\]
where \(\check{\mathbf{C}}_{\text{diag}}=\text{diag}\{\check{c}_{a}\}\) is a diagonal matrix with all the eigenvalues positive (\(\check{c}_{a}\geq 0\)), and \(\mathbf{U}\) is a unitary matrix. Finally, we can write
\[\check{C}(t,t^{\prime})=\sum_{a}\check{c}_{a}\bar{\theta}_{a}(t)\theta_{a}(t^{ \prime}) \tag{122}\]
where the new functions \(\theta_{a}(t)\) are now defined as
\[\theta_{a}(t)=\sum_{n}U_{an}e^{-i\pi nt/t_{max}} \tag{123}\]
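As an illustration of this diagonalization scheme, the following sketch assembles the matrix \(c_{n}\delta_{n,n^{\prime}}+d_{nn^{\prime}}\) for toy kernels (the functional forms of \(C_{\rm st}\) and \(D\) below are assumptions chosen only for illustration), diagonalizes it, and constructs the functions \(\theta_{a}(t)\); depending on the eigensolver, the conventions for \(\mathbf{U}\) may differ from Eq. (111) by a transpose or conjugation:

```python
import numpy as np

# Toy kernels (assumed functional forms, for illustration only)
omega, kappa, tmax, nmax = 1.0, 0.3, 10.0, 40
Cst = lambda s: np.cos(omega * s) * np.exp(-kappa * np.abs(s))
Dker = lambda t, tp: np.exp(-1j * omega * (t - tp) - kappa * (t + tp)) / (2 * kappa)

n = np.arange(-nmax, nmax + 1)

# c_n: Fourier-series coefficients of the stationary part
s = np.linspace(-tmax, tmax, 8001)
ds = s[1] - s[0]
cn = np.exp(-1j * np.pi * np.outer(n, s) / tmax) @ Cst(s) * ds / (2 * tmax)

# d_{nn'}: double Fourier coefficients of D(t,t'), zero-extended outside [0, tmax]
t = np.linspace(0.0, tmax, 801)
dt = t[1] - t[0]
phase = np.exp(-1j * np.pi * np.outer(n, t) / tmax)
dnn = phase @ Dker(t[:, None], t[None, :]) @ phase.conj().T * dt**2 / (2 * tmax)**2

Chat = np.diag(cn) + dnn
Chat = 0.5 * (Chat + Chat.conj().T)        # Hermitize against quadrature error
evals, U = np.linalg.eigh(Chat)            # eigenvalues >= 0 up to numerical noise
theta = U.conj().T @ np.exp(-1j * np.pi * np.outer(n, t) / tmax)  # rows ~ theta_a(t)
```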
|
2310.03034 | Confined modes of single-particle trajectories induced by stochastic
resetting | Random trajectories of single particles in living cells contain information
about the interaction between particles, as well as, with the cellular
environment. However, precise consideration of the underlying stochastic
properties, beyond normal diffusion, remains a challenge as applied to each
particle trajectory separately. In this paper, we show how positions of
confined particles in living cells can obey not only the Laplace distribution,
but the Linnik one. This feature is detected in experimental data for the
motion of G proteins and coupled receptors in cells, and its origin is
explained in terms of stochastic resetting. This resetting process generates
power-law waiting times, giving rise to the Linnik statistics in confined
motion, and also includes exponentially distributed times as a limit case
leading to the Laplace one. The stochastic process, which is affected by the
resetting, can be Brownian motion commonly found in cells. Other possible
models producing similar effects are discussed. | Aleksander A. Stanislavsky, Aleksander Weron | 2023-09-26T12:15:47Z | http://arxiv.org/abs/2310.03034v3 | # Confined modes of single-particle trajectories induced by stochastic resetting
###### Abstract
Random trajectories of single particles in living cells contain information about the interaction between particles, as well as, with the cellular environment. However, precise consideration of the underlying stochastic properties, beyond normal diffusion, remains a challenge as applied to each particle trajectory separately. In this paper, we show how positions of confined particles in living cells can obey not only the Laplace distribution, but the Linnik one. This feature is detected in experimental data for the motion of G proteins and coupled receptors in cells, and its origin is explained in terms of stochastic resetting. This resetting process generates power-law waiting times, giving rise to the Linnik statistics in confined motion, and also includes exponentially distributed times as a limit case leading to the Laplace one. The stochastic process, which is affected by the resetting, can be Brownian motion commonly found in cells. Other possible models producing similar effects are discussed.
## I Introduction
The combination of live-cell single-molecule imaging with single-particle tracking (SPT) methods has enabled a revolutionary breakthrough in the quantitative analysis of dynamic processes in living cells [1; 2; 3; 4]. This approach visualizes the movement of individual "particles", the latter being single molecules, macromolecular complexes, as well as viruses or organelles, in physiological conditions [5]. As such, it has been crucial for studying the mechanisms of intracellular transport, cell membrane dynamics, and viral infection [6; 7; 8]. Stochastic processes associated with the movement of particles are directly affected by interactions with other cellular structures or components [9]. Therefore, single-particle dynamics often deviates from Brownian motion and exhibits heterogeneous behavior characterized by changes in diffusivity, transient confinement, immobilization, or anomalous diffusion [10; 11; 12]. The development of theoretical frameworks for the robust analysis of random trajectories in biological scenarios is thus of fundamental importance for understanding the molecular mechanisms of interaction [13].
As an example, detection of transient confinement with high precision requires knowledge of the position or displacement statistics [14]. Restrained trajectories have been shown to obey Gaussian statistics, as well as Laplace statistics (see, for example, [15; 16]). The Gaussian confinement is often described by the well-known Ornstein-Uhlenbeck (OU) model [17]. In this case, the stationary probability distribution function (PDF) is Gaussian, and the random trajectories follow Gaussian statistics with an obvious physical interpretation. Unfortunately, the OU model, based on the ordinary Langevin equation with a harmonic potential, does not provide a description of confined trajectories with the Laplace statistics.
An alternative scenario, leading to confinement of single-particle trajectories with the Laplace statistics, can be based on the stochastic resetting methodology. The resetting of a stochastic process describes the evolution of a stochastic system that is returned repeatedly to a steady (or equilibrium) state [18], as occurs in target search with home returns [21; 22; 23]. If the resetting process is Poissonian and independent of the random motion that undergoes resetting, then the latter has a steady PDF. For trajectories undergoing Brownian motion with Poissonian resetting, the steady PDF has the Laplace form. It is noteworthy that a subordinated Brownian process, leading to subdiffusion, under Poissonian resetting also produces a stationary state with the Laplace distribution, but with another scale parameter [24]. In contrast, the engagement of Lévy motion with stochastic resetting produces confinement with the Linnik statistics, of which the Laplace one is a particular case (see [25]). Experimental works have provided examples of the occurrence of confinement with Laplace statistics [14; 15; 16], but to our knowledge no examples of confined trajectories with Linnik PDFs have been reported.
In this article, considering the frequent occurrence of stochastic resetting in biological systems, we propose an analysis pipeline to robustly test the statistics of confined trajectories and apply it to several experimental datasets. Our study shows that the Linnik statistics does occur in single-particle trajectories of both G proteins and coupled receptors in living cells. After providing a brief mathematical background of stochastic resetting, we describe the analytical framework and its application to the experiments, which led to the quantification of confined trajectories and of their segments following the Laplace and Linnik statistics. We discuss the results and further propose a possible explanation for their occurrence. In our consideration, transient behavior means that the set of trajectories contains segments in different diffusive modes:
free, confined, and others. Each segment obeys one of the modes, although trajectories that are not segmented can also be found. The case when either segmented or non-segmented trajectories have different statistics indicates inhomogeneity of the medium, and the ratio of mode contributions determines how heterogeneous the medium is. If all the trajectories follow Brownian motion, the medium is homogeneous.
## II Models of Confined Modes
There are at least three ways to get a stationary PDF for stochastic processes, namely
* subordination of random processes;
* stochastic differential equations (SDEs) with an attractive potential;
* stochastic resetting.
Each of them may be considered as a mode leading to confinement. Sometimes, they have similarities in limit cases, but, generally, the limits are different. Let us consider the above approaches and their features below.
### Model 1: subordination of random processes
Subordination consists in the time randomization of a stochastic process with the help of another, independent random process [26]. Confined trajectories obeying the Laplace PDF can be produced by a specific subordinator, closely related to the one providing tempered subdiffusion. The conjugate property of Bernstein functions connects tempered subdiffusion with confinement [24]. The interpretation of anomalous diffusion tending to confinement is that diffusive motion, accompanied by multiple-trapping events with an infinite mean sojourn time, is transformed into pure jumps restricted to a confined environment. If the Laplace exponent of a tempered \(\alpha\)-stable process is \((s+\chi)^{\alpha}-\chi^{\alpha}\), where \(\chi\) is a positive constant and \(0<\alpha<1\), then its conjugate partner from the set of Bernstein functions takes the form \(s/((s+\chi)^{\alpha}-\chi^{\alpha})\) [27]. The PDF of the operational time is easy to present via its Laplace transform with respect to \(t\), i.e.,
\[\tilde{f}(\tau,s)=\frac{1}{(s+\chi)^{\alpha}-\chi^{\alpha}}\,e^{-\tau s/((s+ \chi)^{\alpha}-\chi^{\alpha})}\,. \tag{1}\]
The propagator of such a subordinated process can be written in the integral form
\[p(x,t)=\int_{0}^{\infty}h(x,\tau)\,f(\tau,t)\,d\tau\,, \tag{2}\]
where \(h(x,\tau)\) is the PDF of a parent process, whereas \(f(\tau,t)\) is the PDF of a directing one. If the ordinary Brownian motion is a parent process, and the directing process is described by Eq. (1), it is not difficult to find a stationary distribution in the Laplace form explicitly
\[p_{B}(x,\infty)=\frac{1}{\sqrt{2D\alpha\chi^{\alpha-1}}}\,\exp\left(-\frac{2| x|}{\sqrt{2D\alpha\chi^{\alpha-1}}}\right)\,, \tag{3}\]
where \(D\) is the diffusivity constant for the Brownian motion. Any subordinated non-Brownian motion, in which the subordinator is defined by the Laplace exponent conjugate to a tempered \(\alpha\)-stable process, has a confined probability distribution itself [14; 16]. Their forms are simpler to present through the Fourier transform, giving characteristic functions. This procedure covers a wide class of geometrically infinitely divisible distributions as a confined case of the non-Brownian motion subordinated by a special subordinator responsible for the confinement.
### Model 2: coupled SDEs with a potential
The coupled Langevin equations for position \(x_{t}\) and diffusivity \(D_{t}\) open new possibilities in the description of confinement. The system of SDEs reads
\[\left\{\begin{array}{l}dx_{t}=-\frac{1}{\tau_{x}}(x_{t}-\bar{x})dt+\sqrt{D_{ t}}dW_{t}^{(1)}\\ dD_{t}=-\frac{1}{\tau_{D}}(D_{t}-\bar{D})dt+\sigma\sqrt{D_{t}}dW_{t}^{(2)} \end{array}\right.\,, \tag{4}\]
where \(\tau_{x}\) and \(\tau_{D}\) are the correlation times for \(x\) and \(D\), \(\bar{x}\) and \(\bar{D}\) denote the average position and diffusivity, whereas \(\sigma\) is the "speed" of fluctuations for the diffusion coefficient [28]. Here, \(W_{t}^{(1)}\) and \(W_{t}^{(2)}\) are independent Wiener processes. Note that the second equation of the system (4) is autonomous, whereas the first is "driven" by it. Moreover, the second equation corresponds to a Cox-Ingersoll-Ross (CIR) process [29]. The stationary solution of the CIR SDE is presented, for example, in [30]; this PDF is expressed in terms of the gamma distribution. Next, the stationary state of the first SDE in the system (4), at fixed diffusivity, is the Gaussian PDF of the ordinary OU model. To find the stationary PDF \(F_{\lambda}(x)\) of the coupled equations (4), we integrate the Gaussian PDF against the gamma PDF over \(D\). Consequently, the stationary PDF takes the following form:
\[F_{\lambda}(x) = \frac{2\eta^{-\lambda-1/2}}{\Gamma(\lambda)\sqrt{\pi}}|x-\bar{x} |^{\lambda-1/2} \tag{5}\] \[\times K_{\lambda-1/2}\left(\frac{2|x-\bar{x}|}{\eta}\right)\,,\]
where \(K_{\nu}(z)\) is the modified Bessel function [31] (here \(\nu=\lambda-1/2\)), \(\eta=\sqrt{\tau_{x}\tau_{D}\sigma^{2}}\), and \(\lambda=\frac{2\bar{D}}{\tau_{D}\sigma^{2}}\). The index \(\lambda-1/2\) has the meaning of the shape parameter [32]. It is also interesting to mention the characteristic function for this PDF. It reads
\[\int_{-\infty}^{\infty}F_{\lambda}(x)\,e^{iqx}\,dx=\frac{e^{iq\bar{x}}}{ \left(1+q^{2}\eta^{2}/4\right)^{\lambda}}\,, \tag{6}\]
showing similarity with both the Laplace and the Linnik distributions. Note, if \(\lambda=1\), then Eq.(5) takes the form of the Laplace PDF
\[F_{1}(x)=\frac{1}{\eta}\exp\left(-\frac{2|x-\bar{x}|}{\eta}\right) \tag{7}\]
where \(\bar{x}\) is the location parameter and \(\eta>0\) is the scale parameter (sometimes referred to as the diversity).
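A short Euler-Maruyama simulation of the system (4) makes the \(\lambda=1\) case concrete. In the sketch below (parameter values are an assumption chosen so that \(\lambda=2\bar{D}/(\tau_{D}\sigma^{2})=1\); the \(\max(D,0)\) guard handles the CIR boundary at \(D=0\)), the empirical first absolute moment is compared with \(\mathbb{E}|x-\bar{x}|=\eta/2\), the value implied by the Laplace PDF (7):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters giving lambda = 2*Dbar/(tau_D*sigma^2) = 1 (the Laplace case)
tau_x, tau_D, xbar, Dbar, sigma = 1.0, 1.0, 0.0, 0.5, 1.0
dt, n_steps = 1e-3, 500_000

x, D = xbar, Dbar
xs = np.empty(n_steps)
for k in range(n_steps):
    dW1, dW2 = np.sqrt(dt) * rng.standard_normal(2)
    x += -(x - xbar) / tau_x * dt + np.sqrt(max(D, 0.0)) * dW1
    D += -(D - Dbar) / tau_D * dt + sigma * np.sqrt(max(D, 0.0)) * dW2  # CIR step
    xs[k] = x

eta = np.sqrt(tau_x * tau_D * sigma**2)
print(np.mean(np.abs(xs - xbar)), eta / 2)   # Laplace law: E|x - xbar| = eta/2
```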
### Model 3: stochastic resetting
Brownian motion under Poissonian resetting can be described by a straightforward mathematical framework [18; 19; 20]. The Green function in the absence of resetting (in other terms, the propagator) is given by
\[G_{1}(x,t|x_{0})=\frac{1}{\sqrt{4\pi Dt}}\exp\left[-\frac{(x-x_{0})^{2}}{4Dt} \right]\,, \tag{8}\]
where \(x_{0}\) is the initial position, and \(D\) is the diffusion constant. The initial condition takes the Dirac delta-function, namely \(G_{1}(x,0|x_{0})=\delta(x-x_{0})\). Due to the Poissonian resetting with the rate \(r\), the PDF \(p_{1}(x,t|x_{0})\) satisfies a renewal equation [21]. Its form is expressed by a sum of two terms, described as
\[p_{1}(x,t|x_{0}) = e^{-rt}G_{1}(x,t|x_{0}) \tag{9}\] \[+ r\int_{0}^{t}e^{-r\tau}\,G_{1}(x,\tau|X_{r})\,d\tau\,,\]
where \(X_{r}\) is the position to which the particle returns after resetting. It should be noticed that the renewal equation (9) holds for more general stochastic processes, with propagators having a more general form than \(G_{1}(x,t|x_{0})\) [21; 24]. In the stationary state with \(t\to\infty\), the first term of Eq. (9) may be neglected, and the second term can be calculated exactly [18], providing the typical Laplace PDF [25]
\[p_{1}(x,\infty|x_{0})=\frac{c_{1}}{2}e^{-c_{1}|x-X_{r}|}\,, \tag{10}\]
where the scale parameter \(c_{1}=\sqrt{r/D}\) depends on the rate \(r\). Hence, particles undergoing Brownian diffusion with resetting behave as if performing confined motion with the Laplace statistics. Poissonian resetting also yields a similar behavior for subdiffusion [24], i.e., random walks characterized by a nonlinear mean squared displacement \(t^{\alpha}\) with \(\alpha\in(0,1)\). The only difference in this case is that the scale parameter is given by \(c_{\alpha}=\sqrt{r^{\alpha}/D}\). A similar result can also be obtained in a more general case. In fact, for a subordinator with an inverse infinitely-divisible distribution, the subordinated Brownian motion under Poissonian resetting tends to a stationary PDF of the Laplace form with the scale parameter \(c_{\Psi}=\sqrt{\Psi(r)/D}\), where the Laplace exponent \(\Psi(r)\) is expressed in terms of Bernstein functions [33]. Therefore, Poissonian resetting can force many (but not all) nonstationary stochastic processes to a steady state with the Laplace PDF, and this general capability may explain why this distribution is so often detected in confined trajectories of single particles.
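This stationary law is easy to reproduce in simulation. A minimal sketch (all parameters assumed): an ensemble of Brownian walkers is propagated, each is reset to \(X_{r}\) with probability \(r\delta t\) per step, and the empirical mean \(|x-X_{r}|\) is compared with its Laplace value \(1/c_{1}\):

```python
import numpy as np

rng = np.random.default_rng(3)
D, r, Xr = 1.0, 0.5, 0.0                 # assumed illustrative parameters
dt, n_steps, n_paths = 1e-2, 2_000, 20_000

x = np.full(n_paths, Xr)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_paths)
    reset = rng.random(n_paths) < r * dt  # Poissonian resetting with rate r
    x[reset] = Xr

c1 = np.sqrt(r / D)
print(np.mean(np.abs(x - Xr)), 1 / c1)    # Laplace law (10): mean |x - Xr| = 1/c1
```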
If, instead of Brownian motion, we consider the \(\beta\)-stable Lévy motion with \(\beta\in(0,\,2)\) as a parent process [34], then the Poissonian resetting of such random motion leads to a stationary characteristic function [24]
\[\hat{p}(k,\infty|X_{r})=\frac{e^{ikX_{r}}}{1+D^{*}|k|^{\beta}/r}\,, \tag{11}\]
corresponding to the symmetric Linnik PDF [35]. In Eq. (11) the term \(e^{ikX_{r}}\) shows that the PDF maximum is located at \(X_{r}\), and \(D^{*}\) is constant. When \(\beta=2\), the Linnik density reduces to the Laplace case. Both Laplace and Linnik PDFs describe confinement with jumps (unlike the Brownian confinement with continuous trajectories) [24], but the difference between them is that the Linnik PDF has a heavier tail than the Laplace one [25]. Therefore, the Linnik confinement is characterized by longer jumps.
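Although the Linnik density has no elementary closed form, samples are easy to draw from its well-known geometric-stable mixture representation \(X=E^{1/\beta}S\), where \(E\) is a standard exponential variable and \(S\) is symmetric \(\beta\)-stable; the characteristic function of \(X\) is then exactly \(1/(1+|k|^{\beta})\). A sketch (the value of \(\beta\) and the grid of \(k\) values are illustrative):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(7)
beta, n = 1.2, 100_000

# Geometric-stable mixture: X = E^{1/beta} * S, E ~ Exp(1), S symmetric beta-stable
S = levy_stable.rvs(alpha=beta, beta=0.0, size=n, random_state=rng)
E = rng.exponential(size=n)
X = E ** (1.0 / beta) * S

for k in (0.5, 1.0, 2.0):                 # empirical vs Linnik characteristic function
    print(k, np.mean(np.cos(k * X)), 1.0 / (1.0 + abs(k) ** beta))
```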
### Advantages and disadvantages of models
As was shown above, there are different mathematical approaches leading to confinement. Although each model may be applicable depending on the physical conditions, they all have their pros and cons, which are useful to list:
* subordination of random processes leads to the Laplace PDF in many cases of non-Gaussian processes, but the subordinator is too specific;
* SDEs with an attractive potential are good for Gaussian confinement, but the Laplace PDF arises for only one value of the parameter describing the stationary PDF;
* stochastic resetting has great potential for explaining non-equilibrium states in physics, chemistry and biology, and this model provides more grounds for understanding the diversity of confined trajectories.
The statistics of confined trajectories can play the key role in finding physical processes responsible for the occurrence of confinement. Therefore, discriminating tests of the random trajectories on possible PDFs are very important for the study of confined modes. It should be noticed that stochastic resetting is not so anomalous in living cells as it may seem. In particular, the diffusive backward motion of paused RNA polymerases is a diffusion process with stochastic resetting [36].
## III Analysis of experimental data
To choose the most suitable model of confinement, we analyze stochastic trajectories detected in the SPT experiment [2].
### Classification of diffusion modes
Reliable classification of random trajectories in cells is of great importance [37]. At the current level of development of science and technology, it builds bridges between physics, biology, biochemistry, and biophysics that are necessary for understanding how living cells function on a microscopic basis [1]. We used data on the motion of G proteins and coupled receptors on the surface of living cells (Figure 1). A primary analysis can start with classification of the trajectories according to the standardized maximal distance [10]. This method is useful because it allows a quick evaluation of the contribution of confined trajectories [14; 24]. However, it is rather rough and does not take the segmentation of trajectories into account. Segments can be of different kinds: immobile, confined, free (Brownian motion), and directed (diffusion with drift). Moreover, multi-segment trajectories show various modes in their segments. Therefore, we used an accurate and computationally efficient transient motion analysis algorithm, termed "divide-and-conquer moment scaling spectrum" (DC-MSS) [38]. This approach includes three stages: initial track segmentation, initial segment classification, and final validation. The first stage calculates the maximum pairwise distance (MPD) between particle positions within a rolling window; the value of the MPD reveals switches between different diffusive modes: MPD (immobile) \(<\) MPD (confined) \(<\) MPD (free) \(<\) MPD (directed). In the next stage, the track segments identified in the first stage are classified using the MSS analysis of molecule displacements. The final stage compensates for initial oversegmentation by testing the merger of adjacent segments. Using DC-MSS, we carried out a segment analysis of the trajectories of G proteins and coupled receptors in living cells. The classification shows a significant number of confined segments (see Table 1). The difference in the number of segments between G proteins and coupled receptors reflects a purely experimental circumstance: fewer trajectories were recorded for G proteins than for coupled receptors. Confined segments may obey various statistics; therefore, the next step of our data analysis is to discriminate them by statistics.
### Test of confined motion
To group confined tracks of G proteins and coupled receptors by statistics, we use a simple statistical test, based on the logarithm of the ratio of maximized likelihoods between normal and Laplace distributions [39]. The ratio \(Q\) reads
\[Q=\frac{n}{2}\ln(2)-\frac{n}{2}\ln(\pi)+n\ln(\hat{\theta})-n\ln(\hat{\sigma})+ \frac{n}{2}\,, \tag{12}\]
where the terms are dependent on the following values
\[\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n}\left|Y_{i}-\hat{\eta}\right|,\] \[\hat{\eta} = \text{median}\{Y_{1},Y_{2},\ldots,Y_{n}\}\,, \tag{13}\] \[\hat{\sigma}^{2} = \frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{\mu})^{2}\,,\quad\hat{\mu}= \frac{1}{n}\sum_{i=1}^{n}Y_{i}\,.\]
Figure 1: Examples of random one-segment trajectories, focusing on confined modes, in the motion of G proteins (a) and coupled receptors (b) observed in living cells. The type of segments is identified by the DC–MSS algorithm and statistical tests (see more details in Section III).
If the statistical test shows \(Q>0\), the sample satisfies the normal distribution, but \(Q<0\) suggests the Laplace distribution or one similar to it. This procedure took into account only segments with a length greater than or equal to 50. Unfortunately, this test cannot distinguish segments with the Linnik statistics from the Laplace case due to their close relationship.
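For completeness, Eqs. (12)-(13) translate into a few lines of code; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def laplace_vs_normal_Q(Y):
    """Log-ratio of maximized likelihoods, Eqs. (12)-(13): Q > 0 favours the
    normal law, Q < 0 the Laplace one."""
    Y = np.asarray(Y, dtype=float)
    n = Y.size
    theta = np.mean(np.abs(Y - np.median(Y)))   # Laplace MLE scale
    sigma = np.std(Y)                           # normal MLE scale (1/n convention)
    return (n / 2) * np.log(2) - (n / 2) * np.log(np.pi) \
        + n * np.log(theta) - n * np.log(sigma) + n / 2

rng = np.random.default_rng(4)
print(laplace_vs_normal_Q(rng.normal(size=500)))    # > 0 for a normal sample
print(laplace_vs_normal_Q(rng.laplace(size=500)))   # < 0 for a Laplace sample
```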
### Detection of Linnik confinement
To recognize segments with the Linnik PDF, we apply the test suggested by Anderson and Arnold [40]. It allows estimating the parameter \(\beta\) of the Linnik PDF for chosen segments. This test uses the minimization of the following objective function
\[\bar{I}_{L}=\int_{0}^{\infty}\left(\bar{\eta}(z)-(1+|\xi z|^{\beta})^{-1} \right)^{2}\,\exp(-z^{2})\,dz\,, \tag{14}\]
where \(\bar{\eta}(z)=n^{-1}\sum_{j=1}^{n}\cos(z\,y_{j})\), and \(y_{1},y_{2},\ldots,y_{n}\) is a data sample. The expression is minimized with respect to two parameters, \(\beta\) and \(\xi\) (scale parameter). The presence of the weight function \(\exp(-z^{2})\) provides fast convergence of the integral. Thus, Eq. (14) can be calculated numerically at no extra cost. The results of applying this test to G proteins and coupled receptors are shown in Table 1. Note that the diffusive motion of these particles differs in statistics along \(x\) and \(y\). Overall, the testing supports our statistical analysis of confined segments above. Indeed, most segments with jump-like confinement have an index \(\beta\) equal or close to 2, typical of the Laplace statistics. Nevertheless, there is also a significant fraction of segments with \(\beta\approx 1.203\pm 0.002\), corresponding to the Linnik case. They are detected in both datasets, for G proteins and for coupled receptors. This shows that the effect is neither an anomaly nor exotic.
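A minimal implementation of this fitting procedure may look as follows (a sketch: the starting point, the parameter bounds, and the choice of SciPy optimizer are assumptions, not prescribed by [40]):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def linnik_objective(params, y):
    beta, xi = params
    def f(z):
        eta_bar = np.mean(np.cos(z * y))         # empirical characteristic function
        return (eta_bar - 1.0 / (1.0 + abs(xi * z) ** beta)) ** 2 * np.exp(-z * z)
    return quad(f, 0.0, np.inf)[0]

def fit_linnik(y):
    res = minimize(linnik_objective, x0=[1.5, 1.0], args=(y,),
                   bounds=[(0.1, 2.0), (1e-6, None)], method="L-BFGS-B")
    return res.x                                  # (beta_hat, xi_hat)

rng = np.random.default_rng(5)
print(fit_linnik(rng.laplace(scale=1.0, size=2000)))  # beta_hat close to 2
```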
## IV Resetting at power-law times
Let us mention that, of the three models (subordination, a system with potential, and stochastic resetting), only the last can lead to the emergence of Linnik confinement from Brownian motion, which is certainly represented in many trajectories. Moreover, stochastic resetting can be generalized from an exponential distribution of resetting times to a power-law one by using the Mittag-Leffler function. Brownian motion with Poissonian resetting, leading to exponentially distributed times, has been studied in great detail (see [18; 21] and references therein), while the power-law models have been explored less. Stochastic resetting following power-law distributed times was first considered in [41; 42]. Below, we study such a model. It includes both the resetting with power-law times and the exponential case as a limit.
A more general resetting protocol is to take the sequence of resetting times to be generated by a probability density function \(\psi(\tau)\). This is the so-called non-Poissonian resetting [21]. It follows that \(\Psi(\tau)=1-\int_{0}^{\tau}\psi(s)\,ds\) is the survival probability, i.e., the probability that no resetting has occurred up to time \(\tau\). In particular, for Poissonian resetting, the functions read \(\psi(\tau)=re^{-r\tau}\) and \(\Psi(\tau)=e^{-r\tau}\). The first renewal equation (9) can be easily generalized
\[p_{\psi}(x,t|x_{0}) = \Psi(t)\,G_{1}(x,t|x_{0}) \tag{15}\] \[+ \int_{0}^{t}\psi(\tau)\,G_{1}(x,\tau|X_{r})\,d\tau\,.\]
The stochastic resetting with power-law times can be implemented with the help of the Mittag-Leffler distribution. Let us recall that a statistical distribution in terms of the Mittag-Leffler function was defined by Pillai [43]. Differentiating it, one obtains the Mittag-Leffler PDF \(y^{\alpha-1}E_{\alpha,\alpha}(-y^{\alpha})\) with \(0<\alpha\leq 1\), where \(E_{\alpha,\beta}(y)=\sum_{k=0}^{\infty}y^{k}/\Gamma(\alpha k+\beta)\) is the two-parameter Mittag-Leffler function [44]. If \(\alpha\to 1\), the PDF tends to the exponential one. Next, in a stationary state of Eq. (15) the PDF \(p_{\alpha}(x,t|x_{0})\) takes the following form
\[p_{\alpha}(x,\infty|x_{0}) = r\int_{0}^{\infty}(r\tau)^{\alpha-1}E_{\alpha,\alpha}[-(r\tau)^{ \alpha}] \tag{16}\] \[\times G_{1}(x,\tau|X_{r})\,d\tau\,.\]
Note that the normalization condition \(\int_{-\infty}^{\infty}p_{\alpha}(x,\infty|x_{0})\,dx=1\) is connected with the integral \(\int_{0}^{\infty}(r\tau)^{\alpha-1}E_{\alpha,\alpha}[-(r\tau)^{\alpha}]\,d\tau=1/r\). Moreover, in this context the function \(\Psi(t)=E_{\alpha,1}[-(rt)^{\alpha}]\) tends to \(e^{-rt}\) for \(\alpha\to 1\) and, as expected, setting \(r=0\) removes the resetting in Eq. (15). If we pass from the PDF \(G_{1}(x,t|x_{0})\) to its characteristic function \(e^{ikx_{0}-k^{2}Dt}\) through the Fourier transform, then it is not difficult to
\begin{table}
\begin{tabular}{|c|c|c||c|c|} \hline
**Types** & \multicolumn{2}{c||}{**G protein**} & \multicolumn{2}{c|}{**Coupled Receptor**} \\ \hline
**Coordinates** & **x** & **y** & **x** & **y** \\ \hline
**Confined segm.** & 6513 & 6513 & 9637 & 9637 \\ \hline
**Normal PDF** & 3373 & 3371 & 6684 & 6746 \\ \hline
**Laplace PDF** & 2245 & 2285 & 2353 & 2232 \\
**Linnik PDF** & 895 & 857 & 600 & 659 \\ \hline \end{tabular}
\end{table}
Table 1: Classification of experimental data with random-walk segments in the trajectories of G proteins and coupled receptors under basal conditions along the coordinates \(x\) and \(y\) with the cutoff length of trajectories more and equal to 50.
find the characteristic function for \(p_{\alpha}(x,\infty|x_{0})\), namely
\[\hat{p}_{\alpha}(k,\infty|x_{0}) = \int_{-\infty}^{\infty}p_{\alpha}(x,\infty|x_{0})\,e^{ikx}\,dx \tag{17}\] \[= \frac{e^{ikX_{r}}}{1+D^{\alpha}k^{2\alpha}/r^{\alpha}}\,.\]
When \(\alpha=1\), the expression \(\hat{p}_{1}(k,\infty|x_{0})\) is the characteristic function for Eq. (10), clearly describing the resetting with exponentially distributed times. After the inverse Fourier transform, the characteristic function (17) leads to the Linnik distribution, generalizing the Laplace distribution [25]. The evolution from Brownian motion under non-Poissonian resetting to the Linnik distribution is presented in Fig. 2. Thus, the Linnik distribution as a stationary state in random processes under stochastic resetting can occur not only when the \(\beta\)-stable Lévy motion for \(0<\beta\leq 2\) is subject to Poissonian resetting [24; 45], but also when Brownian motion is under resetting at power-law times generated by the Mittag-Leffler distribution in which \(0<\alpha\leq 1\).
The Mittag-Leffler resetting model is characterized by the same index at small and large times. In Appendix A we present its generalization, using the three-parameter Mittag-Leffler function, in which the stochastic resetting depends on two indices. This makes the model more flexible and better suited to stochastic resetting that behaves differently at small and large times. Moreover, Appendix B considers the optimal search under Mittag-Leffler resetting.
## V Conclusions
Single-molecule imaging of G proteins and coupled receptors [46] as they diffuse and interact on the surface of a living cell shows many diffusion modes with frequent switching between them. Confined segments of their trajectories are typical for the interaction of individual coupled receptors and G proteins. Their behavior resembles stochastic resetting. The coupled receptors have to "find" and interact with G proteins on the cell membrane in order to initiate and regulate intracellular processes [47]. The processes are highly heterogeneous and complex. The interaction between G proteins and coupled receptors occurs multiple times, and the time interval between activation and deactivation turns out to be random. The stochastic resetting makes this process optimal. If the stochastic resetting happens with exponentially distributed times, then the confined motion obeys the Laplace distribution, whereas power-law distributed times generate the Linnik distribution in confined modes. We analyzed the segmented experimental data with random trajectories of coupled receptors and G proteins. Among the set of segments, by using special tests, we found confined ones with Laplace and Linnik statistics. Our analysis of confined trajectories and segments can serve as a filter to identify the interaction of G proteins and coupled receptors at power-law times.
Finally, we can summarize the following conclusions.
1. The binding of G proteins and coupled receptors [48] causes their confinement.
2. The resetting corresponds to rebinding events. If this is the case, it means that the confined segments actually correspond to binding-unbinding-rebinding events where the unbinding step has not been detected.
3. If the time spent unbound is short (the DC-MSS cannot detect segments shorter than 20 steps), then assuming that between two consecutive binding events the G proteins undergo Brownian motion, the Linnik/Laplace statistics might just be the result of the mixing due to the suboptimal segmentation or the presence of unbinding events shorter than the minimum detectable duration.
###### Acknowledgements.
A.W. thanks the support of Beethoven Grant No. DFG-NCN 2016/23/G/ST1/04083. The authors are grateful to T. Sungkaworn and D. Calebiro for free access to their experimental data analyzed in Section III as well as to C. Manzo for useful discussions.
## Appendix A Generalized Mittag-Leffler resetting
The three-parameter Mittag-Leffler function \(E_{\alpha,\beta}^{\gamma}(y)\) is a generalization of the two-parameter Mittag-Leffler function \(E_{\alpha,\beta}(y)\) mentioned in Section IV. It was first
Figure 2: Brownian motion under the Mittag-Leffler resetting (\(r=3\), \(D=2\), \(x_{0}=0.1\), \(X_{r}=0.5\), \(\alpha=0.75\)).
introduced by Prabhakar [49] who defined it as
\[E^{\gamma}_{\alpha,\beta}(y)=\sum_{k=0}^{\infty}\frac{(\gamma)_{k}\,y^{k}}{k! \Gamma(\alpha k+\beta)}\,, \tag{10}\]
where \(k\in\mathbb{N}\), and \((\gamma)_{k}:=\Gamma(k+\gamma)/\Gamma(\gamma)\) is the Pochhammer symbol. A random variable \(Y\) has a generalized Mittag-Leffler distribution [50; 51; 52], if its probability density function is
\[p_{Y}(y)=\lambda^{\delta}y^{\delta\nu-1}E^{\delta}_{\nu,\delta\nu}(-\lambda y ^{\nu}) \tag{11}\]
with \(y>0\), \(\nu\in(0,1]\), shape \(\delta\in\mathbb{R}\) and rate \(\lambda>0\). Then the first renewal equation (15) reads
\[p_{\alpha,\delta}(x,t|x_{0}) \tag{12}\] \[= \left(1-(r\tau)^{\delta\alpha}E^{\delta}_{\alpha,\delta\alpha+1}[ -(r\tau)^{\alpha}]\right)G_{1}(x,t|x_{0})\] \[+ r\int_{0}^{t}(r\tau)^{\delta\alpha-1}E^{\delta}_{\alpha,\delta \alpha}[-(r\tau)^{\alpha}]\,G_{1}(x,\tau|X_{r})\,d\tau.\]
In a stationary state of Eq. (12), the PDF \(p_{\alpha,\delta}(x,t|x_{0})\) is easier to find through its characteristic function, namely
\[\hat{p}_{\alpha,\delta}(k,\infty|x_{0}) = \int_{-\infty}^{\infty}p_{\alpha,\delta}(x,\infty|x_{0})\,e^{ikx}\,dx \tag{13}\] \[= \frac{e^{ikX_{r}}}{\left(1+D^{\alpha}k^{2\alpha}/r^{\alpha}\right) ^{\delta}}\,.\]
As a result, we obtain the characteristic function of the generalized Linnik PDF [25]. When \(\delta=1\), the PDF \(p_{\alpha,\delta}(x,\infty|x_{0})\) simplifies to the ordinary Linnik form (see Section IV). If \(\alpha=1\), then the stationary PDF \(p_{1,\delta}(x,\infty|x_{0})\) can be found explicitly in the form (5) that is the same for the coupled Langevin equations mentioned in Subsection II.2.
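For numerical experiments with Eq. (10), the Prabhakar function can be evaluated by naive summation of its defining series. The sketch below is our own illustration (not from the original work) and is only adequate for moderate arguments.

```
from math import gamma

def ml3(alpha, beta, gam, y, kmax=150):
    # direct summation of Eq. (10); the Pochhammer symbol is built iteratively
    s, poch, fact = 0.0, 1.0, 1.0        # (gamma)_0 = 1 and 0! = 1
    for k in range(kmax):
        s += poch * y**k / (fact * gamma(alpha * k + beta))
        poch *= gam + k                  # (gamma)_{k+1} = (gamma)_k (gamma + k)
        fact *= k + 1
    return s

# sanity checks: gam = 1 recovers the two-parameter function,
# and E_{1,1}(y) = exp(y)
print(ml3(1.0, 1.0, 1.0, -0.5))          # ~ 0.6065 = exp(-0.5)
print(ml3(0.75, 0.75, 1.0, -0.2))        # value of E_{0.75,0.75}(-0.2)
```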
## Appendix B Mean time to absorption
For the non-Poissonian resetting there is a convenient way to relate the survival probability with resetting, denoted \(Q_{r}\), to the one without resetting, \(Q_{1}\) [21]. It is based on the last renewal equation written as
\[Q_{r}(x_{0},t) = \Psi(t)Q_{1}(x_{0},t) \tag{14}\] \[+ \int_{0}^{t}\psi(\tau)\,Q_{1}(X_{r},\tau)\,Q_{r}(x_{0},t-\tau)\,d\tau\]
using the shorthand \(Q_{r}(x_{0},t|X_{r})=Q_{r}(x_{0},t)\) and similarly for \(Q_{1}\). Taking the Laplace transform in time and setting \(x_{0}=X_{r}\), we have
\[\bar{Q}_{r}(X_{r},s)=\frac{\int_{0}^{\infty}e^{-st}\,\Psi(t)\,Q_{1}(X_{r},t)\, dt}{1-\int_{0}^{\infty}e^{-st}\,\psi(t)\,Q_{1}(X_{r},t)\,dt}\,. \tag{15}\]
Then the mean time to absorption is obtained by setting \(s=0\) in Eq. (15), i. e.
\[\langle T(X_{r})\rangle=\frac{\int_{0}^{\infty}\Psi(t)\,Q_{1}(X_{r},t)\,dt}{1 -\int_{0}^{\infty}\psi(t)\,Q_{1}(X_{r},t)\,dt}\,. \tag{16}\]
The survival probability of a diffusive particle starting from \(x_{0}\) without resetting is written as
\[Q_{1}(x_{0},t)=\mbox{erf}\left(\frac{x_{0}}{2(Dt)^{1/2}}\right)\,, \tag{17}\]
where \(\mbox{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-y^{2}}\,dy\) is the Gauss error function [31]. In Eq. (16) only the functions \(\Psi\) and \(\psi\) depend on \(r\). Therefore, by changing variables \(rt\to t\) it is convenient to collect all parameters (\(r\), \(X_{r}\), and \(D\)) in \(Q_{1}(X_{r},t)\), i.e.,
\[Q_{1}(X_{r},t)=\mbox{erf}\left(\frac{\gamma}{2t^{1/2}}\right)\,, \tag{18}\]
where \(\gamma=X_{r}\sqrt{r/D}\), taking \(\Psi(t,r=1)\) and \(\psi(t,r=1)\) for the fixed \(r\). Consequently, Eq. (16) yields
\[\langle T(X_{r})\rangle=\frac{1}{r}\left(\frac{\int_{0}^{\infty}\Psi(t,r=1)\, Q_{1}(X_{r},t)\,dt}{1-\int_{0}^{\infty}\psi(t,r=1)\,Q_{1}(X_{r},t)\,dt}\right)\,. \tag{19}\]
Since \(\mbox{erf}(z)\sim 2z/\sqrt{\pi}\) as \(z\to 0\), the value of \(\langle T(X_{r})\rangle\) diverges as \(\langle T(X_{r})\rangle\sim r^{-1/2}\) for \(r\to 0\). It is clear that in the absence of resetting the mean time is infinite. On the other hand, when \(r\to\infty\), the survival probability without resetting \(Q_{1}(X_{r},t)\) approaches one. Then \(\int_{0}^{\infty}\psi(t,r=1)\,dt\to 1\), and the asymptotic behavior of the denominator in Eq. (19) is determined by \(1-\mbox{erf}(z)\sim e^{-z^{2}}/(\sqrt{\pi}z)\) for \(z\to\infty\). Therefore, \(\langle T(X_{r})\rangle\) also diverges as \(r\to\infty\). This is not surprising: the higher the reset rate, the shorter the time between resets available to reach the origin. These two divergences can surround a single minimum of \(\langle T(X_{r})\rangle\).
However, this will be the case if the numerator of Eq. (19) is finite. Note that \(\int_{0}^{\infty}\psi(t,r=1)\,Q_{1}(X_{r},t)\,dt<1\) for \(r<\infty\). Taking \(\Psi(t,r=1)=E_{\alpha,1}(-t^{\alpha})\), the improper integral in the numerator of Eq. (19) converges,
Figure 3: Mean time to absorption as a function of \(r\) with the generalized Mittag-Leffler resetting (\(D=1\), \(x_{0}=X_{r}=1\), \(\alpha=0.85\) and \(\delta=0.5\)).
when \(\alpha>1/2\). Here the convergence problem arises at the upper limit of the integral. For \(t\to\infty\) the integrand \(\Psi(t,r=1)\,Q_{1}(X_{r},t)\) depends on \(E_{\alpha,1}(-t^{\alpha})\sim t^{-\alpha}/\Gamma(1-\alpha)\), with \(0<\alpha\leq 1\), and \(Q_{1}(X_{r},t)\sim\gamma/\sqrt{\pi t}\). This yields the convergence of the numerator of Eq. (19) exactly in the case \(1/2<\alpha\leq 1\). When \(\Psi(t,r=1)\) corresponds to the generalized Mittag-Leffler resetting, we observe the same convergence condition for this numerator, i.e. \(1/2<\alpha\leq 1\). If the numerator of Eq. (19) is finite, there is a single minimum between the two divergences (\(r\to 0\) and \(r\to\infty\)). Such an example (related to the generalized Mittag-Leffler resetting) is shown in Figure 3. In this case the minimal mean time is achieved when the ratio \(\gamma\) of the distance \(X_{r}\) (from the resetting position to the target) to the typical diffusion length between resets equals \(\gamma=1.5124\dots\). Finally, the Mittag-Leffler (generalized and ordinary) resetting with \(0<\alpha\leq 1/2\) is characterized by divergence, and the mean time for a diffusive particle to reach the origin becomes infinite for any \(r\).
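To make Eq. (19) concrete, here is a small numerical sketch (ours, not from the paper) for the Poissonian case \(\psi(t,r{=}1)=e^{-t}\), \(\Psi(t,r{=}1)=e^{-t}\), where the classical closed form \(\langle T\rangle=(e^{\gamma}-1)/r\) with \(\gamma=X_{r}\sqrt{r/D}\) is available for comparison.

```
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def mean_absorption_time(r, Xr=1.0, D=1.0):
    # Eq. (19) with Psi(t, r=1) = psi(t, r=1) = exp(-t)
    g = Xr * np.sqrt(r / D)
    Q1 = lambda t: erf(g / (2.0 * np.sqrt(t) + 1e-300))  # guard t = 0
    I = quad(lambda t: np.exp(-t) * Q1(t), 0, np.inf)[0]
    return (1.0 / r) * I / (1.0 - I)

for r in (0.5, 1.0, 2.0):
    g = np.sqrt(r)                        # with Xr = D = 1
    exact = (np.exp(g) - 1.0) / r         # known closed form for comparison
    print(r, mean_absorption_time(r), exact)
```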
|
2309.10895 | Large Language Models as Agents in the Clinic | Recent developments in large language models (LLMs) have unlocked new
opportunities for healthcare, from information synthesis to clinical decision
support. These new LLMs are not just capable of modeling language, but can also
act as intelligent "agents" that interact with stakeholders in open-ended
conversations and even influence clinical decision-making. Rather than relying
on benchmarks that measure a model's ability to process clinical data or answer
standardized test questions, LLM agents should be assessed for their
performance on real-world clinical tasks. These new evaluation frameworks,
which we call "Artificial-intelligence Structured Clinical Examinations"
("AI-SCI"), can draw from comparable technologies where machines operate with
varying degrees of self-governance, such as self-driving cars. High-fidelity
simulations may also be used to evaluate interactions between users and LLMs
within a clinical workflow, or to model the dynamic interactions of multiple
LLMs. Developing these robust, real-world clinical evaluations will be crucial
towards deploying LLM agents into healthcare. | Nikita Mehandru, Brenda Y. Miao, Eduardo Rodriguez Almaraz, Madhumita Sushil, Atul J. Butte, Ahmed Alaa | 2023-09-19T19:38:28Z | http://arxiv.org/abs/2309.10895v1 | # Large Language Models as Agents in the Clinic
###### Abstract
Recent developments in large language models (LLMs) have unlocked new opportunities for healthcare, from information synthesis to clinical decision support. These new LLMs are not just capable of modeling language, but can also act as intelligent "agents" that interact with stakeholders in open-ended conversations and even influence clinical decision-making. Rather than relying on benchmarks that measure a model's ability to process clinical data or answer standardized test questions, LLM agents should be assessed for their performance on real-world clinical tasks. These new evaluation frameworks, which we call "Artificial-intelligence Structured Clinical Examinations" ("AI-SCI"), can draw from comparable technologies where machines operate with varying degrees of self-governance, such as self-driving cars. High-fidelity simulations may also be used to evaluate interactions between users and LLMs within a clinical workflow, or to model the dynamic interactions of multiple LLMs. Developing these robust, real-world clinical evaluations will be crucial towards deploying LLM agents into healthcare.
The release of ChatGPT, a chatbot powered by a large language model (LLM), has brought LLMs into the spotlight and unlocked new opportunities for their utilization in healthcare systems. Even though many of these LLMs are trained on a variety of openly available information from the Internet rather than just biomedical information, these models have driven immense progress in clinical natural language processing (NLP) [1, 2, 3] and have the potential to improve and augment clinical workflows. For instance, the Generative Pretrained Transformer 4 (GPT-4) model can generate summaries of physician-patient encounters from transcripts of the conversation [4], achieve a score of 86% on the United States Medical Licensing Examination (USMLE) [5], and even create clinical question-answer pairs that are largely indistinguishable from human-generated USMLE questions [6]. These early demonstrations of GPT-4 and other LLMs on clinical tasks and benchmarks suggest that these models have the potential to improve and automate clinical tasks.
However, the emergent capabilities of LLMs have significantly expanded their potential applications beyond conventional, standardized clinical NLP tasks that primarily revolve around text processing and question answering. Instead, there is a growing emphasis on utilizing LLMs as chatbots in both physician-facing and patient-facing tasks, where they are able to synthesize information, make medical inferences, or generate suggestions in a manner similar to a human expert [7, 4, 8]. In these scenarios, LLMs should not be viewed as models of language, but rather as intelligent "agents" that can interact with stakeholders in open-ended conversations and even influence clinical decision-making. Healthcare systems are already adopting LLM-based chatbots; for instance, UC San Diego Health is already working to integrate GPT-4 into MyChart, Epic's online health portal, to streamline patient messaging [9]. Patients are also already leveraging publicly available chatbots (such as ChatGPT) to better understand medical vocabulary from clinical notes, and some medical centers are even considering exploring a "virtual-first" approach where LLMs assist in patient triaging [10, 11]. In all of these use cases, these LLM "agents" go beyond traditional NLP tasks, and are instead used as active participants that contribute to clinical decision-making processes or engage in healthcare workflows in autonomous or semi-autonomous manners, both within and outside hospital settings.
## 1 Agent-based modeling of LLM chatbots
To evaluate the utility and safety of LLM-based chatbots as agents in these novel and forthcoming applications, we suggest the use of novel benchmarks that are not confined to traditional, narrowly-scoped assessments based on NLP benchmarks consisting of predetermined inputs and ground-truths. Given that agency entails autonomy, it becomes imperative to evaluate LLMs in a manner similar to other comparable technologies where machines operate with varying degrees of self-governance.
To evaluate LLMs effectively, one promising approach is to integrate concepts and tools from the domain of agent-based modeling (ABM) [12], a commonly used tool in health policy, complex systems, biology, ecology, economics, and the social sciences. ABM is a computational framework that enables simulation of the actions and interactions of autonomous agents, providing insights into system behavior and the factors influencing its outcomes. With an ABM approach, we can simulate multi-agent environments where LLMs interact with physicians, patients, and caretakers. Through these simulations, we can demonstrate and quantify emergent behavior, identify failure scenarios, and assess the impact of LLMs as an intervention in a healthcare system as well as other safety considerations. Guardrails, which have been developed for general-purpose models to constrain their behavior [13], can also be developed for clinical LLMs based on insights derived from such simulations.
An illustrative instance of applying ABM to evaluate a technology that demonstrates some level of agency can be found in the domain of self-driving cars [14]. In this field, simulation environments that emulate the behavior of autonomous vehicles, drivers, and pedestrians are used to identify critical scenarios. A prominent example is Waymo, an autonomous driving technology company that utilizes agent-based simulation environments like "CarCraft" [15] and "Simulation City" [16] to thoroughly evaluate and refine their algorithms. Similar to standards and regulations for the autonomous driving industry, identifying robust clinical guidelines and what constitutes a successful interaction for healthcare LLMs will be crucial towards fulfilling the long-term goals of patients, providers, and other clinical stakeholders.
## 2 Utility of agent-based modeling
ABM approaches are already used in health research to conduct simulation studies of health behaviors, social epidemiology and the spread of infectious diseases [17]. In all of these settings, ABM-based Monte Carlo simulations are used to thoroughly examine the effects of interactions among agents on system-level outcomes [18]. Similarly, ABM-based approaches can be used to study the outcomes of LLM-Human interactions at scale, taking into consideration the probabilistic elements of both human and LLM behaviors.
One aspect of LLMs that can be studied through ABM simulations is the impact of their sensitivity to user prompts on system outcomes. It is well-known that LLMs are highly influenced by the specific prompts given by users, such as physicians, healthcare workers, or patients [19]. In fact, understanding how to craft queries has become a significant subfield in computer science known as "prompt engineering". An ABM can simulate the variability in language usage among different people, and a Monte Carlo approach can reveal the range of probable outcomes of an LLM-augmented system of care. Emerging phenomena that are caused by an LLM's sensitivity to prompts can then be studied. For instance, disparities in outcomes arising from the varied quality of an LLM's responses to patients of different backgrounds and native languages can be simulated to assess model fairness prior to its deployment.
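To make this concrete, the toy Monte Carlo sketch below (all probabilities are invented placeholders, not clinical estimates) lets patient agents from different subgroups phrase the same request in different styles, has an LLM agent answer correctly with a style-dependent probability, and has a clinician agent intercept some of the errors.

```
import random

random.seed(0)
# assumed, made-up accuracies per prompt style; not empirical values
STYLE_ACCURACY = {"plain": 0.90, "medical_jargon": 0.75, "non_native": 0.60}
CLINICIAN_CATCH_RATE = 0.5               # also an assumption

def episode(style):
    ok = random.random() < STYLE_ACCURACY[style]      # LLM agent's answer
    if not ok and random.random() < CLINICIAN_CATCH_RATE:
        ok = True                                     # error intercepted
    return ok

N = 10_000
for style in STYLE_ACCURACY:
    rate = sum(episode(style) for _ in range(N)) / N
    print(f"{style:15s} good-outcome rate ~ {rate:.3f}")
```

Even this deliberately crude model surfaces the kind of subgroup disparity discussed above; a realistic study would replace the fixed probabilities with actual LLM calls and validated outcome measures.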
Agent-based simulations can also offer valuable insights into modeling interactions between users and LLMs, as well as the dynamic roles of multiple LLMs within the clinical workflow. Concerns surrounding the generation of inaccurate or biased information from LLMs have prompted researchers to explore approaches where models can cross-validate each other's outputs to enhance consistency and accuracy [7, 20]. Moreover, LLMs can be trained with specialized knowledge, such as ClinicalT5, which focuses exclusively on radiology reports and discharge summaries [21]. Complex clinical decision-making tasks, such as tumor boards, may involve the coordinated interaction of multiple LLMs with different areas of expertise. Another scenario could involve the collaboration between an LLM specializing in triage and another specializing in diagnosis. These examples highlight the interconnected nature of the interactions between humans and LLMs which contribute to shaping the trajectory of patient care. Utilizing an ABM approach allows for the simulation of numerous episodes involving these multi-agent interactions, enabling the evaluation of a distribution of possible outcomes under these interactions.
## 3 Building an ABM simulation environment
In order to evaluate LLMs using agent-based approaches, we need high-fidelity simulation environments of the healthcare systems in which these models will be de
ployed. Interestingly, LLMs themselves can be utilized in constructing these simulators. Previous research has demonstrated the feasibility of employing LLMs to create "interactive simulacra" that replicate human behavior [22, 23]. By combining LLM-based models of patients and physicians with rule-based models of standard clinical workflows, it becomes possible to simulate the environments in which LLM models will be deployed. These simulation environments serve multiple purposes. Firstly, they can facilitate the evaluation of new LLM agents, allowing for rigorous assessments of their performance. Additionally, they can provide valuable insights into system-level aspects of the clinical workflow. Through these simulations, researchers can identify the specific types of information that are essential to extract and discuss in various complex clinical scenarios. Furthermore, systematic ablation experiments can be conducted to determine which information is not necessary for optimal outcomes. Moreover, the agents themselves can be treated as discrete components, systematically added or removed from interactions, to assess the contributions of specific roles to overall outcomes. This approach enables a comprehensive understanding of the impact of different agent configurations on the system.
In human medical education, there has been a movement away from standardized tests that evaluate only shallow clinical reasoning, and modern curricula increasingly use Objective Structured Clinical Examinations (OSCE) [24]. These exams assess a student's practical skills in the clinic, including their ability to examine patients, take clinical histories, communicate effectively, and handle unexpected situations. Similarly, current benchmarks for clinical NLP, including MedQA (USMLE-style questions) or MedNLI (which identifies relationships between clinical sentences), are often also derived from standardized tests or curated clinical text and are not sufficient to capture the full range of capabilities demonstrated by clinical LLM agents [21, 25]. Instead, we call for the development of Artificial-intelligence Structured Clinical Examinations ("AI-SCI") that can be used to assess the ability of LLMs to aid in real-world clinical workflows. These AI-SCI benchmarks, which may be derived from difficult simulation scenarios or from real-world clinical tasks, should be created with input from interdisciplinary teams of clinicians, computer scientists, and the medical informatics community. As LLMs evolve and demonstrate increasingly advanced capabilities, their involvement in clinical practice will extend beyond limited text processing tasks. They will play a significant role in clinical decision-making and influence the cognitive load of healthcare professionals. In the near future, it may become necessary to shift our benchmarks from static datasets to dynamic simulation environments and transition from language modeling to agent modeling. Drawing inspiration from fields such as biology, the social sciences, and economics could be beneficial for future LLM research and development endeavors for clinical applications.
|
2309.17122 | Benchmarking the Abilities of Large Language Models for RDF Knowledge
Graph Creation and Comprehension: How Well Do LLMs Speak Turtle? | Large Language Models (LLMs) are advancing at a rapid pace, with significant
improvements at natural language processing and coding tasks. Yet, their
ability to work with formal languages representing data, specifically within
the realm of knowledge graph engineering, remains under-investigated. To
evaluate the proficiency of various LLMs, we created a set of five tasks that
probe their ability to parse, understand, analyze, and create knowledge graphs
serialized in Turtle syntax. These tasks, each embodying distinct degrees of
complexity and being able to scale with the size of the problem, have been
integrated into our automated evaluation system, the LLM-KG-Bench. The
evaluation encompassed four commercially available LLMs - GPT-3.5, GPT-4,
Claude 1.3, and Claude 2.0, as well as two freely accessible offline models,
GPT4All Vicuna and GPT4All Falcon 13B. This analysis offers an in-depth
understanding of the strengths and shortcomings of LLMs in relation to their
application within RDF knowledge graph engineering workflows utilizing Turtle
representation. While our findings show that the latest commercial models
outperform their forerunners in terms of proficiency with the Turtle language,
they also reveal an apparent weakness. These models fall short when it comes to
adhering strictly to the output formatting constraints, a crucial requirement
in this context. | Johannes Frey, Lars-Peter Meyer, Natanael Arndt, Felix Brei, Kirill Bulert | 2023-09-29T10:36:04Z | http://arxiv.org/abs/2309.17122v1 | Benchmarking the Abilities of Large Language Models for RDF Knowledge Graph Creation and Comprehension: How Well Do LLMs Speak Turtle?
###### Abstract
Large Language Models (LLMs) are advancing at a rapid pace, with significant improvements at natural language processing and coding tasks. Yet, their ability to work with formal languages representing data, specifically within the realm of knowledge graph engineering, remains under-investigated. To evaluate the proficiency of various LLMs, we created a set of five tasks that probe their ability to parse, understand, analyze, and create knowledge graphs serialized in Turtle syntax. These tasks, each embodying distinct degrees of complexity and being able to scale with the size of the problem, have been integrated into our automated evaluation system, the LLM-KG-Bench. The evaluation encompassed four commercially available LLMs - GPT-3.5, GPT-4, Claude 1.3, and Claude 2.0, as well as two freely accessible offline models, GPT4All Vicuna and GPT4All Falcon 13B. This analysis offers an in-depth understanding of the strengths and shortcomings of LLMs in relation to their application within RDF knowledge graph engineering workflows utilizing Turtle representation. While our findings show that the latest commercial models outperform their forerunners in terms of proficiency with the Turtle language, they also reveal an apparent weakness. These models fall short when it comes to adhering strictly to the output formatting constraints, a crucial requirement in this context.
Footnote 1: [https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard/tree/a068e66fdd6f453812b307541e8c82f99472aabe](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard/tree/a068e66fdd6f453812b307541e8c82f99472aabe)
Large Language Model, Knowledge Graph Engineering, Large Language Model Benchmark
## 1 Introduction
Large Language Models have gained significant attention in recent years, with GPT-4 being among the most prominent [1]. However, other models also demonstrate impressive performance in various tasks, as tracked in the LLMSYS Chatbot Arena Leaderboard1.
In the field of Knowledge Graph Engineering (KGE) the overarching task is to structure knowledge and encode it in a machine processable format. Using machine learning techniques to create or process knowledge graphs is a well researched topic that receives new momentum. The report of the Dagstuhl Seminar 22372 [2] and the Knowledge Base Construction from Pretrained Language Models (LM-KBC) Challenge2 emphasize the relevance of this topic. Pan et al. [3] outline the potential of connecting LLMs and KGs and in particular with "LLM-augmented KGs".
Footnote 2: Website: [https://lm-kbc.github.io/challenge2023/](https://lm-kbc.github.io/challenge2023/)
RDF - the Resource Description Framework - serves as a standard for representing Knowledge Graphs, while Turtle, a textual representation, is widely used to store and exchange RDF data. We have opted for Turtle given its strong resemblance to natural language, which aligns well with the primary mode of interaction with LLMs.
In previous works, we have conducted manual experiments [4] and introduced the framework _LLM-KG-Bench_[5] for automated benchmarking of LLM performance on KGE tasks.
In this paper, we expand upon that work by introducing two new tasks to the _LLM-KG-Bench_ framework and evaluate various models to shed light on the question "how well do LLMs speak Turtle?", i.e. how well they parse, comprehend, analyze, create and serialize RDF knowledge graphs using the Turtle serialization. Besides Claude 1.3-100k, GPT-3.5 Turbo, and GPT-4, we also include Claude 2 in our evaluation and have extended the framework's capabilities to allow for benchmarking a variety of freely available offline LLM models using GPT4All. We have selected GPT4All Vicuna as a non-commercially usable model and Falcon 13B as the top freely available commercially usable model (Apache 2.0 license) to be additionally included.
After describing related work in the next section, we introduce the benchmark tasks in section 3. In section 4, we explain the study setup, present the evaluation and discuss the strengths and weaknesses of the individual LLMs regarding their utilization for RDF KGE workflows using Turtle. We conclude with a discussion and outline future work in section 5.
## 2 Related Work
There are several evaluations and articles discussing the utilization of LLMs for KG related tasks3, e.g. [6, 7, 8, 9, 4]. Some of them include references to code for reproducing the results. These works cover topics like, KG construction supported by LLMs, KG reasoning supported by LLMs or question answering based on KGs and LLMs.
Footnote 3: Repository: [https://github.com/zjukg/KG-LLM-Papers/](https://github.com/zjukg/KG-LLM-Papers/)
In the field of generic LLM benchmarking, the _BigBench_ framework [10] offers a robust structure and collects already a large list of automated benchmark tasks, but public code for integrating current LLMs like GPT or Claude is missing. The Large Model Systems (LMSys) leaderboard [11] is built mainly on manual testing and evaluation by the community. There is also the Language Model Evaluation Harness [12] which tests Open Source LLMs on a variety of reasoning and logic tasks, but none are related to knowledge graphs.
In the scope of this paper, we are focusing on the definition of automated evaluation tasks testing KGE related capabilities using turtle syntax for RDF-based KGs. We decided to use the KGE specific _LLM-KG-Bench_ framework[5], which is compatible with _BigBench_, but adds KGE
specific helpers, includes connectors for current LLMs like Claude and GPT, and supports tasks that can scale in problem sizes.
## 3 Benchmark Tasks
To evaluate the capabilities of LLMs, we created five tasks with focus on the ability to parse, understand, analyze, and create knowledge graphs using the Turtle serialization format.
The tasks **T2**_TurtleErrorsStatic_ (section 3.2), **T3**_TurtleSampleGeneration_ (section 3.3), and **T5**_FactExtractStatic_ (section 3.5) are extended versions of the tasks described in [5], while the tasks **T1**_TurtleConnectionExplainStatic_ (section 3.1) and **T4**_TurtleFriendCount_ (section 3.4) are newly introduced in this paper.
The tasks are executed in two different manners, T1, T2, and T5 are executed as _static_ tasks, i.e. with a fixed prompt size and fixed expected responses, while T3 and T4 are _scalable_ in problem size (i.e. given prompt or expected response length) using an estimated byte limit parameter. The byte limit can be used by the scalable tasks to calculate a task specific problem size (number of persons in the case of T3 and T4) to approximate that byte limit.
Task T1, T2 and T5 were executed 20 times per model. While the benchmark tasks report a variety of metrics and info or debug scores, we report the F1 measures for these tasks for a unified comparison in the scope of this work (shown in Figure 1). The scalable tasks T3 and T4 were executed 20 times per combination of size and model for 8 different sizes. Byte limit and resulting task problem sizes are depicted in table 1.
### Task T1: Find Connection in Small Turtle File
To check basic support for knowledge graphs and Turtle syntax, we implemented the _TurtleConnectionExplainStatic_ task similar to the first manual experiment in our previous work [4].
**Prompt 1:** For the following turtle find the shortest non trivial connection from Anne to Bob. Please summarize the connection with just a list of resource IRIs, one per line, starting with [https://abc.def/ghi/anne](https://abc.def/ghi/anne) and ending with [https://abc.def/ghi/bob](https://abc.def/ghi/bob). Please leave out rdf:type infos, leave out explanatory text and answer with just the IRI lines.
[... followed by the graph in listing 1]
In prompt 1, we provide a small organizational graph (see listing 1) and ask for the shortest non-trivial connection (excluding the one via the foaf:Person type statements) between the two
| Byte Limit | No. Persons Task T3 | No. Persons Task T4 |
| --- | --- | --- |
| 1000 | 10 | 6 |
| 2000 | 20 | 16 |
| \(\vdots\) | \(\vdots\) | \(\vdots\) |
| 8000 | 80 | 76 |

Table 1: Configured byte limit and resulting task problem sizes
nodes :_Anne_ and :_Bob_. By finding the connection \(anne\xleftrightarrow{org:member}node_{1}\xleftrightarrow{org:organization}researchDep\xleftrightarrow{org:unitOf}wonderOrg\xleftrightarrow{org:unitOf}marketingDep\xleftrightarrow{org:organization}node_{2}\xleftrightarrow{org:member}bob\) between them, the LLM demonstrates basic graph handling capabilities. Note that we ask for the response output to be a list of resource/node IRIs without any other text, to support the automated evaluation of the answer. This also excludes both blank nodes and leads to the list \(anne,researchDep,wonderOrg,marketingDep,bob\). We use similar output requirements in most tasks and argue that strict adherence to task details and output format instructions is a necessary capability for using LLMs as part of a tool chain in KGE tasks or workflows. The task computes recall, precision, and F1 measure for the list of IRIs mentioned in the model response with regard to the list of IRIs representing the nodes of the shortest path.
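A minimal sketch of this scoring (our simplified reading; the actual LLM-KG-Bench implementation may differ in detail), using the IRIs from Prompt 1:

```
import re

EXPECTED = {
    "https://abc.def/ghi/anne", "https://abc.def/ghi/researchDep",
    "https://abc.def/ghi/wonderOrg", "https://abc.def/ghi/marketingDep",
    "https://abc.def/ghi/bob",
}

def t1_score(response: str):
    # every IRI mentioned anywhere in the response counts as "proposed"
    proposed = set(re.findall(r'https?://[^\s<>"\']+', response))
    tp = len(proposed & EXPECTED)
    precision = tp / len(proposed) if proposed else 0.0
    recall = tp / len(EXPECTED)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(t1_score("\n".join(sorted(EXPECTED))))  # perfect answer: (1.0, 1.0, 1.0)
```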
### Task T2: Find Errors in Small Turtle File

```
1  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
2  PREFIX owl: <http://www.w3.org/2002/07/owl#>
3  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
4  PREFIX vcard: <http://www.w3.org/2006/vcard/ns#>
5  PREFIX org: <http://www.w3.org/ns/org#>
6  :anne a foaf:Person ; foaf:firstName "Anne" ; foaf:surname "Miller" ;
7    vcard:hasAddress [ a vcard:Home ; vcard:country-name "UK" ] .
8  :bob a foaf:Person ; foaf:firstName "Bob" ; foaf:surname "Tanner" ;
9    vcard:hasAddress [ a vcard:Home ; vcard:country-name "US" ] .
10
11 :wonderOrg a org:Organization .
12 :researchDep a org:OrganizationalUnit ; org:unitOf :wonderOrg ;
13   rdfs:label "Research Department" .
14 :marketingDep a org:OrganizationalUnit ; org:unitOf :wonderOrg ;
15   rdfs:label "Marketing Department" .
16 :chiefResearchOfficer a org:Role . :marketingManager a org:Role .
17
18
19
20
21 [ a org:Membership ; org:member :anne ; org:organization :researchDep ;
22   org:role :chiefResearchOfficer ] .
23 [ a org:Membership ; org:member :bob ; org:organization :marketingDep ;
24   org:role :marketingManager ] .
```

Listing 1: An organizational KG with two people working in different departments of the same organization. Graph taken from [4].
**Prompt 2:** Please check the following rdf turtle file for errors and answer with no text but just the corrected turtle. Try to stick to the original formatting. [... followed by the turtle document to check]
The _TurtleErrorsStatic_ task involves identifying and correcting syntax errors in a small Turtle file, which is based on the same organizational graph (listing 1) with minor modifications. The
turtle file has a period missing at the end of line 9 and the first semicolon in line 16 was removed. Correcting the errors demonstrates the LLM's knowledge of Turtle grammar while also showing its ability to transform it into a proper form without altering existing facts and adhering strictly to the task requirements. One of the scores calculated during evaluation is the F1 measure on parsable, normalized triples, comparing the LLM's answer with the perfect answer. In order to do so, the response is directly consumed by rdflib in combination with an iterative parsing failure heuristic. The heuristic removes invalid lines that are reported as source of a syntax error until the document is fully parsable or empty.
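A simplified reconstruction of this heuristic could look as follows (our sketch; we assume rdflib's error messages contain the offending line number, and the real implementation may differ):

```
import re
import rdflib

def tolerant_parse(turtle_text: str) -> rdflib.Graph:
    lines = turtle_text.splitlines()
    while lines:
        try:
            return rdflib.Graph().parse(data="\n".join(lines), format="turtle")
        except Exception as exc:             # typically rdflib's BadSyntax
            match = re.search(r"line[s]?\s+(\d+)", str(exc))
            if match and 0 < int(match.group(1)) <= len(lines):
                del lines[int(match.group(1)) - 1]   # drop the reported line
            else:
                lines.pop()                  # fall back: drop the last line
    return rdflib.Graph()                    # nothing parsable remained
```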
### Task T3: Create Sample Graphs
We created the task _TurtleSampleGeneration_ to see if LLMs can understand and honor the requirements that we postulate for creating a simple knowledge graph, i.e. the number of resources in the graph as well as its structure. The task makes use of the popular foaf vocabulary because we assume that members of it are very prevalent in the training data, since they are frequently used in example snippets in online forums and datasets.
**Prompt 3:** Create a knowledge graph in turtle format that consists of \(n\) different objects of type foaf:person. Each should have at least 2 and at most \(n-1\) connections to other persons via foaf:knows. Give no extra text.
In prompt 3 we instruct the LLM to generate a knowledge graph with a certain number of persons who each have between two and \(n-1\) friends (both inclusive). The number of persons \(n\) can be varied to get different answer sizes. The task is motivated by the idea of using LLMs to generate test, training, or example data of various sizes. Furthermore, it allows us to study the capacity of the models to generate content of increasing sizes while maintaining the integrity of the graph and serialization. This task first checks if the generated answer is parsable. If so, the structure of the graph is evaluated. We count the number of resources in the graph and whether they are all correctly declared as rdf:type foaf:Person. The _persons_relative_error_ score measures the difference between the actual number of person objects generated and the number asked for. This value is normalized to be \(=0\) if they match, \(>0\) if there are more persons than asked for and \(<0\) if there are fewer persons, with the special case of \(-1\) meaning an empty graph.
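A sketch of this score, under our simplified reading of the description above:

```
import rdflib
from rdflib.namespace import RDF, FOAF

def persons_relative_error(turtle_text: str, n: int) -> float:
    try:
        g = rdflib.Graph().parse(data=turtle_text, format="turtle")
    except Exception:
        g = rdflib.Graph()                   # unparsable answer = empty graph
    persons = set(g.subjects(RDF.type, FOAF.Person))
    return (len(persons) - n) / n            # 0 exact, >0 too many, -1 empty
```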
### Task T4: Count Links in Person Graph
**Prompt 4:** Please name the person which is known by the most persons according to the following RDF graph in turtle syntax. Give just the IRI of this person with most incoming links as answer, without abbreviation or explanation. [... followed by the graph serialization]
The task _TurtleFriendCount_ requires finding the resource with the most incoming links in a simple generated KG. The structure of the KG is similar to the previous tasks, consisting of a variable number of foaf:Person resources connected by foaf:knows properties. Each
person is known by two other persons, but one designated foaf:Person is known by two additional persons (one for small sizes), resulting in three to four incoming links instead of two. This task tests basic RDF and turtle knowledge as well as graph comprehension and processing skills by aggregating link counts for various KG sizes. The number of foaf:Person resources in the prompt correlates linearly with the prompt length. The task computes recall, precision, and F1 measure with respect to the expected person IRI.
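The expected answer for a generated graph can be computed with a short SPARQL query; the following rdflib sketch is our illustration, not the benchmark code.

```
import rdflib

QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?p (COUNT(?q) AS ?cnt)
WHERE { ?q foaf:knows ?p . }
GROUP BY ?p ORDER BY DESC(?cnt) LIMIT 1
"""

def most_known_person(turtle_text: str) -> str:
    g = rdflib.Graph().parse(data=turtle_text, format="turtle")
    row = next(iter(g.query(QUERY)))
    return str(row.p)                        # IRI with most incoming links
```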
### Task T5: Create Knowledge Graph from Factsheet
**Prompt 5:** As a Linked Data expert and knowledge engineer please convert the 3d printer specification given in the bottom into an RDF turtle formatted Knowledge Graph. The main subject in this Knowledge Graph should be the Printer itself using the [https://data.semper-ki.org/resources/](https://data.semper-ki.org/resources/)$encoded_label$ whereas $encoded_label$ label refers to the name of the printer in which whitespaces have been replaced with underscores. Step by step attach the following statements to that node.
1) Attach [https://purl.org/tema/051993](https://purl.org/tema/051993) as rdf:type
2) Attach the manufacturer via an object property using the schema.org vocabulary. For the subject identifier of the manufacturer use the same approach as for the printer subject and assign it the rdfs:label without a language tag.
3) Attach all printable material as well all support materials the printer can use via object properties. For the property identifiers use [https://data.semper-ki.org/properties/printMaterial](https://data.semper-ki.org/properties/printMaterial) respectively [https://data.semper-ki.org/properties/supportMaterial](https://data.semper-ki.org/properties/supportMaterial) and for material identifiers use the [https://data.semper-ki.org/resources/materials/](https://data.semper-ki.org/resources/materials/) namespace.
4) Attach the dimensions of the printer as 3 separate values using the schema.org vocabulary for the property identifiers of the dimensions, but for the values use qudt:QuantityValue objects from the QUDT ontologies family to have the numerical value ( via qudt:numericValue property and typed as xsd:double) and the appropriate QUDT unit of measurement identifier (via qudt:hasUnit property) separately. Do not convert the dimensions into another unit.
Follow best practices of Linked Data and common standard vocabularies as basis for the modeling unless specified differently in the requirements above. Do not extract any other properties or values from the specification than the ones I mentioned but validate your output step by step, to check whether all used prefixes are included and the given requirements as well as the grammar of the RDF turtle serialization were respected. Only return the turtle format, no additional text. [... followed by the fact sheet plaintext excerpt]
The task _FactExtractStatic_ assesses the LLM's fact extraction and advanced RDF modeling abilities by utilizing a plaintext excerpt from one of our previous experiments [4]. The excerpt (which is not shown for reasons of brevity) describes various aspects of the specifications of a 3D printer in a key-value format, including the formatting irregularities commonly found in PDF extracts. We ask the model to generate a Turtle file that captures a subset of this information to check how well RDF facts can be extracted from factsheet plaintexts and transformed into a
knowledge graph. The prompt is carefully designed with regard to the transformation aspect in order to be very specific and unambiguous on how the data should be represented. The prompt defines concrete namespace schemes and construction rules for IRI identifiers for properties and subjects, but also challenges knowledge about ontology members by requesting the use of concrete ontologies. Subsequently, we can evaluate the quality of a single response using the F1 measure, counting the set of parsable triples that (mis)match or are missing compared to a manually curated reference document. Since we consider this as a quite challenging task, we make use of multiple prompt engineering optimization techniques, namely, asking for an expert answer in a domain context, providing step by step instructions, and asking for critical self-evaluation.
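The comparison against the curated reference can be sketched with rdflib's graph utilities (again our simplification; blank nodes are handled here via canonicalization, which may differ from the benchmark's actual scoring):

```
import rdflib
from rdflib.compare import graph_diff, to_isomorphic

def triple_f1(response_ttl: str, reference_ttl: str) -> float:
    got = to_isomorphic(rdflib.Graph().parse(data=response_ttl, format="turtle"))
    ref = to_isomorphic(rdflib.Graph().parse(data=reference_ttl, format="turtle"))
    both, only_got, only_ref = graph_diff(got, ref)
    tp = len(both)
    precision = tp / (tp + len(only_got)) if tp + len(only_got) else 0.0
    recall = tp / (tp + len(only_ref)) if tp + len(only_ref) else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```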
## 4 Benchmark Study Results and Discussion
Using the _LLM-KG-Bench_ framework, we configured the 5 aforementioned benchmark tasks (cf. section 3) to be evaluated for the 3 highest ranking LLMs at the LLMSYS Chatbot Arena Leaderboard1, i.e. GPT-4 (gpt-4-0613), GPT-3.5 (gpt-3.5-turbo-0613), and Claude-1.3 (claude-1.3-100k); additionally, we have included Claude-2.0. These 4 systems were evaluated using the commercial APIs of OpenAI and Anthropic. We also wanted to include freely available offline LLMs. Based on their availability in GPT4All, we have selected GPT4All Vicuna-13B (version ggml-vicuna-13b-1.1-q4_2.bin) as a non-commercially usable representative and GPT4All Falcon (ggml-model-gpt4all-falcon-q4_0.bin) as the top freely available commercially usable model (Apache 2.0 license).
**T1:** As can be seen in Figure 1a, Claude-2.0 answers perfectly. GPT-4 occasionally added properties to the list, although only nodes were requested. Claude-1.3 and GPT-3.5 sometimes seem to omit a resource from the requested list. Falcon's lists often contain only the first and last resource, which were already specified in the prompt, but a few attempts show some basic
Figure 1: Evaluation of Static Tasks: Distribution of F1 scores
understanding of the task, but severely violate the output constraints. Vicuna's responses are mostly in the form of "This is the shortest connection from anne to bob".
**T2:** GPT-3.5 often claims that the given turtle file is correct and returns no turtle. This explains the high rate of zeros as F1 score (see Figure 1). The answers given by Claude-1.3 and GPT-4 score better. In contrast to its predecessor, Claude 2.0 fails to return plain turtle, leading to unparseable documents although the errors seem fixed. Vicuna replies with the empty string in all cases. While Falcon reports there would be no errors in a few cases, it often does not follow the task and replies with Turtle snippets or explanations of the content.
**T3:** Claude-1.3 clearly performed worst among the commercial models (Fig. 2). Common issues leading to missing persons for smaller sizes were omitted rdf:type statements and missing prefix definitions (making the entire document unparseable). For bigger sizes, Claude 1.3 additionally used ellipses with comments like "continues with 70+ more persons", making it impractical for sample generation. Claude 2.0 shows similar ellipses but sovereignly handles types and prefixes, leading to the best results for small and medium sizes, but bad ones for large sizes. GPT-4 performs only slightly better than GPT-3.5 but, in contrast, uses ellipses much more frequently for higher sizes (60-80 persons). Falcon and Vicuna scored worse. Vicuna creates very few persons and then uses ellipses; however, it is missing the essential rdf:type statement even for size 10. Falcon just creates a list of prefix declarations.
**T4:** All commercial models seem to be challenged by the size of 6 persons, as indicated in Figure 2, where the person with the most friends only has one more friend compared to all others. GPT-3.5 consistently proposes the person with the most outgoing friends as solution instead. For the other models this also happens very frequently. Surprisingly, this potential misunderstanding seems to occur significantly less frequently for all other sizes, where the person with the most friends has 4 incoming relations. In fact, GPT-4 performs in an outstanding way, making only one mistake for the largest size. Claude-2.0 performs similarly to GPT-3.5; both confuse one of the persons with more outgoing links as the correct solution, although interestingly GPT-3.5 also often answers with a full sentence (instead of the IRI only) when it has such a confusion. Claude 1.3 also frequently confuses incoming with outgoing links, and similarly to GPT-3.5 this correlates with violating the output constraint, but even more strongly. Falcon and Vicuna seem to understand the task but reason incorrectly. Falcon consistently answers
Figure 2: Evaluation of Scalable Tasks: Mean of task metric with 95% confidence interval
with a wrong, implausible person and textual explanations. Vicuna surprisingly identifies the correct person for the smallest size, but fails for bigger sizes. Moreover, the context windows are exceeded for sizes greater than 26 persons for Vicuna and 36 persons for Falcon.
**T5:** Figure 1c shows that both GPT models outperform Claude 1.3 in this task. While GPT-4 has a better mean due to one very good response (F1 score of 0.94), it frequently replied with unparseable content, which did not happen for GPT-3.5, leading to a slightly better median for the latter. Claude 2.0 shows the highest values for the F1 mean as well as the third quartile, and it returned fewer unparseable documents compared to Claude 1.3 and GPT-4. Vicuna did not return output bearing any similarity to Turtle. Falcon creates turtle preambles with varying prefix definitions but had problems continuing the document and very often got stuck in repetitive namespace gibberish and prefix patterns.
## 5 Conclusion and Future Work
The evaluation already shows promising results. In particular, the newer versions of both GPT and Claude speak Turtle at a level where they might already be useful for assistant tasks. A general problem, though, is that the models, although explicitly requested, do not consistently respond with plain Turtle but include short explanations or markdown ticks. While our parsing failure heuristic can mitigate some of these cases, this issue poses a challenge when interfacing directly with RDF tools. It is noteworthy that the newer versions of both GPT and Claude tend to violate the output constraints more often. While our failure-tolerant parser heuristic allows us to gain more insight into the quality of results, it can also, to some degree, reward solutions that might not be useful without special post-processing in practical scenarios. Therefore, as a next step we plan to define stricter tests which nevertheless provide feedback to the models (e.g. parsing errors), in order to perform and evaluate few-shot approaches. Moreover, it could be of value to assess the performance using N-Triples, which has fewer syntactic features but allows easier retrieval of partially inconsistent responses. It also remains to be seen whether finetuning LLMs on RDF syntax using large datasets like Wikidata and DBpedia would be beneficial. Finally, besides extending the framework with more tests, we see integrating LangChain to study the combination of LLMs with KGE-assistant plugins (e.g. a prefix or ontology terminology lookup service) as an interesting path to explore.
## Acknowledgments
This work was partially supported by grants from the German Federal Ministry of Education and Research (BMBF) to the projects StahlDigital (13XP5116B) and KupferDigital (F13XP5119F) as well as from the German Federal Ministry for Economic Affairs and Climate Action (BMWK) to the CoyPu project (01MK21007A) and KISS project (01MK22001A).
|
2301.13566 | Factorizations of Cyclic Groups and Bayonet Codes | We study the (variable-length) codes of the form X ∪ {a^n}, where X ⊆ a*wa*
and |X| = n. We extend various notions and results from factorizations of
cyclic groups theory to this type of codes. In particular, when n is the
product of at most three primes or has the form pq^k (with p and q prime), we
prove that they are composed of prefix and suffix codes. We provide
counterexamples for other n. It implies that the long-standing triangle
conjecture is true for this type of n. We also prove a conjecture about the
size of a potential counterexample to the conjecture. | Christophe Cordero | 2023-01-31T11:36:22Z | http://arxiv.org/abs/2301.13566v1 | # Factorizations of Cyclic Groups and Bayonet Codes
###### Abstract
We study the (variable-length) codes of the form \(X\cup\{a^{n}\}\) where \(X\subseteq a^{*}\omega a^{*}\) and \(|X|=n\). We extend various notions and results from _factorizations of cyclic groups_ theory to this type of codes.
In particular, when \(n\) is the product of at most three primes or has the form \(p^{k}q\) (with \(p\) and \(q\) prime), we prove that they are composed of prefix and suffix codes. We provide counterexamples for other \(n\). It implies that the long-standing _triangle conjecture_ is true for this type of \(n\). We also prove a conjecture about the size of a potential counterexample to the conjecture.
## Introduction
Schutzenberger founded and developed the theory of (variable-length) _codes_ in the 1960s in order to study encoding problems raised by Shannon's information theory. Since then, the theory has undergone its own development and links with monoids, automata, combinatorics on words and factorizations of cyclic groups have emerged. We refer the reader to the book [1] for an introduction to the theory of codes.
Let \(\mathcal{A}\) be an alphabet containing letters \(a\) and \(b\). A _code_ is a subset \(X\subseteq\mathcal{A}^{*}\) such that for all \(t,t^{\prime}\geq 0\) and \(x_{1},\ldots,x_{t},y_{1},\ldots,y_{t^{\prime}}\in X\) the condition
\[x_{1}\,x_{2}\,\cdots\,x_{t}\,=\,y_{1}\,y_{2}\,\cdots\,y_{t^{\prime}}\]
implies \(t=t^{\prime}\) and \(x_{i}=y_{i}\), for \(i=1,\ldots,t\). For example, the set \(\{aabb,abaaa,b,ba\}\) is not a code because
\[(b)(abaaa)(b)(b)=(ba)(ba)(aabb).\]
A straightforward way to create a code is to build a _prefix set_ (respectively _suffix set_), which is a set of words such that none of its words is a prefix (resp. suffix) of another one. For example, the set
\[\{aaa,ab,aab,bba,ba\} \tag{1}\]
is prefix but not suffix. According to Proposition 2.1.9 from [1], a prefix (resp. suffix) set different from \(\{\varepsilon\}\) is a code, where \(\varepsilon\) is the empty word. Such codes are called _prefix codes_ (resp. _suffix codes_). So the set (1) is a prefix code.
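Whether a given finite set of words is a code can also be checked mechanically with the classical Sardinas-Patterson procedure. The following Python sketch (our illustration, not part of the paper) applies it to the two example sets above.

```
def quotient(A, B):
    # A^{-1} B = { w : a w in B for some a in A }
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def is_code(X):
    X = set(X)
    S = quotient(X, X) - {""}                # the set S_1
    seen = set()
    while S and frozenset(S) not in seen:
        if "" in S:
            return False                     # a double factorization exists
        seen.add(frozenset(S))
        S = quotient(X, S) | quotient(S, X)  # the set S_{i+1}
    return "" not in S

print(is_code({"aabb", "abaaa", "b", "ba"}))       # False, as shown above
print(is_code({"aaa", "ab", "aab", "bba", "ba"}))  # True: a prefix code
```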
Since any code is included in a _maximal_ code (code that is not included in another code), most of the theory of codes is dedicated to the study of finite maximal codes.
One of the main conjectures about the characterization of finite maximal codes is the _factorization conjecture_ from Schutzenberger [11]. This conjecture states that for any finite maximal code \(M\), there exist finite sets \(P,S\subseteq\mathcal{A}^{*}\) such that
\[\underline{M}-1=\underline{P}\,(\underline{A}-1)\,\underline{S}, \tag{2}\]
where given a set \(X\subseteq\mathcal{A}^{*}\), we denote its formal sum
\[\sum_{x\in X}x\]
by \(\underline{X}\). The best known result about the conjecture is due to Reutenauer [13]. He proved that for any finite maximal code \(M\), there exist polynomials \(\underline{P},\underline{S}\in\mathbb{Z}\langle\langle\mathcal{A}\rangle\rangle\) (the set of formal power series of \(\mathcal{A}^{*}\) over \(\mathbb{Z}\)) such that (2) holds.
During some unsuccessful attempts to prove the conjecture, Perrin and Schutzenberger proposed an intermediate conjecture called the _triangle conjecture_[14]. It is stated as follows: for any _bayonet code_\(X\), i.e. for any code \(X\subseteq a^{*}ba^{*}\), we have
\[|\{x\text{ such that }x\in X\text{ and }|x|\leq k\}|\leq k,\text{ for all }k\geq 0. \tag{3}\]
However, Shor found a counterexample [15].
Since then, variants of the triangle conjecture have been proposed. In particular, the one that we nowadays call, by an abuse of language, the _triangle conjecture_ suggests that any bayonet code \(X\) either satisfies the inequalities (3) or is not included in any finite maximal code. This conjecture is implied by the factorization conjecture.
A stronger version of the triangle conjecture proposed by Zhang and Shum [10] states that for all finite maximal codes \(M\), \(\omega\in\mathcal{A}^{*}\), and \(k\geq 0\), we have
\[\left|\left\{a^{i}\omega a^{j}\text{ such that }a^{i}\omega a^{j}\in M^{*}\text{ and }i+j<k\right\}\right|\leq k.\]
In this paper, our main subject is the (subsets of) codes concerned by the triangle conjectures, which are the codes of the form
\[X\cup\{a^{n}\}\,,\]
where \(X\subseteq a^{\{0,1,\ldots,n-1\}}ba^{\{0,1,\ldots,n-1\}}\) and \(|X|=n\), for a given \(n\geq 1\). We call them \(n\)_-complete bayonet codes_ (cbc). "Complete" refers to the fact that such a code cannot contain more elements, according to Proposition 2.1 of [10]. We extend various notions and results from the theory of _factorizations of cyclic groups_ to cbc. The framework we develop for cbc generalizes and simplifies recent work on the triangle conjectures, such as [10, 10, 11], and allows us to improve the best known results.
In particular, when \(n\) is the product of at most three primes or has the form \(p^{k}q\) with \(p\) and \(q\) prime (we call those numbers _cbc Hajos numbers_), we prove that any code \(X\cup\{a^{n}\}\)
where \(X\subseteq a^{*}\omega a^{*}\), \(\omega\in\mathcal{A}^{*}\), and \(|X|=n\), is a composition of prefix and suffix codes. We provide counterexamples in the other cases. This implies that the Zhang and Shum conjecture, and therefore the triangle conjecture, holds for cbc Hajos numbers. Moreover, our Theorem 7.2 proves a conjecture about the size of a potential counterexample to the triangle conjectures and our Theorem 4.7 gives a structural property of codes satisfying the factorization conjecture.
Our paper is organized as follows. In the first section, we mostly introduce and recall some concepts that relate cbc to finite maximal codes. In the second section, we study some operations on cbc that, among others, lead to a criterion that a code must satisfy in order to be included in a finite maximal code. In the third section, we show that each cbc can be associated to a notion that we call _border_. This notion exhibits a link between _factorizations of cyclic groups_ and cbc. We deduce from it a property of finite maximal codes. Then we show various operations to build _borders_ from others. In the fourth section, similarly to factorization theory, we introduce a _periodic_ and a _Hajos_ notion for cbc. Then we show that _Hajos cbc_ are cbc bordered by a _Krasner factorization_, which, in particular, provides a structural property of codes that satisfy the factorization conjecture. In the fifth section, we prove that any cbc of size \(n\), where \(n\) is a cbc Hajos number, is of Hajos. We provide counterexamples in the other cases. In the sixth section, we show that Hajos cbc are composed of prefix and suffix codes and thus are included in some finite maximal codes. We also deduce from it that the triangle conjectures are true for cbc Hajos numbers. In the seventh section, thanks to the framework on borders, we prove a conjecture about the size of a potential counterexample to the triangle conjecture. Finally, we conclude by exposing our main perspectives.
## 1 Complete Bayonet Code
It is well known1 that for any finite maximal code \(M\), there exists a unique integer \(n\) such that \(a^{n}\in M\); it is called the _order_ of the letter \(a\). Most of the known characterizations of finite maximal codes are based on the order of one of their letters. For instance, Restivo, Salemi, and Sportelli have shown [13] the following link between _factorizations_ (_of cyclic groups_) and finite maximal codes.
Footnote 1: see, for example, Proposition 2.5.15 of [1].
**Theorem 1.1**.: _If \(M\) is a finite maximal code such that \(b,a^{n}\in M\) then the ordered pair_
\[\left(\left\{k,\,a^{k}b^{+}\in M\right\},\left\{k,\,b^{+}a^{k}\in M\right\}\right)\]
_is a factorization of size \(n\)._
We recall some basic notions about _factorizations_. We refer the reader to the books [16, 13] for proofs of these recalls and for an introduction to the more general theory of factorizations of abelian groups. Given \(P,Q\subseteq\mathbb{Z}\), an ordered pair \((P,Q)\) is a _factorization_ of size \(n\) if and only if for all \(k\in[n]\) (\([n]\) denotes the set \(\{0,1,\ldots,n-1\}\)), there exists a unique \((p,q)\in P\times Q\) such that \(k=p+q\) in \(\mathbb{Z}_{n}\) (the cyclic group of size \(n\)). In particular, \(n\) must be equal to \(|P|\times|Q|\). For example, the ordered pair
\[\left(\left\{4,5,6,7\right\},\left\{1,5\right\}\right) \tag{4}\]
is a factorization of size \(8\). A factorization \((P,Q)\) is called _periodic_ with period \(m\neq 0\) (in \(\mathbb{Z}_{n}\)), if \(P\) or \(Q\) is \(m\)-periodic in \(\mathbb{Z}_{n}\), i.e. if
\[m+P=P\text{ or }m+Q=Q\text{ in }\mathbb{Z}_{n}.\]
For example, the factorization (4) is \(4\)-periodic because \(4+\{1,5\}=\{1,5\}\) in \(\mathbb{Z}_{8}\).
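To make these recalls concrete, here is a minimal Python sketch (the helper names are ours, not the paper's) that checks the factorization property and lists the periods of a subset of \(\mathbb{Z}_{n}\); it recovers the facts just stated about the factorization (4).

```python
def is_factorization(P, Q, n):
    """(P, Q) is a factorization of size n iff the sums p + q
    hit every residue of Z_n exactly once."""
    sums = [(p + q) % n for p in P for q in Q]
    return sorted(sums) == list(range(n))

def periods(U, n):
    """Non-zero periods m of U in Z_n, i.e. m + U = U."""
    U = {u % n for u in U}
    return [m for m in range(1, n) if {(u + m) % n for u in U} == U]

P, Q = {4, 5, 6, 7}, {1, 5}
assert is_factorization(P, Q, 8)   # the factorization (4)
assert periods(Q, 8) == [4]        # hence (4) is 4-periodic
```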
A factorization \((P,Q)\) is said to be _normalized_ if \(0\in P\) and \(0\in Q\). If \((P,Q)\) is a (\(m\)-periodic) factorization then for any \(p\in P\) and \(q\in Q\),
\[(P-p,Q-q)\]
is a (\(m\)-periodic) normalized factorization.
We recall that given a factorization \((P,Q)\) of size \(n\), if \(P\) is periodic then the set made of its periods and \(0\), i.e.
\[\left\{m,m+P=P\text{ in }\mathbb{Z}_{n}\right\},\]
is a subgroup of \(\mathbb{Z}_{n}\). Moreover, if \(P\) is \(m\)-periodic then \(|Q|\) divides \(m\). A set \(U\) is \(m\)-periodic in \(\mathbb{Z}_{n}\) if and only if
\[U=\overline{U}^{m}+m\left[\frac{n}{m}\right]\quad\text{ in }\mathbb{Z}_{n},\]
where \(\overline{U}^{m}\) denotes the set \(\{\overline{u}^{m},u\in U\}\) and where \(\overline{u}^{m}\) denotes the remainder of the Euclidean division of \(u\) by \(m\).
A factorization \((P,Q)\) of size \(n\) is said to be of _Hajos_ if and only if \(n=1\) or if \(P\) (respectively \(Q\)) is \(m\)-periodic and if
\[\left(\overline{P}^{m},Q\right)\quad\left(\text{resp. }\left(P,\overline{Q}^{m} \right)\right)\]
is again a Hajos factorization of size \(m\). All factorizations of size \(n\) where \(n\) is the product of at most four primes or a number of the form \(p^{k}q\), where \(k\geq 1\) and \(p,q\) are primes, are Hajos factorizations. Those numbers are called _Hajos numbers_. Smallest non-Hajos factorizations are therefore of sizes \(72\) and \(108\) [SIa]. See [1] for such an example of non-Hajos factorization of size \(72\).
Thanks to Theorem 1.1, we can prove that the code
\[\left\{a^{5},ab,b,baa\right\}, \tag{5}\]
found by Restivo [10], cannot be included in a finite maximal code because there is no factorization of the form
\[\left(\{0,1\}\subseteq P,\{0,2\}\subseteq Q\right),\]
where \(|P|\times|Q|=5\).
In this paper, we expose a deeper link between the theory of codes and factorizations, starting from the following notion introduced by Perrin and Schutzenberger in [10].
**Definition 1.2**.: _Given a code \(M\) such that \(a^{n}\in M\) and a word \(\omega\in\mathcal{A}^{*}\), we set_
\[C_{M}(\omega):=\left\{a^{\overline{i}^{n}}ba^{\overline{j}^{n}}\text{ such that }a^{i}\omega a^{j}\in M^{*}\right\}. \tag{6}\]
For example, given the finite maximal code
\[E:=\left\{b,ab,a^{4},a^{2}ba,a^{3}b,a^{2}b^{2}\right\}, \tag{7}\]
we have
\[C_{E}(b)=\left\{b,ab,a^{2}ba,a^{3}b\right\}\text{ and }C_{E}(bb)=\left\{b,ab,a^{2 }b,a^{3}b\right\}.\]
As shown in Proposition 2.2 of [28], sets of type (6) form codes.
**Proposition 1.3**.: _If \(M\) is a code containing \(a^{n}\) then for any \(\omega\in\mathcal{A}^{*}\), the set_
\[\left\{a^{n}\right\}\cup C_{M}(\omega) \tag{8}\]
_is a code._
Our statement is slightly different, so we produce a straightforward proof.
Proof.: Given a word \(\omega\) and a code \(M\) containing \(a^{n}\), if the set (8) is not a code then there exists
\[a^{i_{1}}\omega a^{j_{1}},\ldots,a^{i_{t}}\omega a^{j_{t}},a^{k_{1}}\omega a^ {\ell_{1}},\ldots,a^{k_{t}}\omega a^{\ell_{t}}\in M^{*}\]
such that \(\overline{j_{1}}^{n}\neq\overline{\ell_{1}}^{n}\) and
\[a^{\overline{i_{1}}^{n}}ba^{\overline{j_{1}}^{n}}\ldots a^{\overline{i_{t}}^{ n}}ba^{\overline{j_{t}}^{n}}\equiv_{n}a^{\overline{k_{1}}^{n}}ba^{\overline{ \ell_{1}}^{n}}\ldots a^{\overline{k_{t}}^{n}}ba^{\overline{\ell_{t}}^{n}},\]
where \(\equiv_{n}\) denotes the congruence over \(\mathbb{Z}\langle\langle\mathcal{A}\rangle\rangle\) defined by the relation \(a^{n}=\varepsilon\). Thus
\[a^{i_{1}}\omega a^{j_{1}}\ldots a^{i_{t}}\omega a^{j_{t}}\equiv_{n}a^{k_{1}} \omega a^{\ell_{1}}\ldots a^{k_{t}}\omega a^{\ell_{t}}.\]
This implies that \(M\) is not a code, since the words from the non-empty set
\[\left(a^{n}\right)^{*}\left(a^{i_{1}}\omega a^{j_{1}}\right)\left(a^{n} \right)^{*}\ldots\left(a^{i_{t}}\omega a^{j_{t}}\right)\left(a^{n}\right)^{*} \cap\left(a^{n}\right)^{*}\left(a^{k_{1}}\omega a^{\ell_{1}}\right)\left(a^{n }\right)^{*}\ldots\left(a^{k_{t}}\omega a^{\ell_{t}}\right)\left(a^{n}\right) ^{*}\]
can be decomposed in two different ways. This ends the proof by contradiction.
Perrin and Schutzenberger introduced the notion (6) as a characterization of finite maximal codes.
**Theorem 1.4**.: _If \(M\) is a finite code such that \(a^{n}\in M\) then_
\[M\text{ is maximal}\quad\Longleftrightarrow\quad\forall\omega\in\mathcal{A}^{ *},\;\;\;|C_{M}(\omega)|=n. \tag{9}\]
The left-to-right implication of (9) is demonstrated as Proposition 12.2.4 in [1] and the converse is true according to Theorem 2.5.13 of [1]. The "finite" hypothesis is necessary because the code (5) of Restivo is contained in some infinite maximal codes, none of which verifies (9) (in particular when \(\omega=b\)).
We now introduce the class of codes studied in this paper and which contains those of type (8).
**Definition 1.5**.: _We call \(n\)-**complete bayonet code** (\(n\)-**cbc**) a set \(X\subseteq a^{[n]}ba^{[n]}\) such that_
\[|X|=n\;\text{ and }\;\;\{a^{n}\}\cup X\text{ is a code}.\]
Thus for any finite maximal code \(M\) containing \(a^{n}\) and for any word \(\omega\), the set \(C_{M}(\omega)\) is an \(n\)-cbc. However, we do not know whether every \(n\)-cbc \(X\) corresponds to a finite maximal code \(M\) and a word \(\omega\) such that \(X=C_{M}(\omega)\). Lam has nevertheless shown [1] that any cbc of the form \(a^{P}ba^{Q}\), where \((P,Q)\) is a Hajos factorization, is included in a finite maximal code. We improve this result in Theorem 6.2.
One of our main motivations is to study the _strong triangle conjecture_, which is based on the following _triangle property_.
**Definition 1.6**.: _We say that an \(n\)-cbc \(X\) satisfies the **triangle property** if_
\[|\{x\text{ such that }x\in X\text{ and }|x|\leq k\}|\leq k,\]
_for all \(k\in[n]\). The **strong triangle conjecture** states that every cbc satisfies the triangle property._
The strong triangle conjecture implies the Zhang and Shum conjecture and therefore the triangle conjecture.
## 2 Stability
In this section, we introduce and study some operations on cbc and the framework relating them to finite maximal codes. Firstly, we introduce a composition operation for cbc.
**Definition 2.1**.: _Let \(X\) and \(Y\) be \(n\)-cbc, we set_
\[X\circ_{r}Y:=\left\{a^{i}ba^{\ell}\text{ such that }a^{i}ba^{j}\in X,\,a^{k}ba^{ \ell}\in Y,\text{and }\overline{j+k}^{n}=r\right\},\]
_for any \(r\in[n]\)._
**Example 2.2**.: _For any \(n\)-cbc \(X\) and \(k\in[n]\),_
\[X\circ_{k}\left\{a^{\overline{k-i}^{n}}ba^{i},i\in[n]\right\}=X.\]
The operations \(\left(\circ_{r}\right)_{r\geq 0}\) are associative. Indeed, for any \(n\)-cbc \(X,Y,Z\) and \(r_{1},r_{2}\in[n]\),
\[\left(X\circ_{r_{1}}Y\right)\circ_{r_{2}}Z\text{ and }X\circ_{r_{1}}\left(Y \circ_{r_{2}}Z\right)\]
are equal to
\[\left\{a^{i}ba^{j}\text{ such that }a^{i}ba^{j_{1}}\in X,\,a^{i_{2}}ba^{j_{2}} \in Y,\,a^{i_{3}}ba^{j}\in Z,\,\overline{j_{1}+i_{2}}^{n}=r_{1},\text{and }\overline{j_{2}+i_{3}}^{n}=r_{2}\right\}.\]
However, the product of two \(n\)-cbc does not necessarily result in an \(n\)-cbc. For example,
\[\left\{b,ba\right\}\circ_{0}\left\{b,ab\right\}=\left\{b\right\}.\]
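Definition 2.1 is easy to experiment with. Below is a small sketch, assuming the encoding of a bayonet word \(a^{i}ba^{j}\) as the pair \((i,j)\) (the function name is ours); it reproduces the computation above.

```python
def compose(X, Y, r, n):
    """X o_r Y from Definition 2.1, with a^i b a^j encoded as (i, j)."""
    return {(i, l) for (i, j) in X for (k, l) in Y if (j + k) % n == r}

# {b, ba} o_0 {b, ab} = {b}, with n = 2:
assert compose({(0, 0), (0, 1)}, {(0, 0), (1, 0)}, 0, 2) == {(0, 0)}
```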
We introduce the notion of _compatibility_ as a framework that enables composition of cbc.
**Definition 2.3**.: _A set \(\mathcal{E}\) of \(n\)-cbc is said to be **compatible** if for all \(X_{1},\ldots,X_{k+1}\in\mathcal{E}\) and \(r_{1},\ldots,r_{k}\in[n]\),_
\[X_{1}\circ_{r_{1}}X_{2}\circ_{r_{2}}X_{3}\cdots\circ_{r_{k}}X_{k+1}\]
_is an \(n\)-cbc._
The compatibility of a set can be tested thanks to a graph algorithm. Given an integer \(n\geq 1\) and a set \(\mathcal{E}\) of bayonet codes, we denote by \(\mathcal{G}_{n}\left(\mathcal{E}\right)\) the directed graph made of the set of vertices \([n]\) and arrows from \(k_{1}\) to \(k_{2}\) if and only if there exist \(X\in\mathcal{E}\) and two different words \(a^{i_{1}}ba^{j_{1}},a^{i_{2}}ba^{j_{2}}\in X\) such that
\[k_{1}=\overline{i_{1}-i_{2}}^{n}\text{ and }k_{2}=\overline{j_{2}-j_{1}}^{n}.\]
**Proposition 2.4**.: _A set \(\mathcal{E}\) of \(n\)-cbc is compatible if and only if the graph \(\mathcal{G}_{n}\left(\mathcal{E}\right)\) does **not** contain a non-empty path from \(0\) to \(0\)._
Proof.: A set \(\mathcal{E}\) of \(n\)-cbc is not compatible if and only if there exists \(t>1\), \(X_{1},\ldots,X_{t}\in\mathcal{E}\), and
\[a^{i_{1}}ba^{j_{1}},a^{k_{1}}ba^{\ell_{1}}\in X_{1},\ldots,a^{i_{t}}ba^{j_{t}}, a^{k_{t}}ba^{\ell_{t}}\in X_{t} \tag{10}\]
such that \(j_{1}\neq\ell_{1}\) and
\[a^{i_{1}}ba^{j_{1}}a^{i_{2}}ba^{j_{2}}\ldots a^{i_{t}}ba^{j_{t}}\equiv_{n}a^{k _{1}}ba^{\ell_{1}}a^{k_{2}}ba^{\ell_{2}}\ldots a^{k_{t}}ba^{\ell_{t}}. \tag{11}\]
Thus if \(\mathcal{E}\) is not compatible then \(\mathcal{G}_{n}\left(\mathcal{E}\right)\) contains the non-empty path

\[0\longrightarrow\overline{\ell_{1}-j_{1}}^{n}\longrightarrow\overline{\ell_{2}-j_{2}}^{n}\longrightarrow\cdots\longrightarrow\overline{\ell_{t}-j_{t}}^{n}=0\]

from \(0\) to \(0\).
Conversely, for any path from \(0\) to \(0\) of length \(t>1\) in the graph \(\mathcal{G}_{n}\left(\mathcal{E}\right)\), there exists (10) such that the relation (11) is true.
We can see \(\mathcal{G}_{n}\left(\mathcal{E}\right)\) as the superposition of graphs \(\mathcal{G}_{n}\left(\left\{X\right\}\right)\), where \(X\in\mathcal{E}\). So in the particular case where \(\mathcal{E}\) is a singleton, the graph \(\mathcal{G}_{n}\left(\mathcal{E}\right)\) is equivalent to the graph defined in Proposition 1 of [10]2 and equal to the graph \(\mathcal{G}_{mod}\) of [11].
Footnote 2: In Proposition 1 of [10], Perrin and Schützenberger defined a graph on sets of integers. They did not explain the link with theory of codes. However, one can understand it as an algorithm to test whether or not a set of the form \(\left\{a^{n}\right\}\cup X\) (where \(X\subseteq a^{*}ba^{*}\)) is a code. Their graph is not equal to \(\mathcal{G}_{n}\left(\left\{X\right\}\right)\) in general but it also contains a non-empty path from \(0\) to \(0\) if and only if the considered set is not a code.
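The test of Proposition 2.4 is straightforward to implement. Here is a hedged sketch (all names are ours, with the same pair encoding \(a^{i}ba^{j}\mapsto(i,j)\) as before): it builds the arrows of \(\mathcal{G}_{n}\left(\mathcal{E}\right)\) from the definition above and searches for a non-empty path from \(0\) to \(0\) by breadth-first traversal.

```python
def graph_edges(E, n):
    """Arrows of G_n(E): one arrow ((i1-i2) mod n, (j2-j1) mod n)
    for every ordered pair of different words of some X in E."""
    edges = set()
    for X in E:
        for (i1, j1) in X:
            for (i2, j2) in X:
                if (i1, j1) != (i2, j2):
                    edges.add(((i1 - i2) % n, (j2 - j1) % n))
    return edges

def is_compatible(E, n):
    """True iff G_n(E) has no non-empty path from 0 to 0."""
    edges = graph_edges(E, n)
    reached = set()
    frontier = {k2 for (k1, k2) in edges if k1 == 0}
    while frontier:
        if 0 in frontier:
            return False
        reached |= frontier
        frontier = {k2 for (k1, k2) in edges if k1 in frontier} - reached
    return True
```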
We introduce the notion of _stable_ sets: compatible sets that are closed under composition.
**Definition 2.5**.: _A set \(\mathcal{S}\) of \(n\)-cbc is **stable** if for all \(X,Y\in\mathcal{S}\) and \(r\in[n]\),_
\[X\circ_{r}Y\in\mathcal{S}.\]
A stable set is therefore compatible. Given a set \(\mathcal{E}\) of compatible cbc, we denote by \(\mathcal{E}^{\circ}\) the smallest stable set (for inclusion) containing \(\mathcal{E}\). We say that \(\mathcal{E}^{\circ}\) is the stable of \(\mathcal{E}\). For example, we obtain
\[\left\{C_{E}(b),C_{E}(bb)\right\}^{\circ}=\left\{C_{E}(b),C_{E}(bb),\left\{ba, aba,a^{2}b,a^{3}ba\right\},\left\{ba,aba,a^{2}ba,a^{3}ba\right\}\right\}, \tag{12}\]
where \(E\) is the code (7).
We can associate stable sets to any finite maximal code.
**Proposition 2.6**.: _Let \(M\) be a finite maximal code, the sets_
\[\mathcal{C}_{M}:=\left\{C_{M}(\omega),\,\omega\in\mathcal{A}^{*}\right\}\text{ and }\mathcal{C}_{M}^{\prime}:=\left\{C_{M}(\omega),\,\omega\in\mathcal{B}\left(a^{*} \mathcal{B}\right)^{*}\right\},\]
_where \(\mathcal{B}\) is the alphabet \(\mathcal{A}\setminus\left\{a\right\}\), are stable._
Proof.: Let \(M\) be a finite maximal code and \(n\) the order of \(a\). According to Proposition 1.3, the set \(\mathcal{C}_{M}\) (respectively \(\mathcal{C}_{M}^{\prime}\)) is a set of \(n\)-cbc. Moreover, for all \(C_{1},C_{2}\in\mathcal{C}_{M}\) (resp. \(\mathcal{C}_{M}^{\prime}\)), there exist \(\omega_{1},\omega_{2}\in\mathcal{A}^{*}\) (resp. \(\mathcal{B}\left(a^{*}\mathcal{B}\right)^{*}\)) such that \(C_{1}=C_{M}(\omega_{1})\) and \(C_{2}=C_{M}(\omega_{2})\). For any \(r\in[n]\), we have
\[C_{1}\circ_{r}C_{2}=C_{M}(\omega_{1}a^{r+tn}\omega_{2})\in\mathcal{C}_{M}\ \ ( \text{resp. }\mathcal{C}_{M}^{\prime})\,,\]
for \(t\geq\frac{2}{n}\underset{m\in M}{\max}\left\{|m|\right\}-\frac{r}{n}\).
For example, we obtain that the set \(\mathcal{C}_{E}^{\prime}\) is equal to the set (12), when \(E\) is the set (7).
Proposition 2.6 thus gives a criterion that a code must satisfy in order to be included in a finite maximal code.
**Example 2.7**.: _Suppose that the code_
\[\left\{a^{5},ab,aba^{2},a^{2}b^{2},ab^{2}\right\} \tag{13}\]
_is included in a finite maximal code \(M\). One can notice that the code (13) satisfies the necessary conditions implied by Theorem 1.4 and Proposition 1.3. For example, the codes_
\[\left\{ab,aba^{2}\right\}\subseteq C_{M}(b)\text{ and }\left\{a^{2}b,ab \right\}\subseteq C_{M}(bb) \tag{14}\]
_are respectively included in the \(5\)-cbc_
\[\left\{ab,aba^{2},ba^{4},ba^{3},ba\right\}\text{ and }\left\{a^{2}b,ab,a^{4}b, a^{3}b,b\right\}.\]
_However, thanks to an exhaustive computer exploration of the (finite) set of \(5\)-cbc, we found, using the algorithm from Proposition 2.4, that no pair of \(5\)-cbc respectively containing the codes (14) is compatible. Thus, according to Proposition 2.6, the code (13) is not included in a finite maximal code._
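For illustration, the following sketch (reusing the `is_compatible` helper from the sketch above, with the same pair encoding) confirms that the two displayed \(5\)-cbc containing the codes (14) are not compatible.

```python
# The two 5-cbc displayed above, with a^i b a^j encoded as (i, j):
C1 = {(1, 0), (1, 2), (0, 4), (0, 3), (0, 1)}  # {ab, aba^2, ba^4, ba^3, ba}
C2 = {(2, 0), (1, 0), (4, 0), (3, 0), (0, 0)}  # {a^2b, ab, a^4b, a^3b, b}
assert not is_compatible([C1, C2], 5)
```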
## 3 Border
In this section, we first show that each stable set can be associated to a notion that we call _border_. It is a generalization of a notion originally introduced on codes satisfying the factorization conjecture in section 5.3 of [1]. It exhibits a link between factorizations and stable sets. Secondly, we show various operations to build _borders_ from others.
### Definition and existence
**Definition 3.1**.: _Given \(P,Q\subseteq\mathbb{Z}\), we say that the ordered pair \(\left(P,Q\right)\) is a **border** of an \(n\)-cbc \(X\) if_
\[a^{P}\,\underline{X}\,a^{Q}\equiv_{n}a^{\left[n\right]}ba^{\left[n\right]}\ \left( \equiv_{n}\sum_{i,j\in\left[n\right]}a^{i}ba^{j}\right).\]
_More generally, we say that it is a border of a set of cbc if it borders each of its elements._
**Example 3.2**.: _The ordered pair_
\[\left(\left\{2,4\right\},\left\{0,2,4,6\right\}\right) \tag{15}\]
_and the factorization (4) are borders of the \(8\)-cbc_
\[\left\{b,ba,aba^{2},a^{3}ba^{3},a^{4}b,a^{4}ba,a^{5}ba^{2},a^{7}ba^{7}\right\}. \tag{16}\]
_The factorization_
\[\left(\left\{0\right\},\left\{0,1,2,3\right\}\right) \tag{17}\]
_is a border of the stable set (12)._
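Definition 3.1 can also be checked mechanically. The sketch below (our names, same pair encoding as before) verifies that both (15) and the factorization (4) border the cbc (16).

```python
from collections import Counter

def borders(P, Q, X, n):
    """(P, Q) borders X iff a^P X a^Q covers a^[n] b a^[n] exactly once."""
    words = Counter(((p + i) % n, (j + q) % n)
                    for p in P for (i, j) in X for q in Q)
    return all(words[(u, v)] == 1 for u in range(n) for v in range(n))

X16 = {(0, 0), (0, 1), (1, 2), (3, 3), (4, 0), (4, 1), (5, 2), (7, 7)}
assert borders({2, 4}, {0, 2, 4, 6}, X16, 8)   # the border (15)
assert borders({4, 5, 6, 7}, {1, 5}, X16, 8)   # the factorization (4)
```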
Given an \(n\)-cbc \(X\), note that \(\left(P,Q\right)\) borders \(X\) if and only if \(\left(Q,P\right)\) borders its dual
\[\delta\left(X\right):=\left\{a^{j}ba^{i},a^{i}ba^{j}\in X\right\}\]
and that a factorization \(\left(P,Q\right)\) borders \(\left\{X\right\}^{\circ}\) if and only if
\[a^{P}\frac{1}{1-\underline{X}}a^{Q}\equiv_{n}\underline{\mathcal{A}_{/a^{n}= \varepsilon}^{*}},\]
where \(\mathcal{A}_{/a^{n}=\varepsilon}^{*}\) is the quotient of \(\mathcal{A}^{*}\) by the relation \(a^{n}=\varepsilon\).
We write \(x\in_{n}X\) if and only if there exists \(y\in X\) such that \(x\equiv_{n}y\). For any \(Y\subseteq a^{*}ba^{*},n\geq 1\), and \(k\in\left[n\right]\), we set
\[L_{k}^{n}\left(Y\right):=\left\{\overline{i}^{n},\,a^{i}ba^{k}\in_{n}Y\right\},\,L(Y):=\left\{\overline{i}^{n},\,a^{i}ba^{j}\in Y\right\},\]
and
\[R_{k}^{n}\left(Y\right):=\left\{\overline{j}^{n},\,a^{k}ba^{j}\in_{n}Y\right\},\,R(Y):=\left\{R_{k}^{n}\left(Y\right),\,k\in L(Y)\right\}.\]
For example, \(L(X)=\left\{0,1,3,4,5,7\right\}\) and \(R(X)=\left\{\left\{0,1\right\},\left\{2\right\},\left\{3\right\},\left\{7 \right\}\right\}\) when \(X\) is (16). We say that an \(n\)-cbc \(X\)_**borders**_ a set \(\mathcal{E}\) if and only if for any \(R\in R(X)\), the ordered pair \(\left(R,L(X)\right)\) is a factorization of size \(n\) that borders \(\mathcal{E}\).
**Proposition 3.3**.: _Any stable set is bordered by one of its elements._
Proof.: Let \(\mathcal{S}\) be a stable set of \(n\)-cbc. If \(Y\in\mathcal{S}\) does not border \(\mathcal{S}\) then there exist \(i,j\in\left[n\right]\), \(k\in L(Y)\), and \(X\in\mathcal{S}\) such that
\[a^{i}ba^{j}\not\in_{n}a^{R_{k}^{n}\left(Y\right)}\,X\,a^{L(Y)}\text{ or }a^{i}\not \in_{n}a^{R_{k}^{n}\left(Y\right)}a^{L(Y)}.\]
Thus \(Y^{\prime}:=Y\circ_{i}X\circ_{j}Y\) or \(Y^{\prime}:=Y\circ_{i}Y\) is a cbc belonging to \(\mathcal{S}\) such that \(\left|L(Y^{\prime})\right|<\left|L(Y)\right|\), because \(k\in L(Y)\) and \(k\notin L(Y^{\prime})\).
The cardinality of a set being a positive integer, we obtain the expected result by iterating this process at most \(\left|L(X)\right|\) times starting from any element \(X\) of \(\mathcal{S}\).
Thus any stable set admits a bordering factorization and conversely for any factorization, there exists a cbc that it borders. Indeed, a factorization \((P,Q)\) borders the cbc \(a^{Q}ba^{P}\).
It is possible to build, by compositions of elements of a compatible set, a cbc in a more restrictive form which borders the set.
**Theorem 3.4**.: _Let \(\mathcal{E}\) be a compatible set of \(n\)-cbc. For all \(Z\in\mathcal{E}\), there exists \(r_{1},\dots,r_{k}\in[n]\) and \(X_{1},\dots,\,X_{k}\in\mathcal{E}\) such that the cbc_
\[X:=Z\circ_{r_{1}}X_{1}\circ_{r_{2}}\cdots\circ_{r_{k}}X_{k}\]
_borders \(\mathcal{E}\) and such that for all \(R,R^{\prime}\in R(X)\),_
\[R\cap R^{\prime}\neq\emptyset\implies R=R^{\prime}.\]
Proof.: Let \(\mathcal{E}\) be a compatible set of cbc and \(Y\in\mathcal{E}^{\circ}\) a cbc bordering \(\mathcal{E}\). If there exists \(r\in R\cap R^{\prime}\), such that \(R\neq R^{\prime}\) and \(R,R^{\prime}\in R(Y)\) then for any \(\ell\in L(Y)\), the cbc
\[Y^{\prime}:=Y\circ_{\overline{r+\ell}^{n}}Y\]
satisfies the facts that \(R(Y^{\prime})\) is strictly included in \(R(Y)\) and that \(L(Y^{\prime})=L(Y)\). Thus \(Y^{\prime}\in\mathcal{E}^{\circ}\) and \(Y^{\prime}\) also borders \(\mathcal{E}\).
The set \(R(X)\) of a cbc \(X\) cannot be empty, so we obtain the expected result by iterating this process at most \(|R(Y)|\) times starting from a cbc \(Y\) given by Proposition 3.3. Moreover, according to the proof of Proposition 3.3, such \(Y\) can be chosen by starting from any \(Z\in\mathcal{E}\). This concludes the proof.
One of the consequences of Theorem 3.4 is that for any finite maximal code \(M\) containing \(a^{n}\) and any word \(\omega_{1}\in\mathcal{A}^{*}\), there exists a word \(\omega\in\mathcal{A}^{*}\) such that
\[\underline{C_{M}\left(\omega_{1}\omega\right)}\equiv_{n}\sum_{k\in[t]}a^{L_{k} }ba^{R_{k}},\]
where \(R_{i}\cap R_{j}=\emptyset\) when \(i\neq j\), and such that
\[\left(L_{1}\sqcup\cdots\sqcup L_{t},R_{k}\right)_{k\in[t]}\]
are factorizations bordering \(\mathcal{C}_{M}\).
### New borders from old ones
The property of being a border is preserved under translations and multiplications.
**Proposition 3.5**.: _If \((P,Q)\) is a border of a cbc \(X\) then for all \(i,j\in\mathbb{Z}\), \(d_{1}\) prime to \(|P|\), and \(d_{2}\) prime to \(|Q|\), the ordered pairs_
\[(P+i,Q+j)\text{ and }(d_{1}P,d_{2}Q)\]
_are borders of \(X\)._
Proof.: First, we recall some properties about factorizations. It is well known that for any \(j\in\mathbb{Z}\), an ordered pair \((P,Q)\) is a factorization of size \(n\) if and only if \((P,Q+j)\) is a factorization of size \(n\), since for all \(p_{1},p_{2}\in P\) and \(q_{1},q_{2}\in Q\), we have
\[\overline{p_{1}+\left(q_{1}+j\right)}^{n}=\overline{p_{2}+\left(q_{2}+j\right) }^{n}\iff\overline{p_{1}+q_{1}}^{n}=\overline{p_{2}+q_{2}}^{n}.\]
Moreover, according to Proposition 3 of [10], if \((P,Q)\) is a factorization and \(d\) a number prime to \(|Q|\) then the ordered pair \((P,dQ)\) is a factorization.
An ordered pair \((P,Q)\) borders an \(n\)-cbc \(X\) if and only if for any \(k\in[n]\), the ordered pair
\[\left(R_{k}^{n}\left(a^{P}X\right),Q\right)\]
is a factorization of size \(n\). Assume that \((P,Q)\) borders \(X\) then, according to the previous recalls, for any \(k\in[n]\), \(j\in\mathbb{Z}\), and \(d\) prime to \(|Q|\), the ordered pair
\[\left(R_{k}^{n}\left(a^{P}X\right),j+dQ\right)\]
is a factorization. Thus \((P,j+dQ)\) borders \(X\). We obtain the expected result thanks to a symmetrical argument.
In some cases, Proposition 3.5 enables us to compute a border explicitly.
**Proposition 3.6**.: _If an \(n\)-cbc \(X\) is bordered by \((P,Q)\) such that \(p:=|P|\) is prime to \(q:=|Q|\) then the factorization_
\[\left(q\left[p\right],\,p\left[q\right]\right) \tag{18}\]
_borders \(X\)._
Proof.: Assuming the hypotheses of the Proposition, the number \(q\) is prime to \(p\) so according to Proposition 3.5, the ordered pair \((qP,Q)\) borders \(X\). The set \(qP\) contains \(p\) distinct elements, all of which are multiples of \(q\). Thus
\[qP=\left\{0,q,\ldots,q(p-1)\right\}=q\left[p\right].\]
Symmetrically, we obtain that the ordered pair (18) borders \(X\). Moreover, according to Bezout's identity [1], there exists \(u,v\in\mathbb{Z}\) such that \(up+vq=1\). Thus for all \(k\in[n]\), we have
\[k=\overline{p(ku)+q(kv)}^{n}\in p\left[q\right]+q\left[p\right].\]
Thus the ordered pair (18) is a factorization.
The composition of cbc from a stable set brings out bordering factorizations.
**Theorem 3.7**.: _Let \(\mathcal{S}\) be a stable set of \(n\)-cbc and \((P,Q)\) one of its borders. For all \(k_{1},k_{2}\in[n]\) and \(X,Y\in\mathcal{S}\), the ordered pairs_
\[\left(P,\,L_{k_{2}}^{n}\left(Ya^{Q}\right)\right),\,\left(R_{k_{1}}^{n}\left( a^{P}X\right),\,L_{k_{2}}^{n}\left(Ya^{Q}\right)\right),\text{ and }\left(R_{k_{1}}^{n}\left(a^{P}X\right),\,Q\right)\]
_are factorizations bordering \(\mathcal{S}\)._
Proof.: Assuming the hypotheses of the Theorem, if the ordered pair
\[\left(R_{k_{1}}^{n}\left(a^{P}X\right),\,L_{k_{2}}^{n}\left(Ya^{Q}\right)\right), \text{ respectively }\left(P,\,L_{k_{2}}^{n}\left(Ya^{Q}\right)\right), \tag{19}\]
is not a factorization then there exists \(i\in[n]\) which is not generated by the sum of its two components (modulo \(n\)) and thus
\[a^{k_{1}}ba^{k_{2}}\not\in_{n}a^{P}X\circ_{i}Ya^{Q},\text{resp. }a^{i}ba^{k_{2}} \not\in_{n}a^{P}Ya^{Q}.\]
Thus \(\left(P,Q\right)\) is not a border of \(\mathcal{S}\), which contradicts the assumptions.
Likewise, if the ordered pair (19) does not border \(\mathcal{S}\) then there exist \(Z\in\mathcal{S}\) and \(i,j\in[n]\) such that
\[a^{i}ba^{j}\not\in_{n}a^{R_{k_{1}}^{n}\left(a^{P}X\right)}Za^{L_{k_{2}}^{n} \left(Ya^{Q}\right)},\text{ resp. }a^{i}ba^{j}\not\in_{n}a^{P}Za^{L_{k_{2}}^{n}\left(Ya^{Q}\right)}.\]
This implies that
\[a^{k_{1}}ba^{k_{2}}\not\in_{n}a^{P}X\circ_{i}Z\circ_{j}Ya^{Q},\text{resp. }a^{i}ba^{k_{2}}\not\in_{n}a^{P}Z\circ_{j}Ya^{Q}.\]
Thus \(\left(P,Q\right)\) does not border \(\mathcal{S}\), which contradicts the assumptions.
We obtain the last case thanks to a symmetrical argument.
## 4 Hajos cbc
In this section, similarly to factorization theory, we introduce a _periodic_ and a _Hajos_ notion for cbc and compatible sets. Then we show that this _Hajos_ notion is equivalent to being bordered by a _Krasner factorization_.
### Periodic cbc
We introduce an operation to build bigger cbc from a smaller one. Given a set
\[X:=\left\{a^{i_{1}}ba^{j_{1}},\ldots,a^{i_{n}}ba^{j_{n}}\right\}\subseteq a^{ [n]}ba^{[n]}\]
and \(t\geq 1\), we define the operation
\[H_{t}\left(X\right):=\left\{\bigsqcup_{\ell=1}^{n}\left\{a^{i_{\ell}+k_{\ell, 1}n}ba^{j_{\ell}},\ldots,a^{i_{\ell}+k_{\ell,t}n}ba^{j_{\ell}+(t-1)n}\right\}, k_{1,1},\ldots,k_{n,t}\in[t]\right\}.\]
For example, the dual of (16) is equal to
\[\left\{\begin{array}{ll}a^{0+0\times 4}ba^{0}&a^{0+0\times 4}ba^{0+1\times 4} \\ a^{1+0\times 4}ba^{0}&a^{1+0\times 4}ba^{0+1\times 4}\\ a^{2+0\times 4}ba^{1}&a^{2+0\times 4}ba^{1+1\times 4}\\ a^{3+0\times 4}ba^{3}&a^{3+1\times 4}ba^{3+1\times 4}\end{array}\right\}\in H _{2}\left(\left\{b,ab,a^{2}ba,a^{3}ba^{3}\right\}\right). \tag{20}\]
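Membership in \(H_{t}\left(X\right)\) can be tested by grouping the elements of \(Y\) by their residues modulo \(n\): the groups must sit exactly over the elements of \(X\), and the right quotients \(\lfloor j/n\rfloor\) within each group must be exactly \(0,\ldots,t-1\). A sketch of this test (our names, pair encoding as before), checking the membership displayed above:

```python
def in_H(Y, X, n, t):
    """Y is in H_t(X) iff each residue class of Y mod n sits over one
    element of X with right quotients j // n exactly 0, ..., t-1."""
    groups = {}
    for (i, j) in Y:
        groups.setdefault((i % n, j % n), []).append(j // n)
    return (set(groups) == set(X)
            and all(sorted(q) == list(range(t)) for q in groups.values()))

X4 = {(0, 0), (1, 0), (2, 1), (3, 3)}      # the 4-cbc {b, ab, a^2ba, a^3ba^3}
Y16 = {(0, 0), (0, 1), (1, 2), (3, 3), (4, 0), (4, 1), (5, 2), (7, 7)}
delta = lambda Z: {(j, i) for (i, j) in Z}  # the dual
assert in_H(delta(Y16), X4, 4, 2)           # the membership (20)
```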
We extend this operation to compatible sets. We write \(\mathcal{E}^{\prime}\in\mathcal{H}_{t}\left(\mathcal{E}\right)\), where \(t\geq 1\), if and only if
\[Y\in\mathcal{E}^{\prime}\implies\exists X\in\mathcal{E}\text{ such that }Y\in H_{t}\left(X\right)\]

and

\[X\in\mathcal{E}\implies\exists Y\in\mathcal{E}^{\prime}\text{ such that }Y\in H_{t}\left(X\right).\]
For example, if \(\mathcal{S}\) is the stable set (12) associated to the code (7) then \(\delta\left(\mathcal{S}\right)\in\mathcal{H}_{4}\left(\left\{\left\{b\right\} \right\}\right)\), where \(\delta\left(\mathcal{S}\right)\) is the set \(\left\{\delta\left(X\right),\,X\in\mathcal{S}\right\}\).
This operation preserves the property of being a compatible set.
**Proposition 4.1**.: _Given \(t\geq 1\) and \(\mathcal{E}^{\prime}\in\mathcal{H}_{t}\left(\mathcal{E}\right)\), the set \(\mathcal{E}\) is a compatible set of \(n\)-cbc such that \(\mathcal{E}^{\circ}\) is bordered by \(\left(P,Q\right)\) if and only if \(\mathcal{E}^{\prime}\) is a compatible set of \(nt\)-cbc such that \(\mathcal{E}^{\prime}{}^{\circ}\) is bordered by \(\left(P+n\left[t\right],Q\right)\)._
Proof.: Let \(\mathcal{E}\) be a compatible set of \(n\)-cbc whose stable is bordered by \(\left(P,Q\right)\) and \(\mathcal{E}^{\prime}\in\mathcal{H}_{t}\left(\mathcal{E}\right)\).
For all \(Y_{1},\ldots,Y_{k}\in\mathcal{E}^{\prime}\), there exists \(X_{1},\ldots,X_{k}\in\mathcal{E}\) such that \(Y_{i}\in H_{t}\left(X_{i}\right)\), when \(i\in\left[1,k\right]\). Since
\[a^{n\left[t\right]}\underline{Y_{i}}\equiv_{nt}a^{n\left[t\right]}\underline{X _{i}}a^{n\left[t\right]}\]
for all \(i\in\left[1,k\right]\), we have that
\[a^{P+n\left[t\right]}\underline{Y_{1}}\cdots\underline{Y_{k}}a^{Q}\equiv_{nt} a^{P+n\left[t\right]}\left(\underline{X_{1}}a^{n\left[t\right]}\right)\cdots \left(\underline{X_{k}}a^{n\left[t\right]}\right)a^{Q}. \tag{21}\]
Moreover, according to the hypothesis,
\[a^{P}\underline{X_{1}}\cdots\underline{X_{k}}a^{Q}\equiv_{n}\left(a^{\left[n \right]}b\right)^{k}a^{\left[n\right]} \tag{22}\]
and since for all \(i,j\),
\[a^{i}\equiv_{n}a^{j}\iff a^{i+n\left[t\right]}\equiv_{nt}a^{j+n\left[t\right]}, \tag{23}\]
we have that
\[a^{P+n\left[t\right]}\left(\underline{X_{1}}a^{n\left[t\right]}\right)\cdots \left(\underline{X_{k}}a^{n\left[t\right]}\right)a^{Q}\equiv_{nt}\left(a^{ \left[n\right]+n\left[t\right]}b\right)^{k}a^{\left[n\right]+n\left[t\right]} \equiv_{nt}\left(a^{\left[nt\right]}b\right)^{k}a^{\left[nt\right]} \tag{24}\]
and thus
\[a^{P+n\left[t\right]}\underline{Y_{1}}\cdots\underline{Y_{k}}a^{Q}\equiv_{nt} \left(a^{\left[nt\right]}b\right)^{k}a^{\left[nt\right]}. \tag{25}\]
This shows that \(\mathcal{E}^{\prime}\) is a compatible set of \(nt\)-cbc whose stable is bordered by \(\left(P+n\left[t\right],Q\right)\).
Conversely, let \(\mathcal{E}^{\prime}\) be a compatible set of \(nt\)-cbc whose stable is bordered by \(\left(P+n\left[t\right],Q\right)\) and such that \(\mathcal{E}^{\prime}\in\mathcal{H}_{t}\left(\mathcal{E}\right)\). For all \(X_{1},\ldots,X_{k}\in\mathcal{E}\), there exist \(Y_{1},\ldots,Y_{k}\in\mathcal{E}^{\prime}\) such that \(Y_{i}\in H_{t}\left(X_{i}\right)\), when \(i\in\left[1,k\right]\). By hypothesis, (25) holds, and together with (21) it gives (24). Applying (23) to (24) then yields (22).
This concludes the proof.
We introduce a periodic notion for cbc and compatible sets.
**Definition 4.2**.: _We say that \(Y\) is \(n\)**-right-periodic** if there exists an \(n\)-cbc \(X\) and \(t>1\), such that \(Y\in H_{t}\left(X\right)\) and we say that \(Y\) is \(n\)**-periodic** if \(Y\) or \(\delta\left(Y\right)\) is \(n\)**-right-periodic**._
_More generally, we say that a compatible set is \(n\)**-right-periodic** if all its elements are \(n\)-right-periodic._
For example, the cbc (16) is 4-periodic since
\[\left\{b,ab,a^{2}ba,a^{3}ba^{3}\right\} \tag{26}\]
is a \(4\)-cbc and (20) holds.
**Remark 4.3**.: _Note that if \(Y\) is an \(n\)-**right-periodic**\(nt\)-cbc then \(Y\in H_{t}\left(\overline{Y}^{n}\right)\), where_
\[\overline{Y}^{n}=\left\{a^{\overline{i}^{n}}ba^{\overline{j}^{n}},\,a^{i}ba^{ j}\in Y\right\}.\]
The next proposition links periodicity of factorizations to periodicity of compatible sets.
**Proposition 4.4**.: _Let \(\mathcal{E}\) be a compatible set of \(nt\)-cbc such that \(\mathcal{E}^{\circ}\) is bordered by \(\left(P+n\left[t\right],Q\right)\). If for all \(k\in\left[nt\right]\) and \(Y\in\mathcal{E}\), the sets_
\[R_{k}^{nt}\left(a^{P}Y\right) \tag{27}\]
_are \(n\)-periodic in \(\mathbb{Z}_{nt}\) then \(\mathcal{E}\) is \(n\)-right-periodic._
Proof.: For all \(Y\in\mathcal{E}\), if the sets (27) are \(n\)-periodic then
\[a^{P+n[t]}Y\equiv_{nt}a^{P+n[t]}\overline{Y}^{n}a^{n[t]} \tag{28}\]
is unambiguous and thus \(\left|\overline{Y}^{n}\right|=n\). Moreover, if \(a^{i+k_{1}n}ba^{j},a^{i+k_{2}n}ba^{j}\in Y\), where \(i<n\) and \(k_{1},k_{2}<t\), then for any \(p\in P\),
\[a^{p+k_{2}n}a^{i+k_{1}n}ba^{j}\equiv_{nt}a^{p+k_{1}n}a^{i+k_{2}n}ba^{j}\]
and since (28) is unambiguous, it implies that \(k_{1}=k_{2}\). Thus if \(a^{i}ba^{j}\in\overline{Y}^{n}\) then there exist \(k_{1},\ldots,k_{t}\in\left[t\right]\) such that
\[\left\{a^{i+k_{1}n}ba^{j},\ldots,a^{i+k_{t}n}ba^{j+(t-1)n}\right\}\subseteq Y.\]
Therefore \(Y\in H_{t}\left(\overline{Y}^{n}\right)\).
Moreover, according to Proposition 4.1, \(\overline{Y}^{n}\) is an \(n\)-cbc. So \(Y\) (and thus \(\mathcal{E}\)) is \(n\)-right-periodic. This concludes the proof.
We define a _Hajos_ notion for cbc as composition of periodic cbc. Formally, we denote by \(H_{n}\) the set of _Hajos cbc_ of size \(n\) that we recursively define as follows:
\[H_{n}:=\left\{\begin{array}{cc}\left\{\left\{b\right\}\right\}&\text{if $n=1$,} \\ \bigcup\limits_{\begin{subarray}{c}n=tm,\,t>1,\\ X\in H_{m}\end{subarray}}\delta\left(H_{t}\left(X\right)\right)\cup H_{t}\left( X\right)&\text{otherwise.}\end{array}\right.\]
Since \(\left\{b\right\}\) is a \(1\)-cbc, according to Proposition 4.1, a Hajos cbc is indeed a cbc. For example, the cbc (16) is a Hajos cbc, since (20) holds and the dual of (26) is equal to
\[\left\{a^{0+0\times 1}ba^{0},a^{0+0\times 1}ba^{0+1\times 1},a^{0+1\times 1}ba^ {0+2\times 1},a^{0+3\times 1}ba^{0+3\times 1}\right\}\in H_{4}\left(\left\{b \right\}\right).\]
We extend the Hajos notion to compatible sets. A compatible set \(\mathcal{E}\) is said to be of _Hajos_ if and only if there exists \(t_{1},\ldots,t_{k}>1\) and some cbc \(Y_{1},\ldots,Y_{k-1}\) such that for all \(Y\in\mathcal{E}\) or for all \(Y\in\delta\left(\mathcal{E}\right)\),
\[Y\in H_{t_{k}}\left(Y_{k-1}\right),\,\delta\left(Y_{k-1}\right)\in H_{t_{k-1} }\left(Y_{k-2}\right),\,\ldots,\delta\left(Y_{2}\right)\in H_{t_{2}}\left(Y_{ 1}\right),\,\delta\left(Y_{1}\right)\in H_{t_{1}}\left(\left\{b\right\}\right).\]
Note that, according to Remark 4.3, we necessarily have \(Y_{i}=\overline{Y}^{t_{1}\cdots t_{i}}\), for all \(1\leq i<k\). In particular, \(\mathcal{E}\) is of Hajos if and only if \(\delta\left(\mathcal{E}\right)\) is of Hajos.
### Krasner border
We first do some recalls about _Krasner factorizations_. An ordered pair \((P,Q)\) is a _Krasner factorization_ (of size \(n:=|P|\times|Q|\)) if and only if for all \(k\in[n]\), there exists \(p\in P\) and \(q\in Q\) such that
\[k=p+q.\]
For example, the ordered pair (17) is a Krasner factorization of size \(4\).
Krasner factorizations are completely described in [10]. For all \(t_{1},\ldots,t_{k}>1\), the ordered pairs \((U,V)\) and \((V,U)\), where
\[U:=\sum_{i\in[1,k],\,2\,\mid\,i}t_{1}\ldots t_{i-1}\,[t_{i}]\,\text{ and }V:=\sum_{i\in[1,k],\,2\,\nmid\,i}t_{1}\ldots t_{i-1}\,[t_{i}]\,, \tag{29}\]
are Krasner factorizations of size \(t_{1}\cdots t_{k}\). Conversely, any Krasner factorization can be built that way.
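The construction (29) is straightforward to implement; here is a sketch (the function name is ours), checked against the Krasner factorization (17).

```python
from itertools import product

def krasner(ts):
    """(U, V) as in (29): U collects the even-indexed blocks
    t_1 ... t_{i-1} [t_i], V the odd-indexed ones (indices start at 1)."""
    U, V = [{0}], [{0}]
    scale = 1
    for i, t in enumerate(ts, start=1):
        block = {scale * x for x in range(t)}
        (U if i % 2 == 0 else V).append(block)
        scale *= t
    sumset = lambda blocks: {sum(c) for c in product(*blocks)}
    return sumset(U), sumset(V)

assert krasner([4]) == ({0}, {0, 1, 2, 3})   # the Krasner factorization (17)
assert krasner([2, 2]) == ({0, 2}, {0, 1})
```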
Krasner factorizations naturally appear in the factorization conjecture as shown in [11] and in Proposition 3.6 of [11].
**Proposition 4.5**.: _If a finite maximal code \(M\) satisfies the factorization conjecture then the set \(\mathcal{C}_{M}\) is bordered by a Krasner factorization._
Our statement is slightly different, so we provide a straightforward proof.
Proof.: If a finite maximal code \(M\) satisfies the factorization conjecture then there exists \(P,S\subseteq\mathcal{A}^{*}\) such that
\[\underline{P}\,\underline{M}^{*}\,\underline{S}=\underline{\mathcal{A}}^{*}. \tag{30}\]
The restriction of (30) to words without the letter \(b\) implies that there exist \(P_{0}\subseteq P\) and \(S_{0}\subseteq S\) such that \((P_{0},S_{0})\) is a Krasner factorization of size \(n\), where \(a^{n}\in M\). If \((P_{0},S_{0})\) does not border \(\mathcal{C}_{M}\) then there exist \(\omega\in\mathcal{A}^{*}\), \(a^{i_{1}+nk_{1}}\omega a^{n\ell_{1}+j_{1}},a^{i_{2}+nk_{2}}\omega a^{n\ell_{2}+j_{2}}\in M^{*}\), \(p_{1},p_{2}\in P_{0}\), and \(s_{1},s_{2}\in S_{0}\) such that
\[a^{p_{1}}\,(a^{n})^{k_{2}}\,a^{i_{1}+nk_{1}}\omega a^{n\ell_{1}+j_{1}}\,(a^{n} )^{\ell_{2}}\,a^{s_{1}}=a^{p_{2}}\,(a^{n})^{k_{1}}\,a^{i_{2}+nk_{2}}\omega a^{ n\ell_{2}+j_{2}}\,(a^{n})^{\ell_{1}}\,a^{s_{2}}, \tag{31}\]
where \(i_{1},i_{2},j_{1},j_{2}\in[n]\). Thus the coefficient of (31) in \(\underline{P_{0}}\,\underline{M}^{*}\underline{S_{0}}\) is greater than or equal to \(2\), which contradicts the factorization conjecture.
According to Theorem 3.2 of [11], a factorization \((P,Q)\) is of Hajos if and only if there exists a Krasner factorization \((U,V)\) such that \((U,Q)\) and \((P,V)\) are factorizations. Our next theorem shows an equivalent result for compatible sets. We will use the following lemma, which comes from the proof of Theorem 4.13 of [12].
**Lemma 4.6**.: _Let \((U,V)\) be a Krasner factorization of size \(n\). If \(U\) is an \(m\)-periodic set then for any factorization \((P,V)\) of size \(n\), the set \(P\) is \(m\)-periodic._
**Theorem 4.7**.: _A compatible set is of Hajos if and only if its stable is bordered by a Krasner factorization._
Proof.: We prove by induction on \(n\) that any compatible set of \(n\)-cbc whose stable is bordered by a Krasner factorization is of Hajos. First, note that the unique (non-empty) compatible set of \(1\)-cbc is \(\left\{\left\{b\right\}\right\}\) and that it is of Hajos.
Assume now that any compatible set of \(j\)-cbc (where \(j<n\)) whose stable is bordered by a Krasner factorization is of Hajos. Let \(\mathcal{E}\) be a compatible set of \(n\)-cbc whose stable is bordered by a Krasner factorization \(\left(U,V\right)\). We can assume that \(n=t_{1}\cdots t_{k}\), where \(t_{i}>1\) (for \(i\in\left[1,k\right]\)), and that \(\left(U,V\right)\) is equal to (29) (otherwise, we can consider \(\delta\left(\mathcal{E}\right)\) instead of \(\mathcal{E}\)).
If \(k\) is even (respectively odd) then \(U\) (resp. \(V\)) is \(t_{1}\cdots t_{k-1}\)-periodic and for any \(X\in\mathcal{E}\) and \(\ell\in\left[n\right]\),
\[\left(R_{\ell}^{n}\left(a^{U}X\right),V\right)\quad\left(\text{resp. }\left(U,L_{\ell}^{n}\left(Xa^{V}\right)\right)\right)\]
is a factorization. Moreover according to Lemma 4.6, we know that
\[R_{\ell}^{n}\left(a^{U}X\right)\quad\left(\text{resp. }L_{\ell}^{n}\left(Xa^{V} \right)\right)\]
is also \(t_{1}\cdots t_{k-1}\)-periodic.
Thus according to Proposition 4.4, \(\mathcal{E}\in\mathcal{H}_{t_{k}}\left(\mathcal{E}^{\prime}\right)\) (resp. \(\delta\left(\mathcal{E}\right)\in\mathcal{H}_{t_{k}}\left(\mathcal{E}^{\prime}\right)\)), where \(\mathcal{E}^{\prime}\) is a compatible set of \(t_{1}\cdots t_{k-1}\)-cbc whose stable is bordered by the Krasner factorization \(\left(U,V\right)\), where \(k\) is decremented (i.e. \(k\gets k-1\)). By the induction hypothesis, \(\mathcal{E}^{\prime}\) is of Hajos, thus \(\mathcal{E}\) is also of Hajos.
The converse follows by a straightforward induction. Indeed, \(\left\{\left\{b\right\}\right\}\) is bordered by the Krasner factorization \(\left(\left\{0\right\},\left\{0\right\}\right)\) and, according to Proposition 4.1, if \(\mathcal{E}\) is a compatible set of \(n\)-cbc whose stable is bordered by a Krasner factorization \(\left(U,V\right)\) then \(\mathcal{E}^{\prime}\in\mathcal{H}_{t}\left(\mathcal{E}\right)\) (resp. \(\delta\left(\mathcal{E}^{\prime}\right)\in\mathcal{H}_{t}\left(\delta\left(\mathcal{E}\right)\right)\)) is bordered by the Krasner factorization \(\left(U+n\left[t\right],V\right)\) (resp. \(\left(U,V+n\left[t\right]\right)\)).
According to Theorem 4.7 and its constructive proof, we know that given a stable set \(\mathcal{S}\) (such as \(\mathcal{C}_{M}\), when \(M\) is a code that satisfies the factorization conjecture) bordered by a Krasner factorization \(\left(U,V\right)\) (we can suppose that it is equal to (29) and that \(k\) is even; the other cases are similar), we have
\[Y\in H_{t_{k}}\left(Y_{k-1}\right),\,\delta\left(Y_{k-1}\right)\in H_{t_{k-1}} \left(Y_{k-2}\right),\,\ldots,\delta\left(Y_{2}\right)\in H_{t_{2}}\left(Y_{1} \right),\,\delta\left(Y_{1}\right)\in H_{t_{1}}\left(\left\{b\right\}\right),\]
for all \(Y\in\mathcal{S}\), where \(Y_{i}=\overline{Y}^{t_{1}\cdots t_{i}}\).
## 5 Cbc Hajos numbers
In this section, we fully characterize _cbc Hajos numbers_. These are the numbers \(n\) such that every compatible set of \(n\)-cbc is of Hajos. This is summed up in Theorem 5.12.
### Hajos cases
It is well known in the theory of factorizations of abelian groups that given a factorization \(\left(P,Q\right)\) such that \(\left|P\right|\) is a power of a prime, either \(P\) or \(Q\) is periodic. See for example Theorem 6.1.1 from [10]. Inspired by Proposition 3.1 from [10], we prove the following slightly stronger result for the particular case of cyclic groups.
**Proposition 5.1**.: _If \((P,Q_{1})\) and \((P,Q_{2})\) are factorizations of size \(n\) such that \(|P|\) is a power of a prime then either \(P\) is periodic or \(Q_{1}\) and \(Q_{2}\) share a common period._
In order to prove it, we will use the following lemma stated as Theorem 5.5 in [10].
**Lemma 5.2**.: _If \((P,Q)\) is a normalized factorization of size \(n\) and \(|P|=p^{\alpha}q^{\beta}\), where \(p,q\) are primes and \(\alpha,\beta\geq 0\), then either \(\langle P\rangle\) (the subgroup generated by \(P\)) is not equal to \(\mathbb{Z}_{n}\) or \(\langle Q\rangle\neq\mathbb{Z}_{n}\)._
Proof of Proposition 5.1.: We prove it by induction on \(n\). We can suppose that \((P,Q_{1})\) and \((P,Q_{2})\) are normalized factorizations of size \(n\) and that \(|P|=p^{\alpha}\), where \(p\) is prime and \(\alpha\geq 0\).
If \(|P|=1\) then \(P=\{0\}\) and \(Q_{1}=Q_{2}=[n]\) (in \(\mathbb{Z}_{n}\)) and thus they verify the proposition. Symmetrically, if \(|Q_{1}|=|Q_{2}|=1\) then \(P=[n]\) (in \(\mathbb{Z}_{n}\)) and \(Q_{1}=Q_{2}=\{0\}\) and thus the proposition is satisfied.
Suppose that the proposition is true for every factorization of size \(k<n\). According to Lemma 5.2, either \(\langle P\rangle\neq\mathbb{Z}_{n}\) or \(\langle Q_{1}\rangle\neq\mathbb{Z}_{n}\) and \(\langle Q_{2}\rangle\neq\mathbb{Z}_{n}\). In the first case, there exists a prime \(t\,|\,n\) such that \(P\subseteq t\mathbb{Z}\). According to Lemma 2.4 from [10], for all \(q_{1}\in Q_{1}\) and \(q_{2}\in Q_{2}\),
\[\left(\frac{1}{t}P,\frac{1}{t}\left((Q_{i}-q_{i})\cap t\mathbb{Z}\right) \right)\quad\text{ (where $i=1,2$)} \tag{32}\]
are normalized factorizations of size \(\frac{n}{t}\).
By the induction hypothesis, either \(\frac{1}{t}P\) is periodic in \(\mathbb{Z}_{\frac{n}{t}}\), and thus \(P\) is periodic in \(\mathbb{Z}_{n}\), or the right sides of (32) share a common period \(g\) in \(\mathbb{Z}_{\frac{n}{t}}\). For the second case, we have that
\[tg\in\bigcap_{q_{j}\in Q_{i}}\left(Q_{i}-q_{j}\right),\]
for \(i=1,2\). Thus \(tg\) is a common period of \(Q_{1}\) and \(Q_{2}\) in \(\mathbb{Z}_{n}\) according to Lemma 2.8 from [10].
The last case occurs when \(\langle P\rangle=\mathbb{Z}_{n}\) and thus \(\langle Q_{1}\rangle\neq\mathbb{Z}_{n}\). There exists a prime \(t\,|\,n\) such that \(Q_{1}\subseteq t\mathbb{Z}\). If \(t\neq p\) then \(tP+Q_{1}\subseteq t\mathbb{Z}\neq\mathbb{Z}_{n}\), which contradicts Proposition 3 from [10]. Thus \(t=p\) and \(Q_{1}\subseteq p\mathbb{Z}\). We also get \(Q_{2}\subseteq p\mathbb{Z}\) with the same argument.
Similarly as before, for all \(p_{j}\in P\),
\[\left(\frac{1}{p}\left((P-p_{j})\cap p\mathbb{Z}\right),\frac{1}{p}Q_{i} \right)\quad\text{ (where $i=1,2$)} \tag{33}\]
are normalized factorizations of size \(\frac{n}{p}\). By the induction hypothesis, either \(\frac{1}{p}Q_{1}\) and \(\frac{1}{p}Q_{2}\) share a common period in \(\mathbb{Z}_{\frac{n}{p}}\), and thus \(Q_{1}\) and \(Q_{2}\) share a common period in \(\mathbb{Z}_{n}\), or the left sides of (33) are periodic in \(\mathbb{Z}_{\frac{n}{p}}\).
For the second case, since their cardinalities are powers of \(p\), they share a common period \(g\) in \(\mathbb{Z}_{\frac{n}{p}}\) (take \(g\) as the maximum of their periods, for example). Thus, we have that
\[pg\in\bigcap_{p_{j}\in P}\left(P-p_{j}\right).\]
So \(pg\) is a period of \(P\) in \(\mathbb{Z}_{n}\), according to Lemma 2.8 from [10]. This concludes the proof.
We extend Proposition 5.1 to compatible sets.
**Theorem 5.3**.: _If \(\mathcal{E}\) is a compatible set of cbc such that its stable is bordered by \((P,Q)\) where \(|P|\) is a power of a prime then \(\mathcal{E}\) is of Hajos._
Proof.: We prove it by induction on \(n\). Let \(\mathcal{E}\) be a compatible set of \(n\)-cbc whose stable is bordered by \((P,Q)\). If \(|P|=1\) (resp. \(|Q|=1\)) then the Krasner factorization \(\left(\left\{0\right\},\left[n\right]\right)\) (resp. \(\left(\left[n\right],\left\{0\right\}\right)\)) borders \(\mathcal{E}^{\circ}\) and thus \(\mathcal{E}\) is of Hajos according to Theorem 4.7.
Suppose that the proposition is true for every compatible set of \(i\)-cbc, where \(i<n\). Let
\[\mathcal{L}:=\left\{Q\right\}\cup\left\{L_{k}^{n}\left(Xa^{Q}\right),k\in \left[n\right],X\in\mathcal{E}\right\}\]
and
\[\mathcal{R}:=\left\{P\right\}\cup\left\{R_{k}^{n}\left(a^{P}X\right),k\in \left[n\right],X\in\mathcal{E}\right\}.\]
For any \(L\in\mathcal{L},R\in\mathcal{R}\), the ordered pair \((R,L)\) is a factorization where \(|R|\) is a power of a prime. Thus, according to Proposition 5.1, either the elements of \(\mathcal{L}\) or the elements of \(\mathcal{R}\) share a common period. In the first case (resp. the second), according to Proposition 4.4, there exists a compatible set \(\mathcal{E}^{\prime}\) and \(t>1\) such that \(\delta\left(\mathcal{E}\right)\in\mathcal{H}_{t}\left(\mathcal{E}^{\prime}\right)\) (resp. \(\mathcal{E}\in\mathcal{H}_{t}\left(\mathcal{E}^{\prime}\right)\)). By the induction hypothesis, \(\mathcal{E}^{\prime}\) is of Hajos, thus \(\mathcal{E}\) is also of Hajos according to Proposition 4.1.
Theorem 5.3 provides two straightforward corollaries that characterize cbc Hajos numbers.
**Corollary 5.4**.: _If \(n\) is the product of at most three primes (possibly equal) then it is a cbc Hajos number._
Proof.: Let \(\mathcal{E}\) be a compatible set of \(n\)-cbc, where \(n\) is the product of at most three primes. According to Theorem 3.4, \(\mathcal{E}^{\circ}\) is bordered by a factorization \((P,Q)\). Since \(n=|P|\times|Q|\), either \(|P|\) or \(|Q|\) is equal to \(1\) or to a prime; thus \(\mathcal{E}\) is of Hajos according to Theorem 5.3. This concludes the proof.
**Corollary 5.5**.: _Numbers of the form \(p^{k}q\), where \(k\geq 0\) and \(p,q\) are primes, are cbc Hajos numbers._
Proof.: Let \(\mathcal{E}\) be a compatible set of \(p^{k}q\)-cbc, where \(k\geq 0\) and \(p,q\) are primes. According to Theorem 3.4, \(\mathcal{E}^{\circ}\) is bordered by a factorization \((P,Q)\). Since \(p^{k}q=|P|\times|Q|\), either \(|P|\) or \(|Q|\) is a power of \(p\); thus \(\mathcal{E}\) is of Hajos according to Theorem 5.3. This concludes the proof.
The Hajos characterization provides some simple enumerative formulas.
**Example 5.6**.: _Given a prime number \(p\), we can enumerate and count \(p\)-cbc which are also Hajos cbc of size \(p\), according to Corollary 5.4. We have that_
\[|H_{p}|=|H_{p}\left(\left\{b\right\}\right)|+\left|\delta\left(H_{p}\left( \left\{b\right\}\right)\right)\right|-\left|H_{p}\left(\left\{b\right\} \right)\cap\delta\left(H_{p}\left(\left\{b\right\}\right)\right)\right|.\]
_We first enumerate the set \(H_{p}\left(\left\{b\right\}\right)\) which is equal to_
\[\left\{\left\{a^{k_{1}}b,a^{k_{2}}ba,\ldots,a^{k_{p}}ba^{p-1}\right\},k_{1}, \ldots,k_{p}\in\left[p\right]\right\}.\]
_Thus \(\left|H_{p}\left(\left\{b\right\}\right)\right|=p^{p}\). Similarly, we have \(\left|\delta\left(H_{p}\left(\left\{b\right\}\right)\right)\right|=p^{p}\). Moreover, their intersection is equal to_
\[\left\{\left\{a^{0}ba^{\sigma_{1}-1},a^{1}ba^{\sigma_{2}-1},\ldots,a^{p-1}ba^{ \sigma_{p}-1}\right\},\sigma\in\mathfrak{S}_{p}\right\}, \tag{34}\]
_where \(\mathfrak{S}_{p}\) is the group of permutations of size \(p\). Thus the cardinality of (34) is equal to \(p!\). Finally, we have the formula_
\[\left|H_{p}\right|=2p^{p}-p!.\]
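This count is easily confirmed by brute force for small primes; a sketch, with our usual encoding of \(a^{i}ba^{j}\) as \((i,j)\):

```python
from itertools import product
from math import factorial

def hajos_p_cbc(p):
    """All Hajos cbc of prime size p: H_p({b}) together with its duals."""
    right = {frozenset(zip(ks, range(p)))
             for ks in product(range(p), repeat=p)}
    dual = {frozenset((j, i) for (i, j) in X) for X in right}
    return right | dual

for p in (2, 3):
    assert len(hajos_p_cbc(p)) == 2 * p**p - factorial(p)
```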
### Non-Hajos cases
In this section, we prove that numbers not concerned by Corollaries 5.4 and 5.5 are not cbc Hajos numbers.
**Proposition 5.7**.: _Non-Hajos numbers are non-cbc Hajos numbers._
Proof.: Let \(n\) be a non-Hajos number and \((P,Q)\) be a non-Hajos factorization of size \(n\). Suppose that the \(n\)-cbc \(a^{P}ba^{Q}\) is of Hajos. Then, according to Theorem 4.7, it is bordered by a Krasner factorization \((U,V)\). The ordered pairs \((U,P)\) and \((Q,V)\) must be factorizations and thus, according to Theorem 3.2 of [10], \((P,Q)\) must be a Hajos factorization, which is a contradiction. Thus \(n\) is a non-cbc Hajos number.
Even if the numbers \(p_{1}^{2}q_{1}^{2},p_{1}p_{2}q_{1}^{2}\), and \(p_{1}p_{2}q_{1}q_{2}\) (when \(p_{1},p_{2},q_{1},q_{2}\) are distinct primes) are Hajos numbers, we prove in this section that they are not cbc Hajos numbers.
For the rest of this section, we fix the ordered pairs \((L,R_{1})\) and \((L,R_{2})\), where
\[L:=p_{1}p_{2}\left[q_{1}\right]+q_{1}q_{2}\left[p_{1}\right],R_{1}:=p_{1}p_{2 }q_{1}\left[q_{2}\right]+p_{1}\left[p_{2}\right],R_{2}:=p_{1}q_{1}q_{2}\left[p _{2}\right]+q_{1}\left[q_{2}\right],\]
and \(p_{1},p_{2},q_{1},q_{2}\) are primes such that \(p_{1}p_{2}\wedge q_{1}=q_{1}q_{2}\wedge p_{1}=1\). For example, if \(p_{1}=2,p_{2}=2,q_{1}=3\), and \(q_{2}=3\) then
\[\begin{array}{rclrcl}L&=&\left\{0,4,8\right\}+\left\{0,9\right\}&=&\left\{ 0,4,8,9,13,17\right\},\\ R_{1}&=&\left\{0,12,24\right\}+\left\{0,2\right\}&=&\left\{0,2,12,14,24,26 \right\},\\ R_{2}&=&\left\{0,18\right\}+\left\{0,3,6\right\}&=&\left\{0,3,6,18,21,24\right\}.\end{array}\]
First, we prove that those ordered pairs are factorizations of size \(n:=p_{1}p_{2}q_{1}q_{2}\).
**Proposition 5.8**.: _The ordered pairs \((L,R_{1})\) and \((L,R_{2})\) are factorizations._
Proof.: The sum \(L+R_{1}\) is equal to
\[p_{1}\left[p_{2}\right]+p_{1}p_{2}\left[q_{1}\right]+p_{1}p_{2}q_{1}\left[q_{2 }\right]+q_{1}q_{2}\left[p_{1}\right]=p_{1}\left[p_{2}q_{1}q_{2}\right]+q_{1}q _{2}\left[p_{1}\right]\]
Since \(q_{1}q_{2}\wedge p_{1}=1\), the set \(q_{1}q_{2}\left[p_{1}\right]\) is equal to \(\left[p_{1}\right]\) in \(\mathbb{Z}_{p_{1}}\). So \(L+R_{1}\) is equal to
\[p_{1}\left[p_{2}q_{1}q_{2}\right]+\left[p_{1}\right]=\left[n\right]\]
in \(\mathbb{Z}_{n}\). Thus \((L,R_{1})\) is a factorization.
A similar argument applies to \((L,R_{2})\).
Now, we study their periodicity.
**Proposition 5.9**.: _The sets \(R_{1}\) and \(R_{2}\) are periodic in \(\mathbb{Z}_{n}\) without a common period and \(L\) is not periodic in \(\mathbb{Z}_{n}\)._
Proof.: By definition, \(R_{1}\) and \(R_{2}\) are periodic in \(\mathbb{Z}_{n}\) with periods \(p_{1}p_{2}q_{1}\) and \(p_{1}q_{1}q_{2}\) respectively. Since \(\left|R_{1}\right|=\left|R_{2}\right|=p_{2}q_{2}\), if \(R_{1}\) and \(R_{2}\) share a common period then either \(R_{1}\) or \(R_{2}\) has period \(p_{1}q_{1}\). Suppose that \(p_{1}q_{1}\) is a period of \(R_{1}\); then \(R_{1}=p_{1}q_{1}\left[p_{2}q_{2}\right]\) in \(\mathbb{Z}_{n}\), which is impossible since \(p_{1}=0+p_{1}\in R_{1}\) and \(p_{1}\not\in p_{1}q_{1}\left[p_{2}q_{2}\right]=R_{1}\). Similarly, we prove that \(p_{1}q_{1}\) is not a period of \(R_{2}\). Thus \(R_{1}\) and \(R_{2}\) are periodic in \(\mathbb{Z}_{n}\) without a common period.
Suppose that \(L\) is periodic in \(\mathbb{Z}_{n}\) with period \(g\). Since \(\left|L\right|=p_{1}q_{1}\), \(g\in\left\{p_{2}q_{2},p_{2}q_{2}p_{1},p_{2}q_{2}q_{1}\right\}\). If \(g=p_{2}q_{2}\) then \(L=p_{2}q_{2}\left[p_{1}q_{1}\right]\) in \(\mathbb{Z}_{n}\) but \(q_{1}q_{2}\in L\) and \(q_{1}q_{2}\not\in p_{2}q_{2}\left[p_{1}q_{1}\right]\) thus \(g\neq p_{2}q_{2}\). Moreover, since for all \(k\in\left[p_{1}\right]\), \(p_{1}\wedge q_{2}q_{1}k=1\) then there is no \(L^{\prime}\) such that \(L=L^{\prime}+p_{2}q_{2}p_{1}\left[q_{1}\right]\) and thus \(g\neq p_{2}q_{2}p_{1}\). Similarly, we prove that \(g\neq p_{2}q_{2}q_{1}\).
This concludes the proof.
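For the example \(p_{1}=p_{2}=2\) and \(q_{1}=q_{2}=3\) (so \(n=36\)), Propositions 5.8 and 5.9 can be checked mechanically, reusing the `is_factorization` and `periods` sketches from Section 1:

```python
n = 36
L = {0, 4, 8, 9, 13, 17}
R1 = {0, 2, 12, 14, 24, 26}
R2 = {0, 3, 6, 18, 21, 24}

assert is_factorization(L, R1, n) and is_factorization(L, R2, n)
assert periods(L, n) == []                             # L is not periodic
assert set(periods(R1, n)).isdisjoint(periods(R2, n))  # no common period
```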
We build a cbc over these factorizations. Let \(Y\) be the cbc
\[\sum_{\ell\,\in\,L}a^{\ell}ba^{D_{\ell}},\]
where \(D_{\ell}\in\left\{R_{1},R_{2}\right\}\) for each \(\ell\in L\), and where \(\ell_{1},\ell_{2}\in L\) are such that \(D_{\ell_{1}}=R_{1}\) and \(D_{\ell_{2}}=R_{2}\).
**Proposition 5.10**.: _The set \(Y\) is a non-Hajos cbc._
Proof.: Suppose that \(Y\) is a Hajos cbc. Then it is bordered by a Krasner factorization \(\left(U,V\right)\). In particular, \(\left(U,L\right),\left(R_{1},V\right),\) and \(\left(R_{2},V\right)\) must be factorizations. According to Theorem 3.2 of [10], \(\left(U,L\right)\) is of Hajos and, since \(L\) is not periodic, \(U\) is periodic. Moreover, according to Lemma 4.6, the sets \(U,R_{1}\), and \(R_{2}\) must share a common period, which is contradicted by Proposition 5.9.
Proposition 5.10 implies that numbers of the form \(p_{1}^{2}q_{1}^{2},p_{1}p_{2}q_{1}^{2}\), and \(p_{1}p_{2}q_{1}q_{2}\), where \(p_{1},p_{2},q_{1},q_{2}\) are distinct primes, are non-cbc Hajos numbers.
**Example 5.11**.: _According to Proposition 5.10, the \(36\)-cbc_
\[\left\{a^{36}\right\}\cup ba^{\{0,2,12,14,24,26\}}\cup a^{\{4,8,9,13,17\}}ba^ {\{0,3,6,18,21,24\}}\]
_is not of Hajos. Similarly to Proposition 5.10, we can show that the code_
\[\left\{a^{36}\right\}\cup a^{\{0,4,8,9,13,17\}}ba^{\{0,2,12,14,24,26\}}\cup a ^{\{0,4,8,9,13,17\}}ca^{\{0,3,6,18,21,24\}}\]
_over the alphabet \(\left\{a,b,c\right\}\) is not of Hajos. If one of them were included in a finite maximal code, then, since it is not bordered by a Krasner factorization, it would provide a counterexample to the factorization conjecture._
Thanks to Corollaries 5.4 and 5.5 and Propositions 5.7 and 5.10, we conclude this section by providing the exhaustive list of cbc Hajos numbers.
**Theorem 5.12**.: _Cbc Hajos numbers are exactly the products of at most three primes and the numbers of the form \(p^{k}q\), where \(k\geq 0\) and \(p,q\) are primes._
The smallest non-cbc Hajos numbers are therefore \(2^{2}3^{2}=36\), \(2^{2}3\times 5=60\), and \(2^{3}3^{2}=72\). They are referred to as sequence A320632 in [SIb].
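Theorem 5.12 makes cbc Hajos numbers easy to enumerate; a sketch (our names):

```python
from collections import Counter

def prime_exponents(n):
    """Prime factorization of n as a Counter {prime: exponent}."""
    exps, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            exps[d] += 1
            n //= d
        d += 1
    if n > 1:
        exps[n] += 1
    return exps

def is_cbc_hajos(n):
    e = prime_exponents(n)
    omega = sum(e.values())  # number of primes, counted with multiplicity
    p_k_q = len(e) == 1 or (len(e) == 2 and min(e.values()) == 1)
    return omega <= 3 or p_k_q

non_hajos = [n for n in range(2, 100) if not is_cbc_hajos(n)]
assert non_hajos[:3] == [36, 60, 72]   # the start of sequence A320632
```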
## 6 Prefix-suffix codes
We recall that given two codes \(C_{1}\) and \(C_{2}\) over the alphabet \(\mathcal{A}\), the code \(C_{1}\) is said to be a _composition_ of \(C_{2}\) if and only if \(C_{1}\) is a code over the alphabet \(C_{2}\) (i.e. \(C_{1}\subseteq{C_{2}}^{*}\)). For example, the code
\[\left\{aa,ab,abbab,bbaa\right\} \tag{35}\]
is a composition of the code \(\left\{aa,ab,b\right\}\) since it is equal to
\[\left\{aa,ab,\left(ab\right)\left(b\right)\left(ab\right),\left(b\right)\left(b\right)\left(aa\right)\right\}.\]
Of course, a code is always a composition of itself and of its alphabet.
We recall that according to Proposition 2.6.1 from [1], if \(C_{2}\) is a code (over \(\mathcal{A}\)) and \(C_{1}\) is a code over \(C_{2}\) then \(C_{1}\) is a code over \(\mathcal{A}\). A code \(C_{1}\) is recursively said to be a _prefix-suffix_ code over \(C_{2}\) if it is equal to \(C_{2}\) or if it is a prefix or suffix code over a _prefix-suffix_ code over \(C_{2}\). For example, the code (35) is a prefix code over
\[\left\{aa,ab,abb,bb\right\}\]
which is a suffix code over
\[\left\{aa,ab,b\right\}\]
which is again a prefix code over \(\mathcal{A}\). Thus (35) is a prefix-suffix code (over \(\mathcal{A}\)). We simply say _prefix-suffix code_ when it is a prefix-suffix code over \(\mathcal{A}\). Prefix-suffix codes are included in finite maximal codes according to Corollary 1 from [13].
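Each step of such a chain reduces to a prefix or suffix check over the code one level down. In the following Python sketch (ours, for illustration), words are written as tuples of symbols, so the same two checks apply at every level of the chain:

```
def is_prefix_code(C):
    """No element of C is a proper prefix of another (elements are tuples of symbols)."""
    return not any(u != v and v[:len(u)] == u for u in C for v in C)

def is_suffix_code(C):
    return not any(u != v and v[-len(u):] == u for u in C for v in C)

# (35) written over {aa, ab, abb, bb}:
assert is_prefix_code({("aa",), ("ab",), ("abb", "ab"), ("bb", "aa")})
# {aa, ab, abb, bb} written over {aa, ab, b}:
assert is_suffix_code({("aa",), ("ab",), ("ab", "b"), ("b", "b")})
# {aa, ab, b} written over the alphabet {a, b}:
assert is_prefix_code({("a", "a"), ("a", "b"), ("b",)})
```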
We have the following lemma, inspired by Lemmas 3.3 and 3.4 from [11].
**Lemma 6.1**.: _If \(C\) is a code containing \(a^{n}\) then for any \(i_{1}(\omega),\ldots,i_{t}(\omega),j_{1}(\omega),\ldots,j_{t}(\omega)\geq 0\), where \(\omega\in C\setminus\left\{a^{n}\right\}\), the set_
\[\left\{a^{nt}\right\}\cup\bigsqcup_{\omega\in C\setminus a^{n}}\left\{a^{ni_ {1}(\omega)}\omega a^{ntj_{1}(\omega)},a^{ni_{2}(\omega)}\omega a^{n(1+tj_{2}( \omega))},\ldots,a^{ni_{t}(\omega)}\omega a^{n(t-1+tj_{t}(\omega))}\right\} \tag{36}\]
_is a prefix-suffix code over \(C\)._
Proof.: The set
\[\left\{\left(a^{n}\right)^{t}\right\}\cup\bigsqcup_{\omega\in C\setminus a^{ n}}\left\{\left(a^{n}\right)^{i_{1}(\omega)}\omega,\left(a^{n}\right)^{i_{2}( \omega)}\omega\left(a^{n}\right),\ldots,\left(a^{n}\right)^{i_{t}(\omega)} \omega\left(a^{n}\right)^{t-1}\right\} \tag{37}\]
is a suffix code over the code \(C\) and
\[\left\{a^{nt}\right\}\cup\bigsqcup_{\omega\in C\setminus a^{n}}\left\{\left( a^{ni_{1}(\omega)}\omega\right)\left(a^{nt}\right)^{j_{1}(\omega)},\left(a^{ni_{2}( \omega)}\omega a^{n}\right)\left(a^{nt}\right)^{j_{2}(\omega)},\ldots,\left(a^ {ni_{t}(\omega)}\omega a^{n(t-1)}\right)\left(a^{nt}\right)^{j_{t}(\omega)}\right\}\]
is a prefix code over the code (37). Thus the code (36) is a prefix-suffix code over \(C\).
We deduce from this lemma a theorem about completion.
**Theorem 6.2**.: _Let \(\mathcal{E}\) be a Hajos compatible set of \(n\)-cbc, where \(n>1\), and \(C:=\left\{a,\omega_{1},\ldots,\omega_{k}\right\}\) a prefix-suffix code. For any \(X_{1},\ldots,X_{k}\subseteq a^{*}ba^{*}\) such that \(\overline{X_{1}}^{n},\ldots,\overline{X_{k}}^{n}\in\mathcal{E}\), the set_
\[\left\{a^{n}\right\}\cup\bigsqcup_{i=1}^{k}X_{i}[b\leftarrow\omega_{i}], \tag{38}\]
_where \(X[b\leftarrow\omega]\) is the set of words of \(X\) whose letters \(b\) are replaced by the word \(\omega\), is a prefix-suffix code and thus it is included in a finite maximal code._
Proof.: We prove it by induction on \(n\).
Since \(\mathcal{E}\) is of Hajos, there exists a compatible set \(\mathcal{E}^{\prime}\) of \(m\)-cbc such that \(n=mt,t>1\), and \(\mathcal{E}\in\mathcal{H}_{t}\left(\mathcal{E}^{\prime}\right)\) or \(\delta\left(\mathcal{E}\right)\in\mathcal{H}_{t}\left(\mathcal{E}^{\prime}\right)\). We can assume that \(\mathcal{E}\in\mathcal{H}_{t}\left(\mathcal{E}^{\prime}\right)\); the other case is similar. Therefore, for all \(X_{1},\ldots,X_{k}\in\mathcal{E}\), there exist \(Y_{1},\ldots,Y_{k}\in\mathcal{E}^{\prime}\) such that \(X_{i}\in H_{t}\left(Y_{i}\right)\) for all \(i\in[1,k]\).
If \(m=1\) then \(X_{i}\in H_{t}\left(\left\{b\right\}\right)\) for all \(i\in[1,k]\). Moreover, since \(C\) is a prefix-suffix code, according to Lemma 6.1 the set (38) is also a prefix-suffix code.
Otherwise (when \(m>1\)) we can assume, by the induction hypothesis, that
\[\left\{a^{m}\right\}\cup\bigsqcup_{i=1}^{k}Y_{i}[b\leftarrow\omega_{i}]\]
is a prefix-suffix code, and thus according to Lemma 6.1, the set (38) is also a prefix-suffix code.
Theorem 3.2 from [11] is the particular case of Theorem 6.2 where \(C=\left\{a,b\right\}\) and \(\mathcal{E}\) is made of one cbc of the form \(a^{P}ba^{Q}\). Note that our alphabet \(\mathcal{A}\) does not have to be binary.
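The substitution \(X[b\leftarrow\omega]\) appearing in (38) is straightforward to implement; a minimal Python sketch (ours) is given below, together with a helper that assembles the candidate code (38).

```
def substitute(X, omega):
    """X[b <- omega]: replace every letter b of every word of X by the word omega."""
    return {w.replace("b", omega) for w in X}

def completion_candidate(n, Xs, omegas):
    """Build {a^n} united with the sets X_i[b <- omega_i], i.e. the set (38)."""
    out = {"a" * n}
    for X, w in zip(Xs, omegas):
        out |= substitute(X, w)
    return out

print(substitute({"ab", "aba"}, "ba"))   # {'aba', 'abaa'}
```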
Our next corollary is a small step towards the inclusion problem.
**Corollary 6.3**.: _Let \(n\) be a cbc Hajos number, \(\omega\in\mathcal{A}^{*}\setminus a^{*}\), and \(X\subseteq a^{*}\omega a^{*}\). Considering the set \(\left\{a^{n}\right\}\cup X\), the following statements are equivalent:_
1. _it is included in a finite maximal code,_
2. \(C_{X}\left(\omega\right)\) _is included in an_ \(n\)_-cbc,_
3. \(C_{X}\left(\omega\right)\) _is included in an_ \(n\)_-Hajos cbc,_
4. _it is a prefix-suffix code._
Proof.: Statement 1 implies Statement 2 by the background recalled in Section 1, and Statement 2 implies Statement 3 since \(n\) is a cbc Hajos number.
Suppose that Statement 3 is true. Let \(Y\) be a Hajos cbc that contains \(C_{X}\left(\omega\right)\). The set \(\left\{a,\omega\right\}\) is a code since \(\omega\in\mathcal{A}^{*}\setminus a^{*}\) and it is prefix-suffix because any code with two elements is prefix-suffix according to Theorem 3 from [10]. Thus according to Theorem 6.2,
\[\left\{a^{n}\right\}\cup X\subseteq\left\{a^{n}\right\}\cup Y[b\leftarrow\omega]\]
is a prefix-suffix code. This proves that Statement 3 implies Statement 4.
We recall that Statement 4 implies Statement 1 according to Corollary 1 from [10].
For example, the code
\[\{aaab,aaba,b,ba\} \tag{39}\]
is not prefix-suffix (we do not prove it; we just did a computer check). Thus if it is included in a finite maximal code then it would not be bordered by a Krasner factorization, and thus it would provide a counterexample to the factorization conjecture. Such a code would contain a word of the form \(a^{n}\) where \(n\) is a non-cbc Hajos number, in particular \(n\geq 36\).
**Remark 6.4**.: _There is a converse to Theorem 6.2. Indeed, any prefix-suffix code is included in a prefix-suffix finite maximal code. Such a code, call it \(M\), satisfies the factorization conjecture according to Proposition 14.1.2 from [1]. Thus according to Proposition 4.5, \(C_{M}\) is bordered by a Krasner factorization and thus it is of Hajos according to Theorem 4.7._
The next theorem provides the best known bound for (the strong version of) the long-standing _triangle conjecture_.
**Theorem 6.5**.: _The strong triangle conjecture is true for the particular cases where \(n\) is a cbc Hajos number._
Proof.: If \(X\) is an \(n\)-cbc where \(n\) is a cbc Hajos number then it is prefix-suffix according to Theorem 6.2. Moreover, according to Example 14.6.1 and Proposition 14.6.3 from [1], any prefix-suffix cbc verifies the triangle property.
According to Theorem 6.5, the strong triangle conjecture (and thus the Zhang and Shum conjecture) is, in particular, true when \(n<36\).
## 7 Not commutatively prefix bayonet codes
In this section, we prove a conjecture about the size of a potential counterexample to the triangle conjecture.
A list of codes that do not verify the original triangle conjecture3 is exhibited in [11, 12]; they are called _not commutatively prefix_ bayonet codes. We recall that if one of those is included in a finite maximal code then the triangle conjecture and the factorization conjecture are false. And a necessary condition for a bayonet code to be included in a finite maximal code is to be included in a cbc.
Footnote 3: the first counterexample was found by Shor, as recalled in the introduction.
The following conjecture about the divisibility of \(n\) such that an \(n\)-cbc contains a given bayonet code is proposed in [11].
**Conjecture 7.1**.: _For any \(n\)-cbc \(X\) and \(d\) prime to \(n\), the set_
\[\varphi_{d}(X):=\left\{a^{i}ba^{\overline{dj}^{n}}\text{ such that }a^{i}ba^{j}\in X\right\}\]
_is an \(n\)-cbc._
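Reading the exponent \(\overline{dj}^{n}\) as \(dj\) reduced modulo \(n\) (consistently with Theorem 7.2 below) and storing each bayonet word \(a^{i}ba^{j}\) as the exponent pair \((i,j)\), the map \(\varphi_{d}\) can be computed as in the following Python sketch (ours):

```
from math import gcd

def phi(d, X, n):
    """phi_d on a bayonet set X, with a^i b a^j stored as the exponent pair (i, j)."""
    assert gcd(d, n) == 1, "d must be prime to n"
    return {(i, (d * j) % n) for (i, j) in X}
```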
We prove a stronger version of this conjecture.
**Theorem 7.2**.: _Let \(\mathcal{E}\) be a compatible set of \(n\)-cbc such that \(\mathcal{E}^{\circ}\) is bordered by \((P,Q)\). For any \(X\in\mathcal{E}\), \(d_{1}\) prime to \(|Q|\), and \(d_{2}\) prime to \(|P|\), the set_
\[\left\{\varphi_{d_{1},d_{2}}(X)\right\}\cup\mathcal{E},\]
_where_
\[\varphi_{d_{1},d_{2}}(X):=\left\{a^{\overline{d_{1}i}}ba^{\overline{d_{2}j}} \text{ such that }a^{i}ba^{j}\in X\right\},\]
_is a compatible set of \(n\)-cbc and its stable is bordered by \((P,Q)\)._
Proof.: We prove by induction on \(k\) the following property: for all \(X_{1},\ldots,X_{j}\in\mathcal{E}\cup\{\varphi_{d}(X)\}\) such that
\[|\{X_{i}\text{ such that }i\in[1,j]\text{ and }X_{i}=\varphi_{d}(X)\}|\leq k,\]
we have
\[a^{P}\underline{X_{1}}\cdots\underline{X_{j}}a^{Q}\equiv_{n}\left(a^{[n]}b \right)^{j}a^{[n]}.\]
It is true for \(k=0\) because \(\mathcal{E}\) is a compatible set whose stable is bordered by \((P,Q)\). Suppose now that it is true for \(k\). Let \(X_{1},\ldots,X_{j}\in\mathcal{E}\cup\{\varphi_{d}(X)\}\) be such that
\[|\{X_{i}\text{ such that }i\in[1,j]\text{ and }X_{i}=\varphi_{d}(X)\}|=k+1\]
and let \(\ell\) be
\[\min\left\{i\text{ such that }X_{i}=\varphi_{d}(X)\right\}.\]
By the induction hypothesis,
\[a^{P}\underline{X_{1}}\cdots\underline{X_{\ell-1}}\,\underline{X}\, \underline{X_{\ell+1}}\cdots\underline{X_{j}}a^{Q}\equiv_{n}\left(a^{[n]}b \right)^{j}a^{[n]}\]
thus for all \(i,i_{1},\ldots,i_{\ell-1}\in[n]\),
\[\left(R_{i}^{n}\left(a^{P}X_{1}\circ_{i_{1}}\cdots X_{\ell-1}\circ_{i_{\ell-1 }}X\right),Q\right)\]
are borders of \(\mathcal{E}^{\circ}\). According to Proposition 3.5, for all \(i,i_{1},\ldots,i_{\ell-1}\in[n]\) and \(d\) prime to \(|P|\),
\[\left(dR_{i}^{n}\left(a^{P}X_{1}\circ_{i_{1}}\cdots X_{\ell-1}\circ_{i_{\ell-1 }}X\right),Q\right)\]
are also borders of \(\mathcal{E}^{\circ}\). So
\[\left(R_{i}^{n}\left(a^{P}X_{1}\circ_{i_{1}}\cdots X_{\ell-1}\circ_{i_{\ell-1 }}\varphi_{d}(X)\right),Q\right)\]
are borders of \(\mathcal{E}^{\circ}\) and thus
\[a^{P}\underline{X_{1}}\cdots\underline{X_{j}}a^{Q}\equiv_{n}\left(a^{[n]}b \right)^{j}a^{[n]}.\]
Thus the property holds for \(k+1\). This proves that \(\mathcal{E}\cup\{\varphi_{d}(X)\}\) is a compatible set. We obtain the expected result by duality.
Theorem 7.2 indeed implies Conjecture 7.1 because, according to Proposition 3.3, for any \(n\)-cbc \(X\) there exists an ordered pair \((P,Q)\) which borders \(\left\{X\right\}^{\circ}\), and any number prime to \(n=|P|\times|Q|\) is also prime to \(|P|\) and \(|Q|\).
Theorem 7.2 allows us to compute some lower bounds about potential counterexamples to the triangle conjecture.
**Example 7.3**.: _According to [14], one of the four smallest (for cardinality) not commutatively prefix bayonet codes is_
\[T:=\left\{b,ba^{2},ba^{8},ba^{10},aba^{8},aba^{10},a^{4}b,a^{4}ba^{2},a^{5}b,a^{5} ba^{3},a^{5}ba^{6},a^{9}b,a^{9}ba^{2}\right\}.\]
_We already know4, thanks to computer exploration and factorization theory, that if \(T\) is included in an \(n\)-cbc then \(n=4k\), where \(k\geq 8\)._
Footnote 4: see section 3.1 of [14]
_We note that_
\[\left(ba^{2\times 2}\right)\left(b\right)=\left(b\right)\left(a^{4}b\right)\text{ and }\left(a^{5}ba^{3\times 3}\right)\left(b\right)=\left(a^{5}b\right)\left(a^{9}b\right)\]
_thus \(\mu_{1,2}\left(T\right)\) and \(\mu_{1,3}\left(T\right)\) are not codes, where_
\[\mu_{d_{1},d_{2}}(T):=\left\{a^{d_{1}i}ba^{d_{2}j}\text{ such that }a^{i}ba^{j}\in T\right\}.\]
_Likewise, we note that \(\mu_{2,1}\left(T\right)\) and \(\mu_{3,1}\left(T\right)\) are not codes. Thus according to Theorem 7.2, if \(T\) is included in an \(n\)-cbc \(X\) then any border \(\left(P,Q\right)\) of \(\left\{X\right\}^{\circ}\) is such that \(2\times 3\,|\,|P|\) and \(2\times 3\,|\,|Q|\), thus \(36\,|\,n\)._
_A similar argument can be applied to the other known not commutatively prefix bayonet codes._
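The two identities displayed above can be rediscovered mechanically. The following Python sketch (ours) computes \(\mu_{d_{1},d_{2}}(T)\) and searches for a double factorization into two words, which suffices to disprove the code property (its absence, of course, proves nothing):

```
from itertools import product

# T stored as exponent pairs (i, j) for a^i b a^j
T = {(0, 0), (0, 2), (0, 8), (0, 10), (1, 8), (1, 10), (4, 0), (4, 2),
     (5, 0), (5, 3), (5, 6), (9, 0), (9, 2)}

def mu(d1, d2, X):
    return {(d1 * i, d2 * j) for (i, j) in X}

def word(p):
    i, j = p
    return "a" * i + "b" + "a" * j

def two_word_collision(X):
    """Find u1*u2 = v1*v2 with (u1,u2) != (v1,v2); one collision disproves the code property."""
    seen = {}
    for u, v in product(sorted(X), repeat=2):
        w = word(u) + word(v)
        if w in seen and seen[w] != (u, v):
            return seen[w], (u, v)
        seen[w] = (u, v)
    return None

print(two_word_collision(mu(1, 2, T)))   # e.g. (b a^4)(b) = (b)(a^4 b)
print(two_word_collision(mu(1, 3, T)))   # e.g. (a^5 b a^9)(b) = (a^5 b)(a^9 b)
```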
## Conclusion and perspectives
We conclude this article by outlining our main perspectives. We do not conjecture that the general case of the strong triangle conjecture is true. We believe that techniques developed in order to build non-Hajos factorizations and non-_Redei_ factorizations, such as in [15], could be useful to create counterexamples to the strong triangle conjecture. Since every counterexample to the (Zhang and Shum) triangle conjecture must contain a counterexample to the strong triangle conjecture, we believe that it is an intermediate step towards finding a counterexample to the triangle conjecture (if it exists).
Our second perspective is the converse of Proposition 4.5: we wonder whether every finite maximal code bordered by a Krasner factorization satisfies the factorization conjecture. Thanks to the characterization provided by Theorem 4.7, we are more confident about a positive answer. If it is the case then the results about the triangle conjecture from Theorem 6.5 could be extended to the factorization conjecture.
2309.08002 | HIVE: Scalable Hardware-Firmware Co-Verification using Scenario-based Decomposition and Automated Hint Extraction | Hardware-firmware co-verification is critical to design trustworthy systems. While formal methods can provide verification guarantees, due to the complexity of firmware and hardware, it can lead to state space explosion. There are promising avenues to reduce the state space during firmware verification through manual abstraction of hardware or manual generation of hints. Manual development of abstraction or hints requires domain expertise and can be time-consuming and error-prone, leading to incorrect proofs or inaccurate results. In this paper, we effectively combine the scalability of simulation-based validation and the completeness of formal verification. Our proposed approach is applicable to actual firmware and hardware implementations without requiring any manual intervention during formal model generation or hint extraction. To reduce the state space complexity, we utilize both static module-level analysis and dynamic execution of verification scenarios to automatically generate system-level hints. These hints guide the underlying solver to perform scalable equivalence checking using proofs. The extracted hints are validated against the implementation before using them in the proofs. Experimental evaluation on RISC-V based systems demonstrates that our proposed framework is scalable due to scenario-based decomposition and automated hint extraction. Moreover, our fully automated framework can identify complex bugs in actual firmware-hardware implementations. | Aruna Jayasena, Prabhat Mishra | 2023-09-14T19:24:57Z | http://arxiv.org/abs/2309.08002v2 | # HIVE: Scalable Hardware-Firmware Co-Verification using Scenario-based Decomposition and Automated Hint Extraction
###### Abstract
Hardware-firmware co-verification is critical to design trustworthy systems. While formal methods can provide verification guarantees, due to the complexity of firmware and hardware, it can lead to state space explosion. There are promising avenues to reduce the state space during firmware verification through manual abstraction of hardware or manual generation of hints. Manual development of abstraction or hints requires domain expertise and can be time-consuming and error-prone, leading to incorrect proofs or inaccurate results. In this paper, we effectively combine the scalability of simulation-based validation and the completeness of formal verification. Our proposed approach is applicable to actual firmware and hardware implementations without requiring any manual intervention during formal model generation or hint extraction. To reduce the state space complexity, we utilize both static module-level analysis and dynamic execution of verification scenarios to automatically generate system-level hints. These hints guide the underlying solver to perform scalable equivalence checking using proofs. The extracted hints are validated against the implementation before using them in the proofs. Experimental evaluation on RISC-V based systems demonstrates that our proposed framework is scalable due to scenario-based decomposition and automated hint extraction. Moreover, our fully automated framework can identify complex bugs in actual firmware-hardware implementations.
Simulation-based Validation, Formal Verification, Firmware Verification, RISC-V Systems, Test Generation, Abstraction, Hint Extraction
## I Introduction
System-on-Chip (SoC) architectures consist of components that are implemented as both hardware and firmware. External IPs such as cryptographic accelerators, and communication modules such as UART, I\({}^{2}\)C and SPI modules, use memory-mapped input/outputs (MMIO) to communicate with the processor. These devices are typically controlled by the firmware applications and the device drivers. Both firmware applications and device drivers are usually written in the C language. Once the application is ready, it is compiled with the device drivers using the toolchain corresponding to the processor in the SoC. Next, a customized linker script informs the compiler about the memory configurations used in the setup to obtain the final compiled binary. An efficient and scalable framework is needed to verify such a complex system across all these layers of hardware, drivers, and applications.
There are promising test generation-based validation techniques for functional and security verification of SoCs [1, 2, 3]. These techniques either rely on the abstraction of the hardware-firmware interactions [3] or are applicable only to specific applications [4]. Due to exponential input space complexity, test generation based techniques are likely to miss corner cases, and therefore cannot provide a correctness guarantee. For example, we need to simulate a 64-bit adder (which adds two 64-bit inputs) with \(2^{128}\) test vectors to provide the verification guarantee. Clearly, it is infeasible to simulate a real-world design with all possible input vectors.
Formal verification methods [5, 6, 7, 8, 9, 10] can provide strong guarantees about the correctness of a system. While formal verification is not affected by input space complexity, it can lead to state space explosion when dealing with large and complex systems. As a result, the scalability of formal verification is limited to small parts of the design due to the fact that symbolic expression term size grows exponentially with the increasing design size. There are promising efforts for system-level formal verification that rely on the designer's expertise to guide the proofs with manual hints [6, 7] to partition the state space and reduce the growth of expression term size.
Existing firmware verification efforts explore two promising avenues to reduce the state space as illustrated in Figure 1. The methods in the first category perform firmware verification on top of multiple manually abstracted versions of the hardware instead of the register-transfer level (RTL) implementation. For example, Figure 1(a) shows that to perform formal verification (FV) of a firmware (FW) for \(n\) scenarios, a designer needs to manually construct the abstracted hardware model (\(HW_{i}\)) customized for the \(i\)-th scenario. Manual abstraction based on verification scenarios needs domain expertise and can be erroneous. Moreover, discrepancies between the actual implementation and the abstract model can lead to false positive results. The methods in the second category perform automated formal model generation for hardware, but they rely on manual extraction of hints from both firmware and hardware to guide formal proofs to avoid state explosion due to rapidly increasing term size. For example, Figure 1(b) shows that a designer needs to manually construct the hint (\(hint_{i}\)) customized for the \(i\)-th scenario. Manual development of hints can be time-consuming and error-prone, leading to incorrect proofs or inaccurate results.
Fig. 1: Existing formal firmware verification avenues
### _State-of-the-Art in State Space Reduction_
An inherent weakness of formal verification is state space explosion, where the backend solver has too many possibilities to explore at once. In a system that combines both hardware and firmware, there are multiple ways that state explosion can occur. Due to the concurrency of hardware, state registers can expand rapidly in each clock cycle in which they get evaluated. Figure 2 illustrates an example scenario for a state register. When the number of unrolling cycles for the design increases, the number of possible paths that the state register can take expands exponentially. Similarly, symbolic variables also expand in an exponential manner. Moreover, firmware consists of branch statements, which increases the number of possible states the system can have. These factors contribute to the symbolic state explosion and path explosion [11, 12] during formal verification.
In order to guide the formal proofs without getting into the state explosion problem, there are promising state space reduction techniques. These techniques aim to reduce the size of the state space by identifying and eliminating states that are either irrelevant or equivalent to other states for a given verification context. The goal is to make the verification process more efficient and effective by reducing the number of states that need to be analyzed. We outline four popular approaches for state space reduction.
_1) Abstraction:_ Abstraction involves simplifying the system model by removing details that are not relevant to the properties being verified [13]. This can include abstracting away certain variables, simplifying the system topology, or reducing the level of details in the system model. Usually, abstraction is a manual process that relies on the expertise of the verification engineer. The quality and the fidelity of the abstraction determine the guarantees and accuracy of the proofs. In other words, if the abstraction does not represent the system correctly, verification can lead to false positive results.
_2) Symmetry Reduction:_ Symmetry reduction involves identifying and eliminating symmetric states in the system model [13]. Symmetric states are states that are equivalent to other states under certain transformations, such as swapping the values of variables or reordering events.
_3) Partial-order Reduction:_ Partial order reduction involves analyzing only a subset of the possible orderings of events in the system model [13]. This can be achieved by identifying independent events that can be executed in any order, and then only analyzing a representative subset of the possible orderings.
_4) Decomposition:_ Decomposition enables state-space reduction by breaking a problem into multiple sub-problems [13]. Instead of going through the manual and tedious task of abstraction for system validation purposes, our proposed technique divides the main verification problem into sub-problems. Then symmetry reduction and partial order reduction techniques are directly applied to each sub-problem. Our sub-problem decomposition technique adheres to the basic principles of assume-guarantee [14, 15] and compositionality [16, 15]. Assume-guarantee defines the requirements that should exist in a system to decompose it into multiple sub-problems while providing the system-level guarantee. Compositionality refers to the property of a system or proof that allows it to be divided into smaller, independent parts or sub-problems, which can be analyzed or verified separately, and then combined to form a complete solution.
### _Research Contributions_
In this paper, we effectively combine the rigor and completeness of formal verification and the scalability of test generation based validation. We simplify the system validation problem by dividing it into sub-problems. For each sub-problem, we check the consistency of the implementation against the specification. This covers the input-output consistency of sub-problems and ensures that the implementation satisfies the specification. To further reduce the state space for each sub-problem, we utilize both concrete simulation and static analysis of the designs to extract proof supporting artifacts. Figure 3 presents an overview of our proposed framework, _HIVE_ (_Hint-based Verification_), where proof supporting artifacts required for sub-problems are extracted through static analysis of the implementation as well as simulation of relevant test scenarios.
Figure 4 highlights the novelty of our methodology compared to the existing approaches. Figure 4(a) shows that the existing methods rely on manual abstraction of the hardware implementation, but it can lead to false positive results. As shown in Figure 4(b), our proposed approach has two objectives: (i) automated formal model generation to avoid false positive results, and (ii) scenario-based verification to reduce the state space specific to the verification scenarios. Here, \(M_{i}\) and \(m_{i}\) represent the \(i\)-th component (module) in the specification (S)
Fig. 3: Overview of our automated _Hint-based Verification_ (_HIVE_) framework. Formal proofs are supported by hints.
Fig. 2: State space expansion of a state register of a hardware implementation with respect to the evaluated clock cycles.
and implementation (I), respectively. Similarly, \(M_{i}^{j}\) and \(m_{i}^{j}\) represent the \(i\)-th module customized for the \(j\)-th scenario in the specification and implementation, respectively.
Specifically, this paper makes the following major contributions.
* It enables automated generation of formal models from the system-level implementation. We compile the firmware and combine it with the hardware to obtain the system model.
* To avoid state space explosion, it decomposes the system-level equivalence checking problem into sub-module level equivalence checking problems. We utilize function-level boundaries of the firmware and module-level boundaries of hardware to perform module-level equivalence checking.
* We utilize both architectural and behavioral models through static analysis of modules and dynamic analysis of the implementation. This leads to the automated generation of module interactions and supporting artifacts.
* It reduces the state space further by scenario-based analysis. We utilize supporting artifacts to generate system-level hints and perform path prioritization and symbolic state reduction related to the scenario under validation. The generated hints guide the equivalence-checking proofs in the simplified state space specific to the scenario, as illustrated in Figure 4(b).
Our proposed framework leverages the strengths of both formal verification and simulation-based validation for scalable validation of firmware running on hardware (RTL) implementation. Since _HIVE_ generates the formal model automatically from the implementation, it is a complete model by construction relative to the implementation. Any issue found during the verification process can be corrected directly on the implementation and re-validated using our proposed method. This is in contrast with the existing methods that require fixes and re-validation of multiple (abstract and implementation) models.
The remainder of the paper is organized as follows. First, we discuss the background and related work. Next, we discuss the _HIVE_ methodology in detail with examples. Then, we present experimental results with three case studies. Finally, we discuss the applicability and limitations of the proposed framework.
## II Background and Related Work
In this section, we first survey formal methods that utilize manual abstraction or manual generation of hints for firmware verification. Next, we discuss firmware validation methods using concolic testing.
### _Verification using Manual Abstraction_
Blom et al. have identified overlapping states in a formal model based on the cone of confluence [17]. The authors propose a prioritization algorithm to identify and detect confluence properties of the models under verification. Yorav et al. have utilized the control-flow graph to simplify the transition diagram for validation of parallel programs [18]. Vasudevan et al. have reduced the state space through antecedent-conditioned slicing for hardware designs [19]. Their technique reduces the state space through abstraction, and it can be utilized for the verification of safety properties in the form of \(G(A\Rightarrow C)\), where \(A\) (antecedent) and \(C\) (consequent) are propositional logic formulas. An improved version of the algorithm utilizes the extra information from the antecedent to prune and limit the state space further for a particular problem [20]. Analysis of Verilog designs with transaction-like protocol descriptions to reduce the state space for verification of protocols is discussed in [21]. The authors consider different SystemVerilog constructs with protocol-targeted assumptions for the abstraction of the design, and they define and limit the state space of the verification problem. Another framework to validate _SystemC_ models utilizing a _SystemC_ intermediate representation is proposed in [22]. The authors perform static analysis on formal models and perform model slicing with abstraction to reduce the state space. None of these approaches consider state space reduction in the context of hardware-firmware verification.
Instruction Level Abstraction (ILA) is a popular technique used to formally verify firmware in System-on-Chip (SoC) designs [8, 9, 10, 23, 24]. ILA abstracts a processor design at the instruction level for verification purposes, which allows the verification of the design to be performed at a higher level of abstraction than the gate-level or RTL-level implementation. ILA involves modeling the processor's behavior as a set of instructions that can be executed by the processor, rather than modeling the behavior of the individual gates and wires that make up the processor. ILA can also enable the effective reuse of verification efforts across different processor designs, as the instruction-level model can be reused for different implementations of the same processor architecture. Note that ILA generation is a manual and tedious task that requires a thorough understanding of the particular processor design. Moreover, incorrect ILA will lead to inaccurate results.
Fig. 4: _HIVE_ utilizes both module-level and test scenario-based decomposition for improving the scalability of proofs.
Furthermore, in order to verify external accelerators and modules that connect with the SoC, separate abstractions of their implementations are required (similar to Figure 1(a)).
### _Formal Verification using Manual Hints_
_RTLV_ [7] is a formal verification framework that utilizes hybrid symbolic execution to verify firmware that runs on the hardware. It utilizes the _Rosette_ solver-aided programming language to perform the symbolic evaluation. Compared to the existing frameworks, it can handle complex RISC-V-based CPUs with unrolling over 20,000 clock cycles. In order to guide the proofs, the authors have used manually developed hints that are specific to the design under verification (similar to Figure 1(b)). Athalye et al. have developed a framework for validation of hardware security modules (HSM) with information refinement [6]. The authors have performed equivalence checking on three simple HSM designs implemented with both hardware and firmware. The proposed technique has several limitations. To avoid the state explosion, the authors have manually provided hints to reduce the rapidly growing symbolic term space. Due to the fact that the verification problem is posed as one verification problem to the underlying solver, complex design aspects such as cryptographic calculations cannot be handled with the proposed technique due to solver timeouts.
The existing approaches rely on manual hints [6, 7], which requires significant expertise and can be error-prone. An incorrect hint would add more complexity to the solver in terms of state space, and the proof would eventually fail. This highlights the need for automated generation of proof supporting artifacts as well as system-level hints for scalable firmware validation.
### _Firmware Validation using Concolic Testing_
Ahn et al. presented a concolic testing framework based on the concept of consumer-producer relationships between hardware and firmware [1]. Their technique combines the transaction-level models (TLM) of the implementation, which are implemented in _SystemC_, with the firmware implemented in \(C\) for the evaluation. The symbolic execution is performed with KLEE [25]. The authors remove the concurrent nature of the firmware by modeling the interactions as sequential events through mapping interactions to the consumer and producer models. A concolic testing approach based on virtual prototypes for verification of hardware and software models is proposed in [26]. The authors utilize virtual device models as the hardware model and run the software on the virtual device model while collecting traces. These traces are then utilized to guide the test generation process in the required direction. Although the concolic testing framework is effective in generating test cases to activate corner cases, these techniques also require virtual prototype models of the hardware implementations. Note that the generated tests are abstract tests, which may not be directly applicable to the actual implementation. Moreover, the virtual prototype model needs to be in a form that can be analyzed with existing concolic testing tools that were developed for software verification. Therefore, these approaches are not suitable for firmware validation on top of hardware (RTL) implementation.
There are promising efforts that enable concolic testing on RTL-level hardware implementations [27, 28, 29]. These techniques operate on the RTL-level hardware models and the applicability is based on the controllability of the design with respect to the inputs of the designs. A major limitation of applying these techniques for hardware-firmware validation is that most of the hardware-firmware interactions are controlled through the firmware. Therefore, concolic testing needs to generate and modify the parts of the firmware, which is infeasible due to the complexity and the device-specific nature (memory layout and instruction set) of the firmware.
## III HIVE: Automated and Scalable Hint-based Formal Verification
Figure 5 provides an overview of our proposed automated Hint-based Verification (_HIVE_) framework. The major steps of the framework are outlined in Algorithm 1. First, we need to consider the verification scenarios and the associated tests to cover these scenarios; these are the input to the algorithm. Next, the firmware tests are compiled and the corresponding binaries are obtained (Line 1-7). Then, we construct the system model (Line 8). Next, we extract the proof supporting artifacts from the designs using static and dynamic analysis and perform automated hint extraction (Line 9-15). Finally, for each simplified sub-problem, we utilize the generated hints to perform formal verification (Line 16-21). The remainder of this section describes each step in detail. We use the SoC implementation shown in Figure 6 to describe each step of Algorithm 1 using illustrative examples.
Fig. 5: Overview of _HIVE_ framework that consists of five main steps: outlining test scenarios, system model construction, supporting artifact generation by static and dynamic analysis, hint generation, and equivalence checking.
Example 1 (SoC): _Consider a firmware configurable traffic light controller design implemented on a RISC-V-instruction-set-based SoC, as shown in Figure 6. The SoC design is implemented in Verilog and consists of CPU, ROM, RAM, UART, and TLC modules as illustrated in the figure. In this example, the firmware is able to control the timing parameter values for different states of the traffic light controller, and the corresponding state machine transitions are defined in the hardware description. The traffic light controller is connected to the processor of the SoC with memory-mapped I/O. There are two separate drivers for controlling the UART and TLC from the firmware. Further, all the sensor data is communicated to the SoC from an external device via the UART communication interface. Once the reset of the controller is triggered, the traffic light system starts to work with the newly configured values. As illustrated in Figure 6, the system output is represented by the LED signals, which is a 7-bit vector._
```
Require: Scenarios: \(T\{t_{1},t_{2},...,t_{n}\}\), RTL: \(R\{r_{1},r_{2},...,r_{n}\}\), Drivers: \(d\)
Ensure: Verification: \(V\{v_{1},v_{2},...,v_{n}\}\)
1:  for each \(t\) in \(T\) do                          ▷ Section III-A
2:      \(f\) ← compile(\(d\) + \(t\))
3:      \(F\).append(\(f\))
4:      for each \(r\) in \(R\) do
5:          Spec.append([\(t\), behavior])            ▷ Section III-B
6:      end for
7:  end for
8:  \(M\) ← flatten(\(R\))                             ▷ Section III-C
9:  for each \(f\) in \(F\) do
10:     trace ← simulate(\(f\) + \(M\))
11:     \(S\) ← signalRanking(trace)
12:     \(a\) ← extractArtifacts(\(S\), \(R\), \(M\))     ▷ Section III-D
13:     \(h\) ← hintGeneration(\(S\), \(M(a)\), \(\tau\)) ▷ Section III-E
14:     \(H\).append(\(h\))
15: end for
16: \(I\) ← smt(\(R\), \(d\))                            ▷ Section III-G
17: for each \(r\) in \(R\) do
18:     for each \(t\) in \(T\) do
19:         V.append( \(I(H^{r}_{t})\subseteq Spec^{r}_{t}\) )   ▷ Pass/Fail
20:     end for
21: end for
22: return \(V\)
```
**Algorithm 1** _HIVE_ Algorithm
### _Test Cases for Verification Scenarios_
In this step, we outline the functional scenarios for which we want to provide formal guarantees. For this purpose, we utilize the same test plan that is typically developed by designers to test the firmware. This may include unit tests for each of the components in the design under test, which are written in C or assembly (*.c / *.asm) for testing the firmware. _HIVE_ can effectively use these unit tests to identify the behavior of the model related to the specific test scenarios. Note that one test case would be enough to state one test scenario. For example, if the firmware uses a 64-bit adder (one scenario), we only need to specify one testcase (e.g., adding 2 and 3 should produce 5) for verifying the addition scenario instead of \(2^{128}\) testcases. These tests should cover all the scenarios that are expected to get formally verified.
After writing each of the test cases, we compile the firmware with device drivers using the relevant toolchain of the processor instruction-set architecture. Note that we keep only the test case as the main function of the SoC during this process. This will generate a separate firmware for each of the verification scenarios. Next, we need to simulate the design to obtain the behavioral model of the system for each scenario.
Example 2 (Test Cases): _Let us verify the functionality of the traffic light controller. We have to write test cases for the major verification scenarios. In this example, we have considered four simple test scenarios: SoC requesting sensor data from an external device (**Scenario 1**), SoC reading sensor data from the external device (**Scenario 2**), firmware requesting the traffic light controller for the side street green light (**Scenario 3**), and firmware requesting a pedestrian crossing instance for the main road (**Scenario 4**). These verification scenarios and corresponding test cases are derived from the test plan. Listing 1 presents the test case, written in the C language, for the first scenario. These test cases will be compiled separately with the device drivers in order to obtain four separate binary/hex files corresponding to each test scenario._
```
#include <stddef.h>
#include <string.h>   /* for strlen (missing in the extracted listing) */
#include "driver.h"

int main(void) {
    char msg[] = "...";           /* request message; content elided in the source */
    uart_init();                  /* Initialize UART */
    uart_write(msg, strlen(msg)); /* Write msg to UART */
    return 0;
}
```
Listing 1: Firmware test case related to Scenario 1.
### _Development of Specification_
Once we have each of the test scenarios, we need to capture the expected behavior of the system in a formal model. We write the functionality of each of the components with respect to each scenario. We put them together to obtain the formal model of the specification.
Example 3 (Specification): _Since Example 2 has four scenarios, we need to write four specifications for each of the modules in the SoC (Example 1). Listing 2 shows a part of the specification that we have developed to capture the expected behavior of the UART for the first scenario, in the Racket programming syntax (*.rkt)._
```
(define (send-byte byte)
  ; Initializing the sending process (start bit)
  (send-bit #f)
  ; Sending the data bits
  (for-range 0 8 (lambda (i)
    (send-bit (bitvector->bool (extract i i byte)))))
  ; Ending the message (stop bit)
  (send-bit #t))
...
```
Listing 2: Part of the UART specification for Scenario 1.
Fig. 6: SoC used for a traffic light controller.
### _Automated Generation of System Models_
At this stage, we have compiled a binary for each of the verification scenarios, which is ready to be simulated with the implementation. In order to proceed with the simulation of each scenario, we construct the system model. We first flatten the hierarchy of the SoC. It is important to keep the signal names preserved with the hierarchy during this stage, since we need to align the dynamic simulation-related behavioral details with the module-level architectural details. After we obtain the flattened design, we create multiple compiled versions of the design by adding each of the binaries created in the previous stage as the firmware. This results in multiple copies of the same hardware, running each test scenario separately.
Next, we need to perform dynamic analysis. Dynamic analysis is performed by analyzing the trace data after performing the concrete simulations. For this purpose, we automatically generate the testbenches to simulate the SoC. This also includes the template to include external inputs to the system, if required for the particular test case. The objective of this step is to create a functional system model that passes the particular test case.
Example 4 (System Models): _For the example SoC, we have four verification scenarios in Example 2. As a result, we will have four separate system models which contain relevant test case firmware for the relevant scenario. Figure 7 illustrates the individual system models for dynamic analysis (concrete simulation) while module-level implementation is utilized for static analysis._
### _Generation of Supporting Artifacts_
We define proof-supporting artifacts as objects that can be derived from the system model. _HIVE_ utilizes both module-level static analysis and system-level dynamic analysis to extract supporting artifacts. We first describe these two analysis techniques. Next, we discuss how to combine the extracted artifacts to infer meaningful details about the implementation.
#### Iii-D1 **Module-Level Static Analysis**
The purpose of the static analysis is to identify the structure of the implementation. This includes information such as signal dependencies, path conditions, and state register details. In order to identify these details, _HIVE_ extracts the abstract syntax tree (AST) of each module. AST contains information about control flow and data flow related to the implementation of the module. Further, _HIVE_ extracts the Finite-State-Machine (FSM) details present in the module implementation.
Example 5 (Static Analysis Artifacts): _For the SoC in Example 1, HIVE was able to automatically extract all the module-level ASTs along with two main FSMs related to the validation scenarios. Figure 8(a) shows the FSM extracted from the UART module while Figure 8(c) shows the FSM extracted from the TLC module. Note that these figures show the visual representation of the FSMs. However, HIVE internally represents them in a comma-delimited ASCII text file (*.kiss) format. Figure 8(b) and Figure 8(d) provide the details extracted from the specification to identify each state in the FSM. For example, \(Stop_{1}\), \(Stop_{2}\), and \(Stop_{t}\) refer to transmitter communication halt states at 1/2 byte, byte, and final stop, respectively. Similarly, \(Stop_{r}\) refers to the receiver's final stop. Likewise, \(Shift_{t}\) and \(Shift_{r}\) refer to transmitter and receiver shift states, respectively. Finally, \(Parity_{t}\) and \(Parity_{r}\) refer to transmitter and receiver parity check states, respectively._
#### Iii-D2 **System-Level Dynamic Analysis**
The purpose of the dynamic analysis is to automatically identify the behavioral model of the system for a specific scenario. _HIVE_ simulates the designs with the system models generated in Section III-C and obtains the simulation traces related to each scenario separately. Next, the following procedure is applied to the simulation trace of each model separately.
Each of the signals in the model is analyzed and ranked according to its value change frequency. Algorithm 2 illustrates the main steps involved in the signal ranking process. First, it extracts the signal list in the execution trace and generates a data structure to store the history related to each signal. Next, it analyses the simulation trace while appending the concrete value of each signal observed in the simulation, followed by sorting the final signal list in increasing order of the value change frequency observed for each signal. Note that all the unknown logic (uninitialized) value states are considered for weakening and ranked last, while high-impedance states are taken into account for the signal ranking process. The ranking procedure serves as the heuristic for the
Fig. 8: Finite State Machine (FSM) extracted from the UART and TLC modules by _HIVE_ and the relevant specifications.
Fig. 7: System models generated for the example SoC.
hint generation process to identify the hint type corresponding to each signal (_i.e._, if a certain signal changes less than a certain threshold (\(\tau\)) number of times, it is considered for abstraction).
```
Require: Value Change Dump (VCD) Traces: \(vcd\)
Ensure: Ranked Signals: \(S\)
1: Signals ← extractSignals(\(vcd\))
2: for each trace in \(vcd\) do
3:     {signal, value} ← trace
4:     Signals<signal>.append(value)
5: end for
6: \(S\) ← sort Signals in increasing order of length(value)
7: return \(S\)
```
**Algorithm 2** signalRanking(\(vcd\))
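For concreteness, here is a minimal Python rendering of Algorithm 2 (an illustrative sketch; the event format and the tie-breaking rule that ranks all-unknown signals last are our assumptions):

```
from collections import defaultdict

def signal_ranking(vcd_events):
    """Rank signals by increasing value-change frequency (Algorithm 2).
    vcd_events is an iterable of (signal, value) pairs in simulation order."""
    history = defaultdict(list)
    for signal, value in vcd_events:
        history[signal].append(value)
    # Fewest recorded value changes first; all-unknown ('x') signals rank last.
    return sorted(history, key=lambda s: (all(v == 'x' for v in history[s]),
                                          len(history[s])))
```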
**Example 6** (Dynamic Analysis Artifacts): _After performing dynamic analysis on the models in Example 4, HIVE obtains a data structure with the signal history of each of the models and ranks the signals. Figure 9 illustrates, for each model separately, the percentage of signals observed at different signal value-change frequencies._
#### Iv-B3 **Signal Alignment**
After performing both static analysis and dynamic analysis, we need to combine the architectural model with the behavioral model of the design. _HIVE_ aligns the control flow graph (CFG) of the flattened netlist with each sub-module CFG by referring to the signal names. From the CFG, it extracts the term dependency graph for each of the state variables in the sub-modules. Then, for the sub-modules which contain FSMs, the state register behavior is aligned with the FSM. Later this is used for path prioritization during formal proofs.
Example 7 (Signal Alignment): _The flattened implementation of the SoC in Example 1 contains all signal names with their implementation hierarchy. If we consider the signal "soc.uart.recv_buf_data" of the flattened netlist, this represents the signal "recv_buf_data" of the UART module. HIVE automatically decodes such information from the signal name and maps it to the corresponding module-level signal._
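A minimal Python sketch (ours) of this name-based alignment; the module table and the innermost-match rule are our assumptions:

```
def align(flat_signal, modules):
    """Map a flattened name like 'soc.uart.recv_buf_data' to (module, local signal)."""
    *path, local = flat_signal.split(".")
    for inst in reversed(path):               # innermost instance wins
        if inst in modules and local in modules[inst]:
            return inst, local
    return None

modules = {"uart": {"recv_buf_data", "recv_state"}}
print(align("soc.uart.recv_buf_data", modules))   # ('uart', 'recv_buf_data')
```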
### _Generation of System-level Hints_
In _HIVE_, hint generation is a fully automated process, where the user does not need to know implementation details about the design. Firmware is typically developed on top of hardware designs created by third parties, and verification of these system environments is made possible by our proposed technique. In this section, we discuss the process of generating effective hints to simplify proofs by utilizing the proof-supporting artifacts derived in Section III-D. This step of _HIVE_ serves two purposes. First, it automatically identifies each variable in the design to be used as a state variable or symbolic variable, and generates the abstract syntax for each of the hints by analyzing the proof supporting artifacts. Next, it translates the abstract syntax into a form that can be understood by the formal verification framework.
Algorithm 3 illustrates the main steps involved in the module-level hint generation process. First, it iterates through the modules available in the implementation of the SoC. If there is an FSM in the module, it aligns the corresponding state register with the ranked signal list. This signal is analyzed against the proof supporting artifacts to perform path prioritization (discussed in Section III-E2). All the other signals in the module are also aligned with the corresponding signals from the ranked signal list. If there are multiple instantiations of the same component, those will have multiple occurrences corresponding to each alignment. Then the aligned signals are added to the hint list corresponding to the module and categorized based on the prospective hint type. First, it evaluates the signals whose value got changed only once during the concrete simulation. These are candidate signals to consider for concretization. These signals are analyzed against the FSM states and path conditions. Specifically, a signal is appended for concretization if it got its value assignment in a state/path that is not in the current state/path conditions for the scenario under test.
In _HIVE_, the user has the ability to control the threshold (\(\tau\)) to consider a signal for overapproximation (default \(\tau=5\), and this gets verified on the system model). All the uninitialized signals are considered for weakening. Since the back-end solver only has access to a single sub-problem at a time, the hint synthesis step performs solver queries on the system model in order to verify hints before committing them during the equivalence checking proofs.
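Since the listing of Algorithm 3 is not reproduced here, the following Python sketch (ours) only illustrates the classification rule described in the prose; the category names and the exact tests are assumptions:

```
def classify(history, tau=5):
    """Candidate hint type from one signal's simulated value history (one scenario).
    Mirrors the prose: uninitialized -> weaken, changed once -> concretize,
    changed fewer than tau times -> overapproximate; otherwise stays symbolic.
    Every candidate is still verified on the system model before being used."""
    if all(v == 'x' for v in history):   # never initialized in this scenario
        return "weaken"
    if len(history) == 1:                # value assigned only once: candidate constant
        return "concretize"
    if len(history) < tau:               # rarely active below the threshold tau
        return "overapproximate"
    return "keep-symbolic"
```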
#### Iv-E1 **Elaboration of Hint Types**
Performing modifications to the symbols involved in the formal proofs such that they have more constraints can reduce the state space of the verification problem. For that purpose, _HIVE_ performs the following types of symbol alterations:
_Concretization_: The process of concretization involves replacing symbolic values with concrete values that represent specific values that the system or component can take during execution. This allows verification of the system or component using specific input values relevant for the scenario under test, rather than analyzing the behavior of the system or component under all possible input values. Since we have the system model and the hint list generated by Algorithm 3, we perform system-level queries from the solver to verify the variables that can be concretized before using them in module-level proofs.
_Weakening_: Weakening a variable will effectively prune the formal model of the system by imposing constraints for that variable. This reduces the size of the symbolic execution tree and improves the efficiency of the symbolic execution by considering only the most relevant path to the specific scenario
Fig. 9: Signal ranking calculated by _HIVE_ on different test cases by analysis of simulation traces corresponding to each concrete simulation scenario. (\(\tau=5\))
under consideration.
_Overapproximation and Abstraction_: The overapproximation process will replace a symbolic variable that is propagated through the system with a new variable. This makes the analysis less precise but more tractable. Abstraction will replace the variable with a symbolic term if the variable only depends on allowed dependencies, which initially consist of the module inputs, and adds the symbol to the allowed dependencies.
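As a toy illustration of these symbol alterations, the following sketch uses the z3py API (our choice for illustration; HIVE itself builds on a Rosette-based backend):

```
from z3 import BitVec, BitVecVal, substitute

x, y = BitVec('x', 8), BitVec('y', 8)
term = (x + y) * (x + y)

# Concretization: pin x to the concrete value observed in simulation.
concrete = substitute(term, (x, BitVecVal(3, 8)))

# Overapproximation/abstraction: replace the sub-term (x + y) by a fresh
# symbol t, shrinking the term at the cost of precision.
t = BitVec('t', 8)
abstracted = substitute(term, (x + y, t))   # becomes t * t

print(concrete, abstracted)
```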
Example 8 (Generated Hints): _Figure 10 illustrates the percentage of signals considered for different hint types for the four scenarios outlined in Example 2. After considering each of the signals as a hint for different modules, they are verified against the system model to check their validity._
#### Iv-B2 **Path Prioritization for Scenario Analysis**
The goal of path prioritization is to focus the verification effort on the most relevant parts of the design. The most important paths are determined based on the test scenarios that are provided in the firmware application in Section III-A. During symbolic evaluation, these paths will get priority in the state space. Further, each scenario will be evaluated as a separate sub-problem. Scenario-based decomposition is particularly useful when dealing with complex control flow or data structures, where it may be difficult to analyze all possible behaviors in a single case. By dividing the problem into multiple scenarios, the verification effort can be focused on specific parts of the system, making the analysis more tractable. Figure 11 illustrates two instances where scenario-based decomposition is applied to a state register based on two test scenarios.
Algorithm 4 illustrates the major steps involved in the path prioritization process. First, we reconstruct the execution tree based on the test cases that the model was simulated on. Then traces that have different paths due to the test case difference are identified. For each case, we construct the hints such that path conditions for states that are not in the path get weakened. Path conditions are derived from the FSM that we extracted from the design during static analysis.
```
Require: Traces \(S\), State Registers \(S_{r}\)
Ensure: Path Conditions \(p\)
1:  for each v in \(S\) do
2:      if v \(\in S_{r}\) then
3:          v<exec>.append(v.value)          ▷ Generate Execution Tree
4:      end if
5:  end for
6:  for each \(r\in S_{r}\) do
7:      for each state \(\in r\) do
8:          if state \(\notin\) r<exec> then
9:              conditions ← getConditions(state)
10:             p.append(conditions)
11:         end if
12:     end for
13: end for
14: return \(p\)
```
**Algorithm 4** pathPrioritization(\(S,S_{r}\))
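A direct Python transcription of Algorithm 4 (an illustrative sketch; the trace and FSM data structures are our assumptions):

```
def path_prioritization(traces, state_regs, fsm_conditions):
    """Algorithm 4: weaken the path conditions of FSM states never reached
    in the concrete traces for the scenario under test.
    traces: signal -> list of visited values; state_regs: set of state registers;
    fsm_conditions: (register, state) -> path condition from the static FSM."""
    visited = {r: set(traces.get(r, ())) for r in state_regs}   # execution tree leaves
    p = []
    for (reg, state), cond in fsm_conditions.items():
        if reg in state_regs and state not in visited[reg]:
            p.append(cond)            # de-prioritize the unreached state
    return p
```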
Fig. 11: Reduction of state space by execution path prioritization with respect to the scenarios outlined in firmware.
Fig. 10: Types of hints extracted and verified by _HIVE_ on different test scenarios and % of hints on the system model.
### _Scenario-based Decomposition_
Once hint generation is complete, _HIVE_ starts preparing for the hybrid symbolic execution. For this, natural boundaries of the designs, such as module instantiations, are used. Moreover, test scenarios are evaluated separately, making them sub-problems of the original verification problem. For ease of development and human understanding, system designs are composed of component (module)-level designs. Following the design hierarchy, system models can be decomposed into components. There are promising efforts for proof by decomposition with conjoining specifications and assume-guarantee reasoning [14, 15]. In other words, the sub-problem decomposition technique of _HIVE_ is based on assume-guarantee reasoning.
In assume-guarantee reasoning, each subsystem or component is viewed as a black box, where the assumptions specify the possible behaviors of the environment or other components that interact with the subsystem, and the guarantees specify the expected behaviors of the subsystem under all possible assumptions. The technique then uses logical reasoning to prove that the guarantees of each subsystem are preserved in the presence of the assumptions of other subsystems or the environment. In the paradigm of assume-guarantee, we define assumption (\(A\)) about the system, sub-component (\(C\)), and the property (\(P\)) that should hold within the system. The formula for the assume-guarantee relationship can be formulated as \(f=\left\langle A\right\rangle C\langle P\rangle\). Here \(f\) evaluates to true, whenever \(C\) is a part of a system that satisfies \(A\). In this case, the system must also guarantee \(P\).
Consider a system \(S\) that is composed of two sub-components \(C_{1}\) and \(C_{2}\) (\(S=C_{1}\parallel C_{2}\)). In order to check whether \(S\) satisfies \(P\) by analyzing \(C_{1}\) and \(C_{2}\), the assume-guarantee reasoning principle can be applied as the following inference rule.
\[\frac{(\text{Premise 1})\ \langle A\rangle\,C_{1}\,\langle P\rangle\qquad(\text{Premise 2})\ \langle true\rangle\,C_{2}\,\langle A\rangle}{\langle true\rangle\,C_{1}\parallel C_{2}\,\langle P\rangle}\]
This indicates that if \(\langle A\rangle C_{1}\langle P\rangle\) and \(\langle true\rangle C_{2}\langle A\rangle\) hold, then \(\langle true\rangle S\langle P\rangle\) holds. In order for the above inference rule to be valid, \(A\) must be more abstract than \(C_{2}\) and still describe \(C_{2}\)'s behavior correctly, while \(A\) should be strong enough for \(C_{1}\) to satisfy \(P\).
_HIVE_ makes assumptions on the system model, validates them on the system model, and provides the validated hints to the sub-component-wise proofs. Therefore, the _HIVE_ framework natively satisfies the definition of assume-guarantee. Equivalence checking proofs can be performed on the decomposed system by selecting one component at a time with the hints generated by _HIVE_. Note that in situations where the same set of hints appears in multiple cases, _HIVE_ will concatenate them into one sub-problem, and hence repeated analysis is avoided. Figure 13 illustrates a situation where _HIVE_ simplifies the formal model of the hardware module \(m_{j}\) for _scenario i_ using the hints extracted from the system. Here, the dynamic analysis is performed with the simulation of _scenario i_. _HIVE_ effectively removes unrelated components from the formal model using system-level hints, as discussed in Section III-E. The simplified model \(m^{\prime}_{j}\) is used in the verification of _Scenario i_, as discussed in the next section.
Example 10 (Sub-Problems): _HIVE is able to decompose the verification problem by module boundaries and verification scenarios. For the SoC in Example 1, we have provided four test cases. Based on both factors, HIVE decomposes the entire system model into the following sub-problems._
\begin{tabular}{l l} \(SoC_{scenario1}=\) & \(cpu_{1}\parallel rom_{1}\parallel ram_{1}\parallel uart_{1}\parallel ltc_{1}\) \\ \(SoC_{scenario2}=\) & \(cpu_{2}\parallel rom_{2}\parallel ram_{2}\parallel uart_{2}\parallel ltc_{2}\) \\ \(SoC_{scenario3}=\) & \(cpu_{3}\parallel rom_{3}\parallel ram_{3}\parallel uart_{3}\parallel ltc_{3}\) \\ \(SoC_{scenario4}=\) & \(cpu_{4}\parallel rom_{4}\parallel ram_{4}\parallel uart_{4}\parallel ltc_{4}\) \\ \end{tabular}
### _Sub-Problem Proofs using Hints_
We convert a copy of the system model with the inlined firmware application (without test scenarios) to SMT format, which serves as our formal model for equivalence checking. Then, the proposed framework checks the state machine of the implementation against the state machine of the specification. Effective utilization of hints can simplify extremely complicated proof scenarios into tractable problems for the underlying SAT solvers. In this section, we discuss three types of limitations on scalable proofs with symbolic execution, and how hints are used in _HIVE_ to mitigate these limitations. Three major factors affect the scalability of formal proofs: path explosion, symbolic term explosion, and exhaustive components in the design.
_Path Explosion:_ Path explosion occurs in formal methods, such as model checking and symbolic execution, when the number of paths in a system's state space grows exponentially with the size of the system. In other words, it becomes computationally infeasible to exhaustively check all possible paths for correctness or to explore the entire state space. As a result, the verification process may introduce unacceptable time and memory overhead and, in practice, may terminate unsuccessfully after exceeding the memory capacity of the computer. _HIVE_ addresses path explosion by performing path prioritization relevant to the test scenarios, as outlined in Algorithm 4.
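The following Python sketch (our simplified reading of Algorithm 4; the overlap-based scoring heuristic is an assumption) illustrates trace-guided path prioritization, where paths whose decisions agree with states observed during the concrete simulation of a scenario are explored first:

```
import heapq

def prioritize_paths(paths, trace_states):
    """Yield symbolic paths ordered by overlap with simulated states.

    paths        : list of paths, each a sequence of visited state ids
    trace_states : set of state ids recorded while simulating the scenario
    """
    heap = []
    for i, path in enumerate(paths):
        overlap = sum(1 for state in path if state in trace_states)
        heapq.heappush(heap, (-overlap, i, path))  # max-heap via negated score
    while heap:
        _, _, path = heapq.heappop(heap)
        yield path
```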
_Symbolic Term Explosion:_ Another problem faced by formal proofs is that verification techniques that rely on symbolic
Fig. 12: Path prioritization for four scenarios outlined in Example 2 (colored paths get priority).
execution can lead to a symbolic term explosion problem, which occurs when the number of possible program states (combinations of symbolic and concrete values) grows too large to be handled by the analysis tool. This can occur when the symbolic execution engine explores all possible states of the program, whose number can be exponential in the size of the input space. _HIVE_ imposes constraints on the symbolic variables in the form of concretization, weakening, overapproximation, and abstraction, as outlined in Section III-E1. This process can shrink the term space from symbolic to concrete while making some variables opaque so that their values can be substituted later in the process.
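As an illustration of how such constraints could be derived, the Python sketch below (ours; the classification rules are assumptions, not HIVE's exact policy) maps per-signal trace statistics to a treatment: signals constant throughout the scenario's trace are concretized, signals never exercised by the scenario are abstracted to opaque values, and the remaining signals stay symbolic:

```
def classify_signals(trace, scenario_signals):
    """Choose a treatment for each signal from its simulated value history.

    trace            : dict mapping signal name -> list of sampled values
    scenario_signals : set of signal names exercised by the scenario
    Returns dict: signal -> ('concrete', value) | ('opaque',) | ('symbolic',)
    """
    plan = {}
    for sig, values in trace.items():
        if sig not in scenario_signals:
            plan[sig] = ('opaque',)              # value substituted later
        elif len(set(values)) == 1:
            plan[sig] = ('concrete', values[0])  # constant -> concretize
        else:
            plan[sig] = ('symbolic',)            # keep the symbolic term
    return plan
```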
_Exhaustive Components:_ There are specific components in the design for which a large run time can be anticipated during the proofs. Components such as memories and counters can contribute to the exponential state-space expansion. This problem is also addressed by the path prioritization technique, since all unused areas of memory components receive lower priority, effectively abstracting such implementations to their inputs and outputs with only the relevant logic retained.
Note that _HIVE_ generates the hint only for the scenario under test. Therefore, assumptions and properties are only valid for the specific scenario under consideration.
```
(define (overapproximate-case1 module)
  (match module
    ['uart (for ([field (list
                         "recv_buf_valid"
                         "recv_buf_data"
                         "recv_divcnt"
                         "recv_pattern"
                         "recv_state"
                         ...)])
             (match-abstract! field field))]
    ['cpu (for ([field (list
                        ...)])
            (match-abstract! field field))]
    ...
    [default (displayln "Module not found!")]))

; Usage of hints in the proofs.
(require ".../hints.rkt")          ; import hints
(hint overapproximate-case1 'uart) ; use for UART proof
```
Example 11 (Hints in _Racket_ Syntax): _For the four testing scenarios in Example 2, HIVE categorizes signals into groups to be used as hints in each scenario. These signals are then inserted into pre-built Racket templates. A sample set of overapproximation hints, after being validated on the system model and converted to Racket format, is illustrated in Listing 3. Hints related to each case are constructed separately, and the generated hints can be directly imported into the proofs._
## IV Experiments
This section presents three case studies using hardware and firmware implementations to demonstrate the effectiveness of _HIVE_. First, we introduce the setup we have used to perform the experiments. Next, we discuss each case study and present results on hardware-firmware co-verification.
### _Experimental Setup_
We consider three System-on-Chip (SoC) implementations. These implementations utilize the _PicoRV32_ processor [30], a CPU based on the RISC-V instruction-set architecture, configured with a ROM and a RAM.
To perform symbolic execution and equivalence checking, we have used the _Rosette_[31] language with _Z3_[32] as the back-end solver. Rosette is a solver-aided programming language extension of the _Racket_ functional programming language. In order to compile the \(C\) and _assembly_ programs into binaries, the _riscv-gnu-toolchain_[33] was used. To convert Verilog RTL implementations into _smt2_ (_Satisfiability Modulo Theories Library v2_) format, the _Yosys_[34] open synthesis tool was used. The _Rosette_ package _rtlv_[7] was used for lifting the smt2 output to Racket-supported syntax, and _rtlv/shiva_ was used to feed hints to the equivalence-checking problems. Specifications related to each of the test scenarios are manually written in the _Racket_ language. Extraction of the abstract syntax tree is performed by the _Icarus Verilog Loadable Target API_[35]. Module-level FSMs and data flow (_*.blif_) are extracted through _Yosys_. All the simulations for obtaining Value Change Dumps (VCD) were performed with Icarus Verilog [36]. All the scripts related to hint generation, path prioritization, and symbolic state reduction were implemented in _Python_. All the experiments were performed on an _ARMv8.5_-based _Apple M1_ system with 16GB RAM inside a _Docker_ environment.
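For illustration, the Verilog-to-smt2 step can be driven from Python as follows (our sketch; the file and module names are placeholders, and only standard Yosys commands are used):

```
import subprocess

def verilog_to_smt2(verilog_file, top_module, out_file):
    """Convert an RTL design to smt2 with Yosys."""
    script = f"read_verilog {verilog_file}; prep -top {top_module}; write_smt2 {out_file}"
    subprocess.run(["yosys", "-p", script], check=True)

verilog_to_smt2("soc.v", "soc", "soc.smt2")  # placeholder names
```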
For the evaluation of performance, we have selected three benchmark circuits that contain considerably large FSM implementations. Table I presents the size and trace-length-related details of the benchmark SoCs. After the validation process, each of the SoCs was compiled with _Yosys_ and
Fig. 13: Illustration of how _HIVE_ simplifies module-level proofs utilizing system-level hints. Here \(m_{0}-m_{n}\) represent different modules in the hardware implementation. In this instance, the system-level hints simplify the formal model of the module \(m_{j}\) for _scenario_\(i\) to \(m^{\prime}_{j}\) using constraints. The simplified \(m^{\prime}_{j}\) can be directly used in the formal proof of _scenario_\(i\).
_nextpnr_[37] to generate the necessary binary file to program the _Lattice ICE-Sugar-Nano_ open-source Field Programmable Gate Array (FPGA). Finally, the usability of the validated real-world SoCs was evaluated by running the firmware on the programmed FPGA.
### _Case Study 1: Simple Encrypted Backup (SEB)_
Simple Encrypted Backup (SEB) is a data-logger SoC that reads data together with a secret numeric pin from a host device and, after performing a simple transposition cipher on the data, stores the result on an external memory card. It uses the UART interface to communicate with the host device and an external memory card reader that communicates through the Serial Peripheral Interface (SPI) to store data, as illustrated in Figure 14. The external device can request to read the data from the storage by providing the secret pin back. External devices access the SoC via the terminal environment, as illustrated in Figure 15.
Both UART and SPI are configured on the SoC such that they both have separate MMIO regions to communicate with the CPU. The transposition cipher logic is implemented within the firmware. In order to evaluate the correctness of the encrypted backup SoC, we have developed a test plan with several test scenarios. This includes verifying reading from the host machine, writing to the host machine, reading from the external memory card, writing to the external memory card, encrypting the data, and decrypting the data. First, we attempted to perform equivalence checking on the system for outlined properties without _HIVE_, and solver timeouts were observed for all scenarios. Therefore, we applied the proposed approach and validated the implementation.
Table II presents the results related to the verification process of the SEB SoC. The verification results illustrate that _HIVE_ is able to decompose the SoC verification problem into tractable sub-problems for the underlying solver within an acceptable time and memory.
### _Case Study 2: ECC Core Accelerator_
Elliptic Curve Cryptography (ECC) functions are widely used in public-key cryptosystems for both authentication and authenticated encryption. The ECC core accelerator provides functionality to perform calculations related to the Elliptic Curve Digital Signature Algorithm (ECDSA) and the Elliptic Curve Integrated Encryption Scheme (ECIES) faster. The implementation of the ECC core consists of an _ECC_Arithmetic core_ and _SHA256_. Users can use the ECC core as an IP in the SoC design, as illustrated in Figure 16. Through the firmware driver, each of the functionalities can be accessed by the application. The ECC core is connected to the SoC via MMIO; based on the request register status, it performs the relevant calculation and the results are written back to the results register. The ECC core implementation consists of several nested FSMs in order to achieve the above functionalities. In order to evaluate the effectiveness of _HIVE_, we have selected several functional scenarios from the _ECC_Arithmetic core_. These functional scenarios include testing the field addition, field multiplication, field inverse, and fast reduction. Table III presents the elapsed time and consumed memory for the hint generation process as well as the verification process with _HIVE_.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline \multirow{2}{*}{**Property**} & \multirow{2}{*}{**Test Case**} & \multicolumn{2}{c|}{**Hint Generation**} & \multicolumn{2}{c|}{**Equivalence Checking**} \\ \cline{3-6} & & **Time** & **Mem** & **Time** & **Mem** \\ \hline \multirow{2}{*}{Read} & _from SPI Memory_ & 13 & 674 & 19 & 639 \\ \cline{2-6} & _from Host_ & 17 & 756 & 25 & 703 \\ \hline \multirow{2}{*}{Write} & _to SPI Memory_ & 15 & 698 & 20 & 655 \\ \cline{2-6} & _to Host_ & 18 & 789 & 24 & 697 \\ \hline \multirow{2}{*}{Text} & _Encrypt_ & 21 & 701 & 30 & 985 \\ \cline{2-6} & _Decrypt_ & 21 & 701 & 32 & 946 \\ \hline \end{tabular}
\end{table} TABLE II: Time (in Minutes) and memory (in Megabytes) consumed by the validation scenarios of the SEB SoC. _We did not show comparison with state-of-the-art since existing methods without hints face timeouts for all scenarios._
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Case Study SoC & **SEB** & **ECC** & **LCD** \\ \hline Lines of Code (HW+FW) & 4371 & 4999 & 5022 \\ \hline Number of Signals (HW) & 1471 & 1563 & 1494 \\ \hline Average Trace Size (MB) & 695 & 1874 & 1084 \\ \hline Simulated Clock Cycles & 50,000 & 100,000 & 100,000 \\ \hline \end{tabular}
\end{table} TABLE I: Size comparison of RISC-V based SoC implementations considered for the case studies.
Fig. 14: Simple Encrypted Backup SoC.
Fig. 15: Simple Encrypted Backup terminal interface on a host machine; the SoC is executed on the _Ice-Sugar-Nano_ FPGA.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{**Test Case**} & \multicolumn{2}{c|}{**Hint Generation**} & \multicolumn{2}{c|}{**Equivalence Checking**} \\ \cline{2-5} & **Time** & **Memory** & **Time** & **Memory** \\ \hline _Field Addition_ & 33 & 2109 & 207 & 1876 \\ \hline _Field Multiplication_ & 47 & 2967 & 239 & 1908 \\ \hline _Field Inverse_ & 41 & 2479 & 211 & 1856 \\ \hline _Fast Reduction_ & 28 & 1795 & 242 & 2064 \\ \hline \end{tabular}
\end{table} TABLE III: Time (in Minutes) and memory (in Megabytes) consumed by the verification process of the SoC with the ECC Core for different verification scenarios of _Arithmetic Core_. _We did not show comparison with state-of-the-art since existing methods without hints face timeouts for all scenarios._
### _Case Study 3: LCD Controller_
This SoC implementation consists of a hardware Liquid Crystal Display (LCD) controller, as illustrated in Figure 17. It can be configured by the firmware, after which display data can be sent to the display. Configuration parameters are sent from the firmware as command packets, and display contents are sent to the controller as data packets. This implementation is designed to work with ST77XX-family display drivers. The controller consists of an FSM with 33 states and 30 transitions. ST77XX-family device drivers require particular delay periods between different command parameters and data packets.
According to the specification, the implementation must satisfy several basic conditions to properly initialize the external display and to show graphics on it. These requirements can be outlined as follows: 1) display delay values are within the specification range, 2) correct commands are sent to the external display from the firmware, and 3) data packets are in sync with the coordinates of the display. Based on these requirements, we formulated several test scenarios for this experiment. For each of the test scenarios, we have written one test case in the firmware. Then we used the original specification [38] of the display driver to write the specification in the _Racket_ language.
After applying _HIVE_ to the test scenarios, we performed equivalence checking to provide a guarantee for all the scenarios. Table IV lists the verification scenarios for the LCD controller SoC and the elapsed time and memory used for each verification sub-problem with _HIVE_-generated hints. We were able to reveal a functional bug, related to the _CASET_ and _RASET_ test cases, that caused the external ST7735 driver to set up the X and Y coordinate values incorrectly; this was fixed in the firmware device driver. Figure 18 illustrates the design running on the FPGA with the _checkered_Flag.c_ benchmark. Figure 18a shows the SoC with the driver bug, where the external display renders graphics incorrectly. Figure 18b presents the validated SoC after fixing the driver bug, displaying data on the external display properly.
## V Applicability and Limitations
In this paper, we focused on the automated generation of hints to simplify equivalence-checking problems during functional verification by reducing the state space. However, _HIVE_'s usability is not limited to functional verification. The framework can be used in other scenarios (e.g., security validation) to reduce the state space and make the problem more tractable. Note that Rosette was used in the experiments due to its seamless integration with the back-end SMT solver (Z3) for symbolic execution to aid formal proofs. However, other similar formal tools can also be used with minor modifications to the framework.
Most formal verification techniques are built on top of a set of assumptions. _HIVE_ framework also makes the following two assumptions.
Test Plan Completeness: _HIVE_ assumes the availability of a complete test plan as the starting point. In the case of an incomplete test plan, _HIVE_ provides guarantees for the specified scenarios. For example, when verifying a calculator with four operations (+, -, *, /), we expect one test for each operation for completeness. If the test plan includes only one test (e.g., 2+3=5), _HIVE_ will guarantee that the adder in the calculator performs the correct addition for all possible inputs (\(2^{128}\) possibilities for two 64-bit inputs). Specification-coverage-based test plan generation [39, 40] and the completeness of a given test set [41] are well-studied research fields and are beyond the scope of this paper.
Completeness of Simulation Traces: _HIVE_ needs only one testbench to simulate the actual implementation (Verilog and compiled C code) to collect simulation traces. Traces are
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline \multirow{2}{*}{**Property**} & \multirow{2}{*}{**Test Cases**} & \multicolumn{2}{c|}{**Hint Generation**} & \multicolumn{2}{c|}{**Equivalence Checking**} \\ \cline{3-6} & & **Time** & **Mem** & **Time** & **Mem** \\ \hline \multirow{4}{*}{Delay Check} & _SWRESET_ & 23 & 1649 & 37 & 1489 \\ \cline{2-6} & _SLPOUT_ & 29 & 1720 & 33 & 1348 \\ \cline{2-6} & _NORON_ & 27 & 1644 & 32 & 1394 \\ \cline{2-6} & _DISPON_ & 27 & 1679 & 42 & 1489 \\ \hline \multirow{4}{*}{Command Mode} & _CASET_ & 19 & 976 & 23 & 1296 \\ \cline{2-6} & _RASET_ & 19 & 981 & 22 & 1301 \\ \cline{2-6} & _COLMOD_ & 19 & 967 & 21 & 1290 \\ \cline{2-6} & _MADCTL_ & 20 & 970 & 26 & 1310 \\ \hline \multirow{3}{*}{Data Mode} & _NEW\_FRAME_ & 17 & 979 & 31 & 1358 \\ \cline{2-6} & _WAIT\_FRAME_ & 19 & 963 & 33 & 1380 \\ \cline{2-6} & _INIT\_FRAME_ & 18 & 958 & 33 & 1367 \\ \hline \end{tabular}
\end{table} TABLE IV: Time (in Minutes) and memory (in Megabytes) consumed by the verification process of the SoC with the SPI LCD Controller for different verification scenarios. _We did not show comparison with state-of-the-art since existing methods without hints face timeouts for all scenarios._
Fig. 17: LCD Controller SoC.
Fig. 18: SoC with SPI LCD controller running _checkered_Flag.c_ benchmark. Firmware is sending the display content to the external display with the ST7735 driver on _Lattice Ice-Sugar-Nano_ FPGA.
expected to be complete since there is a finite number of scenarios and each scenario spans only a few thousand cycles. If the simulations are not complete, the number of generated hints will be reduced and the verification effort (time) will increase. However, since the solvers verify the hints before using them, the accuracy of the proofs will not be affected.
## VI Conclusion
Hardware-firmware co-verification is an important requirement for designing reliable systems. While there are promising formal verification approaches, they rely on manually written hints or abstracted versions of the hardware to deal with state-space explosion. Manual intervention requires design expertise and can be time-consuming and error-prone. In this paper, we presented an automated hint-based verification (_HIVE_) framework that does not require any manual intervention during model generation or hint extraction. We effectively utilize both static analysis and concrete simulation of the implementation to extract proof-supporting artifacts from the system model. _HIVE_ can automatically generate system-level hints to guide formal proofs and avoid state-space explosion. Extensive evaluation using real-world hardware-firmware implementations demonstrates the effectiveness and scalability of the proposed framework. In all the case studies, _HIVE_ was able to provide a guarantee for the scenarios in a reasonable time, while state-of-the-art methods failed to verify them (timeout). Moreover, _HIVE_ was able to identify several faulty scenarios in the hardware-firmware implementations.
|
2309.13344 | LES of a non-premixed hydrogen flame stabilized by bluff-bodies of
various shapes | Dynamics of flames stabilized downstream of different shape bluff-bodies
(cylindrical, square, star) with different wall topologies (flat, wavy) is
investigated using large-eddy simulations (LES). A two-stage computational
procedure involving the ANSYS software and an in-house academic high-order code
is combined to model a flow in the vicinity of the bluff-bodies and a flame
formed downstream. The fuel is nitrogen-diluted hydrogen and the oxidizer is
hot air in which the fuel auto-ignites. After the ignition, the flame
propagates towards the bluff-body surfaces and stabilizes in their vicinity. It
is shown that the flames reflect the bluff-body shape due to large-scale strong
vortices induced in the shear layer formed between the main recirculation zone
and the oxidizer stream. The influence of the acute corners of the bluff-bodies
on the flame dynamics is quantified by analysing instantaneous and
time-averaged results. Compared to the classical conical bluff-body the largest
differences in the temperature and velocity distributions are observed in the
configuration with the square bluff-body. The main recirculation zone is
shortened by approximately 15% and at its end temperature in the axis of the
flame is almost 200 K larger. Simultaneously, their fluctuations are slightly
larger than in the remaining cases. The influence of the wall topology (flat
vs. wavy) in the configuration with the classical conical bluff-body turned out
to be very small and it resulted in modifications of the flow and flame
structures only in the direct vicinity of the bluff-body surface. | Agnieszka Wawrzak, Robert Kantoch, Artur Tyliszczak | 2023-09-23T11:34:39Z | http://arxiv.org/abs/2309.13344v1 | # LES of a non-premixed hydrogen flame stabilized by bluff-bodies of various shapes
###### Abstract
Dynamics of flames stabilized downstream of different shape bluff-bodies (cylindrical, square, star) with different wall topologies (flat, wavy) is investigated using large-eddy simulations (LES). A two-stage computational procedure involving the ANSYS software and an in-house academic high-order code is combined to model a flow in the vicinity of the bluff-bodies and a flame formed downstream. The fuel is nitrogen-diluted hydrogen and the oxidizer is hot air in which the fuel auto-ignites. After the ignition, the flame propagates towards the bluff-body surfaces and stabilizes in their vicinity. It is shown that the flames reflect the bluff-body shape due to large-scale strong vortices induced in the shear layer formed between the main recirculation zone and the oxidizer stream. The influence of the acute corners of the bluff-bodies on the flame dynamics is quantified by analysing instantaneous and time-averaged results. Compared to the classical conical bluff-body the largest differences in the temperature and velocity distributions are observed in the configuration with the square bluff-body. The main recirculation zone is shortened by approximately 15% and at its end temperature in the axis of the flame is almost 200 K larger. Simultaneously, their fluctuations are slightly larger than in the remaining cases. The influence of the wall topology (flat vs. wavy) in the configuration with the classical conical bluff-body turned out to be very small and it resulted in modifications of the flow and flame structures only in the direct vicinity of the bluff-body surface.
bluff-body, wavy-wall, LES, hydrogen, non-premixed flame
## 1 Introduction
An incomplete understanding of the mutual interactions between a turbulent flow and a flame occurring downstream of bluff-body geometries limits further progress in improving the efficiency and safety of various combustion applications (burners, chambers, engines). Moreover, suitable control of turbulent flame dynamics is required for the development of low-emission devices complying with current international regulations. Such control can be achieved by applying passive and/or active flow control techniques. Both provide modulation of the multi-scale mixing processes (enhancement/suppression) through the intensification of interactions between large and small turbulent scales.
Using a bluff-body as a part of the injection system constitutes a prominent example of passive flame control. The bluff-body generates recirculation zones improving the mixing and stabilizing the flame position (Docquier and Candel, 2002). Tyliszczak et al. (2014) examined a non-premixed flame stabilized in a central recirculation zone produced by a conical/cylindrical bluff-body. They showed that the flame approaches extinction due to even small changes in simulation parameters. A combination of the passive and active control methods can be applied to optimize the combustion process in a bluff-body burner, as shown by Kypraiou et al. (2018). The researchers performed experimental studies on premixed, partially premixed and non-premixed methane flames showing that the impact of acoustic oscillations on bluff-body flames is directly related to the fuel injection system.
The complexity of turbulent mixing and combustion processes raises many important questions that still remain unanswered, despite the great interest of the scientific community in this field. For instance, to what extent does the mixing of the fuel and oxidizer downstream of the bluff-body depend on the shape and roughness of the bluff-body? As recently demonstrated by Drozdz et al. (2021), a wavy wall with carefully selected waviness parameters can effectively enhance the effect of amplitude modulation and hence increase the wall shear stress and postpone turbulent separation.
Concerning geometrical shaping, interesting findings were recently reported by Tyliszczak et al. (2022). The authors showed a distinct trend regarding the dynamics of jets coming out of shaped orifices. They found a shorter potential core and faster velocity decay for a circular jet compared to a non-circular one, e.g., issuing from a triangular nozzle. This contradicted the common knowledge about jets emanating from sharp-edged orifices: in general, they are more energetic than jets generated by smoothly profiled nozzles and are therefore characterized by a faster decay of the axial velocity (Mi and Nathan, 2010; Kuban et al., 2021). Nevertheless, Tyliszczak et al. (2022) have shown that the level of control that can be achieved by shaping the nozzle is higher than previously expected.
The present paper is aimed at numerical simulations of a non-premixed hydrogen flame stabilized by specially designed bluff-bodies with corrugated surfaces. Computational fluid dynamics (CFD) tools have been widely used for flame control analyses, providing results on global characteristics as well as deep insight into the flame structure and its dynamics.
In this work we use the large-eddy simulation method and a 'no model' approach for the simulations of the combustion process, where the chemical source terms are calculated directly using the filtered variables (Duwig et al., 2011). The response of the flame to different wall topologies (flat or wavy, with the waviness oriented streamwise) and complex shapes (hexagram, square) is thoroughly discussed based on instantaneous and time-averaged results.
## 2 Modelling
### LES Approach
In the present study we use two different numerical LES solvers and apply a two-stage computational approach. In the first stage (I), the second-order ANSYS Fluent LES solver is used to model the flow inside the inlet section of the bluff-body burner. The further calculations (stage II) are performed using an in-house high-order numerical algorithm based on the projection method (Tyliszczak, 2016). It solves the Favre-filtered set of the governing equations assuming the low Mach number approximation (Geurts, 2004):
\[\partial_{t}\tilde{\rho}+\nabla\cdot(\tilde{\rho}\widetilde{\mathbf{u}})=0 \tag{1}\]
\[\tilde{\rho}\partial_{t}\widetilde{\mathbf{u}}+(\tilde{\rho}\widetilde{\mathbf{u}}\cdot\nabla)\widetilde{\mathbf{u}}+\nabla\bar{p}=\nabla\cdot\left(\tau+\tau^{SGS}\right) \tag{2}\]

\[\tilde{\rho}\partial_{t}\widetilde{Y}_{\alpha}+(\tilde{\rho}\widetilde{\mathbf{u}}\cdot\nabla)\widetilde{Y}_{\alpha}=\nabla\cdot\left(\tilde{\rho}(D_{\alpha}+D_{\alpha}^{SGS})\nabla\widetilde{Y}_{\alpha}\right)+\widetilde{w}_{\alpha} \tag{3}\]

\[\tilde{\rho}\partial_{t}\widetilde{h}+(\tilde{\rho}\widetilde{\mathbf{u}}\cdot\nabla)\widetilde{h}=\nabla\cdot\left(\tilde{\rho}(D+D^{SGS})\nabla\widetilde{h}\right) \tag{4}\]
\[p_{0}=\overline{\rho}R\widetilde{T} \tag{5}\]
where the bar and tilde symbols denote filtered quantities, \(u_{i}\) are the velocity components, \(p\) is the hydrodynamic pressure, \(\rho\) is the density and \(h\) stands for the total enthalpy. The symbols \(p_{0}\) and \(R\) are the thermodynamic pressure and gas constant, respectively. The subscript \(\alpha\) is the index of the species, \(\alpha=1,\dots,\text{N-species}\), and the variables \(Y_{\alpha}\) represent the species mass fractions. The unresolved sub-grid stress tensor, resulting from the filtering of the non-linear advection terms, is modelled as \(\tau^{SGS}=2\mu_{t}\mathbf{S}\), where \(\mathbf{S}\) is the rate-of-strain tensor of the resolved velocity field and \(\mu_{t}\) is the sub-grid viscosity computed according to the model proposed by Vreman (2004). The sub-grid diffusivities in Eqs. 3, 4 are computed as \(D^{SGS}=\mu_{t}/(\tilde{\rho}\sigma)\), where \(\sigma\) is the turbulent Schmidt/Prandtl number assumed equal to 0.7 (Jones and Navarro-Martinez, 2007).
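For illustration, a NumPy sketch of the Vreman eddy-viscosity evaluation on a uniform grid is given below (our implementation of the published formula; the model constant \(c\approx 0.07\) and the central-difference stencil are assumptions):

```
import numpy as np

def vreman_viscosity(u, v, w, dx, rho, c=0.07):
    """Vreman (2004) sub-grid viscosity mu_t = rho*c*sqrt(B_beta/(a_ij a_ij)).

    u, v, w : velocity components on a uniform grid with spacing dx
    """
    vel = (u, v, w)
    # a[i][j] = d u_j / d x_i (second-order central differences)
    a = [[np.gradient(vel[j], dx, axis=i) for j in range(3)] for i in range(3)]
    aa = sum(a[i][j] ** 2 for i in range(3) for j in range(3))
    # beta_ij = dx_m^2 * a_mi * a_mj (uniform grid: single spacing dx)
    b = [[dx ** 2 * sum(a[m][i] * a[m][j] for m in range(3))
          for j in range(3)] for i in range(3)]
    B = (b[0][0] * b[1][1] - b[0][1] ** 2
         + b[0][0] * b[2][2] - b[0][2] ** 2
         + b[1][1] * b[2][2] - b[1][2] ** 2)
    nu_t = c * np.sqrt(np.maximum(B, 0.0) / np.maximum(aa, 1e-30))
    return rho * nu_t  # vanishes where the velocity gradient vanishes
```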
The chemical source terms \(\widetilde{w}_{\alpha}\) in Eq. 3 involve the filtered reaction rates of the species \(\alpha\), which are strongly non-linear functions of the species mass fractions and enthalpy:

\[\widetilde{w}_{\alpha}(\mathbf{Y},h)=\dot{w}_{\alpha}(\widetilde{\mathbf{Y}},\widetilde{h})+\mathcal{F}\big(\widetilde{Y_{\alpha}^{\prime\prime}Y_{\beta}^{\prime\prime}},\dots\big) \tag{6}\]

where \(\mathcal{F}\) collects the unresolved sub-grid correlations. They are computed applying the 'no model' approach, i.e., directly from the filtered variables, \(\widetilde{w}_{\alpha}(\mathbf{Y},h)=\dot{w}_{\alpha}(\widetilde{\mathbf{Y}},\widetilde{h})\), with the term \(\mathcal{F}\) neglected (Duwig et al., 2011).
### Test case
In the present work we consider a typical combustion chamber with a conical bluff-body, similar to the one studied experimentally by Kypraiou et al. (2018). We replace methane by hydrogen diluted with nitrogen, an alternative zero-carbon fuel. The hydrogen mass fraction in the fuel stream is equal to 0.05. A schematic view of the analysed configuration is presented in Fig. 1. The cold fuel (300 K) is injected into the chamber through a 4 mm slot in a fuel pipe ended with the bluff-body surface. The bulk axial velocity of the fuel is assumed equal to 10 m/s. Inside the chamber the fuel ignites in co-flowing air heated to 1000 K. The global equivalence ratio is equal to 0.047 and the theoretical power generated by the combustion process is 0.5 kW. Unlike in the original configuration of Kypraiou et al. (2018), we do not add swirl to the oxidizer stream, and the dominant effect on the flame is due to the changing geometry of the bluff-body.
Concerning the dynamics of bluff-body stabilized flames, directly above the bluff-body a central recirculation zone (CRZ in Fig. 1) is formed. Its dimensions and inner structure depend on the bluff-body size and flow parameters. Inside the CRZ, smaller inner recirculation zones (IRZs) may exist. The bluff-bodies investigated in this paper are characterised by the equivalent diameter \(D_{b}=2\sqrt{S/\pi}=25\) mm (\(S\) is the actual bluff-body area) and they are placed in a circular duct of diameter \(D=37\) mm. Four different bluff-bodies are considered: the cylindrical one with a flat and with a wavy wall (waviness oriented streamwise), the square one, and the hexagram (star) one. All geometries are displayed in Fig. 1 along with the computational meshes prepared in the ANSYS Meshing module.
### Numerical details
As mentioned before, the simulations are performed using a two-stage approach, schematically presented in Fig. 1. In the first stage (I) the ANSYS Fluent solver is used to model the flow through the entrance duct and around the bluff-bodies. During these calculations we acquire unsteady velocity signals at the end of the inlet section for a period of \(150D_{b}/U_{b}\) (\(U_{b}\) is the bulk velocity of the co-flow). The computational grids for all bluff-bodies generated in the ANSYS Meshing module are shown in the central cross-section in Fig. 1. They are block-structured and precisely fitted to the shape of each bluff-body. All geometries are discretised such that the near-wall cell size allows for a proper resolution of the turbulent boundary layers (\(y^{+}\approx 1\)). The minimum element size is assumed at the level of 0.5 mm for the bluff-body with the flat wall, whereas in the case of the wavy wall a finer mesh in the near-wall region is used, with a cell height of 0.25 mm. The overall numbers of cells in the inlet duct are equal to 3.6\(\times 10^{5}\) and 3.7\(\times 10^{5}\) for the cylindrical bluff-body with the smooth and wavy wall, respectively.
In the second part of the calculations (stage II) the SAILOR code is applied, which so far has been used in a number of studies devoted to combustion problems in jet type flames (Tyliszczak, 2015; Rosiak and Tyliszczak, 2016) and mixing layers (Wawrzak and Tyliszczak, 2019, 2020). The extracted velocity fields from the stage I of the simulations are imposed onto the inlet plane of the computational domain involving only the outer section of the burner (see Fig. 1). There is no heat transfer between the flame and the bluff-body, since the bluff-body is only embedded in the preliminary Fluent simulations. In a real situation, a flame attached to a bluff-body would certainly lose heat toward it. However, we neglected this effect and conducted comparative studies focusing on flow structure affected by different bluff-bodies. One would expect that the possible errors resulting from the applied
Figure 1: Schematic view on the computational configuration. Bluff-body geometries and computational meshes.
procedure would be similar in all cases.
In stage II, the computational domain extends \(6D_{b}\) in the axial direction and \(4D_{b}\) in the radial and tangential directions. A grid-independent solution is obtained at a grid size of \(N_{x}\times N_{z}\times N_{y}=144\times 144\times 192\) nodes. The nodes are compacted axially and radially, but an almost uniform mesh is constructed close to the inlet section. The maximum cell volume is equal to \(1.7\times 10^{-9}\) m\({}^{3}\), which corresponds to the minimum cell volume of the computational meshes used in the ANSYS software. The time-step is computed according to the Courant-Friedrichs-Lewy (CFL) condition with the CFL number assumed equal to 0.25.
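A minimal Python sketch of this time-step selection (ours; convective CFL only, with any diffusive restriction omitted):

```
import numpy as np

def cfl_time_step(u, v, w, dx, dy, dz, cfl=0.25):
    """Largest dt satisfying the convective CFL condition on a uniform grid."""
    rate = np.abs(u) / dx + np.abs(v) / dy + np.abs(w) / dz
    return cfl / np.max(rate)
```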
Concerning the boundary conditions, at the side boundaries the velocity is set to zero and the species and enthalpy are computed from the Neumann condition, i.e., \(\partial Y_{\alpha}/\partial n=0\) and \(\partial h/\partial n=0\). The pressure is computed from the Neumann condition (\(\partial p/\partial n=0\)) both at the side boundaries and at the inlet plane. At the outlet boundary, all velocity components, species and enthalpy are computed from the convective boundary condition \(\partial C/\partial t+V_{c}\partial C/\partial y=0\), where \(C\) represents a general variable and \(V_{c}\) is the convection velocity calculated as the mean velocity in the outlet plane. To avoid back-flow the velocity \(V_{c}\) is limited such that \(V_{c}=\max(V_{c},0)\). The pressure at the outflow is assumed constant and equal to \(p=101325\) Pa.
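The convective outflow update can be sketched in Python as follows (our discretization; the first-order upwind difference in \(y\) is an assumption):

```
import numpy as np

def convective_outflow(C, Vy_outlet, dy, dt):
    """Advance the outlet plane of C using dC/dt + Vc dC/dy = 0.

    C         : 3D array indexed as C[x, y, z], with y the streamwise axis
    Vy_outlet : axial velocity sampled on the outlet plane
    """
    Vc = max(float(np.mean(Vy_outlet)), 0.0)   # limited to prevent back-flow
    C[:, -1, :] -= dt * Vc * (C[:, -1, :] - C[:, -2, :]) / dy
    return C
```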
The SAILOR code uses the projection method for pressure-velocity coupling (Tyliszczak, 2016) combined with a predictor-corrector method (Adams-Bashforth / Adams-Moulton) applied for the time integration. The spatial discretization is performed on half-staggered meshes applying the 6th-order compact finite difference approximation for the momentum and continuity equations. The second-order TVD (Total Variation Diminishing) scheme with the Koren limiter is used for the transport equations for chemical species and enthalpy. The time integration of the chemical source terms is performed with the help of the VODPK (Variable-coefficient Ordinary Differential equation solver with the Preconditioned Krylov method) solver (P. N. Brown and A. C. Hindmarsh, 1989), which is well suited for stiff systems. The chemical reaction terms are calculated using the CHEMKIN interpreter and the chemical reactions are computed using the detailed mechanism of Mueller et al. (1999) involving 9 species and 21 reactions.
Preliminary simulations performed using ANSYS Fluent and the SAILOR code for the air stream flowing in the duct around the cylindrical bluff-body were conducted to verify the solution strategy. The comparison of the velocity profiles behind the bluff-body revealed that the interpolation of the velocity components required by the two-stage computational procedure does not introduce significant errors. As can be seen in Fig. 2, both solvers provided a similar velocity evolution downstream of the bluff-body. However, the results obtained using the SAILOR code seem to be more accurate, at least qualitatively, as they reflect the symmetric shape of the bluff-body. The applied two-stage approach was also proven to yield correct results in previous studies devoted to passively controlled jet flames issuing from polygonal nozzles (Kuban et al., 2021).
## 3 Results
### Instantaneous flame dynamics
As the streams of the fuel and oxidizer are injected from the inlet plane into a quiescent ambient fluid, the flow undergoes the Kelvin-Helmholtz instability due to velocity gradients in the shear layers. Vortex rings are generated and detached from the bluff-body edge. Their shape corresponds to the bluff-body shape, as can be deduced from the \(Q\)-parameter isosurfaces presented in Fig. 3. In this figure, the blue cylinders denote the control surface used to predict the entrainment rate, which will be discussed later. Additionally, the vorticity magnitude plotted on the central cross-section is shown at the sides. Depending on the
Figure 2: Axial velocity profiles at three different heights above the bluff-body obtained using Fluent and SAILOR codes.
bluff-body, the flow fields differ and it can be seen that with increasing bluff-body complexity the flow patterns become more complex. Since the initial vortices are considerably dependent on the shape and wall topology of the bluff-body, their distortion changes as they travel downstream. Regarding the circular bluff-body, an earlier formation of streamwise vortical structures takes place for the case with the flat wall, whereas stronger vortex rings are observed when the wall is wavy. The results obtained for the cases with the square and star bluff-bodies reveal a very complex flow structure. The presence of small-scale vortices generated behind the sharp corners causes a deformation of large vortical structures closer to the inlet compared to the case with the circular bluff-body. Comparison of the vorticity contours reveals that the flow inside the CRZ is the most energetic for the cases with the non-circular shapes, consistently with the findings for non-reacting jet flows (e.g. [33]). For the cases with the square and star bluff-bodies the vorticity contours seem to be more wrinkled, especially in the region closer to the inlet in the outer shear layer. Keeping in mind that the flames reflect the shapes of the bluff-bodies close to the inlet plane, it is expected that
Figure 3: Instantaneous \(Q\)-parameter isosurfaces and vorticity modulus contours in the central cross-section shown at the boundaries. The partially translucent blue cylindrical surfaces denote the border of the subdomain used for the calculation of the entrainment.
the impact of large-scale intense vortices on the flame structure and its shape is prominent.
Figure 4 illustrates the instantaneous distributions of the OH species mass fraction, temperature, heat release rate (HRR), scalar dissipation rate (SDR), and the axial velocity. It can be observed that the flames are stabilized by the reversed flow (see the black isolines indicating the recirculation zones), which also manifests itself in a high level of OH species in the inner shear layer. The heat release rate is found to be the strongest near the flame edge, where the OH species is also abundantly produced. Both the OH and HRR distributions indicate relatively weakly reactive regions inside the recirculation zones within the axial distance \(y/D=0.4-0.8\). Here, the fuel is rapidly mixed with the recirculating hot air and combustion products, promoting partial oxidation accompanied by a small amount of heat release. The scalar dissipation rate is low in these regions, contrary to the vicinity of the bluff-body flame and downstream axial locations. Finally, it can be seen that the recirculation zone, whose shape depends on the bluff-body, controls the shape of the flame.
Figure 4: Instantaneous distributions of the OH mass fraction, temperature, heat release rate (HRR), scalar dissipation rate (SDR), and the axial velocity. The black line represents its zero level. The results obtained for the various bluff-bodies are shown in the rows.
### Statistical properties
Detailed analysis of the flame dynamics in the CRZ has been carried out based on the time-averaged results presented in Figs. 5-7. The time-averaging procedure started after the flames fully developed in the domain and was continued for \(250D_{b}/U_{b}\) time units, resulting in well-converged statistics. Figure 5 presents the contours of the time-averaged axial velocity and its fluctuations (r.m.s.) for all the cases considered. The location of the recirculation zone is represented by the isolines of \(\left<U_{y}\right>=0\). As can be seen, the narrowing of the central recirculation zones is caused by the oxidizer stream. This is accompanied by a significant increase of the axial velocity fluctuations due to strong shear forces. It is evident that the location, size and shape of the recirculation zones are only slightly affected by the wall topology compared to the changes caused by the bluff-body shape. However, there are noticeable differences between the cases with the flat and wavy walls. Unlike in the case with the flat wall, the isoline of zero axial velocity in the case with the wavy wall is nearly parallel to the fuel stream. This results in enhanced mixing, which manifests itself in a more uniform velocity field inside the CRZ. It is also clearly seen that the recirculation zone shrinks radially in the case of the star bluff-body. Its shape also changes significantly when the square bluff-body is used. In this case, the fluctuation maximum is shifted beyond the recirculation zone.
vortices generated at the sharp corners. This suggests that the square bluff-body can ensure the most intensified fuel/oxidizer mixing.
The entrainment is calculated following Wygnanski and Fiedler (1969) as:
\[\frac{d\dot{Q}}{dy}=\oint_{c}\,\mathbf{U}\cdot\mathbf{n}dc=\int_{0}^{2\pi}\, \langle V_{r}\rangle\,r\,d\theta \tag{7}\]
where \(\dot{Q}\) is the volume flux, \(c\) is a path outside a given region, \(\mathbf{U}\) and \(\mathbf{n}\) are the velocity and normal vectors, and \(V_{r}\) is the radial velocity component. In this work we calculate the entrainment over the surface of a cylindrical domain with radius \(1.5D_{b}\), shown in Fig. 3. This cylinder encloses the flow region in which the majority of the large vortical structures fit.
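For illustration, the azimuthal integral in Eq. 7 can be evaluated numerically as follows (our sketch; equispaced azimuthal sampling on the cylinder surface is an assumption):

```
import numpy as np

def entrainment_rate(Vr_mean, r):
    """Evaluate dQ/dy = int_0^{2pi} <V_r> r dtheta at one axial station.

    Vr_mean : time-averaged radial velocity at equispaced angles on the
              cylinder of radius r (e.g., r = 1.5*Db)
    """
    theta = np.linspace(0.0, 2.0 * np.pi, len(Vr_mean), endpoint=False)
    # close the periodic contour by repeating the first sample at 2*pi
    return r * np.trapz(np.append(Vr_mean, Vr_mean[0]),
                        np.append(theta, 2.0 * np.pi))
```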
The entrainment rate profiles presented in Fig. 6(c) show that starting from \(y/D>0.2\) the volume flux linearly increases up to \(y/D\approx 1.5\), which is approximately the distance where the recirculation zones end. It drops further downstream and reaches a local minimum at \(y/D\approx 3\). Concerning the impact of the bluff-body geometry on the entrainment, only minor differences between the cases with the flat and wavy walls exist. The most pronounced one seems to be the faster entrainment growth immediately above the bluff-body surface for the case with the flat wall. The situation changes when the star bluff-body is used. In this case, starting from approximately \(y/D=0.1\), the entrainment increases the fastest. This is caused by the vortices generated in the acute corners and their quicker break-up. They also alter the central flow downstream, and therefore the maximum occurring at \(y/D\approx 1.5\) is lower than in the remaining cases. Regarding the region above the recirculation zones, i.e., \(y/D>2\), the solution obtained for the configuration with the square bluff-body differs the most. In this case, rather than a local minimum, the profile reveals a plateau at a relatively high level. A similar shape of the entrainment profile in this region is observed for the star bluff-body; yet, in this case the values of the entrainment are more or less the same as in the cases with the cylindrical bluff-body.
Fig. 6: Centerline profiles of the time-averaged axial velocity (a) and its fluctuations (b) and the profiles of the entrainment (c).
The increase of the entrainment in the configurations with the star and square bluff-bodies influences the radial distributions of the hydrogen mass fraction and temperature. Their profiles at three different axial distances inside the recirculation zone (\(y/D_{b}\)=1, \(y/D_{b}\)=1.25, \(y/D_{b}\)=1.5) are presented in Fig. 7. The enhancement of the mixing for the case with the square bluff-body is manifested by a reduced fuel mass fraction on the axis at the distance \(y/D_{b}\)=1.5 (see also Fig. 7(a)). Moreover, for this case the temperature at \(y/D_{b}\)=1.5 is approximately 200 K higher compared to the remaining cases. This means a more intense combustion process in the central part of the flow. On the other hand, in the region of the shear layer, \(x/D_{b}\)=0.5, the lowest temperature is found in the configuration with the star bluff-body, especially at \(y/D_{b}\)=1.25 and \(y/D_{b}\)=1.5. Apparently, in the configuration with the star-shaped bluff-body the flame is noticeably shifted towards the axis, whereas in the case with the square bluff-body it seems compacted towards the inlet plane. This is in line with the observations made for the velocity field and the CRZ structure.
Radial profiles of the temperature fluctuations presented in Fig. 7(c) also show important differences between the particular solutions. Initially (\(y/D_{b}\)=1-1.25), the local maximum of the temperature fluctuations persists in the inner shear layer at \(x/D_{b}\)=0.1 for all the cases. Then, the fluctuations consistently rise and the maximum shifts towards the axis. It appears first on the axis at \(y/D_{b}\)=1.5 when the square bluff-body is used. In this case, the contribution of the fluctuating component is considerably higher close to the inlet plane and is transferred further downstream with values 40 K higher than for the cylindrical bluff-body with the flat wall. Such an increase in the fluctuating quantities consistently affects the mean component. Moreover, it can be seen that there are no substantial differences between the profiles obtained for different wall topologies.
Figure 7: Radial profiles of the time averaged H2 species mass fraction and temperature (mean and rms) at different axial locations.
Summary
The paper presented an analysis of the flame structures behind a conical bluff-body burner having different wall topologies and shapes. The research was performed using the LES method and two numerical tools, the ANSYS Fluent code and the in-house SAILOR code. The former allowed for accurate simulations of the flow around the complex-shape bluff-bodies and the latter for precise modelling of the combustion process. It has been shown that in the vicinity of the bluff-body its shape has a strong influence on the vortices induced in the shear layer formed between the central recirculation zone and the oxidizer stream. Compared to the conventional circular shape, the acute corners caused an enhancement of the mixing processes in the central recirculation zone. The impact of the changes in geometry shape and wall topology observed in the instantaneous results was quantified based on the time-averaged plots of the mean and fluctuating velocity, fuel mass fraction and temperature. In general, it has been observed that in the configurations with the star and square bluff-bodies the flame in the recirculation zone is shifted towards the injected fuel stream. A faster oxidizer entrainment observed in these cases caused an enhanced mixing process. The analysis of the velocity distributions revealed a recirculation zone shortened by 15% of the equivalent bluff-body diameter and the smallest axial velocity drop for the configuration with the square bluff-body. This resulted in a significantly larger temperature on the centerline of the flame and higher temperature fluctuations. The influence of the wall topology (flat vs. wavy) in the configuration with the classical conical bluff-body turned out to be very small. It resulted only in minor modifications of the flow structures in the direct vicinity of the inlet plane, but this had practically no impact on the flame formed downstream.
## Acknowledgment
This work was supported by the National Science Center in Poland (Grant 2020/39/B/ST8/02802) and statutory founds (BS/PB-1-100-3011/2022/P). The computations were carried out using the PL-Grid Infrastructure.
|
2309.04638 | On the effective dynamics of Bose-Fermi mixtures | In this work, we describe the dynamics of a Bose-Einstein condensate
interacting with a degenerate Fermi gas, at zero temperature. First, we analyze
the mean-field approximation of the many-body Schr\"odinger dynamics and prove
emergence of a coupled Hartree-type system of equations. We obtain rigorous
error control that yields a non-trivial scaling window in which the
approximation is meaningful. Second, starting from this Hartree system, we
identify a novel scaling regime in which the fermion distribution behaves
semi-classically, but the boson field remains quantum-mechanical; this is one of
the main contributions of the present article. In this regime, the bosons are
much lighter and more numerous than the fermions. We then prove convergence to
a coupled Vlasov-Hartree system of equations with an explicit convergence rate. | Esteban Cárdenas, Joseph K. Miller, Nataša Pavlović | 2023-09-08T23:35:25Z | http://arxiv.org/abs/2309.04638v2 | # On the effective dynamics of Bose-Fermi mixtures
###### Abstract.
In this work, we describe the dynamics of a Bose-Einstein condensate interacting with a degenerate Fermi gas, at zero temperature. First, we analyze the mean-field approximation of the many-body Schrödinger dynamics and prove emergence of a coupled Hartree-type system of equations. We obtain rigorous error control that yields a non-trivial scaling window in which the approximation is meaningful. Second, starting from this Hartree system, we identify a novel scaling regime in which the fermion distribution behaves semi-classically, but the boson field remains quantum-mechanical; this is one of the main contributions of the present article. In this regime, the bosons are much lighter and more numerous than the fermions. We then prove convergence to a coupled Vlasov-Hartree system of equations with an explicit convergence rate.
###### Contents
* 1 Introduction
* 2 Main Results
* 3 Second Quantization I: Preliminaries
* 4 Second Quantization II: The Fluctuation Dynamics
* 5 Second Quantization III: Proof of Theorem 1
* 6 Quantum Optimal Transportation: Proof of Theorem 2
* A Well-posedness of the PDEs
* B Calculation of the Infinitesimal Generator
## 1. Introduction
In this work, we study the dynamics of a gas composed of \(M\) identical fermions and \(N\) identical bosons moving in Euclidean space \(\mathbb{R}^{d}\), for spatial dimensions \(d\geqslant 2.\) The Hilbert space for the system is the tensor product
\[\mathscr{H}\equiv L_{a}^{2}(\mathbb{R}^{dM})\otimes L_{s}^{2}(\mathbb{R}^{dN} )\, \tag{1.1}\]
where \(L_{a}^{2}\) and \(L_{s}^{2}\) correspond to the subspaces of antisymmetric and symmetric functions, respectively; we neglect any internal degrees of freedom the particles may have. We assume that the two systems are non-relativistic, and interact by means of a two-body potential.
of the the theory of the the theory of the the theory of the the theory of the the the theory of the theory of the the the theory of the theory of the the theory of the theory of the the theory of the the theory of the the theory of the the theory of the the theory of the the theory of the the the theory of the the theory of the theory of the the theory of the the theory of the the theory of the theory of the the theory of the the theory of the theory of the the theory of the the theory of the theory of the the theory of the the theory of the the theory of the theory of the the theory of the the theory of the the theory of the the theory of the the theory of the the theory of the the theory of the theory of the the theory of the theory of the theory of the the theory of the the theory of the the theory of the theory of the theory of the the theory of the the theory of the the theory of the the theory of the theory of the the theory of the theory of the theory of the the theory of the theory of the theory of the theory of the theory of the theory of the the theory of the theory of the the theory of the theory of the theory of the the theory of the theory of the theory of the the theory of the theory of the theory of the the theory of the the theory of the theory of the the theory of the the theory of the theory of the the theory of the theory of the theory of the theory of the the theory of the theory of the theory of the theory of the the theory of the theory of the of the theory of the the theory of the theory of the theory of the theory of the the theory of the of the theory of the theory of the theory of the of the theory of the theory of the the theory of the theory of the theory of the theory of the theory of the the theory of the of the theory of the theory of the theory of the the theory of the of the theory of the theory of the of the theory of the theory of the of the theory of the of the theory of the theory of the of the theory of the theory of the of the theory of the of the theory of the theory of the of the theory of the theory of the of the theory of the theory of the of the theory of the theory of the theory of the of the theory of the of the theory of the theory of the theory of the theory of the of the theory of the theory of the of the theory of the of the theory of the theory of the of the theory of the of the theory of the of the theory of the theory of the of the theory of the of the theory of the theory of the of the theory of the of the theory of the theory of the of the theory of the of the theory of the of the theory of the of the theory of the of the theory of the of the theory of the theory of the of the of the theory of the of the theory of the of the theory of the of the of the theory of the of the theory of the of the theory of the of the theory of the of the of the of the of the theory of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of the of of the of the of the of of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of of the of the of the of the of the of of the of the of the of the of the of the of the of the 
of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of of the of the of the of the of of the of the of of the of the of the of the of the of of the of of the of the of the of the of the of of the of the of of the of the of of the of of the of of the of the of of the of the of the of of the of the of the of the of the of the of the of of the of the of the of the of of the of the of the of the of of the of the of the of the of of the of the of of the of the of the of the of the of the of of the of of the of the of the of the of the of the of of the of the of the of the of the of the of the of the of of the of the of of the of the of of the of the of the of the of the of the of the of of the of the of the of the of the of the of the of the of the of the of the of of the of the of the of the of of the of the of of the of the of the of of the of the of of the of the of the of the of the of the of of the of of the of the of the of the of the of of the of the of the of of the of the of the of of the of the of the of the of of the of the of the of of the of the of of the of of the of the of the of the of the of of the of the of the of the of the of of the of of the of the of the of of the of the of the of of the of the of the of the of the of of
See also [19], where uniform-in-time bounds for error estimates are proven. We also refer the reader to the non-exhaustive list of related works [32, 2, 45, 53, 40, 41, 42] on the derivation of the Hartree equation, and to [23, 26, 24, 25, 1, 44, 14] on the derivation of the Gross-Pitaevskii equation.
For cold gases of \(M\) interacting fermions, one obtains the Hartree-Fock equation
\[i\hbar\partial_{t}\omega(t)=\Big{[}-\frac{\hbar^{2}}{2}\Delta+W* \rho(t)-X(t),\omega(t)\Big{]}\;, \tag{1.6}\]
where \(\rho(t,x)=M^{-1}\omega(t;x,x)\) is the density of particles, and \(X(t)\) is the so-called exchange term. Here, the solution \(\omega(t)\) is a positive, trace-class operator on \(L^{2}(\mathbb{R}^{d})\) whose trace is equal to \(M\); it ought to describe an interacting Fermi gas of \(M\) particles. The Hartree-Fock equation has been historically studied in two scaling regimes. The first derivation in the "microscopic regime" (namely, physical scales for which \(\hbar=1\)) was carried out in [4] for regular interactions, and later improved in [31] for Coulomb systems. In the "macroscopic regime" (namely, physical scales for which \(\hbar=M^{-1/d}\)), the first derivation was carried out in [22] for real analytic potentials, yielding an optimal convergence rate for short macroscopic times. More recently, the derivation was revisited in [9] using second quantization methods, relaxing significantly the regularity of the potentials, and extending the time-validity of the derivation-as a tradeoff, one here requires additional semi-classical structure on the initial data. This inspired substantial work in the literature; see for instance [10, 7, 54, 30].
On the other hand, the \(\hbar\downarrow 0\) limit of the Hartree-Fock equation (1.6) leads to the Vlasov equation
\[(\partial_{t}+p\cdot\nabla_{x}+F_{f}(t)\cdot\nabla_{p})f(t,x,p)=0 \tag{1.7}\]
where \(F_{f}(t,x)=-\int\nabla W(x-y)f(t,y,p)dydp\) is a mean-field force term, and \(f(t)\in L^{1}_{+}(\mathbb{R}^{2d}_{x,p})\) is a macroscopic phase-space distribution function. In particular, the Pauli exclusion principle, \(0\leqslant\omega\leqslant 1\) viewed in the sense of quadratic forms, still holds in the macroscopic limit \(0\leqslant f\leqslant 1\), in the pointwise sense. One can therefore understand the solution of the Vlasov equation (1.7) as the description of a macroscopic gas with quantum features. As for the derivation of the Vlasov equation from interacting quantum systems, the first works on the subject are [52, 59]. Here, the derivation is carried out in the macroscopic regime, by studying directly the BBGKY hierarchy associated to the many-body Schrodinger dynamics. The convergence from the Hartree/Hartree-Fock equation to the Vlasov equation was later analyzed in [48, 49, 33], although no convergence rate was provided. The first work to provide a convergence rate for regular potentials was [3]. Later, the derivation of a convergence rate from the Hartree to the Vlasov dynamics was revisited and established in [8, 17] for a larger class of potentials.
### Bose-Fermi mixtures
Investigating degenerate mixtures of bosons and fermions is an extremely active area of research in experimental physics for constructing and understanding novel quantum bound states such as those in superconductors, superfluids, and supersolids [28, 56, 21]. These ultra-cold Bose-Fermi mixtures are fundamentally different
from degenerate gases with only bosons or fermions. They not only show an enriched phase diagram, but also a fundamental instability due to energetic considerations coming from the Pauli exclusion principle [51]. In particular, the fermionic particles maintain a higher energy than the bosonic particles in the ground state, and this energetic difference causes a physical instability. It also bounds from above the number of fermions allowed to exist in these doubly degenerate mixtures [18]. On the other hand, by varying the ratio of masses of bosons and fermions in these mixtures, experimentalists have studied Bose-Einstein condensates with Bose-Bose interactions mediated by fermions [20].
Inspired by this activity in the physics community, in this paper we start exploring the mathematical theory of Bose-Fermi mixtures, by studying the mean-field dynamics of the Hamiltonian introduced in (1.2). Here, one of the main challenges is understanding the physical scales of the system that allow for suitable analysis. Indeed, the Pauli exclusion principle implies that for confined gases of fermions at low temperatures, fermions have a characteristic energy that varies on a scale \(\hbar^{2}M^{\frac{2}{d}}/m_{F}\), whereas for bosons this is only of order \(\hbar^{2}/m_{B}\). Thus, finding a scaling regime in which one can capture the effective dynamics of the system presents a challenge in itself that we have to address.
Let us informally describe the main results of this paper, stated rigorously in Section 2. The first result, formulated in Theorem 1, contains a quantum mean-field approximation of the many-body Schrodinger dynamics. Here, we prove that the one-particle reduced density matrices for the corresponding fermionic and bosonic subsystems (see (2.2) for the definition) are effectively described by a pair of interacting variables
\[(\omega,\varphi):\mathbb{R}\to\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\times L^ {2}(\mathbb{R}^{d})\, \tag{1.8}\]
satisfying the following system of self-consistent equations, which we shall refer to as the Hartree-Hartree equation.
\[\begin{cases}&i\hbar\partial_{t}\omega=[-(\hbar^{2}/2m_{F})\Delta+\lambda N\, (V*\rho_{B}),\omega]\\ &i\hbar\partial_{t}\varphi=-(\hbar^{2}/2m_{B})\Delta\varphi+\lambda M\,(V* \rho_{F})\varphi\end{cases}. \tag{1.9}\]
Here \(\rho_{F}(t,x)=\frac{1}{M}\omega(t;x,x)\) and \(\rho_{B}(t,x)=|\varphi(t,x)|^{2}\) correspond to the fermionic and bosonic position densities, respectively. A few comments are in order.
* The derivation of the above equation can be heuristically justified as follows. We assume that at time \(t=0\), the system has been externally confined by means of a harmonic trap, at zero temperature. The many-body wave function is then expected to be of the form \(\psi(0)=\psi_{F}(0)\otimes\psi_{B}(0)\in\mathscr{H}\), where the fermionic component \(\psi_{F}(0)\) is a Slater determinant, and the bosonic component \(\psi_{B}(0)\) is a fully factorized state-for a precise meaning, see Assumption 1. If the interactions between particles are weak enough, then the above structure for the wave function is approximately valid also for later times. In particular, a direct calculation shows that plugging the formal ansatz \(\psi(t)=\psi_{F}(t)\otimes\psi_{B}(t)\)-preserving the initial data structure-in the Schrodinger dynamics (1.4) leads to the Hartree-Hartree equation (1.9) as a self-consistent approximation.
* While the informal justification of the emergence of (1.9) may not present a challenge, obtaining a scaling regime in which the above system describes the leading order dynamics is non-trivial. More specifically, using Second Quantization methods, we provide rigorous control of error terms, which become small only in a particular non-empty parameter window-including both macroscopic and microscopic scaling regimes-see Theorem 1 for more details.
In our second result, stated in Theorem 2, we study a new scaling regime (contained in the aforementioned parameter window) for the combined system. In this scaling regime, the fermion component \(\omega\) is described semi-classically, but the bosonic component \(\varphi\) remains quantum-mechanical. More precisely, the regime that we focus on is given by
\[\lambda=\frac{1}{N}\,\qquad\hbar=\frac{1}{M^{\frac{1}{d}}}\,\qquad m_{B}= \hbar\,\qquad m_{F}=1\quad\text{and}\quad N=M^{1+\frac{1}{d}}. \tag{1.10}\]
Let us note here that under this scaling regime, the Hamiltonian that drives the boson field \(\varphi\) is proportional to \(\hbar\). In other words, it has the form \(\hbar(-(1/2)\Delta+V*\rho_{F}^{\hbar})\). Thus, it follows that factors of \(\hbar\) cancel out in the second equation of (1.9), which enables us to analyze the semi-classical limit. To this end, we consider \(f^{\hbar}=W^{\hbar}[\omega^{\hbar}]\), the Wigner transform of the fermionic component (see (2.21) for the definition of the Wigner transform) and prove that in the \(\hbar\downarrow 0\) limit, there is convergence \((f^{\hbar},\varphi^{\hbar})\to(f,\varphi)\), where the latter variables satisfy a coupled system of equations. This system has the following form, and we shall refer to it as the _Vlasov-Hartree equation_
\[\begin{cases}&(\partial_{t}+p\cdot\nabla_{x}+F_{B}(t,x)\cdot\nabla_{p})f=0\\ &i\partial_{t}\varphi=-\frac{1}{2}\Delta\varphi+(V*\rho_{F})\varphi\end{cases}. \tag{1.11}\]
Here, \(F_{B}(t,x)\equiv-\int\nabla V(x-y)|\varphi(t,y)|^{2}\mathrm{d}y\) is a mean-field force that the bosons exert over the fermions, and \(\rho_{F}(t,x)\equiv\int_{\mathbb{R}^{d}}f(t,x,p)\mathrm{d}p\) is the fermionic position density. Our proof of convergence is quantitative, and implements for the problem at hand recently developed techniques of Quantum Optimal Transportation (QOT) [36, 37, 38, 39, 46].
In addition to recognizing a mean-field scaling regime that allows us to rigorously derive the Hartree-Hartree system (1.9), one of the main contributions of this article is the identification of a novel mean-field semi-classical scaling regime in which the limiting dynamics of (1.9) is non-trivial. To the authors' best knowledge, this regime had not been identified previously in the literature. We would like to note that such a regime can be related to systems of particles which have been studied (individually) in experiments, including Ytterbium-171 for fermions, and Helium-4 for bosons. Here, the ratio of the masses is of order \(m_{B}/m_{F}\simeq 10^{-2}\); comparable to the value of the _effective_ Planck's constant \(\hbar\) at particular length, time and energy scales. The reader is referred to Subsection 2.4 for more details on the physical interpretation of our mathematical results.
### Organization of this paper
In Section 2 we formulate our main results in Theorems 1 and 2. In Section 3 we give preliminaries on the Second Quantization formalism, which we will use extensively. In Section 4 we study the dynamics of the fluctuations around a
combined Bose-Einstein condensate and degenerate Fermi gas, which we then use to prove Theorem 1 in Section 5. Next, in Section 6 we adapt the formalism of Quantum Optimal Transportation and utilize it to prove Theorem 2. Finally, we include Appendix A where we state some basic well-posedness results regarding the PDEs introduced in this paper, and Appendix B where we give details of the calculation of the infinitesimal generator of the fluctuation dynamics.
### Acknowledgements
E.C is very thankful to F. Golse for an enlightening conversation regarding QOT, and to N. Benedikter, M. Porta and C. Saffirio for helpful discussions regarding the mean-field dynamics of Fermi systems. E.C. gratefully acknowledges support from the Provost's Graduate Excellence Fellowship at The University of Texas at Austin and from the NSF grant DMS-2009549, and the NSF grant DMS-2009800 through Thomas Chen. J.M. gratefully acknowledges support from the Provost's Graduate Excellence Fellowship at The University of Texas at Austin and from the NSF grants No. DMS-1840314. N.P. gratefully acknowledges support from the NSF under grants No. DMS-1840314, DMS-2009549 and DMS-2052789.
## 2. Main Results
In this section, we describe the main results of this article, which have already been announced in the introductory section. In particular, in subsection 2.1 we present Theorem 1, describing the quantum mean-field approximation of the many-body Schrodinger dynamics. Here, we prove an upper bound on the error term that comes from the approximation of the one-particle reduced density matrices for the corresponding fermionic and bosonic subsystems, and the solution of the Hartree-Hartree equation (1.9). In subsection 2.2 we present Theorem 2, in which we study the scaling regime (1.10) for the Bose-Fermi system. We prove that in the \(\hbar\downarrow 0\) limit, there is convergence towards the Vlasov-Hartree equation (1.11). As stated in the Introduction, one of the main contributions of this article is the identification of a semi-classical scaling regime in which the limiting dynamics of the coupled system is non-trivial-to the authors' best knowledge, this regime had not been identified previously in the literature. In subsection 2.3 we briefly discuss the strategy of our proofs and the methods that we employ. Finally, in subsection 2.4 we discuss the physical interpretation of our main mathematical results.
_Notations_. Before we move on to the main results of this section, let us introduce some notation that we will be using in the rest of the article.
* \(L^{p}(\mathbb{R}^{n})\) denotes the Lebesgue spaces of \(p\)-th integrable functions, for \(p\in[1,\infty]\). The subset of non-negative functions is denoted by \(L^{p}_{+}(\mathbb{R}^{n})\).
* \(\mathscr{P}_{m}(\mathbb{R}^{n})\) is the space of probability measures on \(\mathbb{R}^{n}\) with finite moments up to order \(m\in\mathbb{N}\).
* \(\mathscr{S}(\mathbb{R}^{n})\) denotes the space of Schwartz functions of rapid decay.
* \(W^{k,p}(\mathbb{R}^{n})\) for \(k\in\mathbb{N}\) and \(p\in[1,\infty]\), denotes the Sobolev space of functions whose derivatives up to order \(k\) are \(p\)-th integrable.
* \(H^{s}(\mathbb{R}^{n})=W^{s,2}(\mathbb{R}^{n})\) for \(s\geqslant 1\) and \(\dot{H}^{s}(\mathbb{R}^{n})\) is the usual homogeneous Sobolev space.
* \(\mathscr{L}^{1}(X)\) stands for the Banach space of trace-class operators over \(X\), endowed with the norm \(\|A\|_{\mathrm{Tr}}\equiv\mathrm{Tr}|A|\). Similarly, \(\mathscr{L}^{2}(X)\) is the space of Hilbert-Schmidt operators with norm \(\|A\|_{HS}\equiv\|A^{*}A\|_{\mathrm{Tr}}^{1/2}\).
* We say that \(C>0\) is a _constant_ if it is a positive number, independent of the physical parameters \(N,M,\hbar,\lambda,m_{F},m_{B}\) and \(t\).
* \(\langle\xi\rangle=(1+\xi^{2})^{1/2}\) denotes the standard angle bracket.
### The mean-field approximation
As we have previously discussed, the main interest in this article is to consider the mean-field dynamics generated by the Hamiltonian \(H\), introduced in (1.2). To this end, we introduce the wave function of the system at time \(t\in\mathbb{R}\)
\[\psi(t)\equiv\exp\Big{(}-itH/\hbar\Big{)}\psi \tag{2.1}\]
where \(\psi\in\mathscr{H}\) is the initial data of the system. Since our gas consists of two subsystems-each composed of identical particles-it will be crucial to introduce the corresponding fermionic and bosonic one-particle reduced density matrices. These are the time-dependent trace-class operators \(\gamma_{F}(t),\,\gamma_{B}(t)\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\) whose kernels are defined as the partial traces
\[\begin{cases}&\gamma_{F}(t;x,x^{\prime})\equiv M\,\int_{\mathbb{R}^{d(M-1)} \times\mathbb{R}^{dN}}\psi(t;x,\mathbf{x}_{M-1};\mathbf{y}_{N})\overline{\psi}\,(t;x^ {\prime},\mathbf{x}_{M-1};\mathbf{y}_{N})\mathrm{d}\mathbf{x}_{M-1}\mathrm{d}\mathbf{y}_{N}\\ &\gamma_{B}(t;y,y^{\prime})\equiv N\,\int_{\mathbb{R}^{dM}\times\mathbb{R}^{d (N-1)}}\psi(t;\mathbf{x}_{M};y,\mathbf{y}_{N-1})\overline{\psi}\,(t;\mathbf{x}_{M};y^{ \prime},\mathbf{y}_{N-1})\mathrm{d}\mathbf{x}_{M}\mathrm{d}\mathbf{y}_{N-1}\end{cases} \tag{2.2}\]
for \(t\in\mathbb{R}\) and \(x,x^{\prime},y,y^{\prime}\in\mathbb{R}^{d}\). Here, we denote by \(\mathbf{x}_{M-1}=(x_{1},\dots,x_{M-1})\), \(\mathbf{y}_{N}=(y_{1},\dots,y_{N})\) and similarly \(\mathbf{x}_{M}\) and \(\mathbf{y}_{N-1}\), the variables that are being traced out. In particular, we note here that the normalizations are chosen so that for all times \(t\in\mathbb{R}\) there holds
\[\mathrm{Tr}\gamma_{F}(t)=M\qquad\text{and}\qquad\mathrm{Tr}\gamma_{B}(t)=N. \tag{2.3}\]
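For instance, the first identity can be checked directly from the definition (2.2): since \(\psi(t)\) is normalized in \(\mathscr{H}\), one computes

\[\mathrm{Tr}\,\gamma_{F}(t)=\int_{\mathbb{R}^{d}}\gamma_{F}(t;x,x)\,\mathrm{d}x=M\int_{\mathbb{R}^{d}\times\mathbb{R}^{d(M-1)}\times\mathbb{R}^{dN}}|\psi(t;x,\mathbf{x}_{M-1};\mathbf{y}_{N})|^{2}\,\mathrm{d}x\,\mathrm{d}\mathbf{x}_{M-1}\,\mathrm{d}\mathbf{y}_{N}=M\,\|\psi(t)\|_{\mathscr{H}}^{2}=M\;,\]

and the second identity follows in the same way.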
We describe now the conditions that we shall impose on the initial data \(\psi\in\mathscr{H}\) associated to the solution of the Schrodinger dynamics (2.1). Physically, the situation we consider concerns the description of an initially prepared cold gas of fermions and bosons. At zero temperature, we expect the fermion component to be described as a degenerate Fermi gas-parametrized by a Slater determinant-whereas the boson gas undergoes Bose-Einstein condensation, described by a single-particle wave function. This will be made rigorous in Assumption 1. In addition, we impose assumptions on the scales on which the Fermi gas varies-see Remark 2.2 for more details.
Let us now discuss the effective dynamics of this system. Indeed, if the interactions between particles are weak enough, we expect the zero temperature structure described above to approximately persist for times \(t>0\). Thus, we expect to find a mean-field approximation for the reduced density matrices \(\gamma_{F}\) and \(\gamma_{B}\) in terms of a pair of interacting variables
\[(\omega,\varphi):\mathbb{R}\to\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\times L^ {2}(\mathbb{R}^{d}) \tag{2.4}\]
that solve a self-consistent equation. A formal calculation using a time-dependent Slater determinant/fully factorized ansatz combined with replacing the full interaction \(V(x-y)\)
with an average over the position densities then yields
\[\left\{\begin{array}{rl}&i\hbar\partial_{t}\omega=[-(\hbar^{2}/2m_{F})\Delta+ \lambda N\,(V\ast\rho_{B}),\omega]\\ &i\hbar\partial_{t}\varphi=-(\hbar^{2}/2m_{B})\Delta\varphi+\lambda M\,(V\ast \rho_{F})\varphi\\ &(\omega,\varphi)(0)=(\omega_{0},\varphi_{0})\in\mathscr{L}^{1}(L^{2}(\mathbb{ R}^{d}))\times L^{2}(\mathbb{R}^{d})\end{array}\right., \tag{2.5}\]
up to leading order. Here \(\rho_{F}(t,x)=\frac{1}{M}\omega(t;x,x)\) and \(\rho_{B}(t,x)=|\varphi(t,x)|^{2}\) correspond to the fermionic and bosonic position densities, respectively. We shall refer to (2.5) as the Hartree-Hartree equation.
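Let us record a quick consistency check (formal, assuming enough regularity on the solution to justify the manipulations): both normalizations are preserved by the flow of (2.5). Indeed, the fermionic equation is in commutator form and the potentials are real-valued, so that

\[\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Tr}\,\omega(t)=\frac{1}{i\hbar}\,\mathrm{Tr}\big{[}-(\hbar^{2}/2m_{F})\Delta+\lambda N\,(V*\rho_{B}),\omega(t)\big{]}=0\qquad\text{and}\qquad\frac{\mathrm{d}}{\mathrm{d}t}\|\varphi(t)\|_{L^{2}}^{2}=\frac{2}{\hbar}\,\mathrm{Im}\,\big{\langle}\varphi(t),\big{(}-(\hbar^{2}/2m_{B})\Delta+\lambda M\,(V*\rho_{F})\big{)}\varphi(t)\big{\rangle}=0\;.\]

In particular, \(\mathrm{Tr}\,\omega(t)=\mathrm{Tr}\,\omega_{0}=M\) and \(\|\varphi(t)\|_{L^{2}}=\|\varphi_{0}\|_{L^{2}}=1\) for all times, consistently with the normalizations (2.3).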
We are now ready to rigorously state our assumptions which we indicated above.
**Assumption 1** (Schrodinger initial data).: _We assume that the initial data \(\psi\in\mathscr{H}\) satisfies the following conditions._
1. _(Zero temperature)_ \(\psi\) _is a factorized state of the form_ \[\psi=\psi_{F}\otimes\psi_{B}\;. \tag{2.6}\] _Additionally, each factor satisfies the following assumptions:_
    (a) _There exists a rank-\(M\) orthogonal projection_ \(\omega_{0}=\sum_{i=1}^{M}|\phi_{i}\rangle\langle\phi_{i}|\) _on the one-particle space_ \(L^{2}(\mathbb{R}^{d})\) _such that_ \[\psi_{F}(x_{1},\ldots,x_{M})=\frac{1}{\sqrt{M!}}\det_{1\leqslant i,j\leqslant M}\bigl{[}\phi_{i}(x_{j})\bigr{]}\;. \tag{2.7}\]
    (b) _There exists a unit vector in the one-particle space_ \(\varphi_{0}\in L^{2}(\mathbb{R}^{d})\) _such that_ \[\psi_{B}(y_{1},\ldots,y_{N})=\varphi_{0}(y_{1})\cdots\varphi_{0}(y_{N})\;. \tag{2.8}\]
2. _(Semi-classical bounds) Let_ \((\omega,\varphi)\) _be the solution of the Hartree-Hartree system (_2.5_) with initial data_ \((\omega_{0},\varphi_{0})\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\times L^{2}(\mathbb{R}^{d})\)_. Then, we assume that there exists a constant_ \(C>0\) _such that for all_ \(t\in\mathbb{R}\) _there holds_ \[\|[e^{i\xi\cdot x},\omega(t)]\|_{\mathrm{Tr}}\leqslant C\exp(C|t|)\,M\hbar\,\langle\xi\rangle\;,\qquad\forall\xi\in\mathbb{R}^{d}. \tag{2.9}\]
**Remark 2.1** (Reduced density matrices).: Let us observe that under the above assumptions, one can calculate that the following relations hold at \(t=0\)
\[\gamma_{F}(0)=\omega_{0}\qquad\text{and}\qquad\gamma_{B}(0)=N\left|\varphi_{0} \rangle\langle\varphi_{0}\right|. \tag{2.10}\]
In other words, the initial data is such that the one-particle reduced density matrices are given by \(\omega_{0}\), and \(N\left|\varphi_{0}\rangle\langle\varphi_{0}\right|\), respectively.
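For the bosonic factor the computation is immediate: inserting the product state (2.8) into the definition (2.2), and using that \(\psi_{F}\) and \(\varphi_{0}\) are normalized, one finds

\[\gamma_{B}(0;y,y^{\prime})=N\,\varphi_{0}(y)\,\overline{\varphi_{0}(y^{\prime})}\int_{\mathbb{R}^{d(N-1)}}\prod_{i=1}^{N-1}|\varphi_{0}(y_{i})|^{2}\,\mathrm{d}\mathbf{y}_{N-1}=N\,\varphi_{0}(y)\,\overline{\varphi_{0}(y^{\prime})}\;,\]

which is the kernel of \(N\,|\varphi_{0}\rangle\langle\varphi_{0}|\). The fermionic identity follows from an analogous (slightly longer) computation based on the Slater determinant (2.7).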
**Remark 2.2** (Semi-classical bounds).: Let us now comment on the semi-classical bounds (2.9) that we present in Assumption 1. Two comments are in order.
_(i)_ These estimates were first considered by the authors in [9] in the derivation of the Hartree-Fock equation from interacting Fermi systems. In comparison to the present work, the scaling is chosen so that \(m_{F}=1\), \(\lambda=1/M\) and \(\hbar=1/M^{1/d}\). In this situation, the bounds (2.9) follow from the propagation-in-time along the solutions of (1.6) of the following trace estimates on the initial data
\[\|[x,\omega_{0}]\|_{\mathrm{Tr}}\leqslant CM\hbar\quad\text{and}\quad\|[i\hbar\nabla,\omega_{0}]\|_{\mathrm{Tr}}\leqslant CM\hbar. \tag{2.11}\]
From the physical point of view, the commutator estimates contained in (2.11) record _macroscopic_ or _semi-classical_ behaviour. Namely, in macroscopic units the value of the effective Planck constant \(\hbar\) is small and consequently, so are the quantum-mechanical commutators on the left hand side of (2.11). We refer the reader to the original reference for a more in-depth physical discussion. Finally, let us note that-to the authors' best knowledge-the only example in \(\mathbb{R}^{d}\) in which the bounds (2.11) have been verified consists of states describing a non-interacting system of fermions, trapped by a harmonic potential [5, Theorem 3.2].
_(ii)_ In this work, we view the bounds (2.9) as a condition that we impose on both the initial data, _and_ the scaling regime under consideration. In particular, one might ask whether the bounds (2.9) can be guaranteed by properly choosing initial data and by restricting the physical parameters. We show that this is exactly the case.
Let us further explain. First of all, one may repeat the argument carried out in [5, Theorem 3.2] without any scaling restrictions and prove that the trace estimates for the same (externally trapped) initial data take the form
\[\|[x,\omega_{0}]\|_{\mathrm{Tr}}\leqslant Cm_{F}^{-1/4}M\sqrt{\hbar M^{-1/d}} \qquad\text{and}\qquad\|[i\hbar\nabla,\omega_{0}]\|_{\mathrm{Tr}}\leqslant Cm_{ F}^{1/4}M\sqrt{\hbar M^{-1/d}} \tag{2.12}\]
where \(C\) is a constant that only depends on the strength of the trap \(V_{\mathrm{ext}}(x)=\kappa x^{2}/2\). As a side remark, we note that (2.12) with the scaling mentioned in _(i)_ recovers (2.11). Furthermore, one may adapt the proof of [9, Proposition 3.4] to show that the following bounds are propagated along the solutions of the Hartree-Hartree equation (2.5)
\[\sup_{\xi\in\mathbb{R}^{d}}\langle\xi\rangle^{-1}\,\|[e^{i\xi\cdot x},\omega(t )]\|_{\mathrm{Tr}}\leqslant C\big{(}m_{F}^{-1/4}+m_{F}^{-3/4}\big{)}e^{\big{[} \max\big{\{}1,\lambda N/m_{F}\big{\}}|t|\big{]}}M\sqrt{\hbar M^{-1/d}} \tag{2.13}\]
where we have kept track of all the physical parameters for comparison. In particular, one here obtains (2.9) provided we assume that
\[m_{F}\geqslant 1,\qquad\lambda N\leqslant m_{F}\quad\text{and}\quad M^{- \frac{1}{d}}\leqslant\hbar\;. \tag{2.14}\]
In other words, by considering externally trapped initial data satisfying (2.12), and by constraining the parameter window as in (2.14), it holds true that the solution of the Hartree-Hartree equation (2.5) verifies the semi-classical bounds contained in Assumption 1. Finally, let us mention here that-while the above discussion is important in the macroscopic regime-in the microscopic regime (_i.e._ when \(\hbar=1\)), the bounds (2.9) are automatically satisfied, independently of the initial data \(\omega_{0}\), and the constraint (2.14).
The natural topology in which convergence is expected to hold corresponds to that of trace-class operators. Our main theorem is the following result.
**Theorem 1** (The mean-field approximation).: _Assume that the interaction potential satisfies \(\int_{\mathbb{R}^{d}}\langle\xi\rangle^{2}\,|\hat{V}(\xi)|\mathrm{d}\xi<\infty\). Let us consider the following:_
* _Let_ \(\psi(t)=\exp(-itH/\hbar)\psi\) _be the wave function of the system, with initial data verifying Assumption_ 1_. Let_ \(\gamma_{F}(t)\) _and_ \(\gamma_{B}(t)\) _be the one-particle reduced density matrices, as defined in (_2.2_)._
\(\Box\)
_Let \((\omega(t),\varphi(t))\) be the solution of the Hartree-Hartree equation (2.5)._
_Additionally, assume that the scaling regime is chosen so that the following is satisfied: for all \(\ell\geqslant 1\) there exists \(k_{\ell}\geqslant 1\) such that for all physical parameters \(\lambda\), \(\hbar\), \(N\) and \(M\) there holds_
\[\frac{\lambda\sqrt{N}}{\hbar}M^{\ell}\leqslant(\hbar M)^{k_{\ell}}. \tag{2.15}\]
_Then, there exists a constant \(C>0\) such that for all \(t\in\mathbb{R}\) there holds_
\[\frac{1}{M}\|\gamma_{F}(t)-\omega(t)\|_{\mathrm{Tr}} \leqslant\frac{C}{\sqrt{M}}\mathrm{exp}\left[C\lambda\sqrt{\frac{ NM}{\hbar}}\bigg{(}1+\sqrt{\frac{\hbar M}{N}}\bigg{)}\exp|t|\right]\,, \tag{2.16}\] \[\frac{1}{N}\|\,\gamma_{B}(t)-N\,|\varphi(t)\rangle\langle\varphi (t)|\,\|_{\mathrm{Tr}} \leqslant\frac{C}{\sqrt{N}}\mathrm{exp}\left[C\lambda\sqrt{\frac{ NM}{\hbar}}\bigg{(}1+\sqrt{\frac{\hbar M}{N}}\bigg{)}\exp|t|\right]\,. \tag{2.17}\]
**Remark 2.3**.: The above theorem provides an explicit convergence rate from the many-body Schrodinger dynamics to the solution of the Hartree-Hartree system. Note that we have chosen not to fix the parameter regime in the theorem-this is in contrast to most works in the literature. The reason is that the scaling regime in which Theorem 1 provides a reasonable approximation was _not_ known by the authors at the onset of this investigation. Our interest then was not to prove an optimal convergence rate, but to actually _find_ a meaningful scaling regime.
Regarding the previous remark, Theorem 1 contains a meaningful approximation whenever the argument of the time-dependent exponential in (2.16)-(2.17) is \(\mathcal{O}(1)\) with respect to the physical parameters. Let us describe two scaling regimes that we regard as interesting.
**Microscopic regime**. If one is working in microscopic units, we may set \(\hbar=1\). One can then investigate the mean-field regime in which the number of bosons and fermions is the same. Namely
\[\lambda=\frac{1}{N}\,\qquad\hbar=1\quad\text{and}\quad N=M \tag{2.18}\]
and we also set \(m_{F}=m_{B}=1/2\), so that the kinetic coefficients \(\hbar^{2}/2m_{F}=\hbar^{2}/2m_{B}\) in (2.5) equal one. Clearly, the condition (2.15) is verified with \(k_{\ell}=\ell\). In this case, one should regard Theorem 1 as capturing the emergence of the mean-field equations
\[\begin{cases}&i\partial_{t}\omega=[-\Delta+(V*\rho_{B}),\omega]\\ &i\partial_{t}\varphi=-\Delta\varphi+(V*\rho_{F})\varphi\,\end{cases} \tag{2.19}\]
as the leading order term driving the dynamics of the Hamiltonian \(H\), for our choice of initial data. Note that in this case, the semiclassical condition imposed in (2.9) is verified immediately, independently of the structure of the initial data. However, since \(\mathrm{Tr}\omega(t)=M\), the above equation does not yield a non-trivial limit when \(M\to\infty\).
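For completeness, let us verify these claims: with \(\lambda=1/N\), \(\hbar=1\) and \(N=M\), the left-hand side of (2.15) equals \(\lambda\sqrt{N}M^{\ell}/\hbar=M^{\ell-1/2}\leqslant M^{\ell}=(\hbar M)^{\ell}\), while the argument of the exponential in (2.16)-(2.17) is uniformly bounded:

\[\lambda\sqrt{\frac{NM}{\hbar}}\bigg{(}1+\sqrt{\frac{\hbar M}{N}}\bigg{)}=\frac{1}{N}\sqrt{N^{2}}\,\big{(}1+1\big{)}=2\;.\]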
**Macroscopic regime**. In macroscopic units, the value of \(\hbar\) becomes small. As is well-known, for a system of confined fermions, the energy scale of each particle is \(\hbar^{2}M^{\frac{2}{d}}/m_{F}\).
One is then interested in the regime for which this scale is \(\mathcal{O}(1)\) - this is the so-called semiclassical limit that has been studied extensively in the literature for systems of interacting fermions. On the other hand, for bosons the energy per particle has the scale \(\hbar^{2}/m_{B}.\) We can then tune the parameters so that the _total energy_ of the system is balanced. For instance, we may look at
\[\lambda=\frac{1}{N}\,\qquad\hbar=\frac{1}{M^{\frac{1}{d}}}\,\qquad m_{B}= \hbar\,\qquad m_{F}=1\quad\text{and}\quad N=M^{1+\frac{1}{d}}. \tag{2.20}\]
It is possible to check that condition (2.15) is verified with \(k_{\ell}=\frac{1+d(2\ell-1)}{2(d-1)}\). Similarly, one may readily verify that the condition (2.14) is satisfied. This leads to a natural candidate for the initial data of the fermionic component \(\omega_{0}\) that verifies Assumption 1; see Remark 2.2 for more details.
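For the reader's convenience, let us sketch these verifications (here \(d\geqslant 2\)). Under (2.20) one computes \(\hbar M=M^{1-\frac{1}{d}}\) and \(\lambda\sqrt{N}/\hbar=M^{-\frac{d-1}{2d}}\), so that

\[\frac{\lambda\sqrt{N}}{\hbar}\,M^{\ell}=M^{\ell-\frac{d-1}{2d}}\leqslant M^{\frac{d-1}{d}k_{\ell}}=(\hbar M)^{k_{\ell}}\quad\Longleftrightarrow\quad k_{\ell}\geqslant\frac{1+d(2\ell-1)}{2(d-1)}\;.\]

As for (2.14), all three constraints hold with equality: \(m_{F}=1\), \(\lambda N=1\) and \(\hbar=M^{-1/d}\). Finally, the argument of the exponential in (2.16)-(2.17) stays bounded, since

\[\lambda\sqrt{\frac{NM}{\hbar}}\bigg{(}1+\sqrt{\frac{\hbar M}{N}}\bigg{)}=1+M^{-\frac{1}{d}}\leqslant 2\;,\]

so that, for each fixed time, the error bounds of Theorem 1 are of order \(M^{-1/2}\) and \(N^{-1/2}\), respectively.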
### The semi-classical limit
In this subsection, we adopt the macroscopic scaling regime given by the equations (2.20). Let us now motivate the upcoming semi-classical analysis of the coupled Hartree system. In what follows, we shall make the \(\hbar\)-dependence of the solution \((\omega^{\hbar},\varphi^{\hbar})\) of the coupled Hartree-Hartree equation (2.5) explicit in the notation. We start by noting that one of the main consequences of the scaling regime (2.20) is that \(\lambda M=\hbar\). Hence, the Hamiltonian for the boson field \(\varphi_{t}^{\hbar}\) is proportional to \(\hbar\), i.e. it has the form \(\hbar(-(1/2)\Delta+V*\rho_{F}^{\hbar})\)-it follows that factors of \(\hbar\) cancel out in the equation. Thus, the solution of the coupled Hartree equation can now be analyzed semi-classically, in the limit \(\hbar\downarrow 0\). Indeed, for \(t\in\mathbb{R}\) we consider the Wigner transform of the fermionic density matrix
\[f^{\hbar}(t)\equiv W^{\hbar}[\omega^{\hbar}(t)]\quad\text{where}\quad W^{\hbar}[\omega](x,p)\equiv\frac{1}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\omega\Big{(}x+\frac{y}{2},x-\frac{y}{2}\Big{)}e^{-i\frac{y\cdot p}{\hbar}}\mathrm{d}y\, \tag{2.21}\]
where \((x,p)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\). Heuristically, the pair \((f^{\hbar},\varphi^{\hbar})\) converges to a solution \((f,\varphi)\in L^{1}_{+}(\mathbb{R}^{2d})\times L^{2}(\mathbb{R}^{d})\) of
\[\begin{cases}&(\partial_{t}+p\cdot\nabla_{x}+F_{B}(t,x)\cdot\nabla_{p})f=0\\ &i\partial_{t}\varphi=-\frac{1}{2}\Delta\varphi+(V*\rho_{F})\varphi\\ &(f,\varphi)(0)=(f_{0},\varphi_{0})\in L^{1}_{+}(\mathbb{R}^{2d})\times L^{2} (\mathbb{R}^{d})\end{cases} \tag{2.22}\]
where \(F_{B}(t,x)\equiv-\int\nabla V(x-y)|\varphi(t,y)|^{2}\mathrm{d}y\) and \(\rho_{F}(t,x)\equiv\int_{\mathbb{R}^{d}}f(t,x,p)\mathrm{d}p\). We shall refer to (2.22) as the Vlasov-Hartree equation.
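Let us make the above-mentioned cancellation of \(\hbar\) explicit; it also explains why the bosonic equation in (2.22) contains no factors of \(\hbar\). Since \(m_{B}=\hbar\) and \(\lambda M=M/N=M^{-\frac{1}{d}}=\hbar\) under (2.20), the bosonic equation in (2.5) reads

\[i\hbar\partial_{t}\varphi^{\hbar}=\hbar\Big{(}-\frac{1}{2}\Delta+V*\rho_{F}^{\hbar}\Big{)}\varphi^{\hbar}\;,\qquad\text{that is,}\qquad i\partial_{t}\varphi^{\hbar}=-\frac{1}{2}\Delta\varphi^{\hbar}+(V*\rho_{F}^{\hbar})\varphi^{\hbar}\;,\]

which is \(\hbar\)-independent in form; only the fermionic component then requires a genuinely semi-classical analysis.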
**Distances**. As we have mentioned previously, we are interested in the case in which the initial system is at zero temperature. In this situation, Thomas-Fermi theory suggests that one cannot expect the initial data of the classical fermion subsystem to have much regularity (see for reference [47]). In what follows, we introduce the distances which we will use throughout this article. In particular, they will be necessary in our analysis of convergence to the Vlasov-Hartree system in this context of "low regularity".
* _Wasserstein distance_. Given \(n\in\mathbb{N}\), we denote the \(n\)-th Wasserstein distance between probability measures \(\mu,\nu\in\mathscr{P}_{n}(\mathbb{R}^{2d})\) by
\[W_{n}(\mu,\nu)\equiv\Big{(}\inf_{\pi}\int_{\mathbb{R}^{2d}\times\mathbb{R}^{2d}}| z-z^{\prime}|^{n}\pi(\mathrm{d}z\otimes\mathrm{d}z^{\prime})\Big{)}^{\frac{1}{n}} \tag{2.23}\]
where the infimum is taken over all couplings of \(\mu\) and \(\nu\), i.e. probability measures \(\pi\in\mathscr{P}(\mathbb{R}^{2d}\times\mathbb{R}^{2d})\) with first marginal \(\mu\), and second marginal \(\nu\).
* _Fourier-based norms_. Given \(s\in\mathbb{R}\), and \(g:\mathbb{R}^{2d}\to\mathbb{C}\) we introduce the following Fourier-based norm
\[|g|_{s}\equiv\sup_{\zeta\in\mathbb{R}^{2d}}(1+|\zeta|)^{-s}\ |\hat{g}(\zeta)|. \tag{2.24}\]
In applications, we take \(s\geqslant 0\). Hence, we also regard \(|\cdot|_{s}\) as a negative Sobolev norm.
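Two elementary observations may help to situate these definitions. First, if \(\mu=\delta_{z_{0}}\) and \(\nu=\delta_{z_{1}}\) are point masses, the only coupling is \(\pi=\delta_{z_{0}}\otimes\delta_{z_{1}}\), whence

\[W_{n}(\delta_{z_{0}},\delta_{z_{1}})=|z_{0}-z_{1}|\;,\]

so \(W_{n}\) measures how far mass is displaced, in contrast with, say, the total variation distance, which is maximal as soon as \(z_{0}\neq z_{1}\). Second, up to the normalization of the Fourier transform, \(\sup_{\zeta}|\hat{g}(\zeta)|\lesssim\|g\|_{L^{1}}\), so that for every \(s\geqslant 0\)

\[|g|_{s}\leqslant\sup_{\zeta\in\mathbb{R}^{2d}}|\hat{g}(\zeta)|\lesssim\|g\|_{L^{1}}\;;\]

in particular, \(L^{1}\)-convergence implies convergence in \(|\cdot|_{s}\), a point we come back to in Remark 2.5.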
Our assumption on the initial data now reads as follows.
**Assumption 2** (Hartree initial data).: _The pair \((\omega_{0}^{\hbar},\varphi_{0}^{\hbar})\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^ {d}))\times L^{2}(\mathbb{R}^{d})\) satisfies the following conditions._
1. \(\omega_{0}^{\hbar}\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\) _satisfies_ \(0\leqslant\omega_{0}^{\hbar}=(\omega_{0}^{\hbar})^{*}\leqslant 1\)_,_ \(\mathrm{Tr}\omega_{0}^{\hbar}=M\) _and_ \(\mathrm{Tr}\omega_{0}^{\hbar}(-\Delta+x^{2})<\infty\)_. Further, we assume that there exists a real-valued_ \(f_{0}\in L^{1}(\mathbb{R}^{2d})\) _such that:_
    (a) \(0\leqslant f_{0}(x,p)\leqslant 1\) _and_ \(\int_{\mathbb{R}^{2d}}f_{0}(x,p)\mathrm{d}x\mathrm{d}p=1\)_._
    (b) _There are finite second moments:_ \(f_{0}\in\mathscr{P}_{2}(\mathbb{R}^{2d})\)_._
    (c) \(\lim_{\hbar\downarrow 0}|f_{0}-f_{0}^{\hbar}|_{1}=0\)_, where_ \(f_{0}^{\hbar}=W^{\hbar}[\omega_{0}^{\hbar}]\)_._
2. _There exists_ \(\varphi_{0}\in L^{2}(\mathbb{R}^{d})\) _with_ \(\|\varphi_{0}\|_{L^{2}}=1\) _such that_ \(\lim_{\hbar\downarrow 0}\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}=0\)_._
**Remark 2.4** (Low regularity).: Let us note that in Assumption 2 there are _no_ regularity requirements on the limits of the sequence of initial data. Of course, this comes with a price. First, we shall need the interaction potential to be at least \(V\in C^{1,1}(\mathbb{R}^{d};\mathbb{R})\). Second, the metric that we use to measure the distances between fermion densities is rather weak. Namely, it involves testing over functions \(h(x,p)\) for which both the integrals \(\int_{\mathbb{R}^{2d}}\left\langle\zeta\right\rangle^{2}|\hat{h}(\zeta)|^{2} \mathrm{d}\zeta\) and \(\int_{\mathbb{R}^{2d}}|\zeta|\,|\hat{h}(\zeta)|\mathrm{d}\zeta\) are finite. Third-compared to similar results in the literature-we need two moments in phase space, rather than only one.
**Remark 2.5** (Fermion mode of convergence).: In Assumption 2 we require that \(f_{0}^{\hbar}\to f_{0}\) with respect to the negative Sobolev norm \(|\cdot|_{1}\). Two comments are in order.
_(i)_ This assumption on the initial data can be verified for examples of non-interacting Fermi gases, that arise as minimizers of variational problems in the presence of an external trap \(V_{\mathrm{ext}}(x)\). More precisely, let \(\omega_{0}^{\hbar}\) be the one-particle reduced density matrix of an approximate ground state \(\psi_{F}^{\hbar}\) of the minimization problem
\[E(M)\,=\,\inf_{\bigwedge_{i=1}^{M}L^{2}(\mathbb{R}^{d})}\,\sigma\bigg{[}\sum_{ i=1}^{M}-\hbar^{2}\Delta_{x_{i}}+V_{\mathrm{ext}}(x_{i})\bigg{]}\,\qquad\hbar=M^{-1/d}. \tag{2.25}\]
Without loss of generality, one may assume that \(\omega_{0}^{\hbar}\) is an approximate minimizer of the associated Hartree-Fock problem
\[E_{HF}(M)=\inf\Big{\{}\mathrm{Tr}(\omega[-\hbar^{2}\Delta+V_{\mathrm{ext}}(x)])\ :\ \omega=\omega^{2}=\omega^{*},\ \mathrm{Tr}\omega=M\Big{\}}. \tag{2.26}\]
Thus, \(\omega_{0}^{\hbar}\) can be assumed to be an orthogonal projection, _i.e._\(\omega_{0}^{\hbar}=(\omega_{0}^{\hbar})^{2}\), which is equivalent to \(\psi_{F}^{\hbar}\) being a Slater determinant. It has been proven in [47, Theorem 1.2] that, as \(\hbar\downarrow 0\) (and, up to extraction of a subsequence), the Wigner transform \(f_{0}^{\hbar}=W^{\hbar}[\omega_{0}^{\hbar}]\) converges in a weak sense to the function \(f_{0}(x,p)=\mathds{1}(p^{2}\leqslant C_{TF}\rho(x)^{2/d})\). Here, \(C_{TF}\) is the Thomas-Fermi constant, and \(\rho\) is the minimizer of the associated Thomas-Fermi problem
\[\mathcal{E}(\rho)=\frac{d}{d+2}\int_{\mathbb{R}^{d}}\rho(x)^{1+\frac{2}{d}} \mathrm{d}x+\int_{\mathbb{R}^{d}}V_{\mathrm{ext}}(x)\rho(x)\mathrm{d}x \tag{2.27}\]
with constraints \(\rho(x)\geqslant 0\), \(\int\rho(x)dx=1\) and \(\rho\in L^{1}\cap L^{1+\frac{2}{d}}\). In particular, the \(\hbar\downarrow 0\) convergence can be shown to hold with respect to the negative Sobolev norm (2.24) as well. In other words, there holds \(\lim_{\hbar\downarrow 0}|f_{0}^{\hbar}-f_{0}|_{1}=0\). This is proved by the first author in upcoming work [13].
_(ii)_ The above discussion should be compared with the \(L^{1}\)-norm convergence considered in [8, Theorem 2.5] for the initial data, in the context of interacting Fermi gases. While their conclusion is strictly stronger-that is, a stronger mode of convergence-to the authors' best knowledge the only examples in \(\mathbb{R}^{d}\) for which the \(L^{1}\) convergence has been verified correspond to coherent states. Unfortunately, these are not examples of zero temperature states (_i.e._ orthogonal projections). We believe that there is value in our approach since-as the examples considered in _(i)_ arise from orthogonal projections-we are able to put together Theorem 1 and our next result. In particular, with this approach we obtain a _quantitative_ convergence from the Schrodinger to the Vlasov-Hartree dynamics, in the situation of low regularity or-put differently-the zero temperature situation.
Our main result concerning the semi-classical limit of the coupled Hartree equations is the following theorem.
**Theorem 2** (The semi-classical limit).: _Assume that the interaction potential satisfies \(\int_{\mathbb{R}^{d}}\left\langle\xi\right\rangle^{2}|\hat{V}(\xi)|\mathrm{d} \xi<\infty.\) Let us consider the following:_
* _Let_ \((\omega^{\hbar},\varphi^{\hbar})\) _be the solution of the Hartree-Hartree system (_2.5_), with initial data_ \((\omega_{0}^{\hbar},\varphi_{0}^{\hbar})\) _satisfying Assumption_ 2_. Denote by_ \(f^{\hbar}(t)=W^{\hbar}[\omega^{\hbar}(t)]\) _its Wigner transform._
* _Let_ \((f,\varphi)\) _be the solution to the Vlasov-Hartree system (_2.22_), with initial data_ \((f_{0},\varphi_{0})\)_._
_Then, there exists \(C>0\) such that for all times \(t\in\mathbb{R}\) and test functions \(h:\mathbb{R}^{2d}\to\mathbb{C}\), the following inequalities hold true_
\[|\left\langle h,(f_{t}-f_{t}^{\hbar})\right\rangle| \leqslant C_{2}(t)\,\|\left\langle\zeta\right\rangle\hat{h}\|_{L^{1}}\Big{(}|f_{0}^{\hbar}-f_{0}|_{1}+\hbar\Big{)}+C_{1}(t)\,\|\,|\zeta|\,\hat{h}\|_{L^{2}}\Big{(}\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}+\hbar^{1/2}\Big{)}\] \[\|\varphi_{t}-\varphi_{t}^{\hbar}\|_{L^{2}} \leqslant C_{2}(t)\Big{(}|f_{0}^{\hbar}-f_{0}|_{1}+\hbar\Big{)}+C_{1}(t)\Big{(}\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}+\hbar^{1/2}\Big{)}. \tag{2.28}\]
_Here, we are denoting \(C_{1}(t)=C\exp(Ct^{2})\) and \(C_{2}(t)=C\exp(C\exp C|t|)\)._
**Remark 2.6** (Convergence rates).: The above result gives an explicit convergence rate from the Hartree-Hartree to the Vlasov-Hartree dynamics. Of course, this is not the optimal convergence rate in \(\hbar\), which we believe should be \(\mathcal{O}(\hbar)\) on the right hand side. In this work, we have not tried to optimize this rate. Indeed, our main goal was to _identify_ the leading order equations that drive the effective dynamics of the Bose-Fermi mixture, which Theorem 2 appropriately does. In a similar spirit, we have not tried to optimize the growth-in-time of the constants involved in our estimates.
**Remark 2.7** (A variational norm).: Here we have formulated our theorem in terms of test functions. Alternatively, it can be formulated in terms of the norm
\[\|f\|\equiv\sup\big{\{}\,\langle h,f\rangle:h\in\mathscr{S}(\mathbb{R}^{2d}),\|\,\langle\zeta\rangle\,\hat{h}\|_{L^{1}}\leqslant 1\ \text{ and }\ \|\,|\zeta|\,\hat{h}\|_{L^{2}}\leqslant 1\big{\}}, \tag{2.29}\]
which is strictly weaker than the norms \(\|\cdot\|_{\dot{H}^{-1}}\) and \(|\cdot|_{1}\).
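To see this, note that, up to the normalization of the Fourier transform, for any admissible test function \(h\) one has

\[|\langle h,f\rangle|\leqslant\int_{\mathbb{R}^{2d}}\langle\zeta\rangle|\hat{h}(\zeta)|\;\langle\zeta\rangle^{-1}|\hat{f}(\zeta)|\,\mathrm{d}\zeta\lesssim\|\langle\zeta\rangle\hat{h}\|_{L^{1}}\,|f|_{1}\qquad\text{and}\qquad|\langle h,f\rangle|\leqslant\|\,|\zeta|\,\hat{h}\|_{L^{2}}\,\|f\|_{\dot{H}^{-1}}\;,\]

so that \(\|f\|\lesssim\min\big{(}|f|_{1},\|f\|_{\dot{H}^{-1}}\big{)}\).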
### Strategy of the proofs
Let us outline the proofs of our main results, Theorems 1 and 2.
The proof of Theorem 1 consists in the study of an appropriate fluctuation dynamics. For gases of interacting bosons the approach was first carried out in [55], whereas for gases of interacting fermions the approach was employed in [9]. The difficulty of tackling the Bose-Fermi mixture lies in how to properly combine these two approaches. In the present paper, we adapt the approach of studying fluctuation dynamics for the problem at hand. Namely, we introduce in Section 3 the formalism of Second Quantization on Fock space \(\mathscr{F}\). In this formalism, coherent states describing Bose-Einstein condensates are parametrized by a Weyl operator \(\mathcal{W}[\sqrt{N}\varphi(t)]\), whereas degenerate Fermi gases are implemented by a particle-hole transformation \(\mathcal{R}[\omega(t)]\); see Sections 3.1.2 and 3.2.2 for more details. In Section 4 we then study the dynamics of _fluctuations_ around the tensor product of these states. Roughly speaking, the problem is then reduced to estimating the "number of excitations" outside of \((\varphi(t),\omega(t))\). We implement this point of view by introducing a new unitary transformation on \(\mathscr{F}\), denoted by \(\mathcal{U}(t,s)\) and defined in (4.3). Understanding this transformation is fundamental to our analysis, and leads to the number estimates contained in Theorem 3. The proof of these estimates is based on the analysis of its infinitesimal generator, which has the form (see Lemma 4.1 for details)
\[\mathcal{L}(t)=\mathrm{d}\Gamma_{F}[h_{F}(t)]\otimes\mathds{1}+\mathds{1} \otimes\mathrm{d}\Gamma_{B}[h_{B}(t)]+\lambda\sqrt{N}\mathcal{L}_{2,1}(t)+ \lambda\mathcal{L}_{2,2}(t) \tag{2.30}\]
Here, the difficulty lies in controlling the terms \(\mathcal{L}_{2,1}(t)\) and \(\mathcal{L}_{2,2}(t)\), which do not commute with particle number operators and can potentially increase the number of fluctuations.
The proof of Theorem 2 is essentially divided in two steps. First, we rely on techniques developed in [8] to understand the stability of the Hartree-Hartree equation (2.5) with respect to the metric in \(\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\) that is induced by the norm \(|\cdot|_{s}\), defined in (2.24). Second, using some recently developed tools from Quantum Optimal Transportation, we are able to show that the convergence from the Hartree-Hartree (2.5) to the Vlasov-Hartree (2.22) dynamics can be controlled in the negative Sobolev space \(\dot{H}^{-1}\). These tools include the introduction in [36] of the quantum analogue of the classical Wasserstein distance
between two probability measures, which has since been further developed and applied to the analysis of single-species many-particle systems in a series of papers-see for instance [37, 38, 39, 46]. One of the main advantages of these techniques is the fact that they require _no_ regularity on the initial data under consideration. This is compatible with Assumption 2, in which we assume our initial data corresponds to a zero temperature state-for fermionic systems, the \(\hbar\downarrow 0\) limit of the Wigner function of these states is expected to be of Thomas-Fermi type, which fails for instance to be in \(W^{1,1}\).
### Physical discussion of our results
Let us recall that we have chosen to work with dimensionless variables. Thus, the physical parameters \(\hbar\), \(m_{F}\), \(m_{B}\) and \(\lambda\) that appear in the Hamiltonian (1.2) correspond to the numerical values that the original parameters take, when measured with respect to some chosen scales of length \(\ell_{0}\), time \(\tau_{0}\) and mass \(m_{0}\). In other words, denoting by \(E_{0}=m_{0}\ell_{0}^{2}\tau_{0}^{-2}\) the characteristic energy scale of the system, we have
\[\hbar=\frac{\hbar_{0}}{E_{0}\tau_{0}},\qquad m_{B}=\frac{m_{B,0}}{m_{0}}, \qquad m_{F}=\frac{m_{F,0}}{m_{0}},\quad\text{and}\quad\lambda=\frac{\lambda_ {0}}{E_{0}} \tag{2.31}\]
where \(\hbar_{0}\sim 10^{-34}\,\mathsf{kg}\,\mathsf{m}^{2}\,\mathsf{s}^{-1}\), \(m_{F,0}\) and \(m_{B,0}\) are the original masses of fermions and bosons, and \(\lambda_{0}\) is the original interaction strength. Recall that in the scaling regime (2.20) we have considered \(m_{F}=1\) and \(m_{B}=\hbar\). In particular, let us set \(m_{0}=m_{F,0}\) as our mass scale. Hence, \(m_{F}=1\) and we find that \(m_{B}=\hbar\) is equivalent to the dimensionless equation
\[\frac{m_{B,0}}{m_{F,0}}=\frac{\hbar_{0}}{m_{F,0}\ell_{0}^{2}\tau_{0}^{-1}}. \tag{2.32}\]
Let us now comment on the physical validity of this regime. Amongst the lightest Bose atoms that have been used to create Bose-Einstein condensates lies Helium-4, with a mass of \(\sim 4\,\mathsf{Da}\). On the other hand, Ytterbium-171 has been cooled down to form degenerate Fermi gases, and has a mass of \(171\,\mathsf{Da}\sim 3\times 10^{-25}\,\mathsf{kg}\). The ratio between these two masses then gives a value \(m_{B}\sim 2\times 10^{-2}\). In particular, an experiment that is confined to length scales of \(\ell_{0}\sim 10\,\mu\mathsf{m}\) and observed for times \(\tau_{0}\sim 10^{-2}\,\mathsf{s}\) gives a value for the effective Planck's constant of \(\hbar\sim 3\times 10^{-2}\)-comparable to the ratio of the masses of the Bose-Fermi mixture.
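The last figure follows from (2.31) by direct substitution:

\[\hbar=\frac{\hbar_{0}}{m_{F,0}\,\ell_{0}^{2}\,\tau_{0}^{-1}}\sim\frac{10^{-34}\,\mathsf{kg}\,\mathsf{m}^{2}\,\mathsf{s}^{-1}}{(3\times 10^{-25}\,\mathsf{kg})\,(10^{-5}\,\mathsf{m})^{2}\,(10^{-2}\,\mathsf{s})^{-1}}\sim 3\times 10^{-2}\;.\]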
We also note that some small mass limits have been considered in theoretical physics in the search for evidence of ultralight bosons in cosmology. For example, in [50], the authors consider the Schrodinger-Poisson system, and compare it to the Vlasov-Poisson equation in the classical limit with \(\hbar/m_{B}\to 0\). In their analysis, they consider bosonic masses \(m_{B,0}\) in the range of \(10^{-22}\mathsf{eV}/\mathsf{c}^{2}\sim 10^{-58}\mathsf{kg}\), corresponding to axion dark matter.
## 3. Second Quantization I: Preliminaries
It is convenient to study the Hamiltonian (1.2) in the second quantization formalism. Here, we allow the number of particles to fluctuate and thus consider the Hilbert space composed of the corresponding Fock spaces. Namely, we let
\[\mathscr{F}\equiv\mathscr{F}_{F}\otimes\mathscr{F}_{B}\,\quad\text{where}\quad \mathscr{F}_{F}\equiv\mathbb{C}\oplus\bigoplus_{n=1}^{\infty}L_{a}^{2}( \mathbb{R}^{dn})\quad\text{and}\quad\mathscr{F}_{B}\equiv\mathbb{C}\oplus \bigoplus_{n=1}^{\infty}L_{s}^{2}(\mathbb{R}^{dn}) \tag{3.1}\]
are the fermionic and bosonic Fock spaces, respectively. Here, \(L^{2}_{a}\) and \(L^{2}_{s}\) correspond to \(L^{2}\)-functions that are antisymmetric and symmetric, respectively, with respect to permutations of the particle variables. On \(\mathscr{F}_{F}\) we introduce _fermionic_ creation- and annihilation operators \(a_{x}\) and \(a_{x}^{*}\) as the operator-valued distributions that satisfy the Canonical Anticommutation Relations (CAR)
\[\{a_{x},a_{x^{\prime}}^{*}\}=\delta(x-x^{\prime})\qquad\text{and}\qquad\{a_{x},a _{x^{\prime}}\}=\{a_{x}^{*},a_{x^{\prime}}^{*}\}=0,\quad x,x^{\prime}\in\mathbb{ R}^{d}\,, \tag{3.2}\]
where \(\{\cdot,\cdot\}\) is the anticommutator bracket. Similarly, on \(\mathscr{F}_{B}\) we introduce _bosonic_ creation- and annihilation operators \(b_{y}\) and \(b_{y}^{*}\) as the operator-valued distributions that satisfy the Canonical Commutation Relations (CCR)
\[[b_{y},b_{y^{\prime}}^{*}]=\delta(y-y^{\prime})\qquad\text{and}\qquad[b_{y},b_{ y^{\prime}}]=[b_{y}^{*},b_{y^{\prime}}^{*}]=0,\quad y,y^{\prime}\in\mathbb{R}^{d}\,, \tag{3.3}\]
where now \([\cdot,\cdot]\) is the commutator bracket. We shall denote by \(\Omega_{F}=(1,\mathbf{0})\) and \(\Omega_{B}=(1,\mathbf{0})\) the vacuum vector in each space, and by \(\Omega\equiv\Omega_{F}\otimes\Omega_{B}\) the vacuum state of the combined system.
In this setting, the many-particle Hamiltonian introduced in the previous section can be written in terms of creation- and annihilation- operators in the following form
\[\mathcal{H}=\frac{\hbar^{2}}{2m_{F}}\int_{\mathbb{R}^{d}}a_{x}^{*}(-\Delta_{x })a_{x}\mathrm{d}x+\frac{\hbar^{2}}{2m_{B}}\int_{\mathbb{R}^{d}}b_{y}^{*}(- \Delta_{y})b_{y}\mathrm{d}y+\lambda\int_{\mathbb{R}^{2d}}V(x-y)\,a_{x}^{*}a_{ x}b_{y}^{*}b_{y}\,\mathrm{d}x\mathrm{d}y\;, \tag{3.4}\]
where we do not display explicitly the tensor product symbols. As for the dynamics, we introduce the time evolution of the second quantized system as
\[\Psi(t)\equiv\exp\big{(}-it\mathcal{H}/\hbar\big{)}\Psi(0)\,\qquad\forall t\in \mathbb{R}. \tag{3.5}\]
Since each term in the Hamiltonian \(\mathcal{H}\) contains as many creation as annihilation operators of each species, \(\mathcal{H}\) commutes with the _fermionic_ and _bosonic_ number operators
\[\mathcal{N}_{F}\equiv\int_{\mathbb{R}^{d}}a_{x}^{*}a_{x}\mathrm{d}x\qquad \text{and}\qquad\mathcal{N}_{B}\equiv\int_{\mathbb{R}^{d}}b_{y}^{*}b_{y} \mathrm{d}y. \tag{3.6}\]
Consequently, if \(\Psi(0)^{(n,m)}=\delta_{n,N}\delta_{m,M}\psi(0)\), then for all \(t\in\mathbb{R}\) it holds that
\[\Psi(t)^{(n,m)}=\delta_{n,N}\delta_{m,M}\psi(t)\, \tag{3.7}\]
where \(\psi(t)\) is the state corresponding to the \((N+M)\)-particle system, defined in (2.1). Most importantly, one may verify that the following relations hold true for the one-particle reduced density matrices
\[\gamma_{F}(t;x_{1},x_{2})=\big{\langle}\Psi(t),a_{x_{2}}^{*}a_{x_{1}}\otimes\mathds{1}\,\Psi(t)\big{\rangle}_{\mathscr{F}}\quad\text{and}\quad\gamma_{B}(t;y_{1},y_{2})=\big{\langle}\Psi(t),\mathds{1}\otimes b_{y_{2}}^{*}b_{y_{1}}\Psi(t)\big{\rangle}_{\mathscr{F}}. \tag{3.8}\]
The equations given in (3.8) are the starting point in the proof of Theorem 1.
In the rest of this section, we introduce preliminaries that we will need to prove Theorem 1. Namely, in Subsections 3.1 and 3.2 we give a more detailed account of the second quantization formalism for both fermions and bosons. Our goal here is not to be thorough, but to collect basic results and fix the notation that will be used throughout the article. The reader is referred to the book [11] and the lecture notes [57] for more details.
### Fermions
Throughout this subsection, we will write the Fermionic Fock space \(\mathscr{F}_{F}\) as follows
\[\mathscr{F}_{F}=\bigoplus_{n=0}^{\infty}\mathscr{F}_{F}^{(n)}\quad\text{where} \quad\mathscr{F}_{F}^{(0)}\equiv\mathbb{C}\quad\text{and}\quad\mathscr{F}_{F}^ {(n)}\equiv L_{a}^{2}(\mathbb{R}^{dn}),\ \forall n\geqslant 1, \tag{3.9}\]
where \(L_{a}^{2}(\mathbb{R}^{dn})\) corresponds to the space of \(L^{2}\) functions that are antisymmetric with respect to permutations of the particle position variables. The space \(\mathscr{F}_{F}\) becomes a Hilbert space when endowed with the inner product
\[\langle\Psi_{1},\Psi_{2}\rangle_{\mathscr{F}_{F}}\equiv\sum_{n=0}^{\infty} \langle\,\Psi_{1}^{(n)},\Psi_{2}^{(n)}\,\rangle_{\mathscr{F}_{F}^{(n)}}\,\qquad \forall\Psi_{1},\Psi_{2}\in\mathscr{F}_{F}. \tag{3.10}\]
On the Fock space \(\mathscr{F}_{F}\) one introduces the smeared-out creation- and annihilation operators as follows. Given \(f\in L^{2}(\mathbb{R}^{d})\), we let \(a^{*}(f)\) and \(a(f)\) be defined for \(\Psi\in\mathscr{F}_{F}\) as
\[\big{(}a^{*}(f)\Psi\big{)}^{(n)}(x_{1},\dots,x_{n}) \equiv\frac{1}{\sqrt{n}}\sum_{i=1}^{n}(-1)^{i-1}f(x_{i})\Psi^{(n-1)} (x_{1},\dots,x_{i-1},x_{i+1},\dots,x_{n})\, \tag{3.11}\] \[\big{(}a(f)\Psi\big{)}^{(n)}(x_{1},\dots,x_{n}) \equiv\sqrt{n+1}\int_{\mathbb{R}^{d}}\overline{f(x)}\Psi^{(n+1)}(x,x_{1},\dots,x_{n})\mathrm{d}x. \tag{3.12}\]
In particular, they satisfy the following version of the CAR
\[\{a(f),a^{*}(g)\}=\langle f,g\rangle_{L^{2}}\quad\text{and}\quad\{a^{\pm}(f), a^{\pm}(g)\}=0 \tag{3.13}\]
where we recall that \(\{\cdot,\cdot\}\) stands for the anticommutator. It follows easily from the CAR that these are bounded operators, with norms
\[\|a^{*}(f)\|_{B(\mathscr{F}_{F})}=\|a(f)\|_{B(\mathscr{F}_{F})}=\|f\|_{L^{2}}. \tag{3.14}\]
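As a concrete sanity check of (3.13) and (3.14), the following small Python sketch (our own; an \(m\)-mode discretization stands in for \(L^{2}(\mathbb{R}^{d})\)) builds \(a(f)\) and \(a^{*}(f)\) out of Jordan-Wigner matrices and verifies the anticommutation relations and the norm identity for random \(f\) and \(g\).

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                                      # number of discretized modes
sm, Z, I2 = np.array([[0., 1.], [0., 0.]]), np.diag([1., -1.]), np.eye(2)

def mode(j):
    """Jordan-Wigner annihilation operator for mode j on (C^2)^{(x) m}."""
    ops = [Z] * j + [sm] + [I2] * (m - j - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

modes = [mode(j) for j in range(m)]

def a(f):                                  # a(f) = sum_j conj(f_j) a_j
    return sum(np.conj(fj) * aj for fj, aj in zip(f, modes))

def a_dag(f):                              # a*(f) = sum_j f_j a_j^*
    return sum(fj * aj.conj().T for fj, aj in zip(f, modes))

f = rng.normal(size=m) + 1j * rng.normal(size=m)
g = rng.normal(size=m) + 1j * rng.normal(size=m)

anti = a(f) @ a_dag(g) + a_dag(g) @ a(f)   # {a(f), a*(g)}
print(np.allclose(anti, np.vdot(f, g) * np.eye(2 ** m)))           # True: (3.13)
print(np.allclose(a(f) @ a(g) + a(g) @ a(f), 0))                   # True: (3.13)
print(np.isclose(np.linalg.norm(a_dag(f), 2), np.linalg.norm(f)))  # True: (3.14)
```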
Let us finally mention that the connection with the operator-valued distributions \(a^{*}_{x}\) and \(a_{x}\) is by means of the formulae
\[a^{*}(f)=\int_{\mathbb{R}^{d}}f(x)a^{*}_{x}\mathrm{d}x\qquad\text{and}\qquad a (f)=\int_{\mathbb{R}^{d}}\overline{f(x)}a_{x}\mathrm{d}x\,\qquad f\in L^{2}(\mathbb{R}^{d}). \tag{3.15}\]
#### 3.1.1. Fermionic operators on \(\mathscr{F}_{F}\)
Given a closed linear operator \(\mathcal{O}:\mathcal{D}(\mathcal{O})\subset L^{2}(\mathbb{R}^{d})\to L^{2}( \mathbb{R}^{d})\), we consider its second quantization \(\mathrm{d}\Gamma_{F}[\mathcal{O}]\) as the diagonal operator on \(\mathscr{F}_{F}\), defined as
\[(\mathrm{d}\Gamma_{F}[\mathcal{O}])^{(n)}=\sum_{i=1}^{n}\mathbb{1}^{i-1} \otimes\mathcal{O}\otimes\mathbb{1}^{n-i},\ n\geqslant 1\qquad\text{and} \qquad(\mathrm{d}\Gamma_{F}[\mathcal{O}])^{(0)}=0\, \tag{3.16}\]
initially on tensor products of elements of \(\mathcal{D}(\mathcal{O})\), and then closed. In most cases of interest \(\mathcal{O}\) is bounded (or even trace-class), so that \(\mathrm{d}\Gamma_{F}[\mathcal{O}]\) is well-defined on \(\mathcal{D}(\mathcal{N}_{F})\); the only exception will be the Laplacian \(\mathcal{O}=-\Delta\), in which case \((\hbar^{2}/2m_{F})\,\mathrm{d}\Gamma_{F}[-\Delta]\) is the kinetic energy of the system.
The reader should be aware that, at least formally, if \(\mathcal{O}\) has an operator kernel \(\mathcal{O}(x,x^{\prime})\), then one may write in terms of creation- and annihilation operators
\[\mathrm{d}\Gamma_{F}[\mathcal{O}]=\int_{\mathbb{R}^{2d}}\mathcal{O}(x,x^{\prime} )a_{x^{\prime}}^{*}a_{x}\mathrm{d}x\mathrm{d}x^{\prime}. \tag{3.17}\]
In this context, one of the most important observables in second quantization is the fermionic number operator. It is defined as the second quantization of the identity operator on \(L^{2}(\mathbb{R}^{d})\), which has the distributional kernel \(\mathds{1}(x,x^{\prime})=\delta(x-x^{\prime})\). Namely,
\[\mathcal{N}_{F}=\bigoplus_{n=0}^{\infty}n=\mathrm{d}\Gamma_{F}[\mathds{1}]= \int_{\mathbb{R}^{d}}a_{x}^{*}a_{x}\mathrm{d}x. \tag{3.18}\]
Let us now collect some basic results concerning estimates for the second quantization of operators in fermionic Fock space in the following lemma. For a proof, we refer the reader to [9, Lemma 3.1].
**Lemma 3.1** (Estimates for fermionic operators).: _Let \(\mathcal{O}:L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})\) be a bounded operator. Then, the following holds true._
1. _For all_ \(\Psi,\Phi\in\mathcal{D}(\mathcal{N}_{F})\)__ \[\|\mathrm{d}\Gamma_{F}(\mathcal{O})\Psi\|_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{B(L^{2})}\|\mathcal{N}_{F}\Psi\|_{ \mathscr{F}_{F}}\,\] (3.19) \[|\left\langle\Psi,\mathrm{d}\Gamma_{F}(\mathcal{O})\Phi\right\rangle _{\mathscr{F}_{F}}| \leqslant\|\mathcal{O}\|_{B(L^{2})}\|\mathcal{N}_{F}^{\frac{1}{2}} \Psi\|_{\mathscr{F}_{F}}\|\mathcal{N}_{F}^{\frac{1}{2}}\Phi\|_{\mathscr{F}_{ F}}\.\] (3.20)
2. _If_ \(\mathcal{O}\) _is Hilbert-Schmidt with kernel_ \(\mathcal{O}(x,y)\)_, then for all_ \(\Psi\in\mathcal{D}(\mathcal{N}_{F}^{\frac{1}{2}})\)__ \[\|\mathrm{d}\Gamma_{F}(\mathcal{O})\Psi\|_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{HS}\|\mathcal{N}_{F}^{\frac{1}{2}}\Psi\|_ {\mathscr{F}_{F}}\] (3.21) \[\Big{\|}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathcal{O}(x_{1},x_{2 })a_{x_{1}}a_{x_{2}}\mathrm{d}x_{1}\mathrm{d}x_{2}\,\Psi\Big{\|}_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{HS}\|\mathcal{N}_{F}^{\frac{1}{2}}\Psi\|_ {\mathscr{F}_{F}}\] (3.22) \[\Big{\|}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathcal{O}(x_{1},x_{2 })a_{x_{1}}^{*}a_{x_{2}}^{*}\mathrm{d}x_{1}\mathrm{d}x_{2}\,\Psi\Big{\|}_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{HS}\|(\mathcal{N}_{F}+\mathds{1})^{\frac{1}{2}} \Psi\|_{\mathscr{F}_{F}}\.\] (3.23)
3. _If_ \(\mathcal{O}\) _is trace-class with kernel_ \(\mathcal{O}(x,y)\)_, then for all_ \(\Psi\in\mathscr{F}_{F}\)__ \[\|\mathrm{d}\Gamma_{F}(\mathcal{O})\Psi\|_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{Tr}\|\Psi\|_{\mathscr{F}_{F}}\] (3.24) \[\Big{\|}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathcal{O}(x_{1},x_{2 })a_{x_{1}}a_{x_{2}}\mathrm{d}x_{1}\mathrm{d}x_{2}\,\Psi\Big{\|}_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{Tr}\|\Psi\|_{\mathscr{F}_{F}}\] (3.25) \[\Big{\|}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\mathcal{O}(x_{1},x_{2 })a_{x_{1}}^{*}a_{x_{2}}^{*}\mathrm{d}x_{1}\mathrm{d}x_{2}\,\Psi\Big{\|}_{\mathscr{F}_{F}} \leqslant\|\mathcal{O}\|_{Tr}\|\Psi\|_{\mathscr{F}_{F}}\.\] (3.26)
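The first bounds of the lemma are easy to test on the finite-mode truncation used in the previous sketch. The snippet below (again our own, self-contained illustration) samples a random bounded \(\mathcal{O}\) and a random Fock space vector and verifies (3.19) and (3.21) numerically; the printed ratios should be at most one.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
sm, Z, I2 = np.array([[0., 1.], [0., 0.]]), np.diag([1., -1.]), np.eye(2)

def mode(j):                              # Jordan-Wigner annihilator for mode j
    ops = [Z] * j + [sm] + [I2] * (m - j - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

A = [mode(j) for j in range(m)]
N = sum(a.conj().T @ a for a in A)        # the number operator N_F (diagonal)

O = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
dG = sum(O[i, j] * A[i].conj().T @ A[j] for i in range(m) for j in range(m))

psi = rng.normal(size=2 ** m) + 1j * rng.normal(size=2 ** m)

op_norm = np.linalg.norm(O, 2)            # ||O||_{B(L^2)}
hs_norm = np.linalg.norm(O, 'fro')        # ||O||_{HS}
Nhalf = np.diag(np.sqrt(np.real(np.diag(N))))

print(np.linalg.norm(dG @ psi) / (op_norm * np.linalg.norm(N @ psi)))      # <= 1: (3.19)
print(np.linalg.norm(dG @ psi) / (hs_norm * np.linalg.norm(Nhalf @ psi)))  # <= 1: (3.21)
```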
#### 3.1.2. Particle-hole transformation
In this subsection we introduce a class of Bogoliubov transformations on Fock space that we will use in the proof of Theorem 1; they will be useful in quantifying the number of fluctuations outside of a degenerate Fermi gas.
More precisely, let us consider \(\omega\) to be a rank-\(M\) orthogonal projection on \(L^{2}(\mathbb{R}^{d})\). Thus, there exists an orthonormal basis \(\{\phi_{i}\}_{i=1}^{\infty}\subset L^{2}(\mathbb{R}^{d})\) such that
\[\omega=\sum_{i=1}^{M}|\phi_{i}\rangle\,\langle\phi_{i}|\enspace. \tag{3.27}\]
We introduce a map on Fock space \(\mathcal{R}[\omega]:\mathscr{F}_{F}\to\mathscr{F}_{F}\), which we shall refer to as a _particle-hole transformation_ associated to \(\omega\). We define it according to its action on creation- and annihilation operators as follows
\[\mathcal{R}^{*}[\omega]a^{*}(\phi_{i})\mathcal{R}[\omega]\equiv\begin{cases}a ^{*}(\phi_{i})\,,&i\leqslant M\\ a(\phi_{i})\,,&i>M\end{cases}\enspace, \tag{3.28}\]
and its action on the vacuum \(\mathcal{R}[\omega]\Omega_{F}\equiv a^{*}(\phi_{1})\cdots a^{*}(\phi_{M}) \Omega_{F}.\) Note that since the span of vectors of the form \(a^{*}(\phi_{i_{1}})\cdots a^{*}(\phi_{i_{n}})\Omega_{F}\) is dense in \(\mathscr{F}_{F}\), the above prescription completely determines \(\mathcal{R}[\omega]\).
Let us now collect additional properties of the map \(\mathcal{R}[\omega]\). In order to state them, we need to introduce some notation. Namely, we consider the operators on the one-particle space \(u,v\in B(L^{2}(\mathbb{R}^{d}))\) defined as
\[u\equiv\mathds{1}-\omega\qquad\text{and}\qquad v=\sum_{i=1}^{M}|\overline{ \phi_{i}}\rangle\,\langle\phi_{i}|\enspace. \tag{3.29}\]
These properties are recorded in the following lemma; we refer the reader to [9] for more details.
**Lemma 3.2** (Properties of \(\mathcal{R}\)).: _Let \(\omega\), \(u\), \(v\) and \(\mathcal{R}[\omega]\) be as above. Then, the following statements hold true._
1. \(\mathcal{R}[\omega]\) _is a unitary transformation on_ \(\mathscr{F}_{F}\)_, and_ \(\mathcal{R}^{*}[\omega]=\mathcal{R}[\omega]\)_._
2. _We denote_ \(u_{y}(x)\equiv u(x,y)\) _and_ \(v_{y}(x)\equiv v(x,y)\)_. Then, for all_ \(x\in\mathbb{R}^{d}\)__ \[\mathcal{R}^{*}[\omega]\ a_{x}^{*}\ \mathcal{R}[\omega] =a^{*}(u_{x})+a(\overline{v_{x}})\enspace,\] (3.30) \[\mathcal{R}^{*}[\omega]\ a_{x}\ \mathcal{R}[\omega] =a(u_{x})+a^{*}(\overline{v_{x}})\enspace.\] (3.31)
3. _For all_ \(x,y\in\mathbb{R}^{d}\) _there holds_ \[\left\langle\mathcal{R}[\omega]\Omega_{F},\,a_{y}^{*}\,a_{x}\,\mathcal{R}[ \omega]\Omega_{F}\right\rangle_{\mathscr{F}_{F}}=\omega(x,y)\enspace.\] (3.32) _In words, the one-particle reduced density matrix of_ \(\mathcal{R}[\omega]\Omega_{F}\) _corresponds to_ \(\omega\)_._
4. \(u^{*}=u^{2}=u\) _and_ \(v^{*}=\overline{v}\)_._
5. \(u^{*}u+v^{*}v=\mathds{1}\) _and_ \(u\overline{v}=vu=0\)_._
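Properties (iv) and (v) become finite-dimensional matrix identities once the one-particle space is discretized. The following self-contained check (our own illustration, with a random orthonormal family \(\{\phi_{i}\}_{i=1}^{M}\) obtained from a QR factorization) builds \(\omega\), \(u\) and \(v\) as in (3.27) and (3.29) and confirms them.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, M = 8, 3                            # discretized one-particle space, rank of omega

# random orthonormal family phi_1, ..., phi_M (columns of Q)
X = rng.normal(size=(dim, M)) + 1j * rng.normal(size=(dim, M))
Q, _ = np.linalg.qr(X)

omega = Q @ Q.conj().T                   # sum_i |phi_i><phi_i|, cf. (3.27)
u = np.eye(dim) - omega                  # cf. (3.29)
v = Q.conj() @ Q.conj().T                # sum_i |conj(phi_i)><phi_i|

print(np.allclose(u.conj().T, u), np.allclose(u @ u, u))          # (iv): u* = u^2 = u
print(np.allclose(v.conj().T, np.conj(v)))                        # (iv): v* = bar v
print(np.allclose(u.conj().T @ u + v.conj().T @ v, np.eye(dim)))  # (v): u*u + v*v = 1
print(np.allclose(u @ np.conj(v), 0), np.allclose(v @ u, 0))      # (v): u bar v = v u = 0
```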
**Remark 3.1**.: The unitary map \(\mathcal{R}[\omega]\) is an example of the implementation of a Bogoliubov transformation, that is, of a map on Fock space that preserves the Canonical Anticommutation Relations. More precisely, consider the maps
\[\nu=\begin{pmatrix}u&\bar{v}\\ v&\bar{u}\end{pmatrix}\qquad\text{ and }\qquad A(f,g)\equiv a(f)+a^{*}(\bar{g}) \tag{3.33}\]
for all \(f,g\in L^{2}(\mathbb{R}^{d}).\) In this context, \(\nu\) is a _Bogoliubov transformation_. Namely, it holds true that for all \(f,g,f_{1},f_{2},g_{1},g_{2}\in L^{2}(\mathbb{R}^{d}):\)
\[\{A(\nu(f_{1},g_{1})),A(\nu(f_{2},g_{2}))\} =\{A(f_{1},g_{1}),A(f_{2},g_{2})\}\, \tag{3.34}\] \[A^{*}(\nu(f,g)) =A(\nu(\bar{g},\bar{f}))\,. \tag{3.35}\]
Furthermore, \(\mathcal{R}[\omega]\) implements \(\nu\) on the Fock space \(\mathscr{F}_{F}\), in the sense that for all \(f,g\in L^{2}(\mathbb{R}^{d})\)
\[\mathcal{R}^{*}[\omega]A(f,g)\mathcal{R}[\omega]=A(\nu(f,g))\,. \tag{3.36}\]
Let us note that while the notion of Bogoliubov transformations is quite general, in the physical situation at hand it is sufficient to consider particle-hole transformations, which have an explicit representation. This is because the initial state we consider is a pure state \(\psi_{F}(0)\) corresponding to a Slater determinant of \(M\) particles. Consequently, its one-particle reduced density matrix is a rank-\(M\) orthogonal projection. The situation is quite different in the positive temperature case, when states are mixed and no longer orthogonal projections.
### Bosons
Similarly, throughout this subsection we use the following notation for the bosonic Fock space \(\mathscr{F}_{B}\)
\[\mathscr{F}_{B}\equiv\bigoplus_{n=0}^{\infty}\mathscr{F}_{B}^{(n)}\quad\text{ where}\quad\mathscr{F}_{B}^{(0)}\equiv\mathbb{C}\quad\text{and}\quad\mathscr{F}_{B}^{(n)} \equiv L_{s}^{2}(\mathbb{R}^{dn}),\ \forall n\geqslant 1 \tag{3.37}\]
where \(L_{s}^{2}(\mathbb{R}^{dn})\) corresponds to the subspace of symmetric functions. \(\mathscr{F}_{B}\) is a Hilbert space when endowed with the inner product
\[\langle\Phi_{1},\Phi_{2}\rangle_{\mathscr{F}_{B}}\equiv\sum_{n=0}^{\infty} \Big{\langle}\Phi_{1}^{(n)},\Phi_{2}^{(n)}\Big{\rangle}_{\mathscr{F}_{B}^{(n )}}\,\qquad\forall\Phi_{1},\Phi_{2}\in\mathscr{F}_{B}. \tag{3.38}\]
On the bosonic Fock space \(\mathscr{F}_{B}\) one introduces the smeared-out creation- and annihilation operators as follows. Given \(f\in L^{2}(\mathbb{R}^{d})\), we let \(b^{*}(f)\) and \(b(f)\) be defined for \(\Phi\in\mathscr{F}_{B}\) as
\[\big{(}b^{*}(f)\Phi\big{)}^{(n)}(y_{1},\dots,y_{n}) \equiv\frac{1}{\sqrt{n}}\sum_{i=1}^{n}f(y_{i})\Phi^{(n-1)}(y_{1}, \dots,y_{i-1},y_{i+1},\dots,y_{n})\, \tag{3.39}\] \[\big{(}b(f)\Phi\big{)}^{(n)}(y_{1},\dots,y_{n}) \equiv\sqrt{n+1}\int_{\mathbb{R}^{d}}\overline{f(y)}\Phi^{(n+1)}( y,y_{1},\dots,y_{n})\mathrm{d}y. \tag{3.40}\]
In contrast to the fermions, they satisfy the Canonical Commutation Relations (CCR)
\[[b(f),b^{*}(g)]=\langle f,g\rangle_{L^{2}}\quad\text{and}\quad[b^{\pm}(f),b^{ \pm}(g)]=0. \tag{3.41}\]
In particular, they are unbounded operators on \(\mathscr{F}_{B}\), but they are relatively bounded with respect to the bosonic number operator; see Lemma 3.3. They are connected to the operator-valued distributions \(b^{*}_{y}\) and \(b_{y}\) by means of the formulae
\[b^{*}(f)=\int_{\mathbb{R}^{d}}f(y)b^{*}_{y}\mathrm{d}y\qquad\text{and}\qquad b (f)=\int_{\mathbb{R}^{d}}\overline{f(y)}b_{y}\mathrm{d}y\,\qquad f\in L^{2}(\mathbb{R}^{d}). \tag{3.42}\]
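The contrast with the fermionic case can be made concrete on a single truncated mode. In the sketch below (our own illustration, with an arbitrary cutoff), the CCR hold exactly away from the truncation edge, and the \(\sqrt{n}\) growth of \(\|b\,|n\rangle\|\) on number states is precisely the unboundedness that forces the relative bounds of Lemma 3.3 below.

```python
import numpy as np

n_max = 60
b = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # single-mode annihilator
comm = b @ b.conj().T - b.conj().T @ b               # [b, b*]

# the CCR hold exactly on states supported away from the truncation edge:
print(np.allclose(comm[:n_max, :n_max], np.eye(n_max)))   # True
print(comm[n_max, n_max])                                 # -n_max: truncation artifact

# unboundedness: on the number state |n>, ||b |n>|| = sqrt(n), so no uniform bound
for n in (1, 10, 50):
    e_n = np.zeros(n_max + 1)
    e_n[n] = 1.0
    print(n, np.linalg.norm(b @ e_n))                     # sqrt(n), cf. (3.46)
```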
#### 3.2.1. Operator estimates
We proceed analogously to the fermionic case. Namely, given an operator \(\mathcal{O}\) on the one-particle space \(L^{2}(\mathbb{R}^{d})\), we consider its second quantization \(\mathrm{d}\Gamma_{B}[\mathcal{O}]\) acting on \(\mathscr{F}_{B}\), defined as
\[(\mathrm{d}\Gamma_{B}[\mathcal{O}])^{(n)}\equiv\sum_{i=1}^{n}\mathbb{1}^{i-1} \otimes\mathcal{O}\otimes\mathbb{1}^{n-i},\ n\geqslant 1\qquad\text{and} \qquad(\mathrm{d}\Gamma_{B}[\mathcal{O}])^{(0)}\equiv 0. \tag{3.43}\]
Similarly, if \(\mathcal{O}\) has an operator kernel \(\mathcal{O}(y,y^{\prime})\), one may write in terms of creation- and annihilation operators
\[\mathrm{d}\Gamma_{B}[\mathcal{O}]=\int_{\mathbb{R}^{2d}}\mathcal{O}(y,y^{ \prime})b_{y^{\prime}}^{*}b_{y}\mathrm{d}y\mathrm{d}y^{\prime}. \tag{3.44}\]
As in the fermionic case, we record the corresponding relations for the bosonic number operator
\[\mathcal{N}_{B}=\bigoplus_{n=0}^{\infty}n=\mathrm{d}\Gamma_{B}[\mathbb{1}]= \int_{\mathbb{R}^{d}}b_{y}^{*}b_{y}\mathrm{d}y. \tag{3.45}\]
Let us now collect some basic results concerning estimates for the second quantization of operators in bosonic Fock space. For reference, see [16, Lemma 3.1].
**Lemma 3.3** (Estimates for bosonic operators).: _Let \(\mathcal{O}:L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})\) be a bounded operator, and let \(f\in L^{2}(\mathbb{R}^{d})\). Then, the following hold true. (1) For all \(\Phi\in\mathcal{D}(\mathcal{N}_{B}^{\frac{1}{2}})\)_
\[\|b(f)\Phi\|_{\mathscr{F}_{B}} \leqslant\|f\|_{L^{2}}\|\mathcal{N}_{B}^{\frac{1}{2}}\Phi\|_{ \mathscr{F}_{B}} \tag{3.46}\] \[\|b^{*}(f)\Phi\|_{\mathscr{F}_{B}} \leqslant\|f\|_{L^{2}}\|(\mathcal{N}_{B}+1)^{\frac{1}{2}}\Phi\|_ {\mathscr{F}_{B}}. \tag{3.47}\]
_(2) For all \(\Phi\in\mathcal{D}(\mathcal{N}_{B})\)_
\[\|\mathrm{d}\Gamma_{B}(\mathcal{O})\Phi\|_{\mathscr{F}_{B}}\leqslant\| \mathcal{O}\|_{B(L^{2})}\|\mathcal{N}_{B}\Phi\|_{\mathscr{F}_{B}}. \tag{3.48}\]
#### 3.2.2. Coherent states
In analogy with the fermionic case, we introduce a unitary transformation that we shall make use of in the rest of the article. Namely, for \(f\in L^{2}(\mathbb{R}^{d})\) we introduce the _Weyl operator_ as
\[\mathcal{W}[f]\equiv\exp\Big{(}b(f)-b^{*}(f)\Big{)}:\mathscr{F}_{B}\to \mathscr{F}_{B}. \tag{3.49}\]
Note that since the argument in the exponential is anti self-adjoint, \(\mathcal{W}[f]\) is automatically a unitary map. Its action on the vacuum vector creates a _coherent state_. That is, thanks to the Baker-Campbell-Hausdorff formula, one has on \(\mathscr{F}_{B}\)
\[(\mathcal{W}[f]\Omega_{B})^{(n)}=e^{-\frac{\|f\|_{L^{2}}^{2}}{2}}\frac{f^{\otimes n}}{ \sqrt{n!}}\,\quad\forall n\geqslant 0. \tag{3.50}\]
In particular, the probability of finding the system with \(n\) particles is given by \(e^{-\|f\|_{L^{2}}^{2}}\|f\|_{L^{2}}^{2n}/n!\), so the particle number follows a Poisson distribution with parameter \(\|f\|_{L^{2}}^{2}\).
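This Poisson profile can be seen directly by exponentiating truncated ladder matrices. In the following sketch (our own illustration, using a single mode \(f=\alpha\) with a generous cutoff so that truncation errors are negligible), the occupation probabilities of \(\mathcal{W}[f]\Omega_{B}\) are compared with the Poisson law of parameter \(\|f\|_{L^{2}}^{2}\).

```python
import math
import numpy as np

alpha, n_max = 1.3, 60                         # single mode f = alpha, ||f|| = alpha
b = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)
G = alpha * (b - b.conj().T)                   # b(f) - b*(f), anti-Hermitian

# W[f] = exp(G), computed via the spectral decomposition of the Hermitian matrix iG
ev, U = np.linalg.eigh(1j * G)
W = U @ np.diag(np.exp(-1j * ev)) @ U.conj().T

vac = np.zeros(n_max + 1)
vac[0] = 1.0
probs = np.abs(W @ vac) ** 2                   # occupation probabilities of W[f] Omega

poisson = np.array([math.exp(-alpha**2) * alpha**(2 * n) / math.factorial(n)
                    for n in range(n_max + 1)])
print(np.max(np.abs(probs - poisson)))         # tiny: (3.50) gives a Poisson law
print(probs @ np.arange(n_max + 1))            # ~ alpha**2 = ||f||^2 expected bosons
```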
We collect in the following lemma some properties of the Weyl operator and coherent states. For more details, we refer the reader to [55, Lemma 2.2].
**Lemma 3.4** (Properties of \(\mathcal{W}\)).: _Let \(f\in L^{2}(\mathbb{R}^{d})\), and \(\mathcal{W}[f]\) be as above. Then, the following statements hold true_
1. \(\mathcal{W}[f]\) _is a unitary transformation on_ \(\mathscr{F}_{B}\)_, and_ \(\mathcal{W}^{*}[f]=\mathcal{W}[-f]\)_._
2. _For all_ \(y\in\mathbb{R}^{d}\)__ \[\mathcal{W}^{*}[f]b_{y}\mathcal{W}[f] =b_{y}+f(y)\,\] (3.51) \[\mathcal{W}^{*}[f]b_{y}^{*}\mathcal{W}[f] =b_{y}^{*}+\overline{f(y)}\.\] (3.52)
3. _For all_ \(y,y^{\prime}\in\mathbb{R}^{d}\)__ \[\big{\langle}\mathcal{W}[f]\Omega_{B},b_{y^{\prime}}^{*}b_{y}\mathcal{W}[f] \Omega_{B}\big{\rangle}_{\mathscr{F}_{B}}=\overline{f(y^{\prime})}f(y)\.\] (3.53) _In words, the one-particle reduced density matrix of_ \(\mathcal{W}[f]\Omega_{B}\) _corresponds to the projector_ \(\left|f\right\rangle\!\left\langle f\right|.\)__
## 4. Second Quantization II: The Fluctuation Dynamics
The main goal of this section is to set up the proof of Theorem 1. Namely, we will introduce and study the fluctuation dynamics around a state consisting of a degenerate Fermi gas and a Bose-Einstein condensate, evolving in time according to the mean-field equations (1.5). We prove that the number of fluctuations around this state is small, relative to the numbers \(N\) and \(M\). In the next section, we show that these estimates imply Theorem 1.
This approach is nowadays considered standard, and has been successfully employed in the derivation of several mean-field equations from many-particle systems. The first work to use these techniques in the derivation of the Hartree equation for bosons is [55], while for fermions it is [9]. Our proofs are heavily inspired by their ideas, and actually borrow a few estimates. Finally, let us also refer the reader to the book [11] for a cohesive treatment of the subject.
_A note on domains_. In what follows, we will be extensively manipulating the number operators \(\mathcal{N}_{F}\) and \(\mathcal{N}_{B}\). These are positive, unbounded self-adjoint operators on the Hilbert space \(\mathscr{F}\), with domains \(\mathcal{D}(\mathcal{N}_{F})=\{\Psi\in\mathscr{F}:\sum_{n,m}n^{2}\|\Psi^{(n,m) }\|_{\mathscr{F}_{F}^{(n)}\otimes\mathscr{F}_{B}^{(m)}}^{2}<\infty\}\) and \(\mathcal{D}(\mathcal{N}_{B})=\{\Psi\in\mathscr{F}:\sum_{n,m}m^{2}\|\Psi^{(n,m) }\|_{\mathscr{F}_{F}^{(n)}\otimes\mathscr{F}_{B}^{(m)}}^{2}<\infty\}\). In order to simplify the exposition, we shall avoid making reference to the unbounded nature of these operators. Let us note that there is no risk in doing so. Indeed, in applications all the states \(\Psi\in\mathscr{F}\) that we manipulate belong to the intersection \(\cap_{k=1}^{\infty}\mathcal{D}(\mathcal{N}_{F}^{k})\cap\mathcal{D}(\mathcal{N }_{B}^{k})\), and all the dynamics we consider leave the above space invariant.
### Number Estimates
Throughout this section, we denote by \((\omega,\varphi)\) the pair of variables that solves the Hartree-Hartree equations (1.5).
Let us now introduce a fundamental family of unitary transformations in our analysis. Namely, using the notation from Section 3, we define the following time-dependent particle-hole transformation, and Weyl operator
\[\mathcal{R}_{t}\equiv\mathcal{R}[\omega(t)]:\mathscr{F}_{F}\to\mathscr{F}_{F} \quad\text{and}\quad\mathcal{W}_{t}\equiv\mathcal{W}[\sqrt{N}\varphi(t)]: \mathscr{F}_{B}\to\mathscr{F}_{B}. \tag{4.1}\]
Note that both of these transformations map the respective vacuum vectors into states in Fock space whose one-particle reduced density matrices correspond to \(\omega(t)\) and \(N\,|\varphi(t)\rangle\langle\varphi(t)|\), respectively. In other words, for all \(t\in\mathbb{R}\) and \((x,y)\in\mathbb{R}^{2d}\)
\[\big{\langle}\mathcal{R}_{t}\Omega_{F},a_{y}^{*}a_{x}\mathcal{R}_{t}\Omega_{F }\big{\rangle}_{\mathscr{F}_{F}}=\omega(t;x,y)\quad\text{and}\quad\big{\langle} \mathcal{W}_{t}\Omega_{B},b_{y}^{*}b_{x}\mathcal{W}_{t}\Omega_{B}\big{\rangle} _{\mathscr{F}_{B}}=N\overline{\varphi(t,y)}\varphi(t,x). \tag{4.2}\]
We proceed to define the two-parameter family of unitary transformations \(\mathcal{U}(t,s):\mathscr{F}\to\mathscr{F}\), which we refer to as the _fluctuation dynamics_, as follows
\[\mathcal{U}(t,s)\equiv\big{(}\mathcal{R}_{t}\otimes\mathcal{W}_{t}\big{)}^{*} \exp\bigg{[}-i\frac{(t-s)}{\hbar}\mathcal{H}\bigg{]}\big{(}\mathcal{R}_{s} \otimes\mathcal{W}_{s}\big{)}\,\qquad t,s\in\mathbb{R}. \tag{4.3}\]
The fluctuation dynamics measures, in a quantitative way, how far the many-body Schrodinger dynamics is from the mean-field variables \((\omega(t),\varphi(t))\). In order to make this statement precise, we recall that we have defined on \(\mathscr{F}_{F}\) and \(\mathscr{F}_{B}\) the fermionic and bosonic number operators, respectively,
\[\mathcal{N}_{F}=\int_{\mathbb{R}^{d}}a_{x}^{*}a_{x}\mathrm{d}x\qquad\text{and }\qquad\mathcal{N}_{B}=\int_{\mathbb{R}^{d}}b_{y}^{*}b_{y}\mathrm{d}y. \tag{4.4}\]
Unless confusion arises, we denote with the same symbols their natural extension to \(\mathscr{F}\). Finally, we introduce on \(\mathscr{F}\) the _total number operator_
\[\mathcal{N}\equiv\mathcal{N}_{F}+\mathcal{N}_{B}. \tag{4.5}\]
The main result of this section is the following theorem. It contains estimates for the growth-in-time for the expectations of \(\mathcal{N}_{F}\) and \(\mathcal{N}_{B}\) with respect to the fluctuation dynamics \(\mathcal{U}(t,s)\).
**Theorem 3** (Number estimates).: _Let \((\omega,\varphi)\) satisfy (1.5), and assume that the assumptions in Theorem 1 hold true. Let \(\mathcal{U}(t,s)\) be the fluctuation dynamics. Then, the following statements hold true_
1. _For all_ \(\ell,k\in\mathbb{N}\) _there is a constant_ \(C>0\) _such that for all_ \(\Psi\in\mathscr{F}\) _and_ \(t,s\in\mathbb{R}\)__ \[\langle\Psi, \mathcal{U}^{*}(t,s)\,\mathcal{N}_{F}^{\ell}\,\mathcal{U}(t,s) \Psi\rangle_{\mathscr{F}}\] (4.6) \[\leqslant K(t-s)\Big{[}\|\Psi\|_{\mathscr{F}}\|(\mathcal{N}^{\ell }+\mathds{1})\Psi\|_{\mathscr{F}}+\Theta_{k,\ell}\|(\mathds{1}+M^{-1}\mathcal{ N}_{F})^{\ell}\Psi\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{k+3/2}\Psi\|_{ \mathscr{F}}\Big{]}\.\]
2. _For all_ \(\ell,k\in\mathbb{N}\) _there is a constant_ \(C>0\) _such that for all_ \(\Psi\in\mathscr{F}\) _and_ \(t,s\in\mathbb{R}\)__ \[\langle\Psi, \mathcal{U}^{*}(t,s)\,\mathcal{N}_{B}^{\ell}\,\mathcal{U}(t,s) \Psi\rangle_{\mathscr{F}}\] (4.7) \[\leqslant K(t-s)\Big{[}\|\Psi\|_{\mathscr{F}}\|(\mathcal{N}^{ \ell}+\mathds{1})\Psi\|_{\mathscr{F}}+\Theta_{k,\ell}\|(\mathds{1}+N^{-1} \mathcal{N}_{B})^{\ell}\Psi\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{k+3/2} \Psi\|_{\mathscr{F}}\Big{]}\.\]
_Here, we denote \(K(t)\equiv C\mathrm{exp}[C\lambda\sqrt{NM/\hbar}\,(1+\sqrt{\hbar M/N})\exp|t|]\) together with_
\[\Theta_{k,\ell}\equiv\frac{\lambda\sqrt{N}}{\hbar}\frac{M^{\ell}}{(\hbar M)^{k}} \geqslant 0. \tag{4.8}\]
**Remark 4.1** (Evolution of the vacuum vector).: In applications, we will consider \(\Psi=\Omega\). In particular, there holds \(\|(\mathcal{N}_{F}+\mathds{1})^{\ell}\Omega\|_{\mathscr{F}}=1\) for all \(\ell\in\mathbb{N}\). Thus, given \(\ell\in\mathbb{N}\), we may take \(k=k_{\ell}\) as in the statement of Theorem 1, so that \(\Theta_{k,\ell}\leqslant 1\) uniformly in the physical parameters \(\lambda\), \(\hbar\), \(N\) and \(M\). In this situation one obtains the following estimate, provided one enlarges the constant \(C>0\)
\[\|(\mathcal{N}+\mathds{1})^{\ell}\mathcal{U}(t,s)\Omega\|_{\mathscr{F}} \leqslant K(t-s)\,\qquad\forall t,s\in\mathbb{R}. \tag{4.9}\]
**Remark 4.2** (Boundedness of operators).: An important consequence of Theorem 3 is that for all \(\ell\), there exists \(K_{\ell}\geqslant 1\) such that for all \(\Psi\in\mathscr{F}\)
\[\|(\mathcal{N}+1)^{\ell}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}}\leqslant K(t-s) \|(\mathcal{N}+\mathds{1})^{K_{\ell}}\Psi\|_{\mathscr{F}}\,\qquad\forall t,s\in \mathbb{R}. \tag{4.10}\]
Consequently, \((\mathcal{N}+\mathds{1})^{\ell}\mathcal{U}(t,s)(\mathcal{N}+\mathds{1})^{-K_{ \ell}}\) is a bounded linear operator in \(\mathscr{F}\), and the same holds for its adjoint. We record this in the following statement
\[\|(\mathcal{N}+\mathds{1})^{\ell}\mathcal{U}(t,s)(\mathcal{N}+\mathds{1})^{-K _{\ell}}\|_{B(\mathscr{F})}=\|(\mathcal{N}+\mathds{1})^{-K_{\ell}}\mathcal{U} (t,s)(\mathcal{N}+\mathds{1})^{\ell}\|_{B(\mathscr{F})}\leqslant K(t-s). \tag{4.11}\]
Here, of course, we have used that \(\mathcal{U}^{*}(t,s)=\mathcal{U}(s,t)\) and the symmetry \(K(t-s)=K(s-t)\).
We dedicate the rest of this section to the proof of the above Theorem.
### The infinitesimal generator
In order to establish the number estimates contained in Theorem 3 we need to study the time evolution of the fluctuation dynamics. To this end, we introduce the infinitesimal generator \(\mathcal{L}(t)\) of \(\mathcal{U}(t,s)\) as the time-dependent, self-adjoint operator on \(\mathscr{F}\) determined by the equation
\[\begin{cases}&i\hbar\partial_{t}\mathcal{U}(t,s)=\mathcal{L}(t)\mathcal{U}(t, s)\,\\ &\mathcal{U}(t,t)=\mathds{1}\.\end{cases} \tag{4.12}\]
The computation of \(\mathcal{L}(t)\) is tedious, but can be carried out explicitly. Let us record the result of the calculation in Lemma 4.1 below, and postpone the proof to an Appendix.
_Notation._ Before we state the explicit form of the infinitesimal generator, we introduce useful notations. Recall that \((\omega(t),\varphi(t))\) is a solution of the Hartree-Hartree equation (1.5).
* We denote by \(h_{F}(t)\) and \(h_{B}(t)\) the following time-dependent one-particle Hamiltonians on \(H^{2}(\mathbb{R}^{d})\) \[h_{F}(t) \equiv-\frac{\hbar^{2}}{2m_{F}}\Delta+\lambda N\big{(}V*\rho_{B} (t)\big{)}\,\] (4.13) \[h_{B}(t) \equiv-\frac{\hbar^{2}}{2m_{B}}\Delta+\lambda M\big{(}V*\rho_{F} (t)\big{)}\.\] (4.14) Here, \(\rho_{F}(t,x)=M^{-1}\omega(t;x,x)\) and \(\rho_{B}(t,x)=|\varphi(t,x)|^{2}\) are the corresponding fermionic and bosonic position densities.
\(\square\) Upon decomposing \(\omega(t)=\sum_{i=1}^{M}|\phi_{i}(t)\rangle\langle\phi_{i}(t)|\) we denote with the same notation as in Section 3, for all \(t\in\mathbb{R}\)
\[u(t)\equiv\mathds{1}-\omega(t)\qquad\text{ and }\qquad v(t)\equiv\sum_{i=1}^{M}| \overline{\phi_{i}(t)}\rangle\langle\phi_{i}(t)| \tag{4.15}\]
\(\square\) For fixed \(x\in\mathbb{R}^{d}\) and \(t\in\mathbb{R}\), we denote by \(u_{t,x}\) and \(v_{t,x}\) the distributions given by
\[u_{t,x}(x^{\prime})=u(t;x,x^{\prime})\qquad\text{and}\qquad v_{t,x}(x^{\prime} )=v(t;x,x^{\prime}) \tag{4.16}\]
for all \(x^{\prime}\in\mathbb{R}^{d}\).
**Lemma 4.1**.: _Let \(\mathcal{U}(t,s)\) be the unitary transformation defined in (4.3), and let \(\mathcal{L}(t)\) be its infinitesimal generator. Then, \(\mathcal{L}(t)\) admits the following representation (modulo scalars)_
\[\mathcal{L}(t)=\mathrm{d}\Gamma_{F}[h_{F}(t)]\otimes\mathds{1}+\mathds{1} \otimes\mathrm{d}\Gamma_{B}[h_{B}(t)]+\lambda\sqrt{N}\mathcal{L}_{2,1}(t)+ \lambda\mathcal{L}_{2,2}(t)\qquad\forall t\in\mathbb{R}. \tag{4.17}\]
_Here, \(h_{F}(t)\) and \(h_{B}(t)\) are the one-particle Hamiltonians defined in Eq. (4.13). The time-dependent operators \(\mathcal{L}_{2,1}(t)\) and \(\mathcal{L}_{2,2}(t)\) are self-adjoint operators on \(\mathscr{F}\), and are given by the expressions (here, we suppress time labels for convenience)_
\[\mathcal{L}_{2,1} =\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a(u_{ x})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y}\right)\, \mathrm{d}x\mathrm{d}y\] \[-\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(\overline{v _{x}})a(\overline{v_{x}})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y )b_{y}\right)\,\mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a^{*} (\overline{v_{x}})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y} \right)\,\mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v_{x} })a(u_{x})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y}\right) \,\mathrm{d}x\mathrm{d}y\, \tag{4.18}\]
_and_
\[\mathcal{L}_{2,2} =\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a(u_ {x})\otimes b_{y}^{*}b_{y}\,\mathrm{d}x\mathrm{d}y\] \[-\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(\overline{v _{x}})a(\overline{v_{x}})\otimes b_{y}^{*}b_{y}\,\mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a^{*} (\overline{v_{x}})\otimes b_{y}^{*}b_{y}\,\mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v_{x} })a(u_{x})\otimes b_{y}^{*}b_{y}\,\mathrm{d}x\mathrm{d}y. \tag{4.19}\]
Unfortunately, as often happens with similar mean-field theories, the above generator is not quite suitable for proving estimates such as the ones contained in Theorem 3. We shall introduce a modified generator, together with a new _truncated_ fluctuation dynamics, and then prove that the two dynamics are close to each other.
### Truncated dynamics
We introduce \(\widetilde{\mathcal{U}}(t,s)\) as the strongly continuous, two-parameter family of unitary operators that solves
\[\begin{cases}&i\hbar\partial_{t}\widetilde{\mathcal{U}}(t,s)=\widetilde{ \mathcal{L}}(t)\widetilde{\mathcal{U}}(t,s)\,\\ &\widetilde{\mathcal{U}}(s,s)=\mathds{1}\,\end{cases} \tag{4.20}\]
where the _truncated infinitesimal generator_\(\widetilde{\mathcal{L}}(t)\) is defined as follows
\[\widetilde{\mathcal{L}}(t)=\mathrm{d}\Gamma_{F}[h_{F}(t)]\otimes\mathds{1}+ \mathds{1}\otimes\mathrm{d}\Gamma_{B}[h_{B}(t)]+\lambda\sqrt{N}\widetilde{ \mathcal{L}}_{2,1}(t)+\lambda\mathcal{L}_{2,2}(t)\qquad\forall t\in\mathbb{R}. \tag{4.21}\]
The first, second, and fourth terms in (4.21) are identical to those found in Lemma 4.1 for \(\mathcal{L}(t)\). However, for the third term we have introduced a cut-off in one of the off-diagonal terms originally found in \(\mathcal{L}_{2,1}(t)\):
\[\widetilde{\mathcal{L}}_{2,1} =\mathds{1}\otimes\chi(\mathcal{N}_{B}\leqslant\hbar M)\int_{ \mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a(u_{x})\otimes\left( \varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y}\right)\,\mathrm{d}x\mathrm{d}y\] \[-\mathds{1}\otimes\chi(\mathcal{N}_{B}\leqslant\hbar M)\int_{ \mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(\overline{v_{x}})a(\overline{ v_{x}})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y}\right)\,\mathrm{d}x \mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a^{* }(\overline{v_{x}})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y }\right)\,\mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v_{x }})a(u_{x})\otimes\left(\varphi(y)b_{y}^{*}+\overline{\varphi}(y)b_{y}\right) \,\mathrm{d}x\mathrm{d}y. \tag{4.22}\]
The cut-off \(\mathcal{N}_{B}\leqslant\hbar M\) is specifically tailored for the problem at hand, and is introduced so that one can "close the inequalities" when running a Gronwall-type argument.
Let us recall that we have introduced the number operators \(\mathcal{N}_{F}\) and \(\mathcal{N}_{B}\). In this subsection, we give estimates for the growth in time of the expectations of these observables with respect to the truncated fluctuation dynamics \(\widetilde{\mathcal{U}}(t,s)\). The main result of this subsection is the following proposition.
**Proposition 4.1** (Number estimates for the truncated dynamics).: _Assume that the solution of the mean-field equations \((\omega,\varphi)\) satisfies the following a priori bound_
\[\|[e^{i\xi\cdot x},\omega(t)]\|_{\mathrm{Tr}}\leqslant C\exp(Ct)M\hbar\langle \xi\rangle\ \,\qquad\forall\xi\in\mathbb{R}^{d}. \tag{4.23}\]
_Then, for every \(k\in\mathbb{N}\) there exists a constant \(C>0\) such that for all \(t,s\in\mathbb{R}\) and \(\Psi\in\mathscr{F}\) there holds_
\[\langle\Psi,\widetilde{\mathcal{U}}^{*}(t,s)\mathcal{N}^{k}\widetilde{ \mathcal{U}}(t,s)\Psi\rangle_{\mathscr{F}}\leqslant C\exp\left[C\lambda\sqrt{ \frac{NM}{\hbar}}\bigg{(}1+\sqrt{\frac{\hbar M}{N}}\bigg{)}\exp|t-s|\right] \left\langle\Psi,(\mathcal{N}^{k}+\mathds{1})\Psi\right\rangle_{\mathscr{F}}\.\]
The proof of the above estimates is based on the Gronwall inequality, and the commutator estimates contained in the following lemma.
**Lemma 4.2** (Commutator estimates).: _Under the same assumptions of Proposition 4.1, for all \(\theta\in\mathbb{R}\) there is \(C>0\) such that for all \(t,s\in\mathbb{R}\) and all \(\Psi,\Phi\in\mathscr{F}\) there holds_
\[|\langle\Psi,[\widetilde{\mathcal{L}}(t),\mathcal{N}_{F}]\Phi \rangle_{\mathscr{F}}| \leqslant Ce^{Ct}\lambda\Big{(}\sqrt{NM\hbar}+M\hbar\Big{)} \bigg{(}\|(\mathcal{N}+\mathds{1})^{\frac{1+\theta}{2}}\Psi\|_{\mathscr{F}}^{ 2}+\|(\mathcal{N}+\mathds{1})^{\frac{1-\theta}{2}}\Phi\|_{\mathscr{F}}^{2} \bigg{)}\, \tag{4.24}\] \[|\langle\Psi,[\widetilde{\mathcal{L}}(t),\mathcal{N}_{B}]\Phi \rangle_{\mathscr{F}}| \leqslant Ce^{Ct}\lambda\Big{(}\sqrt{NM\hbar}+M\hbar\Big{)} \bigg{(}\|(\mathcal{N}+\mathds{1})^{\frac{1+\theta}{2}}\Psi\|_{\mathscr{F}}^{ 2}+\|(\mathcal{N}+\mathds{1})^{\frac{1-\theta}{2}}\Phi\|_{\mathscr{F}}^{2} \bigg{)}. \tag{4.25}\]
First, we turn to the proof of the above commutator estimates. Subsequently, we show how they imply Proposition 4.1.
Proof of Lemma 4.2.: First, we prove the estimate for \(\mathcal{N}_{F}\); then we prove the estimate for \(\mathcal{N}_{B}\). We only treat the case \(\theta=0\): for general \(\theta\) it suffices to insert the identity \(\mathds{1}=(\mathcal{N}+\mathds{1})^{\frac{\theta}{2}}(\mathcal{N}+\mathds{1})^{-\frac{\theta}{2}}\) and use the pull-through formulas \(a_{x}\mathcal{N}=(\mathcal{N}+\mathds{1})a_{x}\) and \(b_{y}\mathcal{N}=(\mathcal{N}+\mathds{1})b_{y}\) on each term of \(\widetilde{\mathcal{L}}(t)\).
Proof of Eq. (4.24).: Using the relations \([\mathcal{N}_{F},a^{*}(g)]=+a^{*}(g)\) and \([\mathcal{N}_{F},a(g)]=-a(g)\) one is able to calculate the commutator
\[[\widetilde{\mathcal{L}}(t),\mathcal{N}_{F}]=+\lambda\sqrt{N}[\mathcal{L}_{2, 1}(t),\mathcal{N}_{F}]+\lambda[\mathcal{L}_{2,2}(t),\mathcal{N}_{F}]\]
where
\[[\mathcal{L}_{2,1}(t),\mathcal{N}_{F}]= +2\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a^{* }(\overline{v}_{x})\otimes\big{(}\varphi(y)b_{y}^{*}+\overline{\varphi(y)}b_ {y}\big{)}\mathrm{d}x\mathrm{d}y\] \[-2\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v}_{ x})a(u_{x})\otimes\big{(}\varphi(y)b_{y}^{*}+\overline{\varphi(y)}b_{y}\big{)} \mathrm{d}x\mathrm{d}y\, \tag{4.26}\] \[[\mathcal{L}_{2,2}(t),\mathcal{N}_{F}]= +2\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a^{* }(\overline{v}_{x})\otimes b_{y}^{*}b_{y}\mathrm{d}x\mathrm{d}y\] \[-2\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v}_{ x})a(u_{x})\otimes b_{y}^{*}b_{y}\mathrm{d}x\mathrm{d}y. \tag{4.27}\]
Let us first estimate the terms in (4.26). To this end, we use a Fourier decomposition \(V(x)=(2\pi)^{-d/2}\int_{\mathbb{R}^{d}}\hat{V}(\xi)e_{\xi}(x)\mathrm{d}\xi\), where \(e_{\xi}(x)\equiv\exp[ix\cdot\xi]\), to find that
\[[\mathcal{L}_{2,1}(t), \mathcal{N}_{F}] \tag{4.28}\] \[=\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \Big{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}[ue_{\xi}v^{*}](x_{1},x_{2})a _{x_{1}}^{*}a_{x_{2}}^{*}\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{)}\otimes\big{(}b^ {*}[e_{-\xi}\varphi]+b[e_{\xi}\varphi]\big{)}\mathrm{d}\xi\] \[-\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \Big{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}[ve_{\xi}u](x_{1},x_{2})a_{x_ {1}}a_{x_{2}}\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{)}\otimes\big{(}b^{*}[e_{-\xi} \varphi]+b[e_{\xi}\varphi]\big{)}\mathrm{d}\xi\.\]
Thus, we use the estimates contained in Lemmas 3.1 and 3.3 and the Cauchy-Schwarz inequality for \(\langle\cdot,\cdot\rangle_{\mathscr{F}}\) to find that there exists a constant \(C>0\) such that for all \(\Psi,\Phi\in\mathscr{F}\) there holds
\[|\langle\Psi,[\mathcal{L}_{2,1}(t),\mathcal{N}_{F}]\Phi\rangle_{\mathscr{F}}|\] \[\leqslant C\|V\|\sup_{\xi\in\mathbb{R}^{d}}\left\langle\xi\right\rangle ^{-1}\|ue_{\xi}v^{*}\|_{HS}\|(\mathcal{N}_{F}+\mathds{1})^{\frac{1}{2}}\otimes \mathds{1}\Psi\|_{\mathscr{F}}\|\varphi\|_{L^{2}}\|\mathds{1}\otimes( \mathcal{N}_{B}+\mathds{1})^{\frac{1}{2}}\Phi\|_{\mathscr{F}}\] \[\leqslant C\|V\|\sqrt{M\hbar}\exp(Ct)\|(\mathcal{N}_{F}+\mathds{1})^{ \frac{1}{2}}\otimes\mathds{1}\Psi\|_{\mathscr{F}}\|\mathds{1}\otimes(\mathcal{ N}_{B}+\mathds{1})^{\frac{1}{2}}\Phi\|_{\mathscr{F}}\ \, \tag{4.29}\]
where in the last line we have used \(\|ue_{\xi}v^{*}\|_{HS}\leqslant C\sqrt{M\hbar}\left\langle\xi\right\rangle e^{Ct}\) and \(\|\varphi\|_{L^{2}}=1\). Similarly, the second term in (4.27) may be expanded in its Fourier coefficients; we find that
\[[\mathcal{L}_{2,2}(t),\mathcal{N}_{F}]\] \[=\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \Big{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}[ue_{\xi}v^{*}](x_{1},x_{2})a _{x_{1}}^{*}a_{x_{2}}^{*}\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{)}\otimes\mathrm{d}\Gamma_{B}[e_{-\xi}]\mathrm{d}\xi\] \[-\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \Big{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}[ve_{\xi}u](x_{1},x_{2})a_{x_ {1}}a_{x_{2}}\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{)}\otimes\mathrm{d}\Gamma_{B}[e_{-\xi}]\mathrm{d}\xi. \tag{4.30}\]
Thus, we use the estimates contained in Lemmas 3.1 and 3.3 and the Cauchy-Schwarz inequality for \(\left\langle\cdot,\cdot\right\rangle_{\mathscr{F}}\) to find that there exists a constant \(C>0\) such that for all \(\Psi,\Phi\in\mathcal{D}(\mathcal{N}_{F})\cap\mathcal{D}(\mathcal{N}_{B})\) there holds
\[|\langle\Psi,[\mathcal{L}_{2,2}(t),\mathcal{N}_{F}]\Phi\rangle_{ \mathscr{F}}| =|\langle(\mathcal{N}_{B}+\mathds{1})^{\frac{1}{2}}\Psi,[ \mathcal{L}_{2,2}(t),\mathcal{N}_{F}](\mathcal{N}_{B}+\mathds{1})^{-\frac{1}{2 }}\Phi\rangle_{\mathscr{F}}|\] \[\leqslant C\|V\|\sup_{\xi\in\mathbb{R}^{d}}\left\langle\xi\right\rangle ^{-1}\|ue_{\xi}v^{*}\|_{Tr}\|(\mathcal{N}_{B}+\mathds{1})^{\frac{1}{2}}\Psi\|_ {\mathscr{F}}\|\varphi\|_{L^{2}}\|\mathds{1}\otimes\mathcal{N}_{B}(\mathcal{ N}_{B}+\mathds{1})^{-\frac{1}{2}}\Phi\|_{\mathscr{F}}\] \[\leqslant C\|V\|M\hbar\exp(Ct)\|(\mathcal{N}_{B}+\mathds{1})^{ \frac{1}{2}}\Psi\|_{\mathscr{F}}\|\mathds{1}\otimes(\mathcal{N}_{B}+\mathds{1 })^{\frac{1}{2}}\Phi\|_{\mathscr{F}}\, \tag{4.31}\]
where in the last line we have used \(\|ue_{\xi}v^{*}\|_{Tr}\leqslant CM\hbar\left\langle\xi\right\rangle e^{Ct}\) and \(\|\varphi\|_{L^{2}}=1\).
We gather the estimates contained in Eqs. (4.29) and (4.31), and use the basic estimates \(\mathcal{N}_{F}\otimes\mathds{1}\leqslant\mathcal{N}\), \(\mathds{1}\otimes\mathcal{N}_{B}\leqslant\mathcal{N}\) together with Young's inequality \(ab\leqslant a^{2}/2+b^{2}/2\) for \(a,b\geqslant 0\). This finishes the proof.
_Proof of Eq. (4.25)._ Using the relations \([\mathcal{N}_{B},b^{*}(g)]=+b^{*}(g)\) and \([\mathcal{N}_{B},b(g)]=-b(g)\) one is able to calculate the commutator
\[[\widetilde{\mathcal{L}}(t),\mathcal{N}_{B}]=+\lambda\sqrt{N}[\widetilde{ \mathcal{L}}_{2,1}(t),\mathcal{N}_{B}]\]
where
\[[\widetilde{\mathcal{L}}_{2,1}(t),\mathcal{N}_{B}] =+2\chi(\mathcal{N}_{B}\leqslant\hbar M)\int_{\mathbb{R}^{d} \times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a(u_{x})\otimes\big{(}\varphi(y)b_{y}^{ *}+\overline{\varphi(y)}b_{y}\big{)}\mathrm{d}x\mathrm{d}y\] \[+2\chi(\mathcal{N}_{B}\leqslant\hbar M)\int_{\mathbb{R}^{d} \times\mathbb{R}^{d}}V(x-y)a^{*}(\overline{v}_{x})a(\overline{v}_{x})\otimes \big{(}\varphi(y)b_{y}^{*}+\overline{\varphi(y)}b_{y}\big{)}\mathrm{d}x \mathrm{d}y\] \[+2\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{x})a^{* }(\overline{v}_{x})\otimes\big{(}\varphi(y)b_{y}^{*}+\overline{\varphi(y)}b_ {y}\big{)}\mathrm{d}x\mathrm{d}y\] \[+2\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v}_{ x})a(u_{x})\otimes\big{(}\varphi(y)b_{y}^{*}+\overline{\varphi(y)}b_{y}\big{)} \mathrm{d}x\mathrm{d}y. \tag{4.32}\]
Similarly as before, a Fourier decomposition for the interaction potential yields
\[[\widetilde{\mathcal{L}}_{2,1}(t),\mathcal{N}_{B}] =\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \mathrm{d}\Gamma[ue_{\xi}u]\otimes\chi(\mathcal{N}_{B}\leqslant\hbar M)\big{(} b^{*}[e_{-\xi}\varphi]+b[e_{\xi}\varphi]\big{)}\mathrm{d}\xi \tag{4.33}\] \[+\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \mathrm{d}\Gamma[v^{*}e_{\xi}v]\otimes\chi(\mathcal{N}_{B}\leqslant\hbar M) \big{(}b^{*}[e_{-\xi}\varphi]+b[e_{\xi}\varphi]\big{)}\mathrm{d}\xi\] (4.34) \[+\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \Big{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}[ue_{\xi}v^{*}](x_{1},x_{2})a _{x_{1}}^{*}a_{x_{2}}^{*}\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{)}\otimes\big{(}b ^{*}[e_{-\xi}\varphi]+b[e_{\xi}\varphi]\big{)}\mathrm{d}\xi\] (4.35) \[+\frac{2}{(2\pi)^{\frac{d}{2}}}\int_{\mathbb{R}^{d}}\hat{V}(\xi) \Big{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}[ve_{\xi}u](x_{1},x_{2})a_{x_ {1}}a_{x_{2}}\mathrm{d}x_{1}\mathrm{d}x_{2}\Big{)}\otimes\big{(}b^{*}[e_{-\xi} \varphi]+b[e_{\xi}\varphi]\big{)}\mathrm{d}\xi. \tag{4.36}\]
Up to an overall minus sign, the terms contained in Eqs. (4.35) and (4.36) have already been estimated above. Thus, it suffices to estimate the two terms contained in Eqs. (4.33) and (4.34). Let us start from the one in Eq. (4.33) and, for simplicity, let us only show how to bound the contribution arising from \(b[e_{\xi}\varphi]\); the other one is analogous. Let us fix \(\xi\in\mathbb{R}^{d}\), and let \(\Psi,\Phi\in\mathscr{F}\). Denoting \(\chi_{M}\equiv\chi(\mathcal{N}_{B}\leqslant\hbar M)\) and using the pull-through formula \(b_{y}\mathcal{N}_{B}=(\mathcal{N}_{B}+\mathds{1})b_{y}\), we find that
\[|\left\langle\Psi,\mathrm{d}\Gamma[ue_{\xi}u]\otimes\chi_{M}b[e_ {\xi}\varphi]\Phi\right\rangle_{\mathscr{F}}| \leqslant\|\mathcal{N}_{F}^{\frac{1}{2}}\otimes\mathds{1}\,\Psi\|_{ \mathscr{F}}\,\|\mathcal{N}_{F}^{\frac{1}{2}}\otimes\chi_{M}b[e_{\xi}\varphi] \Phi\|_{\mathscr{F}}\] \[\leqslant\sqrt{M\hbar+1}\,\|\mathcal{N}_{F}^{\frac{1}{2}}\otimes \mathds{1}\,\Psi\|_{\mathscr{F}}\,\|\mathcal{N}_{F}^{\frac{1}{2}}\otimes \mathds{1}\,\Phi\|_{\mathscr{F}} \tag{4.37}\]
where we used Lemmas 3.1 and 3.3, the bound \(\|ue_{\xi}u\|_{B(L^{2})}\leqslant 1\), and the fact that \(\chi_{M}b[e_{\xi}\varphi]=b[e_{\xi}\varphi]\chi(\mathcal{N}_{B}\leqslant\hbar M+1)\). Similarly, we find
\[|\left\langle\Psi,\mathrm{d}\Gamma[v^{*}e_{\xi}v]\otimes\chi_{M}b[e_{\xi} \varphi]\Phi\right\rangle_{\mathscr{F}}|\leqslant\sqrt{M\hbar+1}\,\|\mathcal{N}_{ F}^{\frac{1}{2}}\otimes\mathds{1}\,\Psi\|_{\mathscr{F}}\,\|\mathcal{N}_{F}^{\frac{1}{2}} \otimes\mathds{1}\,\Phi\|_{\mathscr{F}}. \tag{4.38}\]
The proof is finished once we gather our estimates and use the elementary inequalities \(\mathcal{N}_{F}\otimes\mathds{1}\leqslant\mathcal{N}\), \(\mathds{1}\otimes\mathcal{N}_{B}\leqslant\mathcal{N}\) together with Young's inequality \(ab\leqslant a^{2}/2+b^{2}/2\) for \(a,b\geqslant 0\).
Proof of Proposition 4.1.: Let us fix \(k\in\mathbb{N}\) and recall that \(\mathrm{i}\hbar\partial_{t}\widetilde{\mathcal{U}}(t,s)=\widetilde{ \mathcal{L}}(t)\widetilde{\mathcal{U}}(t,s)\). Hence, we may compute the time derivative for \(t,s\in\mathbb{R}\)
\[\frac{d}{dt}\widetilde{\mathcal{U}}^{*}(t,s)\mathcal{N}^{k} \widetilde{\mathcal{U}}(t,s) =\frac{1}{\mathrm{i}\hbar}\widetilde{\mathcal{U}}^{*}(t,s)[ \widetilde{\mathcal{L}}(t),\mathcal{N}^{k}]\widetilde{\mathcal{U}}(t,s)\] \[=\frac{1}{\mathrm{i}\hbar}\sum_{i=1}^{k}\widetilde{\mathcal{U}}^ {*}(t,s)\mathcal{N}^{i-1}[\widetilde{\mathcal{L}}(t),\mathcal{N}]\mathcal{N}^{k -i}\widetilde{\mathcal{U}}(t,s). \tag{4.39}\]
Taking \(\Psi\in\mathscr{F}\), we can then estimate that
\[\frac{d}{dt}\left\langle\Psi,\widetilde{\mathcal{U}}^{*}(t,s)\mathcal{N}^{k} \widetilde{\mathcal{U}}(t,s)\Psi\right\rangle_{\mathscr{F}}\leqslant\frac{1}{ \hbar}\sum_{i=1}^{k}|\langle\mathcal{N}^{i-1}\widetilde{\mathcal{U}}(t,s)\Psi,[ \widetilde{\mathcal{L}}(t),\mathcal{N}]\mathcal{N}^{k-i}\widetilde{\mathcal{U}}(t,s)\Psi\rangle_{\mathscr{F}}|. \tag{4.40}\]
Fix \(i=1,\ldots,k\) and let \(\theta=1+k-2i\in\mathbb{R}\). In view of Lemma 4.2 there exists \(C>0\) such that
\[|\langle\mathcal{N}^{i-1}\widetilde{\mathcal{U}}(t,s)\Psi,[ \widetilde{\mathcal{L}}(t),\mathcal{N}]\mathcal{N}^{k-i}\widetilde{\mathcal{U} }(t,s)\Psi\rangle_{\mathscr{F}}|\] \[\quad\leqslant Ce^{Ct}\lambda(\sqrt{NM\hbar}+M\hbar)\Big{(}\|( \mathcal{N}+\mathds{1})^{\frac{1+\theta}{2}}\mathcal{N}^{i-1}\widetilde{ \mathcal{U}}(t,s)\Psi\|_{\mathscr{F}}^{2}+\|(\mathcal{N}+\mathds{1})^{\frac{1 -\theta}{2}}\mathcal{N}^{k-i}\widetilde{\mathcal{U}}(t,s)\Psi\|_{\mathscr{F}} ^{2}\Big{)},\] \[\quad\leqslant Ce^{Ct}\lambda(\sqrt{NM\hbar}+M\hbar)\|(\mathcal{N }+\mathds{1})^{\frac{k}{2}}\widetilde{\mathcal{U}}(t,s)\Psi\|_{\mathscr{F}}^{2}\] \[\quad\leqslant Ce^{Ct}\lambda(\sqrt{NM\hbar}+M\hbar)\langle\Psi,\widetilde {\mathcal{U}}^{*}(t,s)(\mathcal{N}^{k}+\mathds{1})\widetilde{\mathcal{U}}(t, s)\Psi\rangle_{\mathscr{F}}. \tag{4.41}\]
The proof is then finished after we apply the Gronwall inequality and use the initial condition \(\widetilde{\mathcal{U}}(s,s)=\mathds{1}\).
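The closing step above is the integral form of the Gronwall inequality: a differential inequality \(g^{\prime}(t)\leqslant A(t)\,(g(t)+1)\) integrates to \(g(t)\leqslant(g(0)+1)\exp\big{(}\int_{0}^{t}A\big{)}-1\), and with \(A(t)\) of the form \(Ce^{Ct}\lambda(\sqrt{NM\hbar}+M\hbar)/\hbar\) this is exactly how the doubly exponential shape of the constant in Proposition 4.1 arises. The toy computation below (our own, with arbitrary values for the prefactors) checks this mechanism numerically: the explicit Euler curve that saturates the inequality stays below the Gronwall bound.

```python
import numpy as np

# toy differential inequality  g'(t) <= A(t) (g(t) + 1)  with  A(t) = a * exp(c t)
a, c, g0, T, steps = 0.8, 1.0, 2.0, 2.0, 20_000
dt = T / steps

g, t, ok = g0, 0.0, True
for _ in range(steps):
    g += dt * a * np.exp(c * t) * (g + 1.0)   # saturate the inequality (worst case)
    t += dt
    gronwall = (g0 + 1.0) * np.exp(a * (np.exp(c * t) - 1.0) / c) - 1.0
    ok = ok and (g <= gronwall)

print(ok)          # True: the curve never exceeds the doubly exponential bound
print(g, gronwall)
```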
### Proof of Theorem 3
Let us now give a proof of Theorem 3. Essentially, we shall compare the number estimates generated by \(\mathcal{U}(t,s)\) and \(\widetilde{\mathcal{U}}(t,s)\). The latter have already been established in Proposition 4.1. Additionally, we shall need the following _weak bounds_ on the growth of particle number with respect to the original fluctuation dynamics.
**Lemma 4.3** (Weak number estimates).: _Let \(\mathcal{U}(t,s)\) be the fluctuation dynamics, defined in (4.3). Then, the following statements hold true._
1. _For all_ \(\ell\in\mathbb{N}\) _there is a constant_ \(C>0\) _such that for all_ \(t,s\in\mathbb{R}\) _and_ \(\Psi\in\mathscr{F}\) _there holds_ \[\|(\mathcal{N}_{F}+M)^{\ell}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}}\leqslant C\| (\mathcal{N}_{F}+M)^{\ell}\Psi\|_{\mathscr{F}}\] (4.42)
2. _For all_ \(\ell\in\mathbb{N}\) _there is a constant_ \(C>0\) _such that for all_ \(t,s\in\mathbb{R}\) _and_ \(\Psi\in\mathscr{F}\) _there holds_ \[\|(\mathcal{N}_{B}+N)^{\ell}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}}\leqslant C\| (\mathcal{N}_{B}+N)^{\ell}\Psi\|_{\mathscr{F}}\] (4.43)
Proof.: We only give a proof for the fermion number operator since the proof for bosons is similar; we refer the reader to [55, Lemma 3.6] for the situation in which only bosons are present (interactions do not change the proof).
Let us then consider the particle-hole transformation \(\mathcal{R}_{t}\) in the definition of \(\mathcal{U}(t,s)\) and notice that
\[\mathcal{R}_{t}(\mathcal{N}_{F}+M)^{\ell}\mathcal{R}_{t}^{*}=[\mathcal{R}_{t}( \mathcal{N}_{F}+M)\mathcal{R}_{t}^{*}]^{\ell}=(\mathrm{d}\Gamma[v(t)]-\mathrm{ d}\Gamma[u(t)]+2M)^{\ell}. \tag{4.44}\]
The terms \(\mathrm{d}\Gamma[u(t)]\) and \(\mathrm{d}\Gamma[v(t)]\) can be estimated using Lemma 3.1. Namely, it follows that there exists \(c_{0}\) such that for all \(\Phi\in\mathscr{F}\)
\[\|(\mathrm{d}\Gamma[v(t)]-\mathrm{d}\Gamma[u(t)]+2M)\Phi\|_{\mathscr{F}} \leqslant c_{0}\|(\mathcal{N}_{F}+M)\Phi\|_{\mathscr{F}}. \tag{4.45}\]
Consequently, since \([\mathrm{d}\Gamma_{F}[J],\mathcal{N}_{F}]=0\) for \(J=u(t)\) and \(J=v(t)\), an \(\ell\)-fold application of the previous estimate yields
\[\|(\mathcal{N}_{F}+M)^{\ell}\mathcal{R}_{t}^{*}\Phi\|_{\mathscr{F}}\leqslant c _{0}^{\ell}\|(\mathcal{N}_{F}+M)^{\ell}\Phi\|_{\mathscr{F}}\,\qquad\forall t\in\mathbb{R}. \tag{4.46}\]
Here we have used the fact that \(\mathcal{R}_{t}\) is a unitary transformation on Fock space. We conclude using the fact that \((\mathcal{N}_{F}+M)\) commutes with the Schrodinger dynamics \(\exp(-i(t-s)\mathcal{H}/\hbar)\),
and the bosonic Weyl operator \(\mathcal{W}_{t}\). That is
\[\|(\mathcal{N}_{F}+M)^{\ell}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}} =\|(\mathcal{N}_{F}+M)^{\ell}\mathcal{R}_{t}^{*}\mathcal{W}_{t}^{* }e^{-i(t-s)\mathcal{H}/\hbar}\mathcal{R}_{s}\mathcal{W}_{s}\Psi\|_{\mathscr{F}}\] \[\leqslant c_{0}^{\ell}\|(\mathcal{N}_{F}+M)^{\ell}\mathcal{R}_{s} \Psi\|_{\mathscr{F}}\] \[\leqslant c_{0}^{2\ell}\|(\mathcal{N}_{F}+M)^{\ell}\Psi\|_{ \mathscr{F}}. \tag{4.47}\]
where in the last line we have used that \(\mathcal{R}_{s}^{*}=\mathcal{R}_{s}\) together with the estimate (4.46). This finishes the proof.
Proof of Theorem 3.: Since the proof is essentially the same for fermions and bosons, we only present the former in full detail, and point out the differences with respect to the latter. To this end, let us fix \(\Psi\in\mathscr{F}\) together with \(t,s\in\mathbb{R}\) and \(k,\ell\in\mathbb{N}\). We start by computing that
\[\|\mathcal{N}_{F}^{\frac{\ell}{2}}\mathcal{U}(t,s)\Psi\|_{ \mathscr{F}}^{2} =\langle\Psi,\mathcal{U}^{*}(t,s)\mathcal{N}_{F}^{\ell}\widetilde{ \mathcal{U}}(t,s)\Psi\rangle_{\mathscr{F}}+\langle\Psi,\mathcal{U}^{*}(t,s) \mathcal{N}_{F}^{\ell}\Big{(}\mathcal{U}(t,s)-\widetilde{\mathcal{U}}(t,s) \Big{)}\Psi\rangle_{\mathscr{F}}\] \[\leqslant\|\Psi\|_{\mathscr{F}}\|\mathcal{N}_{F}^{\ell}\widetilde {\mathcal{U}}(t,s)\Psi\|_{\mathscr{F}}+\|\mathcal{N}_{F}^{\ell}\mathcal{U}(t,s )\Psi\|_{\mathscr{F}}\|(\widetilde{\mathcal{U}}(t,s)-\mathcal{U}(t,s))\Psi\|_ {\mathscr{F}}. \tag{4.48}\]
We now estimate the two terms in (4.48).
For the first one, we use the fact that the truncated dynamics satisfies the estimate contained in Proposition 4.1. Namely, for a constant \(C>0\)
\[\|\mathcal{N}_{F}^{\ell}\widetilde{\mathcal{U}}(t,s)\Psi\|_{\mathscr{F}} \leqslant CK(t-s)\|(\mathcal{N}+\mathds{1})^{\ell}\Psi\|_{\mathscr{F}}. \tag{4.49}\]
For the second term in (4.48) we use the weak number estimates from Lemma 4.3. Namely, for a constant \(C=C(\ell)\)
\[\|\mathcal{N}_{F}^{\ell}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}}\leqslant CM^{ \ell}\|(\mathds{1}+\mathcal{N}_{F}/M)^{\ell}\Psi\|\,\qquad\forall t,s\in\mathbb{R}. \tag{4.50}\]
Next, we study the difference between the original and the truncated dynamics. That is, we use Duhamel's formula to find that
\[\widetilde{\mathcal{U}}(t,s)-\mathcal{U}(t,s)=-i\delta\int_{s}^{t}\mathcal{U}( t,r)\Big{(}\widetilde{\mathcal{L}}_{2,1}(r)-\mathcal{L}_{2,1}(r)\Big{)} \widetilde{\mathcal{U}}(r,s)\mathrm{d}r; \tag{4.51}\]
where we have used the fact that \(\widetilde{\mathcal{L}}(t)-\mathcal{L}(t)=\lambda\sqrt{N}(\widetilde{\mathcal{ L}}_{2,1}(t)-\mathcal{L}_{2,1}(t))\), and we have collected \(\delta=\lambda\sqrt{N}/\hbar\) as a prefactor. Next, for \(\Phi=\widetilde{\mathcal{U}}(r,s)\Psi\in\mathscr{F}\) one may estimate using
Lemma 3.1 and 3.3 that for all \(k\geqslant 0\) (here, we omit time labels for notational convenience)
\[\|(\widetilde{\mathcal{L}}_{2,1}-\mathcal{L}_{2,1})\Phi\|_{\mathscr{ F}} \leqslant\int_{\mathbb{R}^{d}}|\hat{V}(\xi)|\Big{\|}\mathrm{d}\Gamma [ue_{\xi}u]\otimes\chi(\mathcal{N}_{B}\geqslant\hbar M)\Big{(}b[e_{\xi}\varphi] +b^{*}[e_{-\xi}\varphi]\Big{)}\Phi\Big{\|}_{\mathscr{F}}\mathrm{d}\xi\] \[+\int_{\mathbb{R}^{d}}|\hat{V}(\xi)|\Big{\|}\mathrm{d}\Gamma[v^{*} e_{\xi}v]\otimes\chi(\mathcal{N}_{B}\geqslant\hbar M)\Big{(}b[e_{\xi}\varphi] +b^{*}[e_{-\xi}\varphi]\Big{)}\Phi\Big{\|}_{\mathscr{F}}\mathrm{d}\xi\] \[\leqslant 2\|\hat{V}\|_{L^{1}}\|\mathcal{N}_{F}\otimes\chi( \mathcal{N}_{B}\geqslant\hbar M)(\mathcal{N}_{B}+\mathds{1})^{\frac{1}{2}} \Phi\|_{\mathscr{F}}\] \[\leqslant\frac{2\|\hat{V}\|_{L^{1}}}{(\hbar M)^{k}}\|\mathcal{N} _{F}\otimes\chi(\mathcal{N}_{B}\geqslant\hbar M)(\mathcal{N}_{B}+\mathds{1})^ {k+\frac{1}{2}}\Phi\|_{\mathscr{F}}\] \[\leqslant\frac{C\|\hat{V}\|_{L^{1}}}{(\hbar M)^{k}}\|(\mathcal{N }+\mathds{1})^{k+\frac{3}{2}}\Phi\|_{\mathscr{F}}. \tag{4.52}\]
We put our last three estimates together to find that thanks to Proposition 4.1
\[\|\Big{(}\widetilde{\mathcal{U}}(t,s)-\mathcal{U}(t,s)\Big{)}\Psi\|_{\mathscr{ F}}\leqslant K(t-s)\frac{\lambda\sqrt{N}}{\hbar}\frac{1}{(\hbar M)^{k}}\|(1+ \mathcal{N})^{k+3/2}\Psi\|_{\mathscr{F}}. \tag{4.53}\]
Putting our estimates together, we find that there exists \(C>0\) such that
\[\|\mathcal{N}_{F}^{\frac{\ell}{2}}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}}^{2} \leqslant K(t-s)\Big{[}\|\Psi\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})\Psi\| _{\mathscr{F}}+\Theta_{k,\ell}\|\big{(}\mathds{1}+\frac{\mathcal{N}_{F}}{M} \big{)}^{\ell}\Psi\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{k+\frac{3}{2}} \Psi\|_{\mathscr{F}}\Big{]}\]
where \(\Theta_{k,\ell}\) is as in the statement of the theorem.
As for the bosons, the only modification comes from the weak number estimates obtained from Lemma 4.3. In the bosonic case, one finds that for a constant \(C=C(\ell)\)
\[\|\mathcal{N}_{B}^{\ell}\mathcal{U}(t,s)\Psi\|_{\mathscr{F}}\leqslant CN^{ \ell}\|(\mathds{1}+\mathcal{N}_{B}/N)^{\ell}\Psi\|\,\qquad\forall t,s\in\mathbb{R}. \tag{4.54}\]
This finishes the proof.
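The comparison carried out above hinges on the Duhamel identity (4.51), which is purely algebraic and can therefore be sanity-checked with small matrices. In the sketch below (our own; random time-independent Hermitian matrices stand in for the two generators, \(\hbar=1\), and SciPy is assumed available for the matrix exponential) both sides of the identity are evaluated, the integral by a simple trapezoidal rule.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
d, hbar, t, s = 6, 1.0, 0.9, 0.2

def herm(r):
    X = r.normal(size=(d, d)) + 1j * r.normal(size=(d, d))
    return (X + X.conj().T) / 2

L = herm(rng)                      # stand-in for the generator of U(t,s)
Lt = L + 0.05 * herm(rng)          # stand-in for the truncated generator

U  = lambda a, b: expm(-1j * (a - b) * L  / hbar)    # U(a,b)
Ut = lambda a, b: expm(-1j * (a - b) * Lt / hbar)    # U-tilde(a,b)

lhs = Ut(t, s) - U(t, s)

# right-hand side of (4.51):  -(i/hbar) * int_s^t U(t,r) (Lt - L) Ut(r,s) dr
rs = np.linspace(s, t, 2001)
vals = np.array([U(t, r) @ (Lt - L) @ Ut(r, s) for r in rs])
w = np.full(rs.size, rs[1] - rs[0])
w[0] = w[-1] = (rs[1] - rs[0]) / 2                   # trapezoidal weights
rhs = (-1j / hbar) * np.tensordot(w, vals, axes=1)

print(np.linalg.norm(lhs - rhs))   # small: quadrature error only
```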
## 5. Second Quantization III: Proof of Theorem 1
Let us now turn to the proof of our first main result. We shall reduce the problem of proving the estimates contained in Theorem 1 to that of proving number estimates for the fluctuation dynamics, as defined in Section 4. Then, we apply Theorem 3. The proofs we present are heavily inspired by the works [55, 9] and also [15, 16], and are adapted to the case at hand. Let us remark here that our proofs are shorter because we do not aim at obtaining the optimal convergence rates \(1/M\) for fermions and \(1/N\) for bosons; instead, we content ourselves with \(1/\sqrt{M}\) and \(1/\sqrt{N}\), respectively.
Let us first introduce some notation. Letting \(\mathcal{U}(t,s)\) be the fluctuation dynamics, we consider the following _fluctuation vectors_ in Fock space \(\mathscr{F}\)
\[\Omega_{1}(t)\equiv\mathcal{U}(t,0)\Omega\quad\text{and}\quad\Omega_{2}(t) \equiv\mathcal{U}(t,0)\Omega_{F}\otimes\mathcal{W}^{*}[\sqrt{N}\varphi_{0}] \frac{b^{*}(\varphi_{0})^{N}}{\sqrt{N!}}\Omega_{B}. \tag{5.1}\]
They will be extremely useful in the proof of Theorem 1. Heuristically, \(\Omega_{1}(t)\) describes the fluctuations of the system when the initial datum contains a bosonic coherent state. On the other hand, \(\Omega_{2}(t)\) describes the fluctuations when the initial datum contains a bosonic factorized
state. Let us collect in the following lemma some results concerning these vectors; for reference, see [16] or [19].
**Lemma 5.1** (Properties of \(\Omega_{1}\) and \(\Omega_{2}\)).: _Let us denote from now on_
\[d_{N}\equiv\frac{\sqrt{N!}}{N^{N/2}e^{-N/2}} \tag{5.2}\]
_Then, the following statements hold true_
1. \(P_{N}\mathcal{W}[\sqrt{N}\varphi_{0}]\Omega_{B}=\frac{1}{d_{N}}\frac{b^{*}( \varphi_{0})^{N}}{\sqrt{N!}}\Omega_{B}\) _where_ \(P_{N}\equiv\mathds{1}(\mathcal{N}_{B}=N)\)_._
2. \(\langle\Omega_{2}(t),\Omega_{1}(t)\rangle_{\mathscr{F}}=\langle\Omega_{2}(0),\Omega_{1}(0)\rangle_{\mathscr{F}}=\frac{1}{d_{N}}\)_._
3. _There exists a constant_ \(C>0\) _such that_ \(\|(\mathcal{N}_{F}+\mathds{1})^{-1/2}\Omega_{2}(0)\|_{\mathscr{F}}\leqslant \frac{C}{d_{N}}\)_._
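It is worth keeping in mind that \(d_{N}\) is only polynomially large: by Stirling's formula, \(d_{N}\sim(2\pi N)^{1/4}\), which is why the factors of \(d_{N}\) and \(1/d_{N}\) appearing below are harmless. A quick numerical check of this asymptotic (our own, using the log-Gamma function to avoid overflow):

```python
import math

for N in (10, 100, 1000, 10000):
    # log d_N = (1/2) log N! - (N/2) log N + N/2, cf. (5.2)
    log_dN = 0.5 * math.lgamma(N + 1) - 0.5 * N * math.log(N) + 0.5 * N
    print(N, math.exp(log_dN), (2 * math.pi * N) ** 0.25)   # ratio tends to 1
```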
Proof of Theorem 1.: We recall here that the one-particle reduced densities have been recast in terms of creation- and annihilation operators in (2.2).
Proof of (2.16): First, we establish the result for fermions. If we denote the time evolution of operators by \(a_{x}(t)=e^{it\mathcal{H}/\hbar}a_{x}e^{-it\mathcal{H}/\hbar}\) then we obtain
\[\gamma_{F}(t;x_{1},x_{2}) =\big{\langle}\Psi(t),a_{x_{2}}^{*}a_{x_{1}}\Psi(t)\big{\rangle}_ {\mathscr{F}}\] \[=\bigg{\langle}\mathcal{R}_{0}\otimes\frac{b^{*}(\varphi_{0})^{N }}{\sqrt{N!}}\Omega,a_{x_{2}}^{*}(t)a_{x_{1}}(t)\mathcal{R}_{0}\otimes\frac{b^ {*}(\varphi_{0})^{N}}{\sqrt{N!}}\Omega\bigg{\rangle}_{\mathscr{F}}\] \[=d_{N}\left\langle\mathcal{R}_{0}\otimes\frac{b^{*}(\varphi_{0} )^{N}}{\sqrt{N!}}\Omega,a_{x_{2}}^{*}(t)a_{x_{1}}(t)\mathcal{R}_{0}\otimes \mathcal{W}[\sqrt{N}\varphi_{0}]\Omega\right\rangle_{\mathscr{F}}\] \[=d_{N}\left\langle\Omega_{2}(t),\mathcal{R}_{t}^{*}a_{x_{2}}^{*} a_{x_{1}}\mathcal{R}_{t}\Omega_{1}(t)\right\rangle_{\mathscr{F}} \tag{5.3}\]
Next, we look at the conjugation relations
\[\mathcal{R}_{t}^{*}a_{x}^{*}\mathcal{R}_{t}=a^{*}(u_{t,x})+a(\overline{v_{t,x }})\quad\text{and}\quad\mathcal{R}_{t}^{*}a_{x}\mathcal{R}_{t}=a(u_{t,x})+a^{* }(\overline{v_{t,x}}) \tag{5.4}\]
and obtain, using \(\langle\bar{v}_{x},\bar{v}_{y}\rangle=\omega(x,y)\) and \(\langle\Omega_{2},\Omega_{1}\rangle=1/d_{N}\),
\[\gamma_{F}(t;x_{1},x_{2})-\omega(t;x_{1},x_{2}) =d_{N}\left\langle\Omega_{2}(t),a^{*}(u_{t,x_{2}})a(u_{t,x_{1}}) \Omega_{1}(t)\right\rangle_{\mathscr{F}}\] \[-d_{N}\left\langle\Omega_{2}(t),a^{*}(\overline{v_{t,x_{1}}})a( \overline{v_{t,x_{2}}})\Omega_{1}(t)\right\rangle_{\mathscr{F}}\] \[+d_{N}\left\langle\Omega_{2}(t),a^{*}(u_{t,x_{2}})a^{*}(\overline{v_{t,x_{1}}})\Omega_{1}(t)\right\rangle_{\mathscr{F}}\] \[+d_{N}\left\langle\Omega_{2}(t),a(\overline{v_{t,x_{2}}})a(u_{t,x_{ 1}})\Omega_{1}(t)\right\rangle_{\mathscr{F}}. \tag{5.5}\]
Next, we let \(\mathcal{O}\) be a compact operator in \(L^{2}(\mathbb{R}^{d})\) with kernel \(\mathcal{O}(x_{1},x_{2})\). The above identity now implies that
\[\operatorname{Tr}\!\mathcal{O}\big{(}\gamma_{F}(t)-\omega(t)\big{)} =d_{N}\left\langle\Omega_{2}(t),\,\mathrm{d}\Gamma_{F}[u_{t} \mathcal{O}u_{t}]\Omega_{1}(t)\right\rangle_{\mathscr{F}}\] \[-d_{N}\left\langle\Omega_{2}(t),\int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}\mathcal{O}(x_{1},x_{2})\,a^{*}(\overline{v_{t,x_{1}}})a( \overline{v_{t,x_{2}}})\,\mathrm{d}x_{1}\mathrm{d}x_{2}\,\Omega_{1}(t)\right\rangle_{\mathscr{F}}\] \[+d_{N}\left\langle\Omega_{2}(t),\int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}[v_{t}\mathcal{O}u_{t}](x_{1},x_{2})a_{x_{1}}a_{x_{2}}\, \mathrm{d}x_{1}\mathrm{d}x_{2}\,\Omega_{1}(t)\right\rangle_{\mathscr{F}}\] \[+d_{N}\left\langle\Omega_{2}(t),\int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}[v_{t}\mathcal{O}u_{t}]^{*}(x_{1},x_{2})a_{x_{1}}^{*}a_{x_{2}}^{* }\,\mathrm{d}x_{1}\mathrm{d}x_{2}\,\Omega_{1}(t)\right\rangle_{\mathscr{F}} \tag{5.6}\]
Next, we make use of Remark 4.2. In particular, we consider \(K_{1/2}\geqslant 1\) to be the integer that controls the growth of \(\mathcal{N}^{1/2}\) along the fluctuation dynamics. The Cauchy-Schwarz inequality and Lemma 3.1 then give
\[\big{|}\mathrm{Tr}\mathcal{O}\big{(}\gamma_{F}(t)-\omega(t)\big{)} \big{|} \leqslant\|u\mathcal{O}u\|_{B(L^{2})}d_{N}\|(\mathcal{N}+ \mathds{1})^{-K_{1/2}}\Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1}) ^{K_{1/2}+1}\Omega_{1}(t)\|_{\mathscr{F}}\] \[+\|v\mathcal{O}v\|_{B(L^{2})}d_{N}\|(\mathcal{N}+\mathds{1} )^{-K_{1/2}}\Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{1/2} +1}\Omega_{1}(t)\|_{\mathscr{F}}\] \[+\|v\mathcal{O}u\|_{HS}d_{N}\|(\mathcal{N}+\mathds{1})^{-K_{1/2} }\Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{1/2}+1/2}\Omega_ {1}(t)\|_{\mathscr{F}}\] \[+\|(v\mathcal{O}u)^{*}\|_{HS}d_{N}\|(\mathcal{N}+\mathds{1})^{-K_{1/2}} \Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{1/2}+1/2}\Omega_ {1}(t)\|_{\mathscr{F}}. \tag{5.7}\]
First, we control \(\Omega_{2}(t)\), and then \(\Omega_{1}(t)\). Indeed, Remark 4.2 implies that
\[\|(\mathcal{N}+\mathds{1})^{-K_{1/2}}\Omega_{2}(t)\|_{\mathscr{F}} \leqslant\|(\mathcal{N}+\mathds{1})^{-K_{1/2}}\mathcal{U}(t,0)( \mathcal{N}+\mathds{1})^{1/2}\|_{B(\mathscr{F})}\|(\mathcal{N}+\mathds{1})^{- 1/2}\Omega_{2}(0)\|_{\mathscr{F}}\] \[\leqslant K(t)/d_{N} \tag{5.8}\]
where we have used the bound \(\|(\mathcal{N}_{F}+\mathds{1})^{-1/2}\Omega_{2}(0)\|_{\mathscr{F}}\leqslant C/d_{N}\) from Lemma 5.1. On the other hand, Remark 4.1 immediately implies that
\[\|(\mathcal{N}+\mathds{1})^{K_{1/2}+1}\Omega_{1}(t)\|_{\mathscr{F}}\leqslant K(t) \tag{5.9}\]
provided one updates the constant \(K(t)\). Finally, we use \(\|u\mathcal{O}u\|_{B(L^{2})}\leqslant\|\mathcal{O}\|_{B(L^{2})}\), \(\|v\mathcal{O}v\|_{B(L^{2})}\leqslant\|\mathcal{O}\|_{B(L^{2})}\), \(\|v\mathcal{O}u\|_{HS}\leqslant\sqrt{M}\|\mathcal{O}\|_{B(L^{2})}\), and \(\|(v\mathcal{O}u)^{*}\|_{HS}\leqslant\sqrt{M}\|\mathcal{O}\|_{B(L^{2})}\), together with the fact that the space of trace-class operators is the dual of the space of compact operators, to conclude that
\[\|\gamma_{F}(t)-\omega(t)\|_{\mathrm{Tr}}\leqslant K(t)\sqrt{M}. \tag{5.10}\]
This finishes the proof of the estimate for fermions.
_Proof of (2.17)_. As for the bosons, we start by looking at the identity that is the analogue of (5.3). Namely, a similar argument shows that

\[\gamma_{B}(t;y_{1},y_{2})=d_{N}\left\langle\Omega_{2}(t),\mathcal{W}_{t}^{*}b_ {y_{2}}^{*}b_{y_{1}}\,\mathcal{W}_{t}\Omega_{1}(t)\right\rangle_{\mathscr{ F}}. \tag{5.11}\]
Next, the conjugation relations
\[\mathcal{W}_{t}^{*}b_{x}\mathcal{W}_{t}=b_{x}+\sqrt{N}\varphi_{t}(x)\qquad \text{and}\qquad\mathcal{W}_{t}^{*}b_{x}^{*}\mathcal{W}_{t}=b_{x}^{*}+\sqrt{N}\overline {\varphi_{t}(x)} \tag{5.12}\]
combined with the identity \(\langle\Omega_{2},\Omega_{1}\rangle=1/d_{N}\) now imply that
\[\gamma_{B}(t;y_{1},y_{2})-N\varphi_{t}(y_{1})\overline{\varphi_{ t}(y_{2})} =d_{N}\left\langle\Omega_{2}(t),b_{y_{2}}^{*}b_{y_{1}}\Omega_{1}(t)\right\rangle\] \[+d_{N}\sqrt{N}\left\langle\Omega_{2}(t),b_{y_{2}}^{*}\varphi_{t}(y _{1})\Omega_{1}(t)\right\rangle\] \[+d_{N}\sqrt{N}\left\langle\Omega_{2}(t),b_{y_{1}}\overline{ \varphi_{t}(y_{2})}\Omega_{1}(t)\right\rangle. \tag{5.13}\]
Next, we consider a compact operator \(\mathcal{O}\) with kernel \(O(x,y)\) and integrate over \(y_{1},\ y_{2}\) to find that
\[\Big{|}\mathrm{Tr}\Big{(}\mathcal{O}(\gamma_{B}(t)-N\,|\varphi_{ t}\rangle\langle\varphi_{t}|)\Big{)}\Big{|} \leqslant d_{N}\left|\left\langle\Omega_{2}(t),\mathrm{d}\Gamma_{B}[ \mathcal{O}]\Omega_{1}(t)\right\rangle\right|\] \[+d_{N}\sqrt{N}\left|\left\langle\Omega_{2}(t),\big{(}b^{*}[\mathcal{O} \varphi_{t}]+b[\mathcal{O}\varphi_{t}]\big{)}\Omega_{1}(t)\right\rangle\right|. \tag{5.14}\]
As we did for fermions, we consider \(K_{1/2}\) to be the integer from Remark 4.1. Thus, the Cauchy-Schwarz inequality, the pull-through formula \(\mathcal{N}b_{y}=b_{y}(\mathcal{N}+1)\) and Lemma 3.3 now imply that
\[\Big{|}\mathrm{Tr}\Big{(}\mathcal{O}(\gamma_{B}(t)-N\,|\varphi_{t} \rangle\langle\varphi_{t}|)\Big{)}\Big{|}\\ \leqslant d_{N}\|(\mathcal{N}+\mathds{1})^{-K_{\frac{1}{2}}} \Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{\frac{1}{2}}} \mathrm{d}\Gamma_{B}[\mathcal{O}]\Omega_{1}(t)\|_{\mathscr{F}}\\ +d_{N}\sqrt{N}\|(\mathcal{N}+\mathds{1})^{-K_{\frac{1}{2}}} \Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{\frac{1}{2}}}b^{ *}[\mathcal{O}\varphi_{t}]\Omega_{1}(t)\|_{\mathscr{F}}\\ +d_{N}\sqrt{N}\|(\mathcal{N}+\mathds{1})^{-K_{\frac{1}{2}}} \Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{\frac{1}{2}}}b[ \mathcal{O}\varphi_{t}]\Omega_{1}(t)\|_{\mathscr{F}}\\ \leqslant d_{N}\|\mathcal{O}\|_{B(L^{2})}\|(\mathcal{N}+\mathds{1})^{-K_{ \frac{1}{2}}}\Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+\mathds{1})^{K_{\frac {1}{2}}+1}\Omega_{1}(t)\|_{\mathscr{F}}\\ +2d_{N}\sqrt{N}\|\mathcal{O}\varphi_{t}\|_{L^{2}}\|(\mathcal{N}+ \mathds{1})^{-K_{\frac{1}{2}}}\Omega_{2}(t)\|_{\mathscr{F}}\|(\mathcal{N}+ \mathds{1})^{K_{\frac{1}{2}}+\frac{1}{2}}\Omega_{1}(t)\|_{\mathscr{F}}. \tag{5.15}\]
In order to conclude, we control the right hand side of the above equation with the estimates (5.8) and (5.9). Using that \(\|\mathcal{O}\varphi_{t}\|_{L^{2}}\leqslant\|\mathcal{O}\|_{B(L^{2})}\) (recall that \(\varphi_{t}\) is normalized in \(L^{2}\)) and recalling that the space of trace-class operators is the dual of the space of compact operators, we find
\[\|\gamma_{B}(t)-N\,|\varphi_{t}\rangle\langle\varphi_{t}|\,\|_{\mathrm{Tr}} \leqslant K(t)\sqrt{N}. \tag{5.16}\]
This finishes the proof of Theorem 1.
## 6. Quantum Optimal Transportation: Proof of Theorem 2
In this section, we address the semi-classical limit of the coupled Hartree-Hartree system first introduced in (2.5). Here, we adopt the scaling regime presented in (2.20). Thus, we study the solution \((\omega^{\hbar},\varphi^{\hbar})\) of the equation
\[\begin{cases}&i\hbar\partial_{t}\omega^{\hbar}=\Big{[}-\tfrac{1}{2}\hbar^{2} \Delta+V*\rho_{B},\omega^{\hbar}\Big{]}\\ &i\partial_{t}\varphi^{\hbar}=-\tfrac{1}{2}\Delta\varphi^{\hbar}+(V*\rho_{F}) \varphi^{\hbar}\.\end{cases} \tag{6.1}\]
where \(\rho_{B}(t,x)=|\varphi^{\hbar}(t,x)|^{2}\) and \(\rho_{F}(t,x)=M^{-1}\omega^{\hbar}(t;x,x)\) are the bosonic and fermionic position densities, with some prescribed initial data \((\omega^{\hbar}_{0},\varphi^{\hbar}_{0})\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^ {d}))\times L^{2}(\mathbb{R}^{d})\). We always assume that \(\mathrm{Tr}\omega^{\hbar}_{0}=M=\hbar^{-d}\) and \((\omega^{\hbar}_{0})^{*}=\omega^{\hbar}_{0}\geqslant 0\); these properties are of course preserved by the flow generated by (6.1).
In order to analyze the macroscopic limit, given \(\hbar>0\) we denote the _Wigner transform_ of a reduced density matrix \(\omega\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\) by
\[W^{\hbar}[\omega](x,p)\equiv\frac{\hbar^{d}}{(2\pi)^{d}}\int_{\mathbb{R}^{d}} \omega\Big{(}x+\frac{\hbar}{2}\xi,x-\frac{\hbar}{2}\xi\Big{)}e^{-i\xi\cdot p} \mathrm{d}\xi\,\qquad(x,p)\in\mathbb{R}^{2d}. \tag{6.2}\]
We understand the above map as a unitary transformation \(W^{\hbar}:L^{2}(\mathbb{R}^{d}_{x}\times\mathbb{R}^{d}_{x^{\prime}})\to L^{2}( \mathbb{R}^{d}_{x}\times\mathbb{R}^{d}_{p})\). Additionally, we record here the associated inverse map, the _Weyl quantization_. Namely, for a regular enough phase-space distribution \(f=f(x,p)\), we define \(\operatorname{Op}^{\hbar}_{w}[f]\) to be the operator on \(L^{2}(\mathbb{R}^{d})\) whose kernel is
\[\mathrm{Op}^{\hbar}_{w}[f](x,x^{\prime})\equiv\hbar^{-d}\int_{\mathbb{R}^{d}}f\Big{(} \frac{x+x^{\prime}}{2},p\Big{)}e^{ip\cdot(x-x^{\prime})/\hbar}\mathrm{d}p\,\qquad(x,x^{\prime})\in \mathbb{R}^{2d}\, \tag{6.3}\]
so that \(\operatorname{Op}^{\hbar}_{w}\) inverts the Wigner transform (6.2).
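As a quick consistency check of the normalizations (a sketch using only (6.2) and the convention \(\mathrm{Tr}\,\omega=M=\hbar^{-d}\)), the Wigner transform of a state is a normalized phase-space distribution: integrating (6.2) in \(p\) first produces \((2\pi)^{d}\delta(\xi)\), and hence

\[\int_{\mathbb{R}^{2d}}W^{\hbar}[\omega](x,p)\,\mathrm{d}x\mathrm{d}p=\hbar^{d} \int_{\mathbb{R}^{d}}\omega(x,x)\,\mathrm{d}x=\hbar^{d}\,\mathrm{Tr}\,\omega=1\.\]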
Intuitively, two phenomena happen when the limit \(\hbar\downarrow 0\) is taken. First, the _dynamics_ (i.e. the time evolution of the system) changes. Secondly, the _initial data_ of the system converge from the quantum-mechanical to the classical regime. We shall study each process separately.
### Stability of Hartree-Hartree
Here and in the rest of the article, we shall be using the following notation for a special class of semi-classical operators on \(L^{2}(\mathbb{R}^{d})\). Namely
\[\mathcal{O}_{\xi,\eta}\equiv\exp\big{(}i\xi\cdot x+i\eta\cdot p\big{)},\qquad \xi,\eta\in\mathbb{R}^{d} \tag{6.4}\]
where \(p=-i\hbar\nabla_{x}\). If \(\omega\in\mathscr{L}^{1}(L^{2}(\mathbb{R}^{d}))\) is a trace-class operator, then we may define the following additional (weaker) norm, which we shall use to measure distances:
\[|||\omega|||_{s}\equiv\sup_{\xi,\eta\in\mathbb{R}^{d}}(1+|\xi|+|\eta|)^{-s}| \mathrm{Tr}(\mathcal{O}_{\xi,\eta}\omega)|\,\qquad s\geqslant 0. \tag{6.5}\]
The motivation for the introduction of this norm comes from the following observation. If we denote by \(f=W^{\hbar}[\omega]\) its Wigner transform, then it is well-known that
\[\hat{f}(\xi,\eta)=\frac{1}{M}\mathrm{Tr}(\mathcal{O}_{\xi,\eta}\omega)\, \qquad\forall\xi,\eta\in\mathbb{R}^{d} \tag{6.6}\]
and, consequently,
\[|f|_{s}=\frac{1}{M}|||\omega|||_{s}\,\qquad\forall s\geqslant 0\ ; \tag{6.7}\]
we remind the reader that the norm \(|\cdot|_{s}\) has been defined in (2.24). It is important to note that if \(\mathrm{Tr}|\omega|\leqslant M\), then \(|f|_{s}\leqslant 1\).
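For completeness, here is the one-line verification of the last claim, using only (6.6), the unitarity of \(\mathcal{O}_{\xi,\eta}\), and \(M=\hbar^{-d}\):

\[|\hat{f}(\xi,\eta)|=\frac{1}{M}\big{|}\mathrm{Tr}(\mathcal{O}_{\xi,\eta} \omega)\big{|}\leqslant\frac{1}{M}\|\mathcal{O}_{\xi,\eta}\|_{B(L^{2})}\mathrm{ Tr}|\omega|\leqslant 1\,\]

so that \(|f|_{s}=\sup_{\xi,\eta}(1+|\xi|+|\eta|)^{-s}|\hat{f}(\xi,\eta)|\leqslant 1\) for every \(s\geqslant 0\).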
Our first result towards the proof of Theorem 2 is the stability of the Hartree dynamics with respect to the norms \((|||\cdot|||_{1},\|\cdot\|_{L^{2}})\). Let us note here that this result uses the ideas presented in [8], although the result itself is neither stated nor proved there.
**Proposition 6.1** (Stability of Hartree-Hartree).: _Let \((\omega_{1}^{\hbar},\varphi_{1}^{\hbar})\) and \((\omega_{2}^{\hbar},\varphi_{2}^{\hbar})\) be two solutions of the coupled Hartree system (2.5) with initial data \((\omega_{1,0}^{\hbar},\varphi_{1,0}^{\hbar})\) and \((\omega_{2,0}^{\hbar},\varphi_{2,0}^{\hbar})\), satisfying \(\mathrm{Tr}\omega_{i,0}^{\hbar}=M\) and \((\omega_{i,0}^{\hbar})^{*}=\omega_{i,0}^{\hbar}\geqslant 0\) for \(i=1,2\). Then, there exists \(C>0\) such that_
\[\frac{1}{M}|||\omega_{1}^{h}(t)-\omega_{2}^{h}(t)|||_{1}+\|\varphi _{1}^{h}(t) -\varphi_{2}^{h}(t)\|_{L^{2}} \tag{6.8}\] \[\leqslant C\exp(\exp C|t|)\Big{(}\frac{1}{M}|||\omega_{1,0}^{h}- \omega_{2,0}^{h}|||_{1}+\|\varphi_{1,0}^{h}-\varphi_{2,0}^{h}\|_{L^{2}}\Big{)}\]
_for all \(t\in\mathbb{R}\)._
Proof.: In what follows, we shall drop the \(\hbar\) superscript in order to ease the notation. Let us then consider the unitary evolution groups associated to the solutions \(\{(\omega_{i},\varphi_{i})\}_{i=1,2}\). That is, for \(i=1,2\), in the notation of Appendix A, we consider
\[\omega_{i}(t)=U_{F,i}^{*}(t)\omega_{i,0}U_{F,i}(t)\qquad\text{and}\qquad\varphi_ {i}(t)=U_{B,i}(t)\varphi_{i,0} \tag{6.9}\]
where \(t\in\mathbb{R}\). Then, a straightforward computation using the generators of \(U_{F,1}\) and \(U_{F,2}\) yields
\[i\hbar\frac{d}{dt}U_{F,1}^{*}(t)\Big{(}\omega_{1}(t)-\omega_{2}(t)\Big{)}U_{F, 1}(t)=U_{F,1}^{*}(t)\Big{[}V*\big{(}\rho_{B,1}(t)-\rho_{B,2}(t)\big{)},\omega_ {2}(t)\Big{]}U_{F,1}(t). \tag{6.10}\]
Here, \(\rho_{B,i}(t,x)=|\varphi_{i}(t,x)|^{2}\). Thus, take a semi-classical observable \(\mathcal{O}_{\xi,\eta}\) and compute the trace
\[\operatorname{Tr}\!\mathcal{O}_{\xi,\eta}\Big{(}\omega_{1}(t)- \omega_{2}(t)\Big{)} \tag{6.11}\] \[=\operatorname{Tr}\!\mathcal{O}_{\xi,\eta}\big{(}\omega_{1,0}- \omega_{2,0}\big{)}\] \[\qquad\frac{-i}{\hbar}\int_{0}^{t}\!\operatorname{Tr}\!\Big{(} \mathcal{O}_{\xi,\eta}U_{F,1}(t-s)\big{[}V*\big{(}\rho_{B,1}(s)-\rho_{B,2}(s) \big{)},\omega_{2}(s)\big{]}U_{F,1}^{*}(t-s)\Big{)}\!\mathrm{d}s\.\]
Clearly, \(|\operatorname{Tr}\!\mathcal{O}_{\xi,\eta}(\omega_{1,0}-\omega_{2,0})|\leqslant (1+|\xi|+|\eta|)\ |||\omega_{1}(0)-\omega_{2}(0)|||_{1}\). For the second term, we use cyclicity of the trace \(\operatorname{Tr}\!AB=\operatorname{Tr}\!BA\), combined with the following general Fourier series expansion
\[[V*\rho,\omega]=\int_{\mathbb{R}^{d}}\hat{V}(k)\hat{\rho}(k)[e_{k},\omega] \mathrm{d}k\qquad\text{where}\qquad e_{k}(x)\equiv(2\pi)^{-d/2}e^{ik\cdot x}. \tag{6.12}\]
We obtain
\[|\operatorname{Tr}\!\Big{(}\mathcal{O}_{\xi,\eta}U_{F,1}(t-s) \big{[}V*\big{(}\rho_{B,1}(s)-\rho_{B,2}(s)\big{)},\omega_{2}(s)\big{]}U_{F,1 }^{*}(t-s)\Big{)}|\] \[\leqslant\int_{\mathbb{R}^{d}}|\hat{V}(k)|\ |\hat{\rho}_{B,1}(s,k)-\hat{\rho}_{B,2}(s,k)|\ | \operatorname{Tr}\!\Big{(}\big{[}e_{k},U_{F,1}^{*}(t-s)\mathcal{O}_{\xi,\eta}U _{F,1}(t-s)\big{]}\omega_{2}(s)\Big{)}|\ dk\] \[\leqslant\hbar M(1+|\xi|+|\eta|)\exp(C|t-s|)\int_{\mathbb{R}^{d}} |k||\hat{V}(k)|\ |\hat{\rho}_{B,1}(s,k)-\hat{\rho}_{B,2}(s,k)|\ dk\] \[\leqslant\hbar M(1+|\xi|+|\eta|)\exp(C|t-s|)\||k|\hat{V}\|_{L^{1}} \|\hat{\rho}_{B,1}(s)-\hat{\rho}_{B,2}(s)\|_{L^{\infty}}. \tag{6.13}\]
In the third line, we have used the commutator estimate from Lemma 6.1, with \(U_{\rho}=U_{F,1}\). Next, we use \(\|\hat{\rho}_{B,1}-\hat{\rho}_{B,2}\|_{L^{\infty}}\leqslant\|\rho_{B,1}-\rho_ {B,2}\|_{L^{1}}\leqslant 2\|\varphi_{1}-\varphi_{2}\|_{L^{2}}\) and analyze the boson fields. A similar argument shows that
\[\|\varphi_{1}(s)-\varphi_{2}(s)\|_{L^{2}} \leqslant\|\varphi_{1}(0)-\varphi_{2}(0)\|_{L^{2}}+\int_{0}^{s} \|V*(\rho_{F,1}(r)-\rho_{F,2}(r))\varphi_{2}(r)\|_{L^{2}}dr\] \[\leqslant\|\varphi_{1}(0)-\varphi_{2}(0)\|_{L^{2}}+\||k|\hat{V}\| _{L^{1}}\int_{0}^{s}\mathrm{d}r\sup_{k\in\mathbb{R}^{d}}|k|^{-1}\big{|}\hat{ \rho}_{F,1}(r,k)-\hat{\rho}_{F,2}(r,k)\big{|}\] \[\leqslant\|\varphi_{1}(0)-\varphi_{2}(0)\|_{L^{2}}+\frac{\||k|\hat {V}\|_{L^{1}}}{M}\int_{0}^{s}|||\omega_{1}(r)-\omega_{2}(r)|||_{1}\mathrm{d}r \tag{6.14}\]
where we used \(\hat{\rho}_{F}(k)=M^{-1}\mathrm{Tr}\mathcal{O}_{k,0}\omega.\) We can now close the inequalities and apply the Gronwall inequality.
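To make the closing argument explicit, here is a sketch; the quantity \(\eta(t)\) below is introduced only for this purpose and does not appear elsewhere in the text. Setting \(\eta(t)\equiv\frac{1}{M}|||\omega_{1}(t)-\omega_{2}(t)|||_{1}+\|\varphi_{1}(t) -\varphi_{2}(t)\|_{L^{2}}\), the estimates (6.11), (6.13) and (6.14) combine into

\[\eta(t)\leqslant C\,\eta(0)+Ce^{C|t|}\int_{0}^{|t|}\eta(s)\,\mathrm{d}s\,\]

and Gronwall's inequality then yields \(\eta(t)\leqslant C\eta(0)\exp\big(C|t|e^{C|t|}\big)\leqslant C\exp(\exp C^{ \prime}|t|)\,\eta(0)\) for suitable constants \(C,C^{\prime}>0\), which is (6.8).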
#### 6.1.1. Borrowed Commutator Estimates
In this subsubsection, we state a result that we used in the proof of the above stability estimate. It concerns the propagation in time of a certain class of commutator estimates. This result was originally proved in [8, Lemma 4.2] for an interacting system of fermions in the combined semi-classical and mean-field regime. Since the proof can be easily adapted to the present case, we shall omit it.
In order to state it, we introduce the following unitary dynamics. Namely, given a time-dependent position density \(\rho:\mathbb{R}\to L^{1}(\mathbb{R}^{d})\) satisfying \(\rho(t,x)\geqslant 0\) and \(\int\rho(t,x)dx=1\) we consider
\[\begin{cases}&i\hbar\partial_{t}U_{\rho}(t,s)=\Big{(}-\frac{\hbar^{2}}{2} \Delta+V*\rho(t)\Big{)}U_{\rho}(t,s)\\ &U_{\rho}(t,t)=\mathds{1}\.\end{cases} \tag{6.15}\]
In the interacting fermion system of [8], one chooses \(\rho(t,x)=N^{-1}\omega(t;x,x)\), whereas in our case the density corresponds to that of the bosons, \(\rho(t,x)=|\varphi(t,x)|^{2}\). Since the estimates are quite robust with respect to the choice of density, we state the result for an arbitrary \(\rho\); one only needs \(\sup_{t}\|\hat{\rho}(t)\|_{L^{\infty}}<\infty\).
**Lemma 6.1**.: _Assume that \(\int_{\mathbb{R}^{d}}\left\langle\xi\right\rangle^{2}|\hat{V}(\xi)|\mathrm{d} \xi<\infty\) and let \(U_{\rho}(t,s)\) be the unitary flow defined in (6.15). Then, there exists \(C>0\) such that for all \((\xi,\eta)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\) and \(t,s\in\mathbb{R}\) there holds_
\[\sup\Biggl{\{}\frac{1}{|k|}\Bigl{|}\mathrm{Tr}\bigl{[}e^{ik\cdot x},U_{\rho}^{* }(t,s)\mathcal{O}_{\xi,\eta}U_{\rho}(t,s)\bigr{]}\omega\Bigr{|}\ :\ k\in\mathbb{R}^{d}\, \mathrm{Tr}|\omega|\leqslant 1\Biggr{\}}\leqslant\hbar(|\xi|+|\eta|)e^{C|t-s|} \tag{6.16}\]
_where \(\mathcal{O}_{\xi,\eta}\) is the semi-classical observable defined in (6.4)._
### Quantum Optimal Transportation
A substantial amount of technology has been developed over the last decade in the context of Quantum Optimal Transportation, by means of the introduction of relevant pseudo-metrics. In this paper, we apply and adapt some of these results to the study of the problem at hand.
In this context, one is given a probability measure \(f\in\mathscr{P}(\mathbb{R}^{2d})\) describing a classical state, and wishes to compare it to a quantum state \(\omega\), belonging to the space
\[\mathcal{P}(L^{2}(\mathbb{R}^{d}))\equiv\left\{\omega\in\mathscr{L}^{1}(L^{2} (\mathbb{R}^{d}))\,:\,\omega=\omega^{*}\geqslant 0,\ \mathrm{Tr}\omega=M=\hbar^{-d}\right\}\,. \tag{6.17}\]
Let us immediately note here that our normalization is different from that of most results in the literature, where one considers trace-class operators \(\mathcal{R}\) with \(\mathrm{Tr}\mathcal{R}=1\). This is of course only a matter of scaling, and the passage from one to the other is given by \(\mathcal{R}=\hbar^{d}\omega\). In particular, we choose the normalization in (6.17) because it is more natural for the problem at hand.
Let us now introduce the concept of a _coupling_ between classical and quantum states.
**Definition 1** (Coupling).: _Given \(f\in\mathscr{P}(\mathbb{R}^{2d})\) and \(\omega\in\mathcal{P}(L^{2}(\mathbb{R}^{d}))\), we say that the operator-valued function \(Q:\mathbb{R}^{2d}_{x,p}\to\mathscr{B}(L^{2}(\mathbb{R}^{d}))\) is a **coupling** for \(f\) and \(\omega\) if the following conditions are satisfied:_
1. _For almost every_ \((x,p)\in\mathbb{R}^{2d}\) _there holds_ \(Q(x,p)=Q(x,p)^{*}\geqslant 0\)_._
2. \(\int_{\mathbb{R}^{2d}}Q(x,p)\mathrm{d}x\mathrm{d}p=\omega\)_._
3. \(\mathrm{Tr}Q(x,p)=\hbar^{-d}\,f(x,p)\)_._
_The set of all couplings between \(f\) and \(\omega\) is denoted by \(\mathcal{C}(f,\omega)\)._
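As a sanity check, note that the two marginal conditions in Definition 1 are compatible with the chosen normalizations: integrating condition (3) in \((x,p)\) and taking the trace in condition (2) give the same number,

\[\int_{\mathbb{R}^{2d}}\mathrm{Tr}\,Q(x,p)\,\mathrm{d}x\mathrm{d}p=\hbar^{-d} \int_{\mathbb{R}^{2d}}f(x,p)\,\mathrm{d}x\mathrm{d}p=\hbar^{-d}=M=\mathrm{Tr} \,\omega\.\]

Moreover, \(\mathcal{C}(f,\omega)\) is never empty: the product coupling \(Q(x,p)\equiv f(x,p)\,\omega\) satisfies all three conditions.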
Throughout this section, we denote by \(\hat{x}:\mathscr{D}(\hat{x})\subset L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^ {d})\) the standard multiplication operator on \(L^{2}\), and similarly for \(\hat{p}=-i\hbar\nabla_{x}\) on \(H^{1}(\mathbb{R}^{d})\). We introduce the following cost function, taking values in the space of unbounded self-adjoint operators on \(L^{2}(\mathbb{R}^{d})\)
\[c_{\hbar}(x,p)\equiv\frac{1}{2}(x-\hat{x})^{2}+\frac{1}{2}(p-\hat{p})^{2}\, \qquad(x,p)\in\mathbb{R}^{2d}, \tag{6.18}\]
initially defined on \(\mathscr{S}(\mathbb{R}^{d})\) and then closed in \(L^{2}\). Let us recall that we denote by \(\mathscr{P}_{n}(\mathbb{R}^{2d})\) the space of measures with finite \(n\)-th moments. Similarly, we denote by \(\mathcal{P}_{n}(L^{2}(\mathbb{R}^{d}))\) the space of quantum states \(\omega\in\mathcal{P}(L^{2}(\mathbb{R}^{d}))\) such that \(\mathrm{Tr}\big{[}\sqrt{\omega}(\hat{x}^{2}+\hat{p}^{2})^{n}\sqrt{\omega} \big{]}<\infty\).
**Definition 2** (Quantum Wasserstein).: _For all \(f\in\mathscr{P}_{2}(\mathbb{R}^{2d})\) and \(\omega\in\mathcal{P}_{2}(L^{2}(\mathbb{R}^{d}))\) we define the **second quantum Wasserstein distance** as the quantity_
\[E_{\hbar}(f,\omega)\equiv\hbar^{d/2}\inf_{Q\in\mathcal{C}(f,\omega)}\Big{(} \int_{\mathbb{R}^{2d}}\mathrm{Tr}Q(x,p)c_{\hbar}(x,p)\mathrm{d}x\mathrm{d}p \Big{)}^{1/2}\in[0,\infty]. \tag{6.19}\]
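Let us also record a side computation (a sketch, not needed in the sequel) showing that \(E_{\hbar}\) is bounded below at scale \(\sqrt{\hbar}\), as one expects from the uncertainty principle. Since \([x_{j}-\hat{x}_{j},p_{j}-\hat{p}_{j}]=i\hbar\), expanding \(\big(x_{j}-\hat{x}_{j}+i(p_{j}-\hat{p}_{j})\big)^{*}\big(x_{j}-\hat{x}_{j}+i(p _{j}-\hat{p}_{j})\big)\geqslant 0\) gives \((x_{j}-\hat{x}_{j})^{2}+(p_{j}-\hat{p}_{j})^{2}\geqslant\hbar\), hence \(c_{\hbar}(x,p)\geqslant\frac{d\hbar}{2}\) as operators and, for any coupling \(Q\in\mathcal{C}(f,\omega)\),

\[\hbar^{d}\int_{\mathbb{R}^{2d}}\mathrm{Tr}\,Q(x,p)c_{\hbar}(x,p)\,\mathrm{d}x \mathrm{d}p\geqslant\hbar^{d}\,\frac{d\hbar}{2}\int_{\mathbb{R}^{2d}}\mathrm{ Tr}\,Q(x,p)\,\mathrm{d}x\mathrm{d}p=\frac{d\hbar}{2}\,\]

so that \(E_{\hbar}(f,\omega)\geqslant\sqrt{d\hbar/2}\).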
Up to scaling, the functional \(E_{\hbar}\) has been the object of several studies in recent years. In particular, it has been proven to be a natural object when comparing the dynamics of the Hartree and Vlasov equations for systems of interacting fermions. We adapt the proof of [37, Theorem 2.5] to the present case of interest.
**Theorem 4**.: _Assume \(V\in C^{1,1}(\mathbb{R}^{d},\mathbb{R})\). Let \((\omega_{\hbar}(t),\varphi_{\hbar}(t))\) solve the Hartree-Hartree equation (6.1) with initial data \((\omega_{0}^{\hbar},\varphi_{0}^{\hbar})\). Further, let \((f(t),\varphi(t))\) solve the Vlasov-Hartree equation, with initial data \((f_{0},\varphi_{0})\). Then, there exists \(C=C(V)>0\) such that for all \(t\in\mathbb{R}\) there holds_
\[E_{\hbar}\big{(}f(t),\omega_{\hbar}(t)\big{)}+\|\varphi(t)-\varphi_{\hbar}(t) \|_{L^{2}}\leqslant C\exp(Ct^{2})\Big{(}E_{\hbar}(f_{0},\omega_{0}^{\hbar})+ \|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}\Big{)}. \tag{6.20}\]
Two questions remain. The first one is: which norms are bounded above by \(E_{\hbar}(f,\omega_{\hbar})\)? The second one is: is the right hand side of (6.20) small when \(\hbar\downarrow 0\)? Neither of these questions has a trivial answer. Fortunately, both have been answered quite recently in the literature. In order to state the answers, we must introduce the following Gaussian mollifier at scale \(\hbar>0\)
\[\mathscr{G}_{\hbar}(z)\equiv\hbar^{-d/4}\mathscr{G}_{1}(\hbar^{-1/2}z)\quad \text{with}\quad\mathscr{G}_{1}(z)\equiv\pi^{-d/4}\exp(-z^{2}/2) \tag{6.21}\]
for \(z=(x,p)\in\mathbb{R}^{2d}\). In our notation and scaling, we record the relevant results in the following lemma.
**Lemma 6.2**.: _The following statements are true._
1. [(46, Corollary 1.1)] _Let_ \(d\geqslant 2\)_,_ \(\omega\in\mathcal{P}_{2}(L^{2}(\mathbb{R}^{d}))\) _and_ \(f\in\mathscr{P}_{2}(\mathbb{R}^{2d})\)_. Assume that_ \(\omega\leqslant 1\)_. Then, there exists_ \(C=C(d)\) _such that_ \[\|f-W_{\hbar}[\omega]\|_{\dot{H}^{-1}}\leqslant E_{\hbar}[f,\omega]+C\sqrt{ \hbar}\.\] (6.22)
2. [(37, Theorem 2.4)] _For all_ \(f\in\mathscr{P}_{2}(\mathbb{R}^{2d})\) _there holds_ \[E_{\hbar}[f,\operatorname{Op}_{w}^{\hbar}(f*\mathscr{G}_{\hbar})]\leqslant \sqrt{\hbar}.\] (6.23)
**Remark 6.1** (Anti-Wick Quantization).: So far, we have introduced the Weyl quantization map as the inverse of the Wigner transform. As is well-known, this quantization map does not preserve positivity. On the other hand, the anti-Wick (or Toeplitz) quantization does, and is defined as follows. Given a phase-space distribution \(\mu\in\mathscr{P}(\mathbb{R}^{2d})\), one defines on \(L^{2}(\mathbb{R}^{d})\) the bounded, self-adjoint operator
\[\operatorname{Op}_{\mathrm{aw}}^{\hbar}(\mu)\equiv\operatorname{Op}_{w}^{ \hbar}(\mu*\mathscr{G}_{\hbar})=\hbar^{-d}\int_{\mathbb{R}^{2d}}|f_{x,p}^{\hbar} \rangle\langle f_{x,p}^{\hbar}|\operatorname{d}\!\mu(x,p) \tag{6.24}\]
where \(f_{x,p}^{\hbar}(y)\equiv\hbar^{-d/4}g(\hbar^{-1/2}(x-y))\exp(ip\cdot y/\hbar)\) is a coherent state at scale \(\hbar>0\), with an \(L^{2}\)-normalized Gaussian profile \(g\). Note that the preservation of positivity follows immediately from the last formula, since it exhibits \(\operatorname{Op}_{\mathrm{aw}}^{\hbar}(\mu)\) as a convex combination of positive operators (in this case, orthogonal projections). In particular, it follows from the previous lemma that \(E_{\hbar}\big{[}f,\operatorname{Op}_{\mathrm{aw}}^{\hbar}(f)\big{]}\leqslant \hbar^{1/2}\).
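A minimal check that (6.24) is normalized consistently with (6.17): since the coherent states satisfy \(\|f^{\hbar}_{x,p}\|_{L^{2}}=1\), for any \(\mu\in\mathscr{P}(\mathbb{R}^{2d})\) we have

\[\mathrm{Tr}\,\operatorname{Op}_{\mathrm{aw}}^{\hbar}(\mu)=\hbar^{-d}\int_{ \mathbb{R}^{2d}}\|f^{\hbar}_{x,p}\|_{L^{2}}^{2}\,\mathrm{d}\mu(x,p)=\hbar^{-d} \mu(\mathbb{R}^{2d})=\hbar^{-d}=M\.\]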
Let us now prove Theorem 4. Since the proof is a simple adaptation of that of [37, Theorem 2.5], in which we replace the self-interaction term with the interaction with the boson field \(\varphi(t)\), we only provide a sketch of the proof and refer to the original reference for the details.
Proof of Theorem 4.: First, we introduce some notation we shall make use of throughout the proof. Namely, letting \((\omega^{\hbar}(t),\varphi^{\hbar}(t))\) and \((f(t),\varphi(t))\) be as in the statement of the Theorem, we let the bosonic densities be
\[\rho_{f}(t,x)\equiv|\varphi(t,x)|^{2}\quad\text{and}\quad\rho_{\omega}(t,x) \equiv|\varphi^{\hbar}(t,x)|^{2}\,\qquad\forall(t,x)\in\mathbb{R}\times\mathbb{R}^{d}. \tag{6.25}\]
The notation is made such that they enter the equations for \(f\) and \(\omega\), respectively.
_Step 1. Propagation of moments_. Let \(f_{0}\in\mathscr{P}_{2}(\mathbb{R}^{2d})\) be the initial datum of the Vlasov-Hartree system. Then, it holds true that \(f(t)\in\mathscr{P}_{2}(\mathbb{R}^{2d})\) for all \(t\in\mathbb{R}\). Indeed, let \(\Phi_{\varphi}\) be the ODE characteristics map from Theorem 6. Then, the claim follows from the formula \(f(t,z)=f_{0}\circ\Phi_{\varphi}^{-1}(t,z)\) and changing variables \(z=(x,p)\in\mathbb{R}^{2d}\) in
\[\int_{\mathbb{R}^{2d}}|z|^{2}f(t,z)\mathrm{d}z=\int_{\mathbb{R}^{2d}}|\Phi_{ \varphi}(t,z)|^{2}f_{0}(z)\mathrm{d}z\leqslant\kappa(t)^{2}\int_{\mathbb{R}^{ 2d}}|z|^{2}f_{0}(z)\mathrm{d}z<\infty. \tag{6.26}\]
Here, we have used the estimate \(|\Phi_{\varphi}(t,z)|\leqslant\kappa(t)|z|\), see Remark A.1.
_Step 2. Choice of Coupling._ Given \(Q_{0}^{\hbar}\in\mathcal{C}(f_{0},\omega_{0}^{\hbar})\) we consider \(Q_{\hbar}(t)\) to be the time-dependent coupling solving the PDE

\[\begin{cases}&\partial_{t}Q_{\hbar}+\{\frac{1}{2}p^{2}+V*\rho_{f}(t,x),Q_{\hbar} \}+\frac{i}{\hbar}\big{[}\frac{1}{2}\hat{p}^{2}+V*\rho_{\omega}(t,\hat{x}),Q_{ \hbar}\big{]}=0\,\\ &Q_{\hbar}(0)=Q_{0}^{\hbar}\.\end{cases} \tag{6.27}\]
Here, we denote by \(\{F,G\}=\nabla_{x}F\nabla_{p}G-\nabla_{p}F\nabla_{x}G\) the Poisson bracket of two classical observables, and we recall that \(\hat{p}=-i\hbar\nabla_{x}\) and \(\hat{x}\) are the standard momentum and position observables in \(L^{2}(\mathbb{R}^{d})\). One may define \(Q_{\hbar}(t)\) by means of conjugation by a unitary map and composition with a vector field; see [37] for details. In particular, it follows from such a representation and [37, Lemma 4.2] that this procedure actually defines a coupling between \(f(t)\) and \(\omega^{\hbar}(t)\). Namely, \(Q_{\hbar}(t)\in\mathcal{C}(f(t),\omega^{\hbar}(t))\) for all \(t\in\mathbb{R}\).
_Step 3. Dynamics of the coupling._ We now estimate the growth of the second order moments of the coupling \(Q_{\hbar}(t)\). Namely, we define
\[\mathcal{E}_{\hbar}(t)\equiv\hbar^{d}\int_{\mathbb{R}^{2d}}\mathrm{Tr}\big{[} c_{\hbar}(x,p)Q_{\hbar}(t;x,p)\big{]}\mathrm{d}x\mathrm{d}p\,\qquad\forall t\in\mathbb{R} \tag{6.28}\]
and compute its time derivative as follows. In view of (6.27) we find that
\[\hbar^{-d}\frac{d}{dt}\mathcal{E}_{\hbar}(t) =\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(t;x,p)\Big{\{} \frac{1}{2}p^{2}+V*\rho_{f}(t,x),c_{\hbar}(x,p)\Big{\}}\Big{]}\mathrm{d}x \mathrm{d}p\] \[+\frac{i}{\hbar}\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar} (t;x,p)\big{[}\frac{1}{2}\hat{p}^{2}+V*\rho_{\omega}(t,\hat{x}),c_{\hbar}(x,p) \big{]}\Big{]}\mathrm{d}x\mathrm{d}p \tag{6.29}\]
where we have integrated by parts and put both brackets on the cost function. Following [37] we calculate explicitly the above brackets and find that
\[\big{\{}\frac{1}{2}p^{2}+V*\rho_{f}(t,x),c_{\hbar}(x,p)\big{\}} =p\cdot(x-\hat{x})-\nabla V*\rho_{f}(t,x)(p-\hat{p}) \tag{6.30}\] \[\frac{i}{\hbar}\Big{[}\frac{1}{2}\hat{p}^{2}+V*\rho_{\omega}(t, \hat{x}),c_{\hbar}(x,p)\big{]} =-\frac{1}{2}(x-\hat{x})\hat{p}-\frac{1}{2}\hat{p}(x-\hat{x})\] (6.31) \[+\frac{1}{2}\nabla V*\rho_{\omega}(t,\hat{x})\cdot(p-\hat{p})+ \frac{1}{2}(p-\hat{p})\cdot\nabla V*\rho_{\omega}(t,\hat{x})\.\]
Thus, a straightforward manipulation, combined with the operator inequality \(AB+BA\leqslant A^{2}+B^{2}\) for self-adjoint operators, yields
\[\hbar^{-d}\frac{d}{dt}\mathcal{E}_{\hbar}(t) =\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(t;x,p)\frac{1} {2}\Big{(}(x-\hat{x})\cdot(p-\hat{p})+(p-\hat{p})\cdot(x-\hat{x})\Big{)}\Big{]} \mathrm{d}x\mathrm{d}p\] \[+\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(t;x,p)\frac{1} {2}(p-\hat{p})\cdot\Big{(}\nabla V*\rho_{\omega}(t,\hat{x})-\nabla V*\rho_{f} (t,x)\Big{)}\Big{]}\mathrm{d}x\mathrm{d}p \tag{6.32}\] \[\leqslant C\hbar^{-d}\mathcal{E}_{\hbar}(t)+\frac{1}{2}\int_{ \mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(t;x,p)|\nabla V*\rho_{\omega}(t, \hat{x})-\nabla V*\rho_{f}(t,x)|^{2}\Big{]}\mathrm{d}x\mathrm{d}p\.\]
The second term in the last displayed equation can be estimated as follows. We use the triangle inequality and the fact that \(Q_{\hbar}(t)\) is a coupling for \(f(t)\) to find that
\[\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[} Q_{\hbar}(t;x,p)|\nabla V*\rho_{\omega}(t,\hat{x})-\nabla V* \rho_{f}(t,x)|^{2}\Big{]}\mathrm{d}x\mathrm{d}p\] \[\leqslant 2\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(t;x,p)| \nabla V*\rho_{\omega}(t,\hat{x})-\nabla V*\rho_{\omega}(t,x)|^{2}\Big{]} \mathrm{d}x\mathrm{d}p\] \[+2\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(t;x,p)\Big{]} \,|\nabla V*\rho_{\omega}(t,x)-\nabla V*\rho_{f}(t,x)|^{2}\mathrm{d}x\mathrm{d}p\] \[\leqslant C|\nabla V|^{2}_{Lip}\|\rho_{\omega}(t)\|^{2}_{L^{1}} \hbar^{-d}\mathcal{E}_{\hbar}(t)+C\|\nabla V*\rho_{\omega}(t)-\nabla V*\rho_{f} (t)\|^{2}_{L^{\infty}}\|f(t)\|_{L^{1}}\] \[\leqslant C|\nabla V|^{2}_{Lip}\hbar^{-d}\mathcal{E}_{\hbar}(t)+C\| \nabla V\|^{2}_{L^{\infty}}\|\varphi(t)-\varphi^{\hbar}(t)\|^{2}_{L^{2}} \tag{6.33}\]
We can now collect the previous estimates and integrate in time to find that
\[\hbar^{-d}\mathcal{E}_{\hbar}(t)\leqslant\hbar^{-d}\mathcal{E}_{\hbar}(0)+C \int_{0}^{t}\Big{(}\hbar^{-d}\mathcal{E}_{\hbar}(s)+\|\varphi(s)-\varphi^{ \hbar}(s)\|^{2}_{L^{2}}\Big{)}\mathrm{d}s. \tag{6.34}\]
It suffices now to estimate the difference in the \(L^{2}\) norm of the boson fields.
_Step 4. Estimates on the boson densities_. The equations for the boson fields can be written in mild formulation as follows
\[\varphi(t) =e^{-it\Delta/2}\varphi_{0}-i\int_{0}^{t}e^{-i(t-s)\Delta/2}V* \widetilde{\rho}_{f}(s)\varphi(s)\mathrm{d}s \tag{6.35}\] \[\varphi^{\hbar}(t) =e^{-it\Delta/2}\varphi_{0}^{\hbar}-i\int_{0}^{t}e^{-i(t-s)\Delta/ 2}V*\widetilde{\rho}_{\omega}(s)\varphi^{\hbar}(s)\mathrm{d}s \tag{6.36}\]
where we denote the fermion densities by \(\widetilde{\rho}_{f}(t,x)=\int_{\mathbb{R}^{d}}f(t;x,p)\,\mathrm{d}p\) and \(\widetilde{\rho}_{\omega}(t,x)=M^{-1}\omega(t;x,x)\). We estimate the \(L^{2}\) norms as follows
\[\|\varphi(t)-\varphi^{\hbar}(t)\|_{L^{2}}\] \[\leqslant\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}+\int_{0}^{t }\|(V*\widetilde{\rho}_{f}(s)-V*\widetilde{\rho}_{\omega}(s))\varphi(s)\| \mathrm{d}s+\int_{0}^{t}\|V*\widetilde{\rho}_{\omega}(s)(\varphi(s)-\varphi^{ \hbar}(s))\|\mathrm{d}s\] \[\leqslant\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}+\int_{0}^{t }\|V*\big{(}\widetilde{\rho}_{f}(s)-\widetilde{\rho}_{\omega}(s)\big{)}\|_{L^{ \infty}}\mathrm{d}s+\|V\|_{L^{\infty}}\int_{0}^{t}\|\varphi(s)-\varphi^{\hbar }(s)\|_{L^{2}}\mathrm{d}s\.\]
Finally, we notice that, because \(Q_{\hbar}(t)\) is a coupling between \(f(t)\) and \(\omega^{\hbar}(t)\), Definition 1 implies that for all \(X\in\mathbb{R}^{d}\)
\[V*\widetilde{\rho}_{f}(s,X)-V*\widetilde{\rho}_{\omega}(s,X)=\hbar^{d}\int_{ \mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(s;x,p)\big{(}V(X-x)-V(X-\hat{x}) \big{)}\Big{]}\mathrm{d}x\mathrm{d}p \tag{6.37}\]
and, consequently, using the Cauchy-Schwarz inequality1 we find
Footnote 1: In the following form: \(\int_{\mathbb{R}^{2d}}\mathrm{Tr}(A(z)^{*}B(z))\mathrm{d}z\leqslant(\int_{ \mathbb{R}^{2d}}\mathrm{Tr}(A^{*}(z)A(z))\mathrm{d}z)^{1/2}(\int_{\mathbb{R}^{2d}} \mathrm{Tr}(B^{*}(z)B(z))\mathrm{d}z)^{1/2}\)
\[|V*\widetilde{\rho}_{f}(s,X)-V*\widetilde{\rho}_{\omega}(s,X)|^{2} \leqslant\hbar^{d}\int_{\mathbb{R}^{2d}}\mathrm{Tr}\Big{[}Q_{\hbar}(s;x,p )\big{|}V(X-x)-V(X-\hat{x})\big{|}^{2}\Big{]}\mathrm{d}x\mathrm{d}p\] \[\leqslant\|\nabla V\|_{L^{\infty}}^{2}\,\hbar^{d}\int_{\mathbb{R}^{2d}} \mathrm{Tr}\Big{[}Q_{\hbar}(s;x,p)|x-\hat{x}|^{2}\Big{]}\mathrm{d}x\mathrm{d}p\] \[\leqslant 2\|\nabla V\|_{L^{\infty}}^{2}\,\mathcal{E}_{\hbar}(s) \leqslant 2\|\nabla V\|_{L^{\infty}}^{2}\,\hbar^{-d}\mathcal{E}_{\hbar}(s). \tag{6.38}\]
We conclude that for some \(C=C(V)\) there holds
\[\|\varphi(t)-\varphi^{\hbar}(t)\|_{L^{2}}\leqslant\|\varphi_{0}-\varphi_{0}^{ \hbar}\|_{L^{2}}+C\int_{0}^{t}\Big{(}\hbar^{-\frac{d}{2}}\mathcal{E}_{\hbar}(s )^{\frac{1}{2}}+\|\varphi(s)-\varphi^{\hbar}(s)\|_{L^{2}}\Big{)}\mathrm{d}s. \tag{6.39}\]
In order to compare it with the inequality found in (6.34), we take the square of both sides in (6.39) and use the Cauchy-Schwarz inequality to find that for some \(C=C(V)>0\) there holds
\[\|\varphi(t)-\varphi^{\hbar}(t)\|_{L^{2}}^{2}\leqslant C\Bigg{(}\|\varphi_{0}- \varphi_{0}^{\hbar}\|_{L^{2}}^{2}+t\int_{0}^{t}\Big{(}\hbar^{-d}\mathcal{E}_{ \hbar}(s)+\|\varphi(s)-\varphi^{\hbar}(s)\|_{L^{2}}^{2}\Big{)}\mathrm{d}s \Bigg{)}. \tag{6.40}\]
_Step 5. Conclusion._ In order to conclude our argument, we put the inequalities (6.34) and (6.40) together to find that
\[\hbar^{-d}\mathcal{E}_{\hbar}(t)+\|\varphi(t)-\varphi^{\hbar}(t) \|_{L^{2}}^{2} \leqslant C\Big{(}\hbar^{-d}\mathcal{E}_{\hbar}(0)+\|\varphi_{0}- \varphi_{0}^{\hbar}\|_{L^{2}}^{2}\Big{)}\] \[+C(1+t)\int_{0}^{t}\Big{(}\hbar^{-d}\mathcal{E}_{\hbar}(s)+\| \varphi(s)-\varphi^{\hbar}(s)\|_{L^{2}}^{2}\Big{)}\mathrm{d}s. \tag{6.41}\]
Thus, we may apply the Gronwall inequality to find that there exists \(C>0\) such that
\[\hbar^{-d}\mathcal{E}_{\hbar}(t)+\|\varphi(t)-\varphi^{\hbar}(t)\|_{L^{2}}^{2} \leqslant C\exp(Ct^{2})\Big{(}\hbar^{-d}\mathcal{E}_{\hbar}(0)+\|\varphi_{0}- \varphi_{0}^{\hbar}\|_{L^{2}}^{2}\Big{)}. \tag{6.42}\]
In order to conclude, let us recall that \(E_{\hbar}(f(t),\omega_{\hbar}(t))^{2}\leqslant\hbar^{d}\mathcal{E}_{\hbar}(t).\) Additionally, we recall that \(\mathcal{E}_{\hbar}(0)\) is defined in terms of the initial datum \(Q_{\hbar}(0)=Q_{0}^{\hbar}\in\mathcal{C}(f_{0},\omega_{0}^{\hbar})\), corresponding to an arbitrary coupling. Therefore, we minimize the right hand side over all couplings, and take the square root, to finally find that
\[E_{\hbar}(f(t),\omega_{\hbar}(t))+\|\varphi(t)-\varphi^{\hbar}(t)\|_{L^{2}} \leqslant C\exp(Ct^{2})\Big{(}E_{\hbar}(f_{0},\omega_{0}^{\hbar})+\|\varphi_ {0}-\varphi_{0}^{\hbar}\|_{L^{2}}\Big{)}. \tag{6.43}\]
This finishes the proof.
### Proof of Theorem 2
In this subsection, we combine the results previously established in Proposition 6.1 and Theorem 4.
Proof of Theorem 2.: Let us consider \((\omega_{0}^{\hbar},\varphi_{0}^{\hbar})\), \(f_{0}^{\hbar}\equiv W^{\hbar}[\omega_{0}^{\hbar}]\), and \((f_{0},\varphi_{0})\) as in the statement of Theorem 2. The proof is divided into two steps. In the first step, we consider the
evolution given by the Hartree-Hartree dynamics, and use Proposition 6.1 to change the initial data from \(\omega_{0}^{\hbar}=\operatorname{Op}_{w}^{\hbar}[f_{0}^{\hbar}]\) to the intermediate initial data given by
\[\widetilde{\omega}_{0}^{\hbar}\equiv\operatorname{Op}_{w}^{\hbar}[f_{0}* \mathscr{G}_{\hbar}]. \tag{6.44}\]
Here, \(\mathscr{G}_{\hbar}(z)=\hbar^{-d/4}\mathscr{G}_{1}(\hbar^{-1/2}z)\) is the Gaussian mollifier at scale \(\hbar\) introduced in (6.21). In the second step, we use Theorem 4 and Lemma 6.2 to go from the Hartree-Hartree dynamics to the Vlasov-Hartree dynamics. Let us note that these steps involve different metrics when measuring the distance between the fermion densities. In order to conclude, we put these distances together by restricting our collection of test functions.
_Step 1._ Let \((\omega^{\hbar}(t),\varphi^{\hbar}(t))\) solve the Hartree-Hartree dynamics with initial data \((\omega_{0}^{\hbar},\varphi_{0}^{\hbar})\), and let \((\widetilde{\omega}^{\hbar}(t),\widehat{\varphi}^{\hbar}(t))\) solve the Hartree-Hartree dynamics with initial data \((\widetilde{\omega}_{0}^{\hbar},\varphi_{0}^{\hbar})\), where the fermion component has been defined in (6.44). We consider its Wigner transform
\[\widehat{f}^{\hbar}(t)\equiv W^{\hbar}[\widetilde{\omega}^{\hbar}(t)]\,\qquad \forall t\in\mathbb{R}\, \tag{6.45}\]
which we shall refer to as the _intermediate dynamics_. A direct application of the stability estimate contained in Proposition 6.1, together with the isometric property (6.7) yields
\[|f^{\hbar}(t)-\widehat{f}^{\hbar}(t)|_{1}+\|\varphi^{\hbar}(t)-\widehat{ \varphi}^{\hbar}(t)\|_{L^{2}}\leqslant C\exp(C\exp C|t|)|f_{0}^{\hbar}-f_{0}* \mathscr{G}_{\hbar}|_{1} \tag{6.46}\]
for a constant \(C>0\), and all \(t\in\mathbb{R}\). It suffices to estimate the right hand side of Eq. (6.46). In particular, the triangle inequality gives
\[|f_{0}^{\hbar}-f_{0}*\mathscr{G}_{\hbar}|_{1}\leqslant|f_{0}^{\hbar}-f_{0}|_{1} +|f_{0}-f_{0}*\mathscr{G}_{\hbar}|_{1}. \tag{6.47}\]
The first term on the right hand side of Eq. (6.47) is already contained in the estimate of Theorem 2, so it suffices to estimate the second term. Indeed, we find that for \(\zeta\in\mathbb{R}^{2d}\)
\[|\hat{f}_{0}(\zeta)-\widehat{f_{0}*\mathscr{G}_{h}}(\zeta)|=|1-\widehat{ \mathscr{G}_{1}}(\hbar\zeta)|\,|\hat{f}_{0}(\zeta)|\leqslant\operatorname{ Lip}(\widehat{\mathscr{G}_{1}})\hbar|\zeta|. \tag{6.48}\]
In the last line we have used the fact that \(\|\hat{f}_{0}\|_{L^{\infty}}\leqslant\|f_{0}\|_{L^{1}}\leqslant 1\), and \(\widehat{\mathscr{G}_{1}}(0)=1\). Upon taking the supremum over \(\zeta\in\mathbb{R}^{2d}\) one finds that \(|f_{0}-f_{0}*\mathscr{G}_{\hbar}|_{1}\leqslant C\hbar\). Putting everything together, we find
\[|f^{\hbar}(t)-\widehat{f}^{\hbar}(t)|_{1}+\|\varphi^{\hbar}(t)-\widehat{ \varphi}^{\hbar}(t)\|_{L^{2}}\leqslant C\exp(C\exp(C|t|))\Big{(}|f_{0}^{\hbar} -f_{0}|_{1}+\hbar\Big{)} \tag{6.49}\]
for a constant \(C>0\) and all \(t\in\mathbb{R}\).
_Step 2._ Let \((f(t),\varphi(t))\) be the solution of the Vlasov-Hartree system with initial data \((f_{0},\varphi_{0})\). Similarly, let \((\widetilde{\omega}^{\hbar}(t),\widehat{\varphi}^{\hbar}(t))\) be the solution of the Hartree-Hartree system with initial data \((\widetilde{\omega}_{0}^{\hbar},\varphi_{0}^{\hbar})\), as defined in Eq. (6.44). Then, Theorem 4 immediately implies that
\[E_{\hbar}\big{(}f(t),\widetilde{\omega}^{\hbar}(t)\big{)}+\|\varphi(t)-\widehat{ \varphi}^{\hbar}(t)\|_{L^{2}}\leqslant C\exp(Ct^{2})\Big{(}E_{\hbar}(f_{0}, \widetilde{\omega}_{0}^{\hbar})+\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}} \Big{)}. \tag{6.50}\]
Consequently, letting \(\widehat{f}^{\hbar}(t)=W^{\hbar}[\widetilde{\omega}^{\hbar}(t)]\) be the Wigner transform of the intermediate dynamics, we combine Eq. (6.50) with the two estimates found in Lemma 6.2 to conclude that
\[\|f(t)-\widehat{f}^{\hbar}(t)\|_{\dot{H}^{-1}}+\|\varphi(t)-\widehat{\varphi}^{ \hbar}(t)\|_{L^{2}}\leqslant C\exp(Ct^{2})\Big{(}\hbar^{1/2}+\|\varphi_{0}- \varphi_{0}^{\hbar}\|_{L^{2}}\Big{)}. \tag{6.51}\]
_Conclusion._ First, we estimate the density of the fermions. Namely, let \(f^{\hbar}(t)\) and \(f(t)\) be as in the statement of Theorem 2, and let \(\widehat{f}^{\hbar}(t)\) be the intermediate dynamics we have previously introduced. Further, let us consider a test function \(h:\mathbb{R}^{2d}\to\mathbb{C}\) such that \(\|\left\langle\zeta\right\rangle\hat{h}\|_{L^{1}}\) and \(\||\zeta|\hat{h}\|_{L^{2}}\) are both finite. Then, the triangle inequality combined with Eqs. (6.49) and (6.51) implies that
\[\left|\left\langle h,(f(t)-f^{\hbar}(t))\right\rangle\right| \leqslant|\langle h,(f^{\hbar}(t)-\widehat{f}^{\hbar}(t))\rangle| +|\left\langle h,(f(t)-\widehat{f}^{\hbar}(t))\right\rangle|\] \[\leqslant\|\left\langle\zeta\right\rangle\hat{h}\|_{L^{1}}|f^{ \hbar}(t)-\widehat{f}^{\hbar}(t)|_{1}+\||\zeta|\hat{h}\|_{L^{2}}\|f(t)- \widehat{f}^{\hbar}(t)\|_{\dot{H}^{-1}} \tag{6.52}\] \[\leqslant C_{2}(t)\|\left\langle\zeta\right\rangle\hat{h}\|_{L^ {1}}\Big{(}|f_{0}^{\hbar}-f_{0}|_{1}+\hbar\Big{)}+C_{1}(t)\||\zeta|\hat{h}\|_ {L^{2}}\Big{(}\hbar^{1/2}+\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}\Big{)}\]
where \(C_{1}(t)\equiv C\exp(Ct^{2})\) and \(C_{2}(t)\equiv C\exp(C\exp C|t|)\). Similarly, for the boson fields we find
\[\|\varphi(t)-\varphi^{\hbar}(t)\|_{L^{2}} \leqslant\|\varphi(t)-\widehat{\varphi}^{\hbar}(t)\|_{L^{2}}+\| \varphi^{\hbar}(t)-\widehat{\varphi}^{\hbar}(t)\|_{L^{2}}\] \[\leqslant C_{2}(t)\Big{(}|f_{0}^{\hbar}-f_{0}|_{1}+\hbar\Big{)}+C _{1}(t)\Big{(}\hbar^{1/2}+\|\varphi_{0}-\varphi_{0}^{\hbar}\|_{L^{2}}\Big{)}\, \tag{6.53}\]
where \(C_{1}(t)\) and \(C_{2}(t)\) are as above. This finishes the proof of the theorem.
## Appendix A Well-posedness of the PDEs
In this section, we state basic well-posedness results for the Hartree-Hartree and Vlasov-Hartree equations that we have introduced in Section 1. For notational simplicity, we denote by
\[\mathfrak{h}\equiv L^{2}(\mathbb{R}^{d})\] (A.1)
the one-particle Hilbert space.
### The Hartree-Hartree Equation
In what follows, we consider the Hartree-Hartree equation that couples the fermionic reduced density matrix and the bosonic field. For notational simplicity, we assume the microscopic scaling regime; of course, every result in this section is independent of the scaling regime under consideration. That is, we consider the equation
\[\begin{cases}&i\partial_{t}\omega=[-\Delta+(V*\rho_{B}),\omega]\\ &i\partial_{t}\varphi=-\Delta\varphi+(V*\rho_{F})\varphi\,\\ &(\omega(0),\varphi(0))=(\omega_{0},\varphi_{0})\end{cases}\] (A.2)
for some initial datum \((\omega_{0},\varphi_{0})\in\mathscr{L}^{1}(\mathfrak{h})\times\mathfrak{h}\). Here we will be employing the notation for the bosonic and fermionic position densities, for \((t,x)\in\mathbb{R}\times\mathbb{R}^{d}:\)
\[\rho_{B}(t,x)\equiv|\varphi(t,x)|^{2}\qquad\text{and}\qquad\rho_{F}(t,x)\equiv M ^{-1}\omega(t;x,x)\.\] (A.3)
Here, we only consider bounded potentials. The analysis of mean-field equations with such interactions is classical. Hence, we state the main results in the next theorem and omit the proofs. For instance, we refer the reader to [12], whose proof can be adapted to the problem at hand.
**Theorem 5**.: _Let \((\omega_{0},\varphi_{0})\in\mathscr{L}^{1}(\mathfrak{h})\times\mathfrak{h}\) with \(\omega_{0}^{*}=\omega_{0}\), and assume that the interaction potential is bounded, \(V\in L^{\infty}(\mathbb{R}^{d},\mathbb{R})\). Then, the following statements hold true_
1. (Global well-posedness) _There exists a unique global solution_ \((\omega,\varphi)\in C(\mathbb{R},\mathscr{L}^{1}(\mathfrak{h})\times\mathfrak{h})\) _to the Hartree-Hartree equation (_A.2_) in mild form:_ \[\omega(t) =e^{-it\Delta}\omega_{0}e^{+it\Delta}-i\int_{0}^{t}e^{-i(t-s) \Delta}\big{[}V*\rho_{B}(s),\omega(s)\big{]}e^{i(t-s)\Delta}\mathrm{d}s\] (A.4) \[\varphi(t) =e^{-it\Delta}\varphi_{0}-i\int_{0}^{t}e^{-i(t-s)\Delta}\big{(}V* \rho_{F}(s)\big{)}\varphi(s)\mathrm{d}s\.\] (A.5) _Furthermore, there is continuity with respect to the initial data._
2. (Unitary evolution) _Let_ \((\omega,\varphi)\) _be the mild solution of (_A.4_), and consider the time-dependent, mean-field Hamiltonians on_ \(H^{2}(\mathbb{R}^{d})\)__ \[h_{F}(t)=-\Delta+V*\rho_{B}(t)\qquad\text{and}\qquad h_{B}(t)=-\Delta+V*\rho_ {F}(t)\.\] (A.6) _Further, consider the two-parameter unitary evolution groups on_ \(\mathfrak{h}\) _that the mean-field Hamiltonians generate_ \[\begin{cases}&i\partial_{t}U_{F}(t,s)=h_{F}(t)U_{F}(t,s)\\ &U_{F}(t,t)=\mathds{1}\end{cases}\quad\text{and}\quad\begin{cases}&i\partial_{ t}U_{B}(t,s)=h_{B}(t)U_{B}(t,s)\\ &U_{B}(t,t)=\mathds{1}\end{cases}\.\] (A.7) _Then, for all_ \(t\in\mathbb{R}\)__ \[\omega(t)=U_{F}^{*}(t,0)\omega_{0}U_{F}(t,0)\qquad\text{and}\qquad\varphi(t)= U_{B}(t,0)\varphi_{0}\.\] (A.8) _In particular,_ \(\|\omega(t)\|_{\mathrm{Tr}}=\|\omega_{0}\|_{\mathrm{Tr}}\)_,_ \(\omega(t)^{*}=\omega(t)\)_, and_ \(\|\varphi(t)\|_{L^{2}}=\|\varphi_{0}\|_{L^{2}}\) _for all_ \(t\in\mathbb{R}\)_. Additionally, if_ \(\omega_{0}\geqslant 0\)_, then_ \(\omega(t)\geqslant 0\) _and, similarly, if_ \(\omega_{0}^{2}=\omega_{0}\)_, then_ \(\omega(t)^{2}=\omega(t)\)_._
3. (Strong solutions) _Consider the spectral decomposition_ \(\omega_{0}=\sum_{k=0}^{\infty}\lambda_{k}\left|\phi_{k}\right\rangle\!\left\langle \phi_{k}\right|,\) _and assume that_ \(\sum_{k=0}^{\infty}\left|\lambda_{k}\right|\!\left\|\phi_{k}\right\|_{H^{2}}^{2}\) _and_ \(\left\|\varphi_{0}\right\|_{H^{2}}\) _are finite. Then, the solution map is continuously differentiable_ \((\omega,\varphi)\in C^{1}(\mathbb{R};\mathscr{L}^{1}(\mathfrak{h})\times \mathfrak{h})\)_, and (_A.2_) holds in the strong sense._
### The Vlasov-Hartree Equation

In this section, we analyze the coupled Vlasov-Hartree equation that we have introduced in Section 1. Namely, the coupled PDEs
\[\begin{cases}&(\partial_{t}+p\cdot\nabla_{x}+F_{\varphi}(t)\cdot\nabla_{p})f= 0\\ &i\partial_{t}\varphi=-\frac{1}{2}\Delta\varphi+V_{f}(t)\varphi\\ &(f,\varphi)(0)=(f_{0},\varphi_{0})\in L^{1}_{+}(\mathbb{R}^{2d})\times L^{2} (\mathbb{R}^{d})\end{cases}\] (A.9)
where \((f_{0},\varphi_{0})\in L^{1}_{+}(\mathbb{R}^{2d})\times L^{2}(\mathbb{R}^{d})\) is some initial datum. Here and in the sequel, we shall be using the following notation for the boson-driven force field, and fermion-field potential energy, for \((t,x)\in\mathbb{R}\times\mathbb{R}^{d}\):
\[F_{\varphi}(t,x)\equiv-\int_{\mathbb{R}^{d}}\nabla V(x-y)|\varphi(t,y)|^{2} \mathrm{d}y\qquad\text{and}\qquad V_{f}(t,x)\equiv\int_{\mathbb{R}^{2d}}V(x-y )f(t,y,p)\mathrm{d}p\mathrm{d}y\] (A.10)
The notion of solution we use is the following. Let \(I=(a,b)\subset\mathbb{R}\) be an open interval containing \(0\). We say that a bounded measurable map \((f,\varphi):[a,b]\to L^{1}(\mathbb{R}^{2d})\times L^{2}(\mathbb{R}^{d})\) is
a _weak-mild_ solution on \(I\) of the Vlasov-Hartree equation (A.9) with initial data \((f_{0},\varphi_{0})\), if the following three conditions are satisfied:
* For all \(h\in\mathscr{S}(\mathbb{R}^{2d})\) the map \(t\mapsto\langle f(t),h\rangle\) is differentiable on \(I\)
* For all \(h\in\mathscr{S}(\mathbb{R}^{2d})\) and \(t\in I\) it holds true that \[\frac{d}{dt}\,\langle f(t),h\rangle=\langle f(t),(p\cdot\nabla_{x}+F_{\varphi} (t)\cdot\nabla_{p})h\rangle\] (A.11) \[\varphi(t)=e^{-it\Delta/2}\varphi_{0}-i\int_{0}^{t}e^{-i(t-s) \Delta/2}V_{f}(s)\varphi(s)\mathrm{d}s\] (A.12)
* \((f,\varphi)(0)=(f_{0},\varphi_{0})\).
We say that a solution is local-in-time if \(I\neq\mathbb{R}\), and global-in-time if \(I=\mathbb{R}\).
As for the Hartree-Hartree equation, since we consider here only regular potentials, we do not present a proof of the following well-posedness theorem. Rather, we refer the reader to [8, Appendix A] for a similar result whose proof can be adapted to our problem.
**Theorem 6**.: _Let \((f_{0},\varphi_{0})\in L^{1}_{+}(\mathbb{R}^{2d})\times L^{2}(\mathbb{R}^{d})\) and assume that \(\nabla V\in\mathrm{Lip}(\mathbb{R}^{d};\mathbb{R}^{d})\). Then, the following statements hold true._
1. _(Global well-posedness) _There exists a unique global weak-mild solution_ \((f,\varphi)\in L^{\infty}(\mathbb{R};L^{1}_{+}(\mathbb{R}^{2d})\times L^{2}( \mathbb{R}^{d}))\) _to the Vlasov-Hartree equation (_A.9_). Furthermore, there is continuity with respect to the initial data._
2. (Characteristics/Unitary Representation) _Let_ \((f,\varphi)\) _be the global weak-mild solution to the Vlasov-Hartree equation. We denote by_ \(\Phi_{\varphi}(t):\mathbb{R}^{2d}\to\mathbb{R}^{2d}\) _the volume-preserving diffeomorphism corresponding to the solution map of the ODE_ \[\left\{\begin{aligned} & d/dt\,x(t)=p(t)\\ & d/dt\,p(t)=F_{\varphi}(t,x(t))\end{aligned}\right.\;;\] (A.13) _and we denote by_ \((U_{f}(t,s))_{t,s\in\mathbb{R}}\) _the two-parameter family of unitary transformations defined through_ \[\left\{\begin{aligned} & i\partial_{t}U_{f}(t,s)=\big{(}-(1/2) \Delta+V_{f}(t)\big{)}U_{f}(t,s)\\ & U_{f}(t,t)=\mathds{1}\end{aligned}\right.\;.\] (A.14)
_Then, for all \(t\in\mathbb{R}\) there holds_
\[f(t)=f_{0}\big{(}\Phi_{\varphi}^{-1}(t)\big{)}\] (A.15) \[\varphi(t)=U_{f}(t,0)\varphi_{0}\;.\] (A.16)
_In particular, \(\|f(t)\|_{L^{1}}=\|f_{0}\|_{L^{1}}\), \(f(t)\geqslant 0\) and \(\|\varphi(t)\|_{L^{2}}=\|\varphi_{0}\|_{L^{2}},\) for all \(t\in\mathbb{R}\)._
3. (Strong solutions) _Assume additionally that_ \(\|(1+|x|+|p|)\,\langle\nabla_{x,p}\rangle\,f_{0}\|_{L^{1}}\) _and_ \(\|\varphi_{0}\|_{H^{2}}\) _are finite. Then, the solution map is differentiable_ \((f,\varphi)\in C^{1}(\mathbb{R};L^{1}_{+}(\mathbb{R}^{2d})\times L^{2}( \mathbb{R}^{d}))\) _and the Vlasov-Hartree equation (_A.9_) holds in the strong sense._
**Remark A.1**.: Let \((f,\varphi)\) be the weak-mild solution of the Vlasov-Hartree equation extracted from Theorem 6, and let \(\Phi_{\varphi}(t)\) be the solution map of the associated ODE. Then, a Gronwall argument shows that there exists a map \(\kappa:\mathbb{R}\to(0,\infty)\) such that the following bound is satisfied
\[|\Phi_{\varphi}(t,z)|\leqslant\kappa(t)|z|\,\qquad\forall t\in\mathbb{R},\ \forall z \in\mathbb{R}^{2d}\.\] (A.17)
## Appendix B Calculation of the Infinitesimal Generator
In this section, we give more details of the calculation of the infinitesimal generator of the fluctuation dynamics \(\mathcal{U}(t,s)\), introduced in (4.3). This is the time-dependent self-adjoint operator \(\mathcal{L}(t)\) on \(\mathscr{F}\) defined through the equations
\[i\hbar\partial_{t}\mathcal{U}(t,s)=\mathcal{L}(t)\mathcal{U}(t,s)\,\qquad \mathcal{U}(t,t)=\mathds{1}\] (B.1)
where \(t,s\in\mathbb{R}\). In what follows, we let \((\omega,\varphi)\) be a strong solution of the Hartree-Hartree equation (2.5), so that the unitary maps \(\mathcal{W}_{t}\) and \(\mathcal{R}_{t}\) defined in (4.1) are differentiable with respect to \(t\in\mathbb{R}\). Note that the final result of this section is contained in Proposition B.1 and requires no \(H^{2}\) regularity of the solution \((\omega,\varphi)\). Thus, an approximation argument shows that the result also holds for mild solutions; we leave the details to the reader.
Our purpose here is to give an explicit representation of \(\mathcal{L}(t)\) in terms of creation and annihilation operators. As a first step in our calculation, we see that the unitarity of the maps easily implies that \(\mathcal{L}(t)\) is the sum of three contributions. Namely, for all \(t\in\mathbb{R}\) there holds
\[\mathcal{L}(t)=\mathrm{i}\hbar\partial_{t}\mathcal{R}_{t}^{*}\mathcal{R}_{t} \otimes\mathds{1}+\mathds{1}\otimes\mathrm{i}\hbar\partial_{t}\mathcal{W}_{t} ^{*}\mathcal{W}_{t}+(\mathcal{R}_{t}\otimes\mathcal{W}_{t})^{*}\mathcal{H}( \mathcal{R}_{t}\otimes\mathcal{W}_{t})\.\] (B.2)
We now proceed to calculate each term separately.
### Calculation of \(\partial_{t}\mathcal{R}_{t}^{*}\mathcal{R}_{t}\)
Let us first recall some facts and notations that we have introduced in Section 3. Namely, denoting by \(\omega_{t}\equiv\omega(t)=\sum_{i=1}^{M}|\phi_{i}(t)\rangle\langle\phi_{i}(t)|\) the fermion component of the solution of the Hartree-Hartree equation (2.5), we let \(u_{t}\equiv u(t)\) and \(v_{t}\equiv v(t)\) be the bounded operators on \(L^{2}(\mathbb{R}^{d})\) defined as \(u_{t}=1-\omega_{t}\) and \(v_{t}=\sum_{i=1}^{M}|\overline{\phi_{i}(t)}\rangle\langle\phi_{i}(t)|\,.\) The kernels of these operators define the distributions \(u_{t,x}(y)\equiv u_{t}(y,x)\) and \(v_{t,x}(y)\equiv v_{t}(y,x)\), for all \(t\in\mathbb{R}\) and \(x,y\in\mathbb{R}^{d}\). Finally, let us recall that we have introduced in (4.13) the operator \(h_{F}(t)\) as the one-particle Hamiltonian driving the fermion dynamics. In particular, one may verify that for all \(t\in\mathbb{R}\) there holds
\[\mathrm{i}\hbar\partial_{t}u_{t}=[h_{F}(t),u_{t}]\qquad\text{and}\qquad \mathrm{i}\hbar\partial_{t}\overline{v_{t}}=h_{F}(t)\overline{v_{t}}+\overline {v_{t}h_{F}(t)}\.\] (B.3)
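The first identity in (B.3) is immediate: since \(u_{t}=1-\omega_{t}\) and \(\mathrm{i}\hbar\partial_{t}\omega_{t}=[h_{F}(t),\omega_{t}]\) along the Hartree-Hartree flow,

\[\mathrm{i}\hbar\partial_{t}u_{t}=-\mathrm{i}\hbar\partial_{t}\omega_{t}=-[h_{F} (t),\omega_{t}]=[h_{F}(t),\mathds{1}-\omega_{t}]=[h_{F}(t),u_{t}]\,\]

and the second one follows from an analogous computation on the orbitals defining \(v_{t}\).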
In order to calculate the first term in the expansion (B.2), we start with the following auxiliary lemma. The proof is a simplification of the argument contained in [7, Proposition 3.1] for mixed states. Note that a similar calculation has been carried out in [6] for pure states, which unfortunately is not precise enough for our purposes.
**Lemma B.1**.: _Assume that for all \(j=1,\ldots,M\), the orbitals are differentiable in \(L^{2}\), \(t\mapsto\phi_{j}(t)\). Then, we find that for all \(t\in\mathbb{R}\) there holds_
\[(\mathrm{i}\hbar\partial_{t}\mathcal{R}_{t}^{*})\,\mathcal{R}_{t}=\mathcal{S}_{F}( t)\mathds{1}+\int_{\mathbb{R}^{2d}}\mathcal{C}(t;x,y)a_{x}^{*}a_{y}\mathrm{d}x \mathrm{d}y+\frac{1}{2}\Bigg{(}\int_{\mathbb{R}^{2d}}\mathcal{D}(t;x,y)a_{x}^ {*}a_{y}^{*}\,\mathrm{d}x\mathrm{d}y+h.c\Bigg{)}\.\] (B.4)
_Here, \(\mathcal{C}(t)\) and \(\mathcal{D}(t)\) are the operators on \(L^{2}(\mathbb{R}^{d})\) given by_
\[\mathcal{C}(t)=(i\hbar\partial_{t}u_{t})u_{t}+(i\hbar\partial_{t}\overline{v _{t}})v_{t}\qquad\text{and}\qquad\mathcal{D}(t)=(i\hbar\partial_{t}u_{t}) \overline{v_{t}}+(i\hbar\partial_{t}\overline{v_{t}})\overline{u_{t}}\,\] (B.5)
_and \(\mathcal{S}_{F}(t)\in\mathbb{R}\) is the scalar term._
\[\mathcal{S}_{F}(t)=-\sum_{j=1}^{M}\left\langle\phi_{j}(t),i\hbar\partial_{t} \phi_{j}(t)\right\rangle_{L^{2}}\.\] (B.6)
Sketch of proof.: Let us drop the time labels in order to simplify our notation: \(\mathcal{R}_{t}=\mathcal{R}\), \(u_{t}=u\) and \(v_{t}=v\). We start with the following observation: letting \(f\in L^{2}(\mathbb{R}^{d})\) be a test function, we obtain the conjugation relations
\[\mathcal{R}^{*}a^{*}(f)\mathcal{R}=a^{*}(uf)+a(\overline{vf})\.\] (B.7)
and thus, taking the time derivative on both sides and using \((d/dt\mathcal{R})^{*}\mathcal{R}=-\mathcal{R}^{*}(d/dt\mathcal{R})\) we obtain
\[[(d/dt\,\mathcal{R}^{*})\mathcal{R},\mathcal{R}^{*}a^{*}(f)\mathcal{R}]=a^{*} \Big{(}\frac{du}{dt}f\Big{)}+a\Big{(}\frac{d\overline{v}}{dt}\overline{f} \Big{)}\.\] (B.8)
We plug the conjugation relations back into the previous equation and multiply both sides by \(i\) to obtain
\[[(i\,d/dt\,\mathcal{R}^{*})\mathcal{R},a^{*}(uf)+a(\overline{vf})]=a^{*}\Big{(} i\frac{du}{dt}f\Big{)}-a\Big{(}i\frac{d\overline{v}}{dt}\overline{f}\Big{)}\.\] (B.9)
The previous equation determines the self-adjoint operator \((i\,d/dt\,\mathcal{R}^{*})\mathcal{R}\) up to a scalar term, and implies that it is quadratic in creation and annihilation operators. Thus, there exist operators \(C\) and \(D\) on \(L^{2}(\mathbb{R}^{d})\) with operator kernels \(C(x,y)\) and \(D(x,y)\), and a scalar \(S_{F}\in\mathbb{R}\), such that
\[(i\,d/dt\,\mathcal{R}^{*})\mathcal{R}=S_{F}\mathds{1}+\int_{\mathbb{R}^{2d}}C (x,y)a_{x}^{*}a_{y}\mathrm{d}x\mathrm{d}y+\frac{1}{2}\Bigg{(}\int_{\mathbb{R} ^{2d}}D(x,y)a_{x}^{*}a_{y}^{*}\mathrm{d}x\mathrm{d}y+h.c\Bigg{)}\.\] (B.10)
Here, we may assume without loss of generality that \(\overline{C(y,x)}=C(x,y)\) and \(D(y,x)=-D(x,y)\). Let us first compute the operator contributions. A lengthy but straightforward calculation using the CAR allows us to compute the commutators
\[[(i\,d/dt\,\mathcal{R}^{*})\mathcal{R},a^{*}(uf)+a(\overline{vf})]=a^{*}(Cuf)- a(C\overline{vf})+a^{*}(Dvf)-a(D\overline{uf})\.\] (B.11)
Thus, we compare the right hand sides of (B.9) and (B.11) to obtain, as operators on \(L^{2}(\mathbb{R}^{d})\), the following equations
\[\begin{cases}Cu+Dv&=idu/dt\\ C\overline{v}+D\overline{u}&=id\overline{v}/dt\end{cases}\iff(C\ D)\begin{pmatrix} u&\overline{v}\\ v&\overline{u}\end{pmatrix}=id/dt(u\ \overline{v})\.\] (B.12)
In particular, the matrix containing \(u\) and \(v\) is a unitary map on \(L^{2}(\mathbb{R}^{d})\oplus L^{2}(\mathbb{R}^{d})\), which we denote by \(\nu\). In fact, using the relations \(u^{*}=u\), \(v^{*}=\overline{v}\) it is easy to verify that \(\nu^{*}=\nu\). Thus we invert the last equation and solve for the operators \(C\) and \(D\) to find
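For the reader's convenience, the inversion is explicit: since \(\nu\) is unitary and \(\nu^{*}=\nu\), we have \(\nu^{2}=\mathds{1}\), and multiplying (B.12) by \(\nu\) from the right gives

\[(C\ \ D)=i\frac{d}{dt}(u\ \ \overline{v})\begin{pmatrix}u&\overline{v}\\ v&\overline{u}\end{pmatrix}\,\]

whose two block entries are exactly the expressions in (B.13).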
\[C=idu/dt\,u+id\overline{v}/dt\,v\qquad\text{and}\qquad D=idu/dt\,\overline{v} +id\overline{v}/dt\,\overline{u}\.\] (B.13)
Finally, we multiply with \(\hbar\) and identify \(\mathcal{C}=\hbar C\) and \(\mathcal{D}=\hbar D\). As for the scalar contribution, we consider its vacuum expectation value to obtain
\[S_{F}(t)=\langle\Omega_{F},id\mathcal{R}^{*}/dt\,\mathcal{R}\Omega_{F} \rangle_{\mathscr{F}_{F}}=-\left\langle\mathcal{R}\Omega_{F},id\mathcal{R}/dt \,\Omega_{F}\right\rangle_{\mathscr{F}_{F}}\.\] (B.14)
Let us now calculate the right hand side. To this end, using the orbitals of \(\omega(t)=\sum_{j=1}^{M}|\phi_{j}(t)\rangle\langle\phi_{j}(t)|\), we write the vector \(\mathcal{R}\Omega_{F}=a^{*}(\phi_{1})\cdots a^{*}(\phi_{M})\Omega_{F}\). A straightforward calculation using the CAR and the orthogonality relations \(\left\langle\phi_{i},\phi_{j}\right\rangle_{L^{2}}=\delta_{i,j}\) now implies
\[S_{F}(t)=-\sum_{j=1}^{M}\left\langle\phi_{j}(t),id\phi_{j}(t)/dt\right\rangle _{L^{2}}\.\] (B.15)
This finishes the proof once we multiply with \(\hbar\) and identify \(\mathcal{S}=\hbar S\).
Let us now give a more explicit representation of the term that we just calculated. Namely, in view of (B.3) it is easy to verify that
\[\mathcal{C}(t)=h_{F}(t)-u_{t}h_{F}(t)u_{t}+\overline{v_{t}}\overline{h_{F}(t) }v_{t}\qquad\text{and}\qquad\mathcal{D}(t)=-u_{t}h_{F}(t)\overline{v_{t}}+ \overline{v_{t}}\overline{h_{F}(t)}\overline{u_{t}}\] (B.16)
In particular, the second term may be simplified. Namely, in view of the relations \(h_{F}(t)=h_{F}^{*}(t)\), \(u_{t}=u_{t}^{*}\) and \(\bar{v}_{t}=v_{t}^{*}\), one may check that \((vhu)^{*}=uh\bar{v}\). Consequently, using the anticommutation relation \(\{a_{x}^{*},a_{y}^{*}\}=0\), one finds that
\[\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}(\overline{vhu})(x,y)a_{x}^{*}a_{y}^ {*}\mathrm{d}x\mathrm{d}y=-\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}(uh\overline {v})(x,y)a_{x}^{*}a_{y}^{*}\mathrm{d}x\mathrm{d}y\.\] (B.17)
We can then put the calculation from the above lemma in the following form, which is the final result of our calculation (we omit the time labels for convenience)
\[(i\hbar\partial_{t}\mathcal{R}^{*})\mathcal{R}=-\mathrm{Tr}\big{(}h_{F}\omega \big{)}+\mathrm{d}\Gamma_{F}\Big{[}h_{F}-uh_{F}u+\overline{v}\overline{h_{F}} v\Big{]}-\Bigg{(}\int_{\mathbb{R}^{2d}}[uh_{F}\bar{v}](x,y)a_{x}^{*}a_{y}^{*} \mathrm{d}x\mathrm{d}y+h.c\Bigg{)}\] (B.18)
### Calculation of \(\partial_{t}\mathcal{W}_{t}^{*}\mathcal{W}_{t}\)
Let us now calculate the second contribution of (B.2). The computation of time derivatives of Weyl operators parametrized by a field \(t\mapsto\alpha(t)\in L^{2}(\mathbb{R}^{d})\) is classical, and we record the result in the following lemma. For reference, see [34, Lemma 3.1].
**Lemma B.2**.: _Assume that \(t\mapsto\varphi_{t}\in L^{2}(\mathbb{R}^{d})\) is differentiable. Then, for all \(t\in\mathbb{R}\) there holds_
\[i\hbar\partial_{t}\mathcal{W}_{t}^{*}\mathcal{W}_{t}=N\mathrm{Im}\left\langle \varphi_{t},\hbar\partial_{t}\varphi_{t}\right\rangle-\sqrt{N}\Big{(}b^{*}(i \hbar\partial_{t}\varphi_{t})+b(i\hbar\partial_{t}\varphi_{t})\Big{)}\.\] (B.19)
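Although we refer to [34] for the complete proof, here is a minimal sketch, assuming the standard representation \(\mathcal{W}_{t}=e^{A(t)}\) with \(A(t)=\sqrt{N}\big(b^{*}(\varphi_{t})-b(\varphi_{t})\big)\); the notation \(\dot{A}\equiv\sqrt{N}\big(b^{*}(\partial_{t}\varphi_{t})-b(\partial_{t}\varphi_ {t})\big)\) is introduced here only for this purpose. Since \([A,\dot{A}]=-2iN\,\mathrm{Im}\left\langle\varphi_{t},\partial_{t}\varphi_{t} \right\rangle\) is a scalar, all higher commutators vanish and

\[\mathcal{W}_{t}^{*}\,\partial_{t}\mathcal{W}_{t}=\int_{0}^{1}e^{-uA}\dot{A}\,e ^{uA}\,\mathrm{d}u=\dot{A}-\frac{1}{2}[A,\dot{A}]\.\]

Multiplying by \(-i\hbar\) and using \((\partial_{t}\mathcal{W}_{t}^{*})\mathcal{W}_{t}=-\mathcal{W}_{t}^{*}\partial_ {t}\mathcal{W}_{t}\) yields (B.19).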
We shall use the fact that \(\varphi(t)\) solves the Hartree equation (2.5). Namely, considering on \(H^{2}(\mathbb{R}^{d})\) the time-dependent bosonic Hamiltonian \(h_{B}(t)\) defined in (4.13), we conclude that for all \(t\in\mathbb{R}\) there holds
\[i\hbar\partial_{t}\mathcal{W}_{t}^{*}\mathcal{W}_{t}=-N\mathrm{Re}\,\langle \varphi_{t},h_{B}(t)\varphi_{t}\rangle-\sqrt{N}\Big{(}b^{*}(h_{B}(t)\varphi_{ t})+b(h_{B}(t)\varphi_{t})\Big{)}\.\] (B.20)
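Indeed, since \(h_{B}(t)\) is self-adjoint, \(\langle\varphi_{t},h_{B}(t)\varphi_{t}\rangle\) is real, and the substitution \(i\hbar\partial_{t}\varphi_{t}=h_{B}(t)\varphi_{t}\) gives \(N\mathrm{Im}\left\langle\varphi_{t},\hbar\partial_{t}\varphi_{t}\right\rangle=N\mathrm{Im}\big{(}-i\langle\varphi_{t},h_{B}(t)\varphi_{t}\rangle\big{)}=-N\mathrm{Re}\left\langle\varphi_{t},h_{B}(t)\varphi_{t}\right\rangle\), while \(b^{*}(i\hbar\partial_{t}\varphi_{t})=b^{*}(h_{B}(t)\varphi_{t})\) and \(b(i\hbar\partial_{t}\varphi_{t})=b(h_{B}(t)\varphi_{t})\).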
### Calculation of \(\mathcal{R}_{t}^{*}\mathcal{W}_{t}^{*}\mathcal{H}\mathcal{W}_{t}\mathcal{R}_{t}\)
Our next task is to compute \((\mathcal{R}_{t}\otimes\mathcal{W}_{t})^{*}\mathcal{H}(\mathcal{R}_{t}\otimes\mathcal{W}_{t})\). To this end, we shall use extensively the conjugation relations for particle-hole transformations (see Lemma 3.2)
\[\mathcal{R}_{t}^{*}a_{x}^{*}\mathcal{R}_{t}=a^{*}(u_{t,x})+a(\overline{v_{t,x}} )\qquad\text{and}\qquad\mathcal{R}_{t}^{*}a_{x}\mathcal{R}_{t}=a(u_{t,x})+a^{* }(\overline{v_{t,x}})\,\] (B.21)
for all \((t,x)\in\mathbb{R}\times\mathbb{R}^{d}\), and similarly for Weyl operators (see Lemma 3.4)
\[\mathcal{W}_{t}^{*}b_{x}\mathcal{W}_{t}=b_{x}+\sqrt{N}\varphi_{t}(x)\qquad\text{and}\qquad\mathcal{W}_{t}^{*}b_{x}^{*}\mathcal{W}_{t}=b_{x}^{*}+\sqrt{N}\overline{\varphi_{t}(x)}\,\] (B.22)
where \((t,x)\in\mathbb{R}\times\mathbb{R}^{d}\). A lengthy but otherwise straightforward calculation using these conjugation relations, together with \(u^{*}=u\) and \(v^{*}=\overline{v}\), yields the following result.
**Lemma B.3**.: _The following holds for all \(t\in\mathbb{R}\)_
\[(\mathcal{R}_{t}\otimes \mathcal{W}_{t})^{*}\mathcal{H}(\mathcal{R}_{t}\otimes\mathcal{W} _{t})\] \[=\left(N\|\nabla\varphi_{t}\|_{L^{2}}^{2}+\mathrm{Tr}\ (-\Delta\omega_{t})+\lambda NM\int_{\mathbb{R}^{2d}}\rho_{F}(t,x)V(x-y)\rho_{B }(t,y)\,dxdy\right)\mathds{1}\otimes\mathds{1}\] \[+\left(\mathrm{d}\Gamma_{F}\big{[}u_{t}h_{F}(t)u_{t}-\overline{v_ {t}}h_{F}(t)v_{t}\big{]}+\int_{\mathbb{R}^{2d}}(u_{t}h_{F}(t)\bar{v}_{t})(x,y) a_{x}^{*}a_{y}^{*}dx\,dy+\mathrm{h.c}\right)\otimes\mathds{1}\] \[+\mathds{1}\otimes\left(\mathrm{d}\Gamma_{B}\big{[}h_{B}(t)\big{]} +\sqrt{N}\big{(}b^{*}(h_{B}(t)\varphi_{t})+b(h_{B}(t)\varphi_{t})\big{)}\right)\] \[+\lambda\sqrt{N}\mathcal{L}_{2,1}(t)+\lambda\mathcal{L}_{2,2}(t)\.\] (B.23)
_Here, we denote_
\[\mathcal{L}_{2,1}(t) =\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{t,x})a(u _{t,x})\otimes\left(\varphi_{t}(y)b_{y}^{*}+h.c\right)\,\mathrm{d}x\mathrm{d}y\] \[-\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(\overline{ v_{t,x}})a(\overline{v_{t,x}})\otimes\left(\varphi_{t}(y)b_{y}^{*}+h.c\right)\, \mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{t,x})a^{* }(\overline{v_{t,x}})\otimes\left(\varphi_{t}(y)b_{y}^{*}+h.c\right)\, \mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v_{t,x}})a(u_{t,x})\otimes\left(\varphi_{t}(y)b_{y}^{*}+h.c\right)\,\mathrm{d}x \mathrm{d}y\] (B.24)
_and_
\[\mathcal{L}_{2,2}(t) =\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{t,x})a(u_{t,x})\otimes b_{y}^{*}b_{y}\ \mathrm{d}x\mathrm{d}y\] \[-\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(\overline{v_ {t,x}})a(\overline{v_{t,x}})\otimes b_{y}^{*}b_{y}\ \mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a^{*}(u_{t,x})a^{* }(\overline{v_{t,x}})\otimes b_{y}^{*}b_{y}\ \mathrm{d}x\mathrm{d}y\] \[+\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}V(x-y)a(\overline{v_{t, x}})a(u_{t,x})\otimes b_{y}^{*}b_{y}\ \mathrm{d}x\mathrm{d}y\.\] (B.25)
### Putting everything together
We put together the last three lemmas to find the following explicit representation of the generator \(\mathcal{L}(t)\).
**Proposition B.1**.: _Let \(\mathcal{U}(t,s)\) be the unitary transformation defined in (4.3), and let \(\mathcal{L}(t)\) be its infinitesimal generator. Then, \(\mathcal{L}(t)\) admits the following representation_
\[\mathcal{L}(t)=S(t)\mathds{1}\otimes\mathds{1}+\mathrm{d}\Gamma[h_{F}(t)] \otimes\mathds{1}+\mathds{1}\otimes\mathrm{d}\Gamma[h_{B}(t)]+\lambda\sqrt{N} \mathcal{L}_{2,1}(t)+\lambda\mathcal{L}_{2,2}(t)\.\] (B.26)
_Here the scalar term is \(S(t)=-\lambda NM\int_{\mathbb{R}^{2d}}V(x-y)\rho_{F}(t,x)\rho_{B}(t,y)dxdy\), \(h_{F}(t)\) and \(h_{B}(t)\) are the 1-particle Hamiltonians defined in (4.13), and the operators \(\mathcal{L}_{2,1}(t)\) and \(\mathcal{L}_{2,2}(t)\) are defined in Lemma B.3._
|
2309.05240 | Coupled-cluster theory for trapped bosonic mixtures | We develop a coupled-cluster theory for bosonic mixtures of binary species in
external traps, providing a promising theoretical approach to demonstrate
highly accurately the many-body physics of mixtures of Bose-Einstein
condensates. The coupled-cluster wavefunction for the binary species is
obtained when an exponential cluster operator $e^T$, where
$T=T^{(1)}+T^{(2)}+T^{(12)}$ and $T^{(1)}$ accounts for excitations in
species-1, $T^{(2)}$ for excitations in species-2, and $T^{(12)}$ for combined
excitations in both species, acts on the ground state configuration prepared by
accumulating all bosons in a single orbital for each species. We have
explicitly derived the working equations for the bosonic mixtures by truncating
the cluster operator upto the single and double excitations and using an
arbitrary sets of orthonormal orbitals for each of the species. Further, the
comparatively simplified version of the working equations are formulated using
the Fock-like operators. Finally, using an exactly solvable many-body model for
bosonic mixtures that exists in the literature allows us to implement and test
the performance and accuracy of the coupled-cluster theory for situations with
balanced as well as imbalanced boson numbers and for weak to moderately strong
intra- and inter-species interaction strengths. The comparison between our
computed results using coupled-cluster theory with the respective analytical
exact results displays remarkable agreement exhibiting excellent success of the
coupled-cluster theory for bosonic mixtures. All in all, the correlation
exhaustive coupled-cluster theory shows encouraging results and it could be a
promising approach in paving the way for high-accuracy modelling of various
bosonic mixture systems. | Anal Bhowmik, Ofir E. Alon | 2023-09-11T05:19:42Z | http://arxiv.org/abs/2309.05240v2 | # Coupled-cluster theory for trapped bosonic mixtures
###### Abstract
We develop a coupled-cluster theory for bosonic mixtures of binary species in external traps, providing a promising theoretical approach to demonstrate highly accurately the many-body physics of mixtures of Bose-Einstein condensates. The coupled-cluster wavefunction for the binary species is obtained when an exponential cluster operator \(e^{T}\), where \(T=T^{(1)}+T^{(2)}+T^{(12)}\) and \(T^{(1)}\) accounts for excitations in species-1, \(T^{(2)}\) for excitations in species-2, and \(T^{(12)}\) for combined excitations in both species, acts on the ground state configuration prepared by accumulating all bosons in a single orbital for each species. We have explicitly derived the working equations for the bosonic mixtures by truncating the cluster operator up to the single and double excitations and using arbitrary sets of orthonormal orbitals for each of the species. Further, a comparatively simplified version of the working equations is formulated using the Fock-like operators. Finally, using an exactly solvable many-body model for bosonic mixtures that exists in the literature allows us to implement and test the performance and accuracy of the coupled-cluster theory for situations with balanced as well as imbalanced boson numbers and for weak to moderately strong intra- and inter-species interaction strengths. The comparison of our computed results using coupled-cluster theory with the respective analytical exact results displays remarkable agreement, exhibiting excellent success of the coupled-cluster theory for bosonic mixtures. All in all, the correlation-exhaustive coupled-cluster theory shows encouraging results and could be a promising approach in paving the way for high-accuracy modelling of various bosonic mixture systems.
## I Introduction
Mixtures of bosonic species created from ultracold quantum gases are a highly investigated topic, providing more degrees-of-freedom compared to single species, due to the advanced controllability of the system's parameters by state-of-the-art experiments. Excellent experimental control of the strengths of inter- and intra-species interactions and of the external confinement establishes mixtures of bosonic species as a rich model to investigate many-body quantum physics. A great variety of physical phenomena involving mixtures of bosonic species have attracted much attention, such as the phase separation [1], condensate depletion [2], fermionization [3], quantum phase transition in waveguides [4], persistent currents [5], quantum mechanical stabilization [6], entanglement induced interactions [7], the miscible to immiscible phase transition [8; 9], spin-charge separation [10], emergence of spin-roton [11], ferrodark solitons [12], and quantum droplets [13].
The ground-state properties of trapped bosonic mixtures have been extensively investigated both theoretically and experimentally [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. In terms of theoretical description, although many observations of bosonic mixtures so far have been explained by applying the standard mean-field theory, namely Gross-Pitaevskii theory, many-body modelling is an indispensable tool to capture fundamental understanding and outline new schemes of a variety of quantum phenomena for applications. Presently, the popularly available many-body theories are the quantum Monte Carlo method [27; 28], the Bose-Hubbard model [29; 30], self-consistent many-body theory [31] for mixtures, and, the most successful in accounting for quantum correlations, the multilayer multiconfigurational time-dependent Hartree method [32; 33; 34; 35].
Since the coupled-cluster theory was reformulated and introduced to electronic-structure theory in [36; 37], it has become one of the most successful methods of choice in quantum chemistry when accuracy is concerned [38; 39]. For many-fermion systems, the coupled-cluster theory has already been proven to be a very reliable and accurate approach [40; 41; 42], also in the relativistic regime of interest [43; 44; 45; 46]. In Ref. [47], a coupled-cluster theory for trapped interacting indistinguishable bosons was derived and a few numerical applications were shown up to \(10^{4}\) particles with sufficiently strong inter-boson interaction strength. That investigation suggests that the coupled-cluster theory would be a practical approach to apply beyond single-component bosonic systems and thereby study further the bosonic mixtures of different species. Here, we would like to mention some of the recent advancements of coupled-cluster theory, such as for electron-phonon systems [48], polarons [49; 50], and the investigation of multiexcitonic interactions [51].
In this work we develop a theoretical framework of the coupled-cluster theory for systems of
trapped binary bosonic mixtures. The general theory includes all kinds of excitations, and we call it the coupled-cluster theory for mixtures or, briefly, CC-M. To begin with, we include the single and double excitations in the mixture of two species; according to the order of excitations included, the theory then becomes the coupled-cluster singles doubles theory for bosonic mixtures, CCSD-M. We derive the working equations of the coupled-cluster theory with arbitrary sets of orthonormal orbitals and also using Fock-like operators. We implement the theory and illustrate its potential usage by comparing it to an exactly solvable many-body model. This enables us to determine precisely how accurate the theory is in scenarios that occur for binary mixtures with balanced and imbalanced particle numbers and with weak to strong intra- and inter-species interaction strengths.
The structure of the paper is as follows: In Section II, we present the CC-M ansatz and provide the basic formulation of the cluster operators for the excitations of the mixture. Section III presents the details of the Fock-like operators for the bosonic mixture of binary species. In Section IV, the coupled-cluster theory for the bosonic mixtures is developed and the detailed derivation is shown. In Section V, we demonstrate the potential of our coupled-cluster theory by comparing its results to the analytical results of an exactly solvable model, for scenarios in which the mixtures have balanced and imbalanced particle numbers and different strengths of the intra- and inter-species interactions. Finally, in Section VI, we summarize and conclude our work. The appendices provide additional details on the derivation of the theory and its implementation.
## II Theoretical framework for coupled-cluster theory for bosonic mixtures
In order to develop the coupled-cluster theory for a binary mixture of bosons, we consider the numbers of bosons in species-1 and species-2 to be \(N_{1}\) and \(N_{2}\), respectively, with the total number of bosons \(N=N_{1}+N_{2}\). For simplicity, we consider the bosons to be spinless. We introduce the number of one-particle functions or orbitals for species-1 as \(M_{1}\) and for species-2 as \(M_{2}\). The many-body Hamiltonian of the two-species mixture consists of three parts as follows:
\[H=H^{(1)}+H^{(2)}+H^{(12)},\] \[H^{(1)}=\sum_{p,q}h^{(1)}_{\bar{p}\bar{q}}a^{\dagger}_{p}a_{q}+ \frac{1}{2}\sum_{p,q,r,s}V^{(1)}_{\bar{p}\bar{q}rs}a^{\dagger}_{p}a^{\dagger}_{q }a_{r}a_{s},\] \[H^{(2)}=\sum_{i,j}h^{(2)}_{ij}b^{\dagger}_{i}b_{j}+\frac{1}{2} \sum_{i,j,k,l}V^{(2)}_{ijkl}b^{\dagger}_{i}b^{\dagger}_{j}b_{k}b_{l},\] \[H^{(12)}=\sum_{p,q,i,j}V^{(12)}_{\bar{p}i\bar{q}j}a^{\dagger}_{p} b^{\dagger}_{i}a_{q}b_{j}. \tag{1}\]
Here, \(a^{\dagger}_{p}\) and \(a_{p}\), where \(p=1,2,...,M_{1}\), are the bosonic creation and destruction operators, respectively, corresponding to the orbitals \(\varphi_{p}(\mathbf{r}_{1})\) of species-1; similarly for species-2, the corresponding operators are \(b^{\dagger}_{i}\) and \(b_{i}\), where \(i=1,2,...,M_{2}\), for the orbitals \(\psi_{i}(\mathbf{r}_{2})\). In what follows, the symbols \(\{pqrs\}\) and \(\{ijkl\}\) will be used for species-1 and species-2, respectively. Furthermore, to ease the reading of the equations, whenever indices in matrix elements refer to species-1 we write a bar on top. For instance, in \(H^{(12)}\) the matrix element is \(V^{(12)}_{\bar{p}i\bar{q}j}\), where the variables \(p\) and \(q\) refer to species-1. The creation and destruction operators follow the usual bosonic commutation relations, which read
\[[a_{p},a^{\dagger}_{q}]=\delta_{pq},[a_{p},a_{q}]=0,[a^{\dagger}_ {p},a^{\dagger}_{q}]=0,\] \[[b_{i},b^{\dagger}_{j}]=\delta_{ij},[b_{i},b_{j}]=0,[b^{\dagger} _{i},b^{\dagger}_{j}]=0,\] \[[a_{p},b^{\dagger}_{i}]=0,[a_{p},b_{i}]=0,[a^{\dagger}_{p},b^{ \dagger}_{i}]=0. \tag{2}\]
In the Hamiltonian Eq. 1, the one-particle operators for species-1 and species-2 are
\[h^{(1)}_{\bar{p}\bar{q}} =\int\varphi^{*}_{p}\hat{h}^{(1)}\varphi_{q}d\mathbf{r}_{1},\] \[h^{(2)}_{ij} =\int\psi^{*}_{i}\hat{h}^{(2)}\psi_{j}d\mathbf{r}_{2}, \tag{3}\]
respectively, and the two-particle interactions in the Hamiltonian Eq. 1 are given by
\[V^{(1)}_{\bar{p}\bar{q}rs}=\int\int\varphi^{*}_{p}\varphi^{*}_{q }V^{(1)}(\mathbf{r}_{1}-\mathbf{r}^{\prime}_{1})\varphi_{r}\varphi_{s}d \mathbf{r}_{1}d\mathbf{r}^{\prime}_{1},\] \[V^{(2)}_{ijkl}=\int\int\psi^{*}_{i}\psi^{*}_{j}V^{(2)}(\mathbf{r }_{2}-\mathbf{r}^{\prime}_{2})\psi_{k}\psi_{l}d\mathbf{r}_{2}d\mathbf{r}^{ \prime}_{2},\] \[V^{(12)}_{\bar{p}i\bar{q}j}=\int\int\varphi^{*}_{p}\psi^{*}_{i}V ^{(12)}(\mathbf{r}_{1}-\mathbf{r}_{2})\varphi_{q}\psi_{j}d\mathbf{r}_{1}d \mathbf{r}_{2}. \tag{4}\]
Here the first and second terms stand for the two-particle interactions for species-1 and species-2, respectively. The last term indicates the two-particle interaction between the bosons of species-1 and species-2.
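In numerical practice, these integrals are evaluated either analytically or by quadrature. As a minimal illustration (not part of the formal development; the grid, the coordinate pairing \(\varphi_{p}(x)\varphi_{q}(x^{\prime})V(x-x^{\prime})\varphi_{r}(x)\varphi_{s}(x^{\prime})\), and the helper name are our own choices), a two-body matrix element can be computed on a one-dimensional grid as follows:

```python
import numpy as np

def two_body_element(phi_p, phi_q, phi_r, phi_s, V, x):
    """Double quadrature for a matrix element of the type in Eq. 4,
    assuming the pairing phi_p(x) phi_q(x') V(x - x') phi_r(x) phi_s(x')."""
    dx = x[1] - x[0]
    Vmat = V(x[:, None] - x[None, :])   # V(x_i - x_j) on the product grid
    integrand = (np.conj(phi_p)[:, None] * np.conj(phi_q)[None, :]
                 * Vmat * phi_r[:, None] * phi_s[None, :])
    return integrand.sum() * dx**2

# Sanity check: harmonic interaction V(u) = u^2 between two copies of the
# harmonic-oscillator ground state; the exact value is <x^2> + <x'^2> = 1.
x = np.linspace(-10.0, 10.0, 801)
phi1 = np.pi**-0.25 * np.exp(-0.5 * x**2)
print(two_body_element(phi1, phi1, phi1, phi1, lambda u: u**2, x))  # ~1.0
```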
In order to solve the Hamiltonian Eq. 1 using the coupled-cluster theory, we start with the definition of the exact wavefunction of the bosonic mixture, which is obtained by applying an exponential cluster operator to the ground configuration
\[|\Psi\rangle=e^{T}|\phi_{0}\rangle. \tag{5}\]
The ground configuration is taken as the product state, \(\phi_{0}=\prod_{n=1}^{N_{1}}\varphi_{1}(r_{1n})\prod_{m=1}^{N_{2}}\psi_{1}(r_{2m})\), which defines the standard mean-field for the bosonic mixture, also see below. We note that other mean fields are possible for bosons [52; 53], but we will not pursue such a choice here. The Hamiltonian in Eq. 1 determines the energy, \(E_{0}\), via \(H|\Psi\rangle=E_{0}|\Psi\rangle\).
For the bosonic mixture, the cluster operator \(T\) is conveniently divided into three parts,
\[T=T^{(1)}+T^{(2)}+T^{(12)}. \tag{6}\]
Here, the cluster operators \(T^{(1)}\), \(T^{(2)}\), and \(T^{(12)}\) correspond to superpositions of excitation operators. Moreover, \(T^{(1)}\) is identical to the single-component cluster operator and operates only on the Hilbert space of species-1. The same holds true for \(T^{(2)}\), but it operates only on the Hilbert space of species-2. In contrast, \(T^{(12)}\) yields simultaneous excitations in the Hilbert spaces of both species-1 and species-2. In terms of excitations, the cluster operators can be written as
\[T^{(1)}=\sum_{n=1}^{N_{1}}T^{(1)}_{n}=\sum_{n=1}^{N_{1}}t^{(1)}_{n}(a_{1})^{n} =\sum_{n=1}^{N_{1}}\sum_{p_{1},...,p_{n}=2}^{M_{1}}c_{p_{1}p_{2}...p_{n}}a_{p_ {1}}^{\dagger}a_{p_{2}}^{\dagger}...a_{p_{n}}^{\dagger}(a_{1})^{n}, \tag{7}\]
\[T^{(2)}=\sum_{m=1}^{N_{2}}T^{(2)}_{m}=\sum_{m=1}^{N_{2}}t^{(2)}_{m}(b_{1})^{m} =\sum_{m=1}^{N_{2}}\sum_{i_{1},...,i_{m}=2}^{M_{2}}d_{i_{1}i_{2}...i_{m}}b_{i_ {1}}^{\dagger}b_{i_{2}}^{\dagger}...b_{i_{m}}^{\dagger}(b_{1})^{m}. \tag{8}\]
Note that for each species the 1st orbital is occupied and the second orbital and onward are the virtual orbitals. In Eq. 7, \(t^{(1)}_{n}=\sum_{p_{1},...,p_{n}=2}^{M_{1}}c_{p_{1}p_{2}...p_{n}}a_{p_{1}}^{ \dagger}a_{p_{2}}^{\dagger}...a_{p_{n}}^{\dagger}\) creates the excitations in the virtual orbitals of species-1. Similarly, in Eq. 8, \(t^{(2)}_{m}=\sum_{i_{1},...,i_{m}=2}^{M_{2}}d_{i_{1}i_{2}...i_{m}}b_{i_{1}}^{ \dagger}b_{i_{2}}^{\dagger}...b_{i_{m}}^{\dagger}\) generates the excitations in the virtual orbitals of species-2. \(T^{(12)}\) has the form
\[T^{(12)} = \sum_{n^{\prime}=1}^{N_{1}}\sum_{m^{\prime}=1}^{N_{2}}T^{(12)}_{ n^{\prime}m^{\prime}}=\sum_{n^{\prime}=1}^{N_{1}}\sum_{m^{\prime}=1}^{N_{2}}t^{(12 )}_{n^{\prime}m^{\prime}}(a_{1})^{n^{\prime}}(b_{1})^{m^{\prime}} \tag{9}\] \[= \sum_{n^{\prime}=1}^{N_{1}}\sum_{m^{\prime}=1}^{N_{2}}\sum_{p_{1},...,p_{n^{\prime}}=2}^{M_{1}}\sum_{i_{1},...,i_{m^{\prime}}=2}^{M_{2}}e_{p_{1 }...p_{n^{\prime}}i_{1}...i_{m^{\prime}}}a_{p_{1}}^{\dagger}...a_{p_{n^{\prime }}}^{\dagger}b_{i_{1}}^{\dagger}b_{i_{2}}^{\dagger}...b_{i_{m^{\prime}}}^{ \dagger}(a_{1})^{n^{\prime}}(b_{1})^{m^{\prime}}.\]
In the expression of \(T^{(12)}\), one can find that \(t^{(12)}_{n^{\prime}m^{\prime}}=\sum_{p_{1},...,p_{n^{\prime}}=2}^{M_{1}}\sum_{i_{1},...,i_{m^{\prime}}=2}^{M_{2}}e_{p_{1}...p_{n^{\prime}}i_{1}...i_{m^{\prime}}}a^{\dagger}_{p_{1}}...a^{\dagger}_{p_{n^{\prime}}}\)
\(b^{\dagger}_{i_{1}}b^{\dagger}_{i_{2}}...b^{\dagger}_{i_{m^{\prime}}}\) is responsible for the simultaneous excitations in both species. In Eqs. 7 and 8, the expansions of \(T^{(1)}=T^{(1)}_{1}+T^{(1)}_{2}+T^{(1)}_{3}+...\) and \(T^{(2)}=T^{(2)}_{1}+T^{(2)}_{2}+T^{(2)}_{3}+...\) include the singles, doubles, triples, and so on excitations. For \(T^{(12)}=T^{(12)}_{11}+T^{(12)}_{12}+T^{(12)}_{21}+...\), see Eq. 9, the excitations start from doubles, one excitation in each species. For the bosonic mixture, the unknown coefficients to be determined are \(c_{p_{1}p_{2}...p_{n}}\), \(d_{i_{1}i_{2}...i_{m}}\), and \(e_{p_{1}...p_{n^{\prime}}i_{1}...i_{m^{\prime}}}\). Since the summations in Eqs. 7 to 9 are unrestricted, the coefficients do not depend on the ordering of the subscripts. Also, for the same reason, one finds the relations among the cluster operators \([T^{(1)}_{n},T^{(1)}_{n^{\prime}}]=[T^{(2)}_{m},T^{(2)}_{m^{\prime}}]=[T^{(1)}_{n},T^{(2)}_{m}]=[T^{(1)}_{n},T^{(12)}_{n^{\prime}m^{\prime}}]=[T^{(2)}_{m^{\prime}},T^{(12)}_{n^{\prime}m^{\prime}}]=[T^{(12)}_{nm},T^{(12)}_{n^{\prime}m^{\prime}}]=0\). The implication of the commutation relations among the cluster operators is that the excitation operator can be factorized into a product of the partial excitation operators, i.e., \(e^{T}|\phi_{0}\rangle=e^{T^{(1)}}e^{T^{(2)}}e^{T^{(12)}}|\phi_{0}\rangle\). Note that, as \(T\) is an excitation operator, the wavefunction in Eq. 5 satisfies the intermediate normalization \(\langle\phi_{0}|\Psi\rangle=1\).
## III Fock-like operators for a bosonic mixture of binary species
For the study of the coupled-cluster theory for bosonic mixtures, we will first derive the working equations for arbitrary (unspecified) orbitals, to present the theory from a general perspective. However, we will also utilize the orbitals arising from the Fock-like operators to give an illustrative example. To derive the Fock-like operators, we start from the mean-field energy functional for a mixture, which reads
\[E_{\rm MF} = N_{1}\Bigg{[}\int d{\bf r}_{1}\varphi_{1}^{*}({\bf r}_{1})\Big{(}h^{(1)}+\frac{\Lambda_{1}}{2}J_{11}+\frac{\Lambda_{21}}{2}J_{21}\Big{)}\varphi_{1}({\bf r}_{1})\Bigg{]} \tag{11}\] \[+ N_{2}\Bigg{[}\int d{\bf r}_{2}\psi_{1}^{*}({\bf r}_{2})\Big{(}h^{(2)}+\frac{\Lambda_{2}}{2}J_{22}+\frac{\Lambda_{12}}{2}J_{12}\Big{)}\psi_{1}({\bf r}_{2})\Bigg{]},\]
where the interaction parameters are given by \(\Lambda_{1}=\lambda_{1}(N_{1}-1)\), \(\Lambda_{2}=\lambda_{2}(N_{2}-1)\), \(\Lambda_{21}=\lambda_{12}N_{2}\), and \(\Lambda_{12}=\lambda_{12}N_{1}\), and satisfy \(N_{1}\Lambda_{21}=N_{2}\Lambda_{12}\). Here \(\lambda_{1}\) and \(\lambda_{2}\) are the intra-species interaction strengths of species-1 and species-2, respectively, and \(\lambda_{12}\) is the inter-species interaction strength between species-1 and species-2. Also, the direct interaction operators \(J_{11}\), \(J_{22}\), \(J_{12}\), and \(J_{21}\) are local operators and are defined by
\[J_{11} =\int\varphi_{1}^{*}(\mathbf{r}_{1}^{\prime})V^{(1)}(\mathbf{r}_{1}- \mathbf{r}_{1}^{\prime})\varphi_{1}(\mathbf{r}_{1}^{\prime})d\mathbf{r}_{1}^{ \prime},\] \[J_{22} =\int\psi_{1}^{*}(\mathbf{r}_{2}^{\prime})V^{(2)}(\mathbf{r}_{2}- \mathbf{r}_{2}^{\prime})\psi_{1}(\mathbf{r}_{2}^{\prime})d\mathbf{r}_{2}^{ \prime},\] \[J_{21} =\int\psi_{1}^{*}(\mathbf{r}_{2})V^{(12)}(\mathbf{r}_{1}-\mathbf{ r}_{2})\psi_{1}(\mathbf{r}_{2})d\mathbf{r}_{2},\] \[J_{12} =\int\varphi_{1}^{*}(\mathbf{r}_{1})V^{(12)}(\mathbf{r}_{1}- \mathbf{r}_{2})\varphi_{1}(\mathbf{r}_{1})d\mathbf{r}_{1}. \tag{10}\]
Minimizing the mean-field energy functional with respect to the shapes of the orbitals \(\varphi_{1}(\mathbf{r}_{1})\) and \(\psi_{1}(\mathbf{r}_{2})\), one derives the two coupled mean-field equations of the mixture,
\[[h^{(1)}+\Lambda_{1}J_{11}+\Lambda_{21}J_{21}]\varphi_{1}( \mathbf{r}_{1}) =\mu_{1}^{(1)}\varphi_{1}(\mathbf{r}_{1}),\] \[[h^{(2)}+\Lambda_{2}J_{22}+\Lambda_{12}J_{12}]\psi_{1}(\mathbf{ r}_{2}) =\mu_{1}^{(2)}\psi_{1}(\mathbf{r}_{2}) \tag{11}\]
Here \(\mu_{1}^{(1)}\) and \(\mu_{1}^{(2)}\) are the chemical potentials of the ground state of species-1 and species-2, respectively. Eq. 11 presents the Hermitian Fock-like operators
\[F_{1} =h^{(1)}+\Lambda_{1}J_{11}+\Lambda_{21}J_{21},\] \[F_{2} =h^{(2)}+\Lambda_{2}J_{22}+\Lambda_{12}J_{12}. \tag{12}\]
Eq. 11 is solved for the ground state of the bosonic mixture, and the ground-state orbitals define the Fock-like operators. Now, the eigenfunctions of the Fock-like operators give us complete orthonormal basis sets
\[F_{1}\varphi_{i} =\mu_{i}^{(1)}\varphi_{i},\] \[F_{2}\psi_{i} =\mu_{i}^{(2)}\psi_{i}. \tag{13}\]
From Eq. 13, the eigenvalues are computed as \(\langle\varphi_{i}|F_{1}|\varphi_{i}\rangle=\mu_{i}^{(1)}\) and \(\langle\psi_{i}|F_{2}|\psi_{i}\rangle=\mu_{i}^{(2)}\). Using the Fock-like operators, one can simplify the general working equations by substituting the one-body matrix elements with two-body matrix elements; see the details in Appendix A.
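To make the self-consistency loop behind Eqs. 11-13 concrete, the following is a minimal numerical sketch (our own illustrative construction, not from the paper: a finite-difference kinetic energy, a hypothetical grid and mixing parameter, and interactions passed as callables); it iterates the two Fock-like eigenvalue problems until the ground orbitals stop changing:

```python
import numpy as np

def direct_potential(phi, V, x, dx):
    # J(x) = \int |phi(x')|^2 V(x - x') dx'   (cf. Eq. 12)
    return (V(x[:, None] - x[None, :]) * np.abs(phi) ** 2).sum(axis=1) * dx

def solve_mixture(x, omega, Lam1, Lam2, Lam21, Lam12, V1, V2, V12,
                  iters=200, mix=0.3):
    n, dx = len(x), x[1] - x[0]
    # One-body operator: finite-difference Laplacian plus harmonic trap
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx**2)
    h = K + np.diag(0.5 * omega**2 * x**2)
    phi = np.exp(-0.5 * x**2); phi /= np.sqrt((phi**2).sum() * dx)
    psi = phi.copy()
    for _ in range(iters):
        F1 = h + np.diag(Lam1 * direct_potential(phi, V1, x, dx)
                         + Lam21 * direct_potential(psi, V12, x, dx))
        F2 = h + np.diag(Lam2 * direct_potential(psi, V2, x, dx)
                         + Lam12 * direct_potential(phi, V12, x, dx))
        mu1, U1 = np.linalg.eigh(F1)
        mu2, U2 = np.linalg.eigh(F2)
        g1, g2 = U1[:, 0], U2[:, 0]
        g1 *= np.sign(g1 @ phi); g2 *= np.sign(g2 @ psi)  # fix eigenvector signs
        phi = (1 - mix) * phi + mix * g1 / np.sqrt((g1**2).sum() * dx)
        psi = (1 - mix) * psi + mix * g2 / np.sqrt((g2**2).sum() * dx)
        phi /= np.sqrt((phi**2).sum() * dx)
        psi /= np.sqrt((psi**2).sum() * dx)
    return phi, psi, mu1[0], mu2[0]

# Example with harmonic interactions (cf. Section V):
x = np.linspace(-8.0, 8.0, 401)
quad = lambda u: u**2
phi, psi, mu1, mu2 = solve_mixture(x, 1.0, 0.5, 0.5, 0.5, 0.5, quad, quad, quad)
```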
## IV Derivation of the working equations
### General relations
In this section, we derive and discuss the working equations of the coupled-cluster theory for the bosonic mixture. At first, we transform the bosonic destruction and creation operators for
both species using the expansion \(\dot{A}\equiv e^{-T}Ae^{T}=A+\frac{1}{1!}[A,T]+\frac{1}{2!}[[A,T],T]+....\). The transformed destruction and creation operators are eventually needed to construct the transformed Hamiltonian under investigation. One can readily find, for both species, that the destruction operators corresponding to the occupied orbitals, \(\varphi_{1}\) and \(\psi_{1}\), and the creation operators of the virtual orbitals, \(\varphi_{p}\) and \(\psi_{i}\) where \(p\geq 2\) and \(i\geq 2\), are invariant under the coupled-cluster transformation:
\[\dot{a}_{1} = a_{1},\] \[\dot{a}_{p}^{\dagger} = a_{p}^{\dagger},\,\,\,p=2,3,...,M_{1},\] \[\dot{b}_{1} = b_{1},\] \[\dot{b}_{i}^{\dagger} = b_{i}^{\dagger},\,\,\,i=2,3,...,M_{2}. \tag{11}\]
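These invariances follow at once from the expansion above: \(T\) is built solely from \(a_{p}^{\dagger}\) (\(p\geq 2\)), \(b_{i}^{\dagger}\) (\(i\geq 2\)), and powers of \(a_{1}\) and \(b_{1}\), all of which commute with the four operators listed. For instance,

\[[a_{1},T]=0\qquad\Longrightarrow\qquad\dot{a}_{1}=a_{1}+[a_{1},T]+\frac{1}{2!}[[a_{1},T],T]+...=a_{1}.\]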
In contrast, the creation operators corresponding to the orbitals occupied in \(\phi_{0}\) for each species, and the destruction operators of the virtual orbitals are modified due to the coupled-cluster transformation for both species and one can readily find the relations
\[\dot{a}_{1}^{\dagger} = a_{1}^{\dagger}-{\cal K}_{1},\] \[\dot{a}_{p} = a_{p}-{\cal K}_{p},\] \[\dot{b}_{1}^{\dagger} = b_{1}^{\dagger}-{\cal L}_{1},\] \[\dot{b}_{i} = b_{i}-{\cal L}_{i}, \tag{12}\]
where \({\cal K}_{1}\) and \({\cal K}_{p}\) can be expressed as
\[{\cal K}_{1}=\sum_{n=1}^{N_{1}}nt_{n}^{(1)}(a_{1})^{n-1}+\sum_{n=1}^{N_{1}} \sum_{m=1}^{N_{2}}nt_{nm}^{(12)}(a_{1})^{n-1}b_{1}^{m}, \tag{13}\]
\[{\cal K}_{p}=\sum_{n=1}^{N_{1}}nt_{n}^{(1)(p)}(a_{1})^{n}+\sum_{n=1}^{N_{1}} \sum_{m=1}^{N_{2}}n\eta_{nm}^{(12)(p)}{a_{1}}^{n}b_{1}^{m}. \tag{14}\]
Here, the operators \(t_{n}^{(1)}\) and \(t_{nm}^{(12)}\) are defined before, see Eqs. 7 and 9. The operators \(t_{n}^{(1)(p)}\) operate in the virtual space of species-1 and create excitations, while \(\eta_{nm}^{(12)(p)}\) are also responsible for excitations but acting in the virtual space of both species. \(t_{n}^{(1)(p)}\) and \(\eta_{nm}^{(12)(p)}\) read
\[t_{n}^{(1)(p)}=\sum_{p_{2},p_{3},...,p_{n}=2}^{M_{1}}c_{pp_{2}...p_{n}}a_{p_{ 2}}^{\dagger}a_{p_{3}}^{\dagger}...a_{p_{n}}^{\dagger}, \tag{15}\]
\[\eta_{nm}^{(12)(p)}=\sum_{p_{2},p_{3},...,p_{n}=2}^{M_{1}}\sum_{i_{1},i_{2},...,i_{m}=2}^{M_{2}}e_{pp_{2}...p_{n}i_{1}i_{2}...i_{m}}a_{p_{2}}^{\dagger}a_{ p_{3}}^{\dagger}...a_{p_{n}}^{\dagger}b_{i_{1}}^{\dagger}b_{i_{2}}^{\dagger}...b_{i_{m}}^ {\dagger}. \tag{16}\]
Similarly, for species-2, \(\mathcal{L}_{1}\) and \(\mathcal{L}_{i}\) are expressed as
\[\mathcal{L}_{1}=\sum_{m=1}^{N_{2}}mt_{m}^{(2)}(b_{1})^{m-1}+\sum_{n=1}^{N_{1}} \sum_{m=1}^{N_{2}}mt_{nm}^{(12)}a_{1}^{n}b_{1}^{m-1}, \tag{11}\]
\[\mathcal{L}_{i}=\sum_{m=1}^{N_{2}}mt_{m}^{(2)(i)}(b_{1})^{m}+\sum_{n=1}^{N_{1}}\sum_{m=1}^{N_{2}}m\tau_{nm}^{(12)(i)}a_{1}^{n}b_{1}^{m}. \tag{12}\]
The operators \(t_{m}^{(2)(i)}\) generate excitations and operate in the virtual space of species-2 but the operators \(\tau_{nm}^{(12)(i)}\) excite bosons in both species simultaneously. \(t_{m}^{(2)(i)}\) and \(\tau_{nm}^{(12)(i)}\) can be represented as
\[t_{m}^{(2)(i)}=\sum_{i_{2},i_{3},...,i_{m}=2}^{M_{2}}d_{ii_{2}...i_{m}}b_{i_{2 }}^{\dagger}b_{i_{3}}^{\dagger}...b_{i_{m}}^{\dagger}, \tag{13}\]
\[\tau_{nm}^{(12)(i)}=\sum_{p_{1},p_{2},...,p_{n}=2}^{M_{1}}\sum_{i_{2},i_{3},...,i_{m}=2}^{M_{2}}e_{p_{1}p_{2}...p_{n}ii_{2}...i_{m}}a_{p_{1}}^{\dagger}a_{p_{ 2}}^{\dagger}...a_{p_{n}}^{\dagger}b_{i_{2}}^{\dagger}b_{i_{3}}^{\dagger}...b_{ i_{m}}^{\dagger}, \tag{14}\]
To determine the unknown coefficients, \(c_{p_{1}p_{2}...p_{n}}\), \(d_{i_{1}i_{2}...i_{m}}\), and \(e_{p_{1}...p_{n^{\prime}}i_{1}...i_{m^{\prime}}}\), it is recommended to expand the first few terms of \(\mathcal{K}_{1}\), \(\mathcal{K}_{p}\), \(\mathcal{L}_{1}\), and \(\mathcal{L}_{i}\). The expansions of \(\mathcal{K}_{1}\), \(\mathcal{K}_{p}\), \(\mathcal{L}_{1}\), and \(\mathcal{L}_{i}\), which are defined in Eqs. 10, are explicitly presented in Appendix B.
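For example, at the singles-doubles level employed below (CCSD-M), the expansions truncate after a few terms; a direct specialization of the above expressions gives

\[\mathcal{K}_{1}\approx\sum_{p=2}^{M_{1}}c_{p}a_{p}^{\dagger}+2\sum_{p,q=2}^{M_{1}}c_{pq}a_{p}^{\dagger}a_{q}^{\dagger}a_{1}+\sum_{p=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pi}a_{p}^{\dagger}b_{i}^{\dagger}b_{1},\qquad\mathcal{K}_{p}\approx c_{p}a_{1}+2\sum_{q=2}^{M_{1}}c_{pq}a_{q}^{\dagger}(a_{1})^{2}+\sum_{i=2}^{M_{2}}e_{pi}b_{i}^{\dagger}a_{1}b_{1},\]

and analogously for \(\mathcal{L}_{1}\) and \(\mathcal{L}_{i}\).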
In the following calculations, it is noted that the operators \(\mathcal{K}\) and \(\mathcal{L}\) fulfil the commutation relations \([\mathcal{K}_{p},\mathcal{K}_{q}]=[\mathcal{K}_{1},\mathcal{K}_{p}]=0\), \([\mathcal{L}_{i},\mathcal{L}_{j}]=[\mathcal{L}_{1},\mathcal{L}_{j}]=0\), and \([\mathcal{L}_{1},\mathcal{K}_{p}]=[\mathcal{K}_{1},\mathcal{L}_{j}]=[\mathcal{L}_{i},\mathcal{K}_{p}]=0\). Also, the actions of \(\mathcal{K}\) and \(\mathcal{L}\) on \(\langle\phi_{0}|\) from the right are \(\langle\phi_{0}|\mathcal{K}_{1}=0,\langle\phi_{0}|(a_{1})^{n}\mathcal{K}_{1}=0,\langle\phi_{0}|\mathcal{K}_{p}=c_{p}\langle\phi_{0}|a_{1}\) and \(\langle\phi_{0}|\mathcal{L}_{1}=0,\langle\phi_{0}|(b_{1})^{m}\mathcal{L}_{1}=0,\langle\phi_{0}|\mathcal{L}_{j}=d_{j}\langle\phi_{0}|b_{1}\).
### The energy and its structure
Using the coupled-cluster ansatz of Eq. 5 with the cluster operator of Eq. 6, and the Hamiltonian of Eq. 1 for the binary mixture of bosons, the exact energy of the ground state can be calculated from
\[E_{0}=\langle\phi_{0}|e^{-T}He^{T}|\phi_{0}\rangle, \tag{15}\]
where the Hamiltonian is transformed by employing the relation \(\dot{A}=e^{-T}Ae^{T}\). Now we split the transformed Hamiltonian according to the number of operators that depend on
the occupied orbitals \(\varphi_{1}\) and \(\psi_{1}\). Here, the transformed one-particle operator is found to be
\[\dot{H_{0}} = h_{1\bar{1}}^{(1)}\dot{a}_{1}^{\dagger}a_{1}+\sum_{s=2}^{M_{1}}h_{1 \bar{s}}^{(1)}\dot{a}_{1}^{\dagger}\dot{a}_{s}+\sum_{s=2}^{M_{1}}h_{s\bar{1}}^{( 1)}a_{s}^{\dagger}a_{1}+\sum_{r,s=2}^{M_{1}}h_{r\bar{s}}^{(1)}a_{r}^{\dagger} \dot{a}_{s} \tag{4.12}\] \[+ h_{11}^{(2)}\dot{b}_{1}^{\dagger}b_{1}+\sum_{l=2}^{M_{2}}h_{1l}^ {(2)}\dot{b}_{1}^{\dagger}\dot{b}_{l}+\sum_{l=2}^{M_{2}}h_{l1}^{(2)}b_{l}^{ \dagger}b_{1}+\sum_{k,l=2}^{M_{2}}h_{kl}^{(2)}b_{k}^{\dagger}\dot{b}_{l},\]
which includes a total of eight terms for two species, and among them the second and the sixth terms are the most intricate when calculating the exact energy. The transformed two-body operator of the Hamiltonian is divided into three parts, \(\dot{V}^{(1)}\), \(\dot{V}^{(2)}\), and \(\dot{V}^{(12)}\). The first, \(\dot{V}^{(1)}\), and the second, \(\dot{V}^{(2)}\), parts correspond to excitations in species-1 and species-2, respectively, and consist of nine terms each. The third part \(\dot{V}^{(12)}\) is the most involved one as it contains the excitations in both species and it has sixteen terms. All terms are explicitly shown in Appendix C.
We now calculate the energy using \(E_{0}=\langle\phi_{0}|\dot{H}|\phi_{0}\rangle\) and find that the energy is the combination of three parts
\[E_{0}=E_{1}+E_{2}+E_{12}, \tag{4.13}\]
where after some intricate algebra, one can readily find \(E_{1}\), \(E_{2}\), and \(E_{12}\) as
\[E_{1} = N_{1}\left[h_{\bar{1}\bar{1}}^{(1)}+\frac{N_{1}-1}{2}V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(1)}\right] \tag{4.14}\] \[+ N_{1}\Biggl{[}\sum_{s=2}^{M_{1}}[h_{\bar{1}\bar{s}}^{(1)}+(N_{1}-1)V_{\bar{1}\bar{1}\bar{1}\bar{s}}^{(1)}]c_{s}+\frac{N_{1}-1}{2}\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{r}\bar{s}}^{(1)}(2c_{rs}+c_{r}c_{s})\Biggr{]},\]
\[E_{2} = N_{2}\left[h_{11}^{(2)}+\frac{N_{2}-1}{2}V_{1111}^{(2)}\right] \tag{4.15}\] \[+ N_{2}\Biggl{[}\sum_{l=2}^{M_{2}}[h_{1l}^{(2)}+(N_{2}-1)V_{111l}^{(2)}]d_{l}+\frac{N_{2}-1}{2}\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}(2d_{kl}+d_{k}d_{l})\Biggr{]},\]
\[E_{12}=N_{1}N_{2}\Biggl{[}V_{\bar{1}1\bar{1}1}^{(12)}+\sum_{s=2}^{M_{1}}V_{\bar{1}1\bar{s}1}^{(12)}c_{s}+\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{1}l}^{(12)}d_{l}+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{s}l}^{(12)}(c_{s}d_{l}+e_{ls})\Biggr{]}. \tag{4.16}\]
We notice that, in order to calculate \(E_{1}\), the first and second terms of \(\dot{H}_{0}\) and the first, second, and fourth terms of \(\dot{V}^{(1)}\) contribute, while for \(E_{2}\), the fifth and sixth terms of \(\dot{H}_{0}\) and the first, second, and fourth terms of \(\dot{V}^{(2)}\) contribute. To determine \(E_{12}\), only the first, second, third, and sixth terms of \(\dot{V}^{(12)}\) contribute; see also Appendix C.
Here, \(E_{1}\) depends on the singles and doubles coefficients of species-1 and, analogously, \(E_{2}\) depends on the singles and doubles coefficients of species-2. \(E_{1}\) and \(E_{2}\) have the analogous form of single-species coupled-cluster theory, whereas \(E_{12}\) does not have an analog in single-species theory as it is generated by the inter-species interaction. If there is no inter-species interaction, \(E_{0}\) boils down to the sum of the energies of two independent single-species bosonic systems. Note that the equation for \(E_{0}\) presented here, Eqs. 4.13 to 4.16, is valid for all orders of coupled-cluster theory for bosonic mixtures, CC-M.
The mean-field energy, \(E_{\rm MF}=\langle\phi_{0}|H|\phi_{0}\rangle\), is contained in the first and fifth terms of \(\dot{H}_{0}\) and the first terms of \(\dot{V}^{(1)}\), \(\dot{V}^{(2)}\), and \(\dot{V}^{(12)}\). \(E_{\rm MF}\) is found to be
\[E_{\rm MF}=N_{1}\left[h^{(1)}_{\bar{1}\bar{1}}+\frac{N_{1}-1}{2}V^{(1)}_{\bar{1}\bar{1}\bar{1}\bar{1}}\right]+N_{2}\left[h^{(2)}_{11}+\frac{N_{2}-1}{2}V^{(2)}_{1111}\right]+N_{1}N_{2}V^{(12)}_{\bar{1}1\bar{1}1}. \tag{4.17}\]
From \(E_{1}\), \(E_{2}\), and \(E_{12}\), the correlation energy for the ground state of the binary bosonic mixture reads
\[E_{\rm cor} = N_{1}\Biggl{[}\sum_{s=2}^{M_{1}}[h^{(1)}_{\bar{1}\bar{s}}+(N_{1}-1)V^{(1)}_{\bar{1}\bar{1}\bar{1}\bar{s}}]c_{s}+\frac{N_{1}-1}{2}\sum_{r,s=2}^{M_{1}}V^{(1)}_{\bar{1}\bar{1}\bar{r}\bar{s}}(2c_{rs}+c_{r}c_{s})\Biggr{]} \tag{4.18}\] \[+ N_{2}\Biggl{[}\sum_{l=2}^{M_{2}}[h^{(2)}_{1l}+(N_{2}-1)V^{(2)}_{111l}]d_{l}+\frac{N_{2}-1}{2}\sum_{k,l=2}^{M_{2}}V^{(2)}_{11kl}(2d_{kl}+d_{k}d_{l})\Biggr{]}\] \[+ N_{1}N_{2}\Biggl{[}\sum_{s=2}^{M_{1}}V^{(12)}_{\bar{1}1\bar{s}1}c_{s}+\sum_{l=2}^{M_{2}}V^{(12)}_{\bar{1}1\bar{1}l}d_{l}+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V^{(12)}_{\bar{1}1\bar{s}l}(c_{s}d_{l}+e_{ls})\Biggr{]}.\]
In the calculation of \(E_{\rm cor}\), we have used arbitrary sets of orthonormal orbitals. If we use the orbitals generated from the Fock-like operators \(F_{1}\) and \(F_{2}\), see Eqs. 3.4 and 3.5 discussed in the last section, and Eq. A.1 in Appendix A, the correlation energy simplifies and reads
\[E_{\rm cor} = \frac{N_{1}(N_{1}-1)}{2}\sum_{r,s=2}^{M_{1}}V^{(1)}_{\bar{1}\bar{1}\bar{r}\bar{s}}(2c_{rs}+c_{r}c_{s})+\frac{N_{2}(N_{2}-1)}{2}\sum_{k,l=2}^{M_{2}}V^{(2)}_{11kl}(2d_{kl}+d_{k}d_{l}) \tag{4.19}\] \[+N_{1}N_{2}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V^{(12)}_{\bar{1}1\bar{s}l}(c_{s}d_{l}+e_{ls}).\]
The other terms disappear due to the facts that \(\langle\varphi_{s}|F_{1}|\varphi_{1}\rangle=0\) and \(\langle\psi_{l}|F_{2}|\psi_{1}\rangle=0\). All in all, the total energy of the mixture is modified to
\[E_{0}=E_{\rm MF}+E_{\rm cor}, \tag{4.20}\]
where \(E_{\rm cor}\) for the binary mixture of bosons is presented in Eq. 4.19.
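As a small programming aid (a direct, hypothetical transcription of Eq. 4.19 with an array layout of our own choosing, in which index 0 corresponds to the first virtual orbital), the correlation energy can be assembled as:

```python
import numpy as np

def correlation_energy(N1, N2, V1, V2, V12, c1, c2, d1, d2, e):
    """Eq. 4.19.  V1[r,s] ~ V^(1)_{1 1 r s}, V2[k,l] ~ V^(2)_{1 1 k l},
    V12[s,l] ~ V^(12)_{1 1 s l}; c1[s] = c_s, c2[r,s] = c_{rs},
    d1[l] = d_l, d2[k,l] = d_{kl}, and e[l,s] = e_{ls}."""
    t1 = 0.5 * N1 * (N1 - 1) * np.sum(V1 * (2.0 * c2 + np.outer(c1, c1)))
    t2 = 0.5 * N2 * (N2 - 1) * np.sum(V2 * (2.0 * d2 + np.outer(d1, d1)))
    t12 = N1 * N2 * np.sum(V12 * (np.outer(c1, d1) + e.T))
    return t1 + t2 + t12
```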
### Equations for the coefficients
The correlation energy due to the dressing of the state from \(\phi_{0}\) to \(\Psi\) is expressed in terms of the coefficients \(\{c_{s}\}\), \(\{d_{l}\}\), \(\{c_{rs}\}\), \(\{d_{kl}\}\), and \(\{e_{ls}\}\). The orbitals \(\varphi_{i}\) and \(\psi_{j}\) can be conveniently chosen to simplify \(E_{\rm cor}\); details are in the next section. Note that even the simplified form of the equations cannot be utilized to determine the unknown coefficients mentioned above, since \(E_{0}=\langle\phi_{0}|e^{-T}He^{T}|\phi_{0}\rangle\) is not subject to a variational principle. Therefore, one requires approximations. For the coupled-cluster theory for bosonic mixtures we may choose approximations such as the combination of keeping \(M_{1}\) and \(M_{2}\) small and truncating the coupled-cluster excitations. In the process of truncating the excitations, one may consider \(\phi_{0}\) and all the singly and doubly excited configurations, analogously to atomic-structure calculations.
Here, we observe that \(E_{0}|\phi_{0}\rangle=e^{-T}He^{T}|\phi_{0}\rangle\) holds, and hence projecting on any excited configuration of species-1 and species-2 gives us the required equation for the coefficients. Therefore, the singly excited configurations of species-1 provide the \((M_{1}-1)\) equations
\[\langle\phi_{0}|a_{1}^{\dagger}a_{\bar{i}}\dot{H}|\phi_{0}\rangle=0,\ \ \bar{i}=2,3,...,M_{1}. \tag{4.21}\]
Similarly, the singly excited configurations of species-2 lead to the \((M_{2}-1)\) equations
\[\langle\phi_{0}|b_{1}^{\dagger}b_{i}\dot{H}|\phi_{0}\rangle=0,\ \ i=2,3,...,M_{2}. \tag{4.22}\]
The doubly excited configurations generate the \(\frac{1}{2}M_{1}(M_{1}-1)\) equations for species-1
\[\langle\phi_{0}|(a_{1}^{\dagger})^{2}a_{\bar{i}}a_{\bar{j}}\dot{H}|\phi_{0} \rangle=0,\ \ \bar{i}\geq\bar{j}=2,3,...,M_{1}, \tag{4.23}\]
and the \(\frac{1}{2}M_{2}(M_{2}-1)\) equations for species-2
\[\langle\phi_{0}|(b_{1}^{\dagger})^{2}b_{i}b_{j}\dot{H}|\phi_{0}\rangle=0,\ \ i\geq j=2,3,...,M_{2}. \tag{4.24}\]
In addition, two simultaneous single excitations, one for each species, provide the \((M_{1}-1)(M_{2}-1)\) equations
\[\langle\phi_{0}|a_{1}^{\dagger}b_{1}^{\dagger}a_{\bar{i}}b_{i}\dot{H}|\phi_{0 }\rangle=0,\ \ \bar{i}=2,3,...,M_{1},\ \ i=2,3,...,M_{2}. \tag{4.25}\]
Here the number of independent equations, Eqs. 4.21 to 4.25, is exactly equal to the number of distinct coefficients. Indeed, there are \(M_{1}-1\) coefficients \(c_{\bar{i}}\), \(M_{2}-1\) coefficients \(d_{i}\), \(M_{1}(M_{1}-1)/2\) coefficients \(c_{\bar{i}\bar{j}}\), \(M_{2}(M_{2}-1)/2\) coefficients \(d_{ij}\), and \((M_{1}-1)(M_{2}-1)\) coefficients \(e_{\bar{i}i}\). The equations presented above are coupled to each other by these unknown coefficients.
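As a concrete check of this counting, for the minimal case \(M_{1}=M_{2}=2\) used in Section V one obtains \((M_{1}-1)+(M_{2}-1)+\frac{1}{2}M_{1}(M_{1}-1)+\frac{1}{2}M_{2}(M_{2}-1)+(M_{1}-1)(M_{2}-1)=1+1+1+1+1=5\) coupled equations, matching the five unknowns \(c_{2}\), \(d_{2}\), \(c_{22}\), \(d_{22}\), and \(e_{22}\).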
It is necessary to discuss here what the equations would be if one were to go beyond the second-order coupled-cluster theory. To include all triple excitations, one needs to solve four more sets of equations: for species-1 \(\langle\phi_{0}|(a_{1}^{\dagger})^{3}a_{\bar{i}}a_{\bar{j}}a_{\bar{k}}\dot{H}|\phi_{0}\rangle=0,\bar{i}\geq\bar{j}\geq\bar{k}=2,3,...,M_{1}\), for species-2 \(\langle\phi_{0}|(b_{1}^{\dagger})^{3}b_{i}b_{j}b_{k}\dot{H}|\phi_{0}\rangle=0,i\geq j\geq k=2,3,...,M_{2}\), and for the combined excitations of both species \(\langle\phi_{0}|(a_{1}^{\dagger})^{2}b_{1}^{\dagger}a_{\bar{i}}a_{\bar{j}}b_{i}\dot{H}|\phi_{0}\rangle=0\) and \(\langle\phi_{0}|a_{1}^{\dagger}(b_{1}^{\dagger})^{2}a_{\bar{i}}b_{i}b_{j}\dot{H}|\phi_{0}\rangle=0\) where \(\bar{i}\geq\bar{j}=2,3,...,M_{1},i\geq j=2,3,...,M_{2}\). For triple excitations, four additional types of unknown coefficients emerge, namely \(\{c_{\bar{i}\bar{j}\bar{k}}\}\), \(\{d_{ijk}\}\), \(\{e_{\bar{i}\bar{j}i}\}\), and \(\{e_{\bar{i}ij}\}\). Similarly, for quadruple excitations one needs to solve five additional coupled sets of equations.
Now, we evaluate the series of coupled Eqs. 4.21 to 4.25 in order to determine the unknown coefficients. The series contains sets of equations having excitation operators \(T_{n}^{(1)}\), \(T_{m}^{(2)}\), and \(T_{n^{\prime}m^{\prime}}^{(12)}\), where ideally \(n\) and \(n^{\prime}\) run from 1 to \(N_{1}\) and \(m\) and \(m^{\prime}\) run from 1 to \(N_{2}\), see Eqs. 2.7 to 2.9. In practice, one has to truncate the expansions of \(T_{n}^{(1)}\), \(T_{m}^{(2)}\), and \(T_{n^{\prime}m^{\prime}}^{(12)}\). If \(T_{n}^{(1)}\) and \(T_{m}^{(2)}\) each consists of one excitation to the virtual orbitals, we call it the coupled-cluster singles approach. When \(T_{n}^{(1)}\) and \(T_{m}^{(2)}\) each contains two excitations to the virtual orbitals, and \(T_{n^{\prime}m^{\prime}}^{(12)}\) consists of two simultaneous single excitations to the virtual orbitals, we call it coupled-cluster singles doubles for bosonic mixtures (CCSD-M).
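Explicitly, in the CCSD-M truncation the cluster operator of Eq. 6 reduces to (a direct specialization of Eqs. 7 to 9)

\[T\approx\sum_{p=2}^{M_{1}}c_{p}a_{p}^{\dagger}a_{1}+\sum_{p,q=2}^{M_{1}}c_{pq}a_{p}^{\dagger}a_{q}^{\dagger}(a_{1})^{2}+\sum_{i=2}^{M_{2}}d_{i}b_{i}^{\dagger}b_{1}+\sum_{i,j=2}^{M_{2}}d_{ij}b_{i}^{\dagger}b_{j}^{\dagger}(b_{1})^{2}+\sum_{p=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pi}a_{p}^{\dagger}b_{i}^{\dagger}a_{1}b_{1}\,.\]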
#### iv.2.1 The intra-species coefficients
We have derived the equations 4.21 to 4.25 for arbitrary sets of orthonormal orbitals, see the details in Appendix D, but present here only the expressions found from the orbitals of \(F_{1}\) and \(F_{2}\), see Eq. 3.4. The derivations are very lengthy and involved. Now we start by solving \(\langle\phi_{0}|a_{1}^{\dagger}a_{\bar{i}}\dot{H}|\phi_{0}\rangle=0\) and \(\langle\phi_{0}|b_{1}^{\dagger}b_{i}\dot{H}|\phi_{0}\rangle=0\). We obtain \(M_{1}-1\) coupled equations with \(\bar{i}=2,3,...,M_{1}\),
\[\mu_{1}^{(1)}c_{\bar{i}} = -\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{1}\bar{s}}^{(1)}(N_{1}-1)(2c _{s\bar{i}}+c_{\bar{i}}c_{s})+(N_{1}-1)\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{i}\bar{s }}^{(1)}c_{s} \tag{4.26}\] \[-\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{1}l\bar{l}}^{(12)}N_{2}(e_{\bar{ i}l}+c_{\bar{i}}d_{l})+N_{2}\sum_{l=2}^{M_{2}}V_{\bar{i}\bar{1}l\bar{l}}^{(12)}d_{l}\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{r}s}^{(1)}\bar{\alpha} _{r\bar{s}i}+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{i}\bar{r}s}^{(1)}[(N_{1}-1)(2 c_{rs}+c_{r}c_{s})]\] \[+\frac{1}{2}N_{2}(N_{2}-1)\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}(2e_{ \bar{i}lk}+e_{\bar{i}k}d_{l}+e_{\bar{i}l}d_{k})\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{1}l\bar{s}}^{ (12)}[(N_{1}-1)N_{2}(2e_{\bar{i}sl}+c_{s}c_{\bar{i}l}+2c_{s\bar{i}}d_{l})-N_{2 }(c_{\bar{i}}e_{sl}+c_{\bar{i}}c_{s}d_{l})]\] \[+N_{2}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{i}\bar{1}l\bar{ s}}^{(12)}(e_{sl}+c_{s}d_{l}),\]
and \(M_{2}-1\) coupled equations with \(i=2,3,...,M_{2}\)
\[\mu_{1}^{(2)}d_{i} = -\sum_{l=2}^{M_{2}}V_{111l}^{(2)}(N_{2}-1)(2d_{li}+d_{i}d_{l})+(N_ {2}-1)\sum_{l=2}^{M_{2}}V_{\bar{1}i\bar{l}l}^{(2)}d_{l} \tag{4.27}\] \[-\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}l\bar{s}}^{(12)}N_{1}(e_{si} +c_{s}d_{i})+N_{1}\sum_{s=2}^{M_{1}}V_{\bar{1}i\bar{s}1}^{(12)}c_{s}\] \[+\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}\alpha_{kli}+\sum_{k,l=2}^{M_{ 2}}V_{1ikl}^{(2)}[(N_{2}-1)(2d_{lk}+d_{k}d_{l})]\] \[+\frac{1}{2}N_{1}(N_{1}-1)\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{1}r \bar{s}}^{(1)}(2e_{si}+e_{ri}c_{s}+e_{si}c_{r})\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{s}l}^{(12)}[ N_{1}(N_{2}-1)(2e_{sli}+2c_{s}d_{li}+d_{l}e_{si})-N_{1}(d_{i}e_{sl}+c_{s}d_{i}d_{ l})]\] \[+N_{1}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}i\bar{s}l}^{(1 2)}(e_{sl}+c_{s}d_{l}),\]
where the quantities \(\bar{\alpha}_{r\bar{s}i}\) and \(\alpha_{kli}\) are given by
\[\bar{\alpha}_{r\bar{s}i} =[(N_{1}-1)(N_{1}-2)(3c_{s\bar{i}r}+c_{s\bar{i}}c_{r}+c_{\bar{r}i }c_{s})-(N_{1}-1)(2c_{\bar{i}}c_{rs}+c_{\bar{i}}c_{r}c_{s})],\] \[\alpha_{kli} =(N_{2}-1)(N_{2}-2)(3d_{lik}+d_{li}d_{k}+d_{ki}d_{l})-(N_{2}-1)(2 d_{i}d_{lk}+d_{i}d_{k}d_{l})]. \tag{4.28}\]
In Eqs. 4.26 and 4.27, there are terms, explicitly \(c_{s\bar{i}r}\), \(e_{\bar{i}lk}\), \(e_{\bar{i}sl}\), \(d_{lik}\), \(e_{sri}\), and \(e_{sli}\), which contain triple excitations. These coefficients have to be put equal to zero if coupled-cluster singles doubles
is to be used. Similarly, when only the single excitations are taken, which leads to the coupled-cluster singles approach, the corresponding general equations for arbitrary sets of orthonormal orbitals are presented in Appendix D.
Now, to evaluate the working equations of the CCSD-M, we need to determine the sets of \(M_{1}(M_{1}-1)/2\), \(M_{2}(M_{2}-1)/2\), and \((M_{1}-1)(M_{2}-1)\) distinct coupled equations resulting from \(\langle\phi_{0}|(a_{1}^{\dagger})^{2}a_{\bar{i}}a_{\bar{j}}\dot{H}|\phi_{0}\rangle=0\), \(\langle\phi_{0}|(b_{1}^{\dagger})^{2}b_{i}b_{j}\dot{H}|\phi_{0}\rangle=0\), and \(\langle\phi_{0}|a_{1}^{\dagger}b_{1}^{\dagger}a_{\bar{i}}b_{i}\dot{H}|\phi_{0}\rangle=0\), respectively. To recall, we present here only the working equations generated from the orbitals of \(F_{1}\) and \(F_{2}\). The sets of coupled equations for the double excitations with \(\bar{i},\bar{j}=2,3,...,M_{1}\) read
\[(4\mu_{1}^{(1)} - 2\mu_{\bar{i}}^{(1)}-2\mu_{\bar{j}}^{(1)})c_{\bar{i}\bar{j}}=V_{ \bar{i}\bar{j}\bar{1}\bar{1}}^{(1)}+V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(1)}(2c _{\bar{i}\bar{j}}+c_{\bar{i}}c_{\bar{j}}) \tag{49}\] \[-[V_{\bar{i}\bar{1}\bar{1}\bar{1}}^{(1)}c_{\bar{j}}+V_{\bar{j} \bar{1}\bar{1}\bar{1}}^{(1)}c_{\bar{i}}]-\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1} \bar{1}\bar{s}}^{(1)}\bar{\beta}_{s\bar{i}\bar{j}}\] \[-\sum_{s=2}^{M_{1}}V_{\bar{i}\bar{1}\bar{s}\bar{1}}^{(1)}(2c_{s \bar{j}}+c_{\bar{j}}c_{s})+\sum_{s=2}^{M_{1}}V_{\bar{i}\bar{1}\bar{s}\bar{1}}^ {(1)}[2(N_{1}-2)c_{s\bar{j}}-c_{\bar{j}}c_{s}]\] \[-\sum_{s=2}^{M_{1}}V_{\bar{j}\bar{1}\bar{s}\bar{1}}^{(1)}(2c_{s \bar{i}}+c_{\bar{i}}c_{s})+\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{j}\bar{s}\bar{1}}^ {(1)}[2(N_{1}-2)c_{s\bar{i}}-c_{\bar{i}}c_{s}]\] \[+\sum_{s=2}^{M_{1}}V_{\bar{i}\bar{j}\bar{s}\bar{1}}^{(1)}c_{s}+ \sum_{s=2}^{M_{1}}V_{\bar{j}\bar{s}\bar{1}}^{(1)}c_{s}+\sum_{r,s=2}^{M_{1}}V_{ \bar{i}\bar{j}r\bar{s}}^{(1)}[2c_{sr}+c_{r}c_{s}]+\sum_{r,s=2}^{M_{1}}V_{\bar{1} \bar{r}\bar{s}}^{(1)}\bar{\gamma}_{rs\bar{i}\bar{j}}\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{i}r\bar{s}}^{(1)}[2(N_{1}-2)c _{r\bar{j}}c_{s}+2(N_{1}-2)c_{s\bar{j}}c_{r}-2c_{sr}c_{\bar{j}}-c_{\bar{j}}c_ {r}c_{s}]\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{j}r\bar{s}}^{(1)}[2(N_{1}-2)c _{r\bar{i}}c_{s}+2(N_{1}-2)c_{s\bar{i}}c_{r}-2c_{sr}c_{\bar{i}}-c_{\bar{i}}c_ {r}c_{s}]\] \[+N_{2}\sum_{l=2}^{M_{2}}[V_{\bar{i}\bar{1}l\bar{l}}^{(12)}e_{\bar {j}l}+V_{\bar{j}\bar{1}l\bar{l}}^{(12)}e_{\bar{i}\bar{l}}]-N_{2}\sum_{l=2}^{M_{ 2}}V_{\bar{1}\bar{1}l\bar{l}}^{(12)}(4c_{\bar{i}\bar{j}}d_{l}+c_{\bar{i}}e_{ \bar{j}l}+c_{\bar{j}}e_{\bar{i}l})\] \[+\frac{1}{2}N_{2}(N_{2}-1)\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}[e_{ \bar{i}k}e_{\bar{j}l}+e_{\bar{j}k}e_{\bar{i}l}]+\sum_{s=2}^{M_{1}}\sum_{l=2}^ {M_{2}}V_{\bar{1}\bar{1}l\bar{s}}^{(12)}\bar{\delta}_{s\bar{l}\bar{j}}\] \[+N_{2}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}[V_{\bar{i}1\bar{l}}^ {(12)}(2c_{s\bar{j}}d_{l}+c_{s}e_{\bar{j}l})+V_{\bar{j}1\bar{s}l}^{(12)}(2c_{ s\bar{i}}d_{l}+c_{s}e_{\bar{i}l})].\]
The quantities \(\bar{\beta}_{s\bar{i}\bar{j}}\), \(\bar{\gamma}_{rs\bar{i}\bar{j}}\), and \(\bar{\delta}_{s\bar{l}\bar{j}}\) have the form
\[\bar{\beta}_{s\bar{i}\bar{j}} =[2(N_{1}-3)(c_{\bar{i}}c_{s\bar{j}}+c_{\bar{j}}c_{s\bar{i}})+4(N_{1 }+2)c_{s}c_{\bar{i}\bar{j}}+2c_{\bar{i}}c_{\bar{j}}c_{s}],\] \[\bar{\gamma}_{rs\bar{i}\bar{j}} =4c_{r\bar{i}}c_{s\bar{j}}(N_{1}-2)(N_{1}-3)-2(N_{1}-2)\big{[}2c_ {\bar{i}\bar{j}}(2c_{rs}+c_{r}c_{s})\] \[+c_{r\bar{i}}c_{s}c_{\bar{j}}+c_{s\bar{i}}c_{r}c_{\bar{j}}+c_{r \bar{j}}c_{s}c_{\bar{i}}+c_{s\bar{j}}c_{r}c_{\bar{i}}\big{]}-(2c_{rs}+c_{r}c_{ s})(2c_{\bar{i}\bar{j}}-c_{\bar{i}}c_{\bar{j}}),\] \[\bar{\delta}_{sl\bar{i}\bar{j}} =2(N_{1}-2)N_{2}(c_{s\bar{i}}c_{\bar{j}l}+c_{s\bar{j}}c_{\bar{i}l})\] \[-N_{2}(4c_{\bar{i}\bar{j}}c_{sl}+4c_{\bar{i}\bar{j}}c_{s}d_{l}+2c _{s\bar{j}}c_{\bar{i}}d_{l}+2c_{\bar{j}}d_{l}c_{s\bar{i}}+c_{\bar{i}}c_{s}e_{ \bar{j}l}+c_{\bar{j}}c_{s}e_{\bar{i}l}). \tag{4.30}\]
Analogously, the \(M_{2}(M_{2}-1)/2\) set of coupled equations generated from \(\langle\phi_{0}|(b_{1}^{\dagger})^{2}b_{i}b_{j}\dot{H}|\phi_{0}\rangle=0\) with \(i,j=2,3,...,M_{2}\) can be written as
\[(4\mu_{1}^{(2)} - 2\mu_{i}^{(2)}-2\mu_{j}^{(2)})d_{ij}=V_{ij11}^{(2)}+V_{1111}^{(2 )}(2d_{ij}+d_{i}d_{j}) \tag{4.31}\] \[-[V_{i111}^{(2)}d_{j}+V_{j111}^{(2)}d_{i}]-\sum_{l=2}^{M_{2}}V_{1 111}^{(2)}\beta_{lij}\] \[-\sum_{l=2}^{M_{2}}V_{i111}^{(2)}(2d_{lj}+d_{j}d_{l})+\sum_{l=2}^ {M_{2}}V_{i1l1}^{(2)}[2(N_{2}-2)d_{lj}-d_{j}d_{l}]\] \[-\sum_{l=2}^{M_{2}}V_{j111}^{(2)}(2d_{li}+d_{i}d_{l})+\sum_{l=2}^ {M_{2}}V_{1j1l}^{(2)}[2(N_{2}-2)d_{li}-d_{i}d_{l}]\] \[+\sum_{l=2}^{M_{2}}V_{ij1l}^{(2)}d_{l}+\sum_{l=2}^{M_{2}}V_{j1l}^ {(2)}d_{l}+\sum_{k,l=2}^{M_{2}}V_{ijkl}^{(2)}[2d_{lk}+d_{k}d_{l}]+\sum_{k,l=2} ^{M_{2}}V_{11kl}^{(2)}\gamma_{kij}\] \[+\sum_{k,l=2}^{M_{2}}V_{1ikl}^{(2)}[2(N_{2}-2)d_{kj}d_{l}+2(N_{2} -2)d_{lj}d_{k}-2d_{lk}d_{j}-d_{j}d_{k}d_{l}]\] \[+\sum_{k,l=2}^{M_{2}}V_{1jkl}^{(2)}[2(N_{2}-2)d_{ki}d_{l}+2(N_{2} -2)d_{li}d_{k}-2d_{lk}d_{i}-d_{i}d_{k}d_{l}]\] \[+N_{1}\sum_{s=2}^{M_{1}}[V_{\bar{1}\bar{i}s\bar{1}}^{(12)}e_{sj}+V _{\bar{1}\bar{j}s\bar{1}}^{(12)}e_{si}]-N_{1}\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{ i}s\bar{1}}^{(12)}(4d_{ij}c_{s}+e_{sj}d_{i}+e_{si}d_{j})\] \[+\frac{1}{2}N_{1}(N_{1}-1)\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{i}r \bar{s}}^{(1)}[e_{ri}e_{sj}+e_{rj}e_{si}]+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}} V_{\bar{1}\bar{i}s\bar{l}}^{(12)}\delta_{slij}\] \[+N_{1}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}[V_{\bar{1}\bar{i}s\bar{ 1}}^{(12)}(2d_{lj}c_{s}+e_{sj}d_{l})+V_{\bar{1}\bar{j}s\bar{l}}^{(12)}(2d_{li}c _{s}+e_{si}d_{l})],\]
where the quantities, \(\beta_{lij}\), \(\gamma_{klij}\), and \(\delta_{slij}\), appearing in Eq. 4.31 are
\[\beta_{lij}=[2(N_{2}-3)(d_{i}d_{lj}+d_{j}d_{li})+4(N_{2}+2)d_{l}d_{ ij}+2d_{i}d_{j}d_{l}],\] \[\gamma_{klij}=4d_{ki}d_{lj}(N_{2}-2)(N_{2}-3)-2(N_{2}-2)\big{[}2d_ {ij}(2d_{kl}+d_{k}d_{l})\] \[\qquad+d_{ki}d_{d}d_{j}+d_{li}d_{k}d_{j}+d_{kj}d_{l}d_{i}+d_{lj}d_ {k}d_{i}\big{]}-(2d_{kl}+d_{k}d_{l})(2d_{ij}-d_{i}d_{j}),\] \[\delta_{slij}=2N_{1}(N_{2}-2)(e_{si}d_{lj}+e_{sj}d_{li})\] \[\qquad-N_{1}(4d_{ij}e_{sl}+4d_{ij}c_{s}d_{l}+2c_{s}d_{i}d_{lj}+2c_ {s}d_{j}d_{li}+d_{i}d_{l}e_{sj}+d_{j}d_{l}e_{si}). \tag{4.32}\]
Note that the general forms of the Eqs. 4.29 and 4.31 are presented in Appendix D.
#### iv.2.2 The inter-species coefficients
In this subsection we show the final form of the \((M_{1}-1)(M_{2}-1)\) equations resulting from \(\langle\phi_{0}|a_{1}^{\dagger}b_{1}^{\dagger}a_{\bar{i}}b_{i}\dot{H}|\phi_{0}\rangle=0\), where \(\bar{i}=2,3,...,M_{1}\) and \(i=2,3,...,M_{2}\), using the Fock-like operators and Appendix A. The general form of this expansion is presented in Appendix D. Using the Fock-like operators, one can readily find
\[(\mu_{1}^{(1)}+\mu_{1}^{(2)} - \mu_{i}^{(1)}-\mu_{i}^{(2)})e_{\bar{i}i}=V_{\bar{i}\bar{i}11}^{(12)} -V_{\bar{1}\bar{i}11}^{(12)}c_{i}-V_{\bar{i}1\bar{1}1}^{(12)}d_{i} \tag{4.33}\] \[+V_{1\bar{1}1\bar{1}}^{(12)}(c_{\bar{i}i}+c_{\bar{i}}d_{i})+\sum_{ s=2}^{M_{1}}V_{\bar{i}\bar{i}81}^{(12)}c_{s}+\sum_{l=2}^{M_{2}}V_{\bar{i}\bar{i} \bar{l}1}^{(12)}d_{l}\] \[-\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}1\bar{s}}^{(1)}[(N_{1}+1)(c_ {s}e_{\bar{i}i}+c_{\bar{i}}e_{si})]+\sum_{s=2}^{M_{1}}V_{\bar{1}1\bar{s}1}^{(12 )}[N_{2}(c_{s}e_{\bar{i}i}+c_{\bar{i}}e_{si})-\bar{\chi}_{s\bar{i}i}]\] \[-\sum_{l=2}^{M_{2}}V_{111l}^{(2)}[(N_{2}+1)(d_{l}e_{\bar{i}i}+d_{ i}e_{\bar{i}l})]+\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{l}l}^{(12)}[N_{1}(d_{l}e_{ \bar{i}i}+d_{i}e_{\bar{i}l})-\chi_{l\bar{i}i}]\] \[+\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{i}s\bar{1}}^{(1)}(N_{1}-1)e_{si }-\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{s}1}^{(12)}(e_{si}+c_{s}d_{i})\] \[+\sum_{l=2}^{M_{2}}V_{1l1l}^{(2)}(N_{2}-1)e_{\bar{i}l}-\sum_{l=2} ^{M_{2}}V_{\bar{1}\bar{i}l\bar{l}}^{(12)}(e_{\bar{i}l}+c_{\bar{i}}d_{l})\] \[+\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{i}s\bar{1}}^{(12)}[2(N_{1}-1)c_ {s\bar{i}}-c_{\bar{i}}c_{s}]+\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{l}l}^{(12)}[2(N _{2}-1)d_{li}-d_{i}d_{l}]\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}1\bar{r}s\bar{\ell}}^{(1)}\bar{ \xi}_{rs\bar{i}i}+(N_{1}-1)\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{i}rs}^{(1)}(c_{s }e_{ri}+c_{r}e_{si})\] \[+\sum_{k,l=2}^{M_{2}}V_{11k\bar{l}i}^{(12)}\xi_{k\bar{l}i\bar{i}} +(N_{2}-1)\sum_{k,l=2}^{M_{2}}V_{1ikl}^{(2)}(d_{l}e_{\bar{i}k}+d_{k}e_{\bar{i} l})\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{i}l\bar{s}}^{( 12)}[(N_{1}-1)(2c_{si}d_{l}+c_{s}e_{\bar{i}l})-(c_{\bar{i}}e_{sl}+c_{\bar{i}}c_ {s}d_{l})]\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{s}l}^{(12)}[ (N_{2}-1)(2c_{s}d_{li}+d_{l}e_{si})-(d_{i}e_{sl}+c_{s}d_{i}d_{l})]\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{s}l}^{(12)} \zeta_{s\bar{l}i\bar{i}}+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{i}\bar{i}s \bar{l}}^{(12)}(e_{sl}+c_{s}d_{l}),\]
where \(\mu_{1}^{(1)}\) and \(\mu_{1}^{(2)}\) are the chemical potentials for the ground orbitals of species-1 and species-2, respectively. \(\mu_{\bar{i}}^{(1)}\) and \(\mu_{i}^{(2)}\) are the chemical potentials of the \(\bar{i}\)-th and \(i\)-th orbital of species-1 and species-2, respectively. The parameters \(\bar{\xi}_{rs\bar{i}i}\), \(\xi_{kl\bar{i}i}\), \(\bar{\chi}_{s\bar{i}i}\), \(\chi_{l\bar{i}i}\), and \(\zeta_{sl\bar{i}i}\) appearing in Eq. 4.33 are given by
\[\bar{\xi}_{rs\bar{i}i} =(N_{1}-1)[(N_{1}-2)(c_{r\bar{i}}e_{si}+c_{s\bar{i}}e_{ri})-2c_{rs}e_{\bar{i}i}-c_{r}c_{s}e_{\bar{i}i}-c_{\bar{i}}c_{s}e_{ri}-c_{\bar{i}}c_{r}e_{si}],\] \[\xi_{kl\bar{i}i} =(N_{2}-1)[(N_{2}-2)(d_{ki}e_{\bar{i}l}+d_{li}e_{\bar{i}k})-2d_{lk}e_{\bar{i}i}-d_{k}d_{l}e_{\bar{i}i}-d_{i}d_{l}e_{\bar{i}k}-d_{i}d_{k}e_{\bar{i}l}],\] \[\bar{\chi}_{s\bar{i}i} =(N_{1}+N_{2}-1)c_{s}e_{\bar{i}i}+2(N_{1}-1)d_{i}c_{s\bar{i}}+(N_{2}-1)c_{\bar{i}}e_{si}-c_{\bar{i}}c_{s}d_{i},\] \[\chi_{l\bar{i}i} =(N_{1}+N_{2}-1)d_{l}e_{\bar{i}i}+(N_{1}-1)d_{i}e_{\bar{i}l}+2(N_{2}-1)c_{\bar{i}}d_{li}-c_{\bar{i}}d_{i}d_{l},\] \[\zeta_{sl\bar{i}i} =4(N_{1}-1)(N_{2}-1)c_{s\bar{i}}d_{li}-(N_{1}+N_{2}-1)(e_{\bar{i}i}e_{sl}+c_{s}d_{l}e_{\bar{i}i})\] \[-(N_{1}-1)(2c_{s\bar{i}}d_{i}d_{l}+c_{s}d_{i}e_{\bar{i}l})-(N_{2}-1)(2c_{\bar{i}}c_{s}d_{li}+c_{\bar{i}}d_{l}e_{si})\] \[+c_{\bar{i}}d_{i}e_{sl}+c_{\bar{i}}d_{i}c_{s}d_{l}. \tag{4.34}\]
So far, we have presented the details of the derivation of the working equations of CCSD-M. In the next section, we apply them and check their accuracy. Our strategy is to employ a solvable many-body model for a mixture and study the performance for different interactions, attractive and repulsive, with balanced and imbalanced numbers of bosons.
## V Illustrative examples
Until now, we have formulated the working equations for CCSD-M using arbitrary sets of orbitals and Fock-like operators. Here, we aim at implementing and benchmarking the new theory by utilizing those working equations. We elaborate a few illustrative examples, specifically with balanced and imbalanced numbers of bosons between the two species and various strengths of intra- and inter-species interactions, in the case of the harmonic-interaction model, and compare the ground-state energy with the corresponding analytical exact results. The exactly solvable many-body harmonic-interaction model has been employed to compare various properties of a many-body system, as this model is one of the non-trivial scenarios with an analytical solution for the ground state of a many-particle system [54; 55; 56; 31]. This model describes a many-body system in which the particles are trapped in a harmonic potential and the inter-particle interaction is harmonic as well. For bosonic mixtures, the harmonic-interaction model is extended in Refs. [57; 58; 26].
We consider a binary-species mixture having \(N_{1}\) and \(N_{2}\) bosons in species-1 and species-2, respectively, where both species are trapped in a one-dimensional external harmonic potential. Here, the two species are localized at the origin, with the one-body term in the Hamiltonian for species-1 reading \(-\dfrac{1}{2}\dfrac{\partial^{2}}{\partial r_{1}^{2}}+\dfrac{1}{2}\omega^{2}r_{1}^{2}\) and that for species-2 \(-\dfrac{1}{2}\dfrac{\partial^{2}}{\partial r_{2}^{2}}+\dfrac{1}{2}\omega^{2}r_{2}^{2}\), where \(\omega\) is the frequency of the trap and is set to one throughout this work. The intra-species
interactions for species-1 and species-2 are \(\lambda_{1}(r_{1}-r_{1}^{\prime})^{2}\) and \(\lambda_{2}(r_{2}-r_{2}^{\prime})^{2}\), respectively, while the inter-species interaction is \(\lambda_{12}(r_{1}-r_{2})^{2}\). We are thus free to choose the number of bosons in each species as well as the strengths and signs of the intra- and inter-species interactions. If the numbers of bosons and the intra-species interaction parameters of the two species are the same, the mixture is called balanced; otherwise, it is imbalanced.
For the CCSD-M calculation of the energy, we restrict the orbital space to \(M_{1}=M_{2}=2\) orbitals, i.e., \(\varphi_{1}\) and \(\varphi_{2}\) for species-1, and \(\psi_{1}\) and \(\psi_{2}\) for species-2. The quality of this truncation for bosonic mixtures will be assessed too. To apply the coupled-cluster theory to an example of the harmonic-interaction model, we (i) solve the mean-field equations to determine \(\mu_{1}^{(1)}\) and \(\mu_{1}^{(2)}\), (ii) solve the Fock-like equations for the second orbitals of the species to get the chemical potentials \(\mu_{2}^{(1)}\) and \(\mu_{2}^{(2)}\), and (iii) calculate all matrix elements. As a bonus, steps (i)-(iii) can all be carried out analytically.
Here, for each species, the ground orbital \(\varphi_{1}(\psi_{1})\) possesses gerade symmetry and the excited orbital \(\varphi_{2}(\psi_{2})\) has ungerade symmetry. The normalized ground and first excited orbitals of the two species take the form
\[\varphi_{1}(r_{1})=\frac{\Omega_{1}^{1/4}}{\pi^{1/4}}\exp\bigg{[}-\frac{\Omega_{1}}{2}r_{1}^{2}\bigg{]},\] \[\varphi_{2}(r_{1})=\frac{\sqrt{2}\Omega_{1}^{3/4}}{\pi^{1/4}}r_{1}\exp\bigg{[}-\frac{\Omega_{1}}{2}r_{1}^{2}\bigg{]},\] \[\psi_{1}(r_{2})=\frac{\Omega_{2}^{1/4}}{\pi^{1/4}}\exp\bigg{[}-\frac{\Omega_{2}}{2}r_{2}^{2}\bigg{]},\] \[\psi_{2}(r_{2})=\frac{\sqrt{2}\Omega_{2}^{3/4}}{\pi^{1/4}}r_{2}\exp\bigg{[}-\frac{\Omega_{2}}{2}r_{2}^{2}\bigg{]}, \tag{5.1}\]
where \(\Omega_{1}\) and \(\Omega_{2}\) are dressed frequencies that depend on the intra- and inter-species interaction parameters as \(\Omega_{1}=\sqrt{\omega^{2}+2(\Lambda_{1}+\Lambda_{21})}\) and \(\Omega_{2}=\sqrt{\omega^{2}+2(\Lambda_{2}+\Lambda_{12})}\). Since we consider only two orbitals for each species and they have different spatial symmetries, one readily finds for the harmonic-interaction model that the singles coefficients vanish, \(c_{s}=d_{l}=0\) for \(s=2\) and \(l=2\). Moreover, Eqs. 4.29, 4.31, and 4.33 for the specific case of \(M_{1}=M_{2}=2\) take the simplified form of three coupled quadratic equations for the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\), which read
\[(N_{1}^{2} -7N_{1}+9)V_{1\bar{1}2\bar{2}}^{(1)}c_{22}^{2}+\frac{N_{2}(N_{2}-1)}{4 }V_{1122}^{(2)}e_{22}^{2} \tag{5.2}\] \[+\bigg{[}(\mu_{2}^{(1)}-\mu_{1}^{(1)})+\frac{1}{2}(V_{\bar{1}1\bar {1}1}^{(1)}+V_{\bar{2}2\bar{2}2}^{(1)})+(N_{1}-2)V_{\bar{1}2\bar{2}1}^{(1)}-V_ {\bar{2}1\bar{2}1}^{(1)}\bigg{]}c_{22}\] \[+\frac{N_{2}}{2}V_{\bar{2}1\bar{1}\bar{2}}^{(12)}e_{22}+(N_{1}-3) N_{2}V_{\bar{1}1\bar{2}2}^{(12)}c_{22}e_{22}+\frac{V_{2\bar{2}1\bar{1}}^{(1)}}{4}=0,\]
where Eq. 5.2 couples the coefficients \(c_{22}\) and \(e_{22}\),
\[(N_{2}^{2} -7N_{2}+9)V_{1122}^{(2)}d_{22}^{2}+\frac{N_{1}(N_{1}-1)}{4}V_{ \bar{1}12\bar{2}}^{(1)}e_{22}^{2} \tag{5.3}\] \[+\bigg{[}(\mu_{2}^{(2)}-\mu_{1}^{(2)})+\frac{1}{2}(V_{1111}^{(2) }+V_{2222}^{(2)})+(N_{2}-2)V_{1221}^{(2)}-V_{2121}^{(2)}\bigg{]}d_{22}\] \[+\frac{N_{1}}{2}V_{\bar{1}2\bar{2}1}^{(12)}e_{22}+N_{1}(N_{2}-3) V_{\bar{1}1\bar{2}2}^{(12)}d_{22}e_{22}+\frac{V_{2211}^{(2)}}{4}=0,\]
which couples the coefficients \(d_{22}\) and \(e_{22}\), and
\[(N_{1}+ N_{2}-1)V_{\bar{1}1\bar{2}2}^{(12)}e_{22}^{2}+\bigg{[}(\mu_{1}^{(1)}+ \mu_{1}^{(2)}-\mu_{2}^{(1)}-\mu_{2}^{(2)}) \tag{5.4}\] \[+V_{21\bar{2}1}^{(12)}+V_{\bar{1}2\bar{1}2}^{(12)}-V_{\bar{2}1 \bar{2}1}^{(12)}-V_{\bar{2}2\bar{2}2}^{(12)}-(N_{1}-1)V_{\bar{1}2\bar{2}1}^{(1 )}-(N_{2}-1)V_{1221}^{(2)}\bigg{]}e_{22}\] \[-2(N_{1}-1)V_{\bar{1}2\bar{2}1}^{(12)}e_{22}-2(N_{2}-1)V_{\bar{2} 1\bar{1}2}^{(12)}d_{22}-4(N_{1}-1)(N_{2}-1)V_{\bar{1}1\bar{2}2}^{(12)}c_{22}d _{22}\] \[-(N_{1}^{2}-7N_{1}+6)V_{\bar{1}1\bar{2}2}^{(1)}c_{22}e_{22}-(N_{2} ^{2}-7N_{2}+6)V_{11\bar{2}2}^{(2)}d_{22}e_{22}-V_{22\bar{1}1}^{(12)}=0,\]
which explicitly contains all three unknown coefficients, \(c_{22}\), \(d_{22}\), and \(e_{22}\).
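To make the solution procedure concrete, the following Python sketch shows one generic way to collect the real roots of Eqs. 5.2 to 5.4 numerically. This is our own illustration, not code from this work: the function `residuals`, returning the three left-hand sides of Eqs. 5.2 to 5.4 at a point \((c_{22},d_{22},e_{22})\), must be built by the user from the matrix elements and chemical potentials, and since `scipy.optimize.fsolve` finds only real roots, the complex-valued solution families discussed below require a separate (e.g., symbolic) treatment.

```python
import numpy as np
from scipy.optimize import fsolve

def collect_real_roots(residuals, n_starts=200, seed=0, tol=1e-8):
    """Run a local root finder from many random starting points and keep
    the distinct real solutions of the three coupled equations.

    residuals(x) -> array with the three left-hand sides of Eqs. 5.2-5.4
    evaluated at x = (c22, d22, e22); supplied by the user.
    """
    rng = np.random.default_rng(seed)
    roots = []
    for _ in range(n_starts):
        x0 = rng.uniform(-1.0, 1.0, size=3)
        x, _, ier, _ = fsolve(residuals, x0, full_output=True)
        if ier == 1 and not any(np.allclose(x, r, atol=tol) for r in roots):
            roots.append(x)
    return roots
```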
Before moving to the solution, let us briefly discuss the general structure of the theory when more virtual orbitals are used. If we simplify Eqs. 4.26, 4.27, 4.29, 4.31, and 4.33 for higher orbital numbers, i.e., \(M_{1},M_{2}>2\), we obtain additional coupled equations according to Eqs. 4.21 to 4.25. For example, when \(M_{1}=M_{2}=3\), Eqs. 4.26, 4.27, 4.29, 4.31, and 4.33 boil down to two, two, three, three, and four equations, respectively, which are coupled to each other. For obvious reasons, we then require additional unknown coefficients, in this case \(c_{3}\), \(d_{3}\), \(c_{23}\), \(d_{23}\), \(e_{23}\), and \(e_{32}\), to determine the energy.
In Eqs. 5.2 to 5.4, \(\mu_{1}^{(1)}(\mu_{1}^{(2)})\) and \(\mu_{2}^{(1)}(\mu_{2}^{(2)})\) are the chemical potentials of the ground and excited orbitals of species-1 (species-2), respectively. Given the orbitals \(\varphi_{1}\), \(\varphi_{2}\), \(\psi_{1}\), and \(\psi_{2}\) in
Eq. 5.1, the chemical potentials can be determined analytically,
\[\mu_{1}^{(1)}=\frac{\Omega_{1}}{2}+\frac{1}{2}\bigg{(}\frac{\Lambda _{1}}{\Omega_{1}}+\frac{\Lambda_{21}}{\Omega_{2}}\bigg{)},\] \[\mu_{2}^{(1)}=\frac{3\Omega_{1}}{2}+\frac{1}{2}\bigg{(}\frac{ \Lambda_{1}}{\Omega_{1}}+\frac{\Lambda_{21}}{\Omega_{2}}\bigg{)},\] \[\mu_{1}^{(2)}=\frac{\Omega_{2}}{2}+\frac{1}{2}\bigg{(}\frac{ \Lambda_{2}}{\Omega_{2}}+\frac{\Lambda_{12}}{\Omega_{1}}\bigg{)},\] \[\mu_{2}^{(2)}=\frac{3\Omega_{2}}{2}+\frac{1}{2}\bigg{(}\frac{ \Lambda_{2}}{\Omega_{2}}+\frac{\Lambda_{12}}{\Omega_{1}}\bigg{)}. \tag{5.5}\]
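Since steps (i)-(iii) are analytic here, the quantities entering Eqs. 5.2 to 5.4 can be coded directly. The following Python sketch merely transcribes the dressed frequencies given after Eq. 5.1 and the chemical potentials of Eq. 5.5; the function names are our own choice for illustration.

```python
import numpy as np

def dressed_frequencies(omega, Lam1, Lam2, Lam12, Lam21):
    # Omega_1 = sqrt(omega^2 + 2(Lambda_1 + Lambda_21)),
    # Omega_2 = sqrt(omega^2 + 2(Lambda_2 + Lambda_12)).
    return (np.sqrt(omega**2 + 2.0 * (Lam1 + Lam21)),
            np.sqrt(omega**2 + 2.0 * (Lam2 + Lam12)))

def chemical_potentials(Omega1, Omega2, Lam1, Lam2, Lam12, Lam21):
    # Eq. 5.5; each species has a common interaction shift for both orbitals.
    shift1 = 0.5 * (Lam1 / Omega1 + Lam21 / Omega2)
    shift2 = 0.5 * (Lam2 / Omega2 + Lam12 / Omega1)
    return (0.5 * Omega1 + shift1,   # mu_1^(1): ground orbital, species-1
            1.5 * Omega1 + shift1,   # mu_2^(1): excited orbital, species-1
            0.5 * Omega2 + shift2,   # mu_1^(2): ground orbital, species-2
            1.5 * Omega2 + shift2)   # mu_2^(2): excited orbital, species-2
```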
Now, using Eq. 4.19 and for \(M_{1}=M_{2}=2\), one can find the correlation energy for the ground state
\[E_{\rm cor}=N_{1}(N_{1}-1)V_{1122}^{(1)}c_{22}+N_{2}(N_{2}-1)V_{1122}^{(2)}d_{ 22}+N_{1}N_{2}V_{1122}^{(12)}e_{22}. \tag{5.6}\]
Here, \(c_{s}\) and \(d_{l}\) are zero due to symmetry and, hence, do not contribute. By solving the coupled Eqs. 5.2 to 5.4, we find the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\). We would like to examine the performance of the CCSD-M theory and therefore compare our results with the analytical exact energy of a binary-species mixture. The exact energy of the bosonic mixture of two species reads [59]
\[E_{\rm exact}=\frac{1}{2}\Bigg{[}(N_{1}-1)\sqrt{\omega^{2}+2(N_{ 1}\lambda_{1}+N_{2}\lambda_{12})} + (N_{2}-1)\sqrt{\omega^{2}+2(N_{2}\lambda_{2}+N_{1}\lambda_{12})} \tag{5.7}\] \[+ \sqrt{\omega^{2}+2(N_{1}+N_{2})\lambda_{12}}+\omega\Bigg{]}.\]
The mean-field energy for the binary mixture is found from the energy functional, Eq. 3.1, and the coupled Gross-Pitaevskii equations for the mixture, Eq. 3.3; see also [59]:
\[E_{\rm MF} = \frac{1}{2}\Bigg{[}N_{1}\Omega_{1}+N_{2}\Omega_{2}\Bigg{]}. \tag{5.8}\]
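For the comparisons below, Eqs. 5.7 and 5.8 can be transcribed directly; this short sketch uses our own function names, and Eq. 5.6 supplies the correlation contribution once the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\) are known.

```python
import numpy as np

def exact_energy(omega, N1, N2, lam1, lam2, lam12):
    # Eq. 5.7: analytical exact ground-state energy of the binary mixture.
    return 0.5 * ((N1 - 1) * np.sqrt(omega**2 + 2 * (N1 * lam1 + N2 * lam12))
                  + (N2 - 1) * np.sqrt(omega**2 + 2 * (N2 * lam2 + N1 * lam12))
                  + np.sqrt(omega**2 + 2 * (N1 + N2) * lam12)
                  + omega)

def mean_field_energy(N1, N2, Omega1, Omega2):
    # Eq. 5.8, with the dressed frequencies Omega1, Omega2 defined after Eq. 5.1.
    return 0.5 * (N1 * Omega1 + N2 * Omega2)
```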
Now, we show our numerical examples by calculating the difference between the CCSD-M energy per particle and the corresponding analytical exact energy per particle, \(\frac{E_{\rm cc}-E_{\rm exact}}{N}\), for various strengths of the intra- and inter-species interaction parameters. The solution of the three coupled equations, Eqs. 5.2 to 5.4, for fixed inter- and intra-species interactions yields eight sets of results for \(c_{22}\), \(d_{22}\), and \(e_{22}\), which generate eight values of the ground-state energy. The task is to determine the correct and accurate ground-state energy from these eight sets of data. Among the eight ground-state energies, we notice that some values are complex and some are larger or smaller than the mean-field energy. We can discard the energy values which are complex and, of course, those which are larger than the mean-field energy, as we expect any many-body approximation using the coupled-cluster theory to give a lower energy than the corresponding mean-field energy. Among the remaining energy values which are smaller than the mean-field energy, we observe that, apart from one value, the energy values and their corresponding coefficients fluctuate when we slowly change the interaction strength from repulsive to attractive. Finally, we notice that the correct energy value is the one whose coefficients (\(c_{22}\) and \(d_{22}\)) vary monotonically with increasing interaction strength and, moreover, that this particular energy value is the closest to the mean-field energy. See Appendix E for the evaluation of the correct values of the coupled-cluster coefficients and energy.
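The selection heuristic just described can be summarized in a few lines. The sketch below is our own illustration with hypothetical names; the monotonicity of \(c_{22}\) and \(d_{22}\) along an interaction scan must be checked separately, as detailed in Appendix E.

```python
def select_candidate(energies, e_mf, tol=1e-10):
    """From the eight candidate ground-state energies (possibly complex),
    keep the real ones below the mean-field energy and return the one
    closest to it from below."""
    real_below = [complex(e).real for e in energies
                  if abs(complex(e).imag) < tol and complex(e).real < e_mf]
    return max(real_below) if real_below else None
```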
In this work we concentrate on the energy and examine two features: the variation of the difference between the CCSD-M energy per particle and the corresponding analytical exact energy per particle, \(\Delta_{\rm cc}=\dfrac{E_{\rm cc}-E_{\rm exact}}{N}\) (left column of Fig. 1), and the percentage of correlation energy, \(E_{\rm cor}(\%)=\dfrac{E_{\rm MF}-E_{\rm cc}}{E_{\rm MF}-E_{\rm exact}}\times 100\) (right column of Fig. 1), as a function of the intra-species interaction parameter. The figure presents the combinations of a balanced number of bosons of the two species, \(N_{1}=N_{2}=10000\), and imbalanced numbers of bosons, \(N_{1}=10000\) and \(N_{2}=1000\), \(N_{1}=10000\) and \(N_{2}=100\), and \(N_{1}=1000\) and \(N_{2}=100\). The inter-species interaction parameters, \(\Lambda_{12}\) and \(\Lambda_{21}\), are displayed in each panel. Note that the left and right columns correspond to each other with the same numbers of bosons and interaction parameters. Let us define the strength of the intra- and inter-species interactions. If \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{12}\) are \(<0.01\), the system is referred to as weakly interacting, values between \(0.011\) and \(0.05\) define a medium-strength interaction, and \(>0.05\) is a strong interaction. As a reminder, \(\Lambda_{1}=\lambda_{1}(N_{1}-1)\), \(\Lambda_{2}=\lambda_{2}(N_{2}-1)\), \(\Lambda_{21}=\lambda_{12}N_{2}\), and \(\Lambda_{12}=\lambda_{12}N_{1}\). Thus, the largest interaction parameters used in the examples below are \(\Lambda_{1}=10^{4}\), \(\Lambda_{2}=10^{4}\), \(\Lambda_{12}=50\), and \(\Lambda_{21}=50\).
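The bookkeeping behind these quantities is elementary and can be collected in a short sketch (the helper names are our own):

```python
def interaction_parameters(lam1, lam2, lam12, N1, N2):
    # Lambda_1 = lambda_1 (N1 - 1), Lambda_2 = lambda_2 (N2 - 1),
    # Lambda_12 = lambda_12 N1, Lambda_21 = lambda_12 N2.
    return lam1 * (N1 - 1), lam2 * (N2 - 1), lam12 * N1, lam12 * N2

def diagnostics(e_cc, e_mf, e_exact, N):
    delta_cc = (e_cc - e_exact) / N                           # left column of Fig. 1
    e_cor_percent = (e_mf - e_cc) / (e_mf - e_exact) * 100.0  # right column of Fig. 1
    return delta_cc, e_cor_percent
```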
Before analyzing the performance of CCSD-M, let us take two species which are non-interacting with each other, \(\Lambda_{12}=\Lambda_{21}=0\), and check the performance of the coupled-cluster theory. For \(\Lambda_{12}=\Lambda_{21}=0\), the coefficient \(e_{22}=0\) and the coupled-cluster theory for mixtures boils down to the coupled-cluster theory for single species [47]; see Eq. 5.6 for the correlation energy. The benchmark presented here is very promising.
Fig. 1 (a) and (b) present \(\Delta_{\rm cc}\) and \(E_{\rm cor}(\%)\), respectively, for a basic situation in which the two species are non-interacting with each other, for several balanced and imbalanced combinations of boson numbers. It is found that the deviation \(\Delta_{\rm cc}\) is always less than \(10^{-5}\), with, obviously, no deviation when \(\Lambda_{1}=\Lambda_{2}=0\). Moreover, apart from \(\Lambda_{1}=\Lambda_{2}=0\), for a fixed combination of bosons, balanced or imbalanced, the minimal \(\Delta_{\rm cc}\) occurs for the attractive intra-species interaction \(\Lambda_{1}=\Lambda_{2}=+0.1\). Fig. 1 (b) shows that, for \(\Lambda_{12}=\Lambda_{21}=0\), the CCSD-M theory with two orbitals in each species can capture more than 99% of the correlation energy for the considered intra-species interaction parameters, which is very promising.
Now, when we switch on the inter-species interaction parameters, see Figs. 1 (c) and (d), \(\Delta_{\rm cc}\) varies approximately from \(10^{-5}\) to \(10^{-9}\), and the corresponding \(E_{\rm cor}(\%)\) deviates by between \(0.01\%\) and \(5\%\) from the exact correlation energy. As we further increase \(\Lambda_{12}\) and \(\Lambda_{21}\) five-fold for each case, \(\Delta_{\rm cc}\) is around \(10^{-5}-10^{-7}\), see Fig. 1 (e). Moreover, the coupled-cluster theory with \(M_{1}=M_{2}=2\) can capture more than 84% of the correlation energy. Naturally, for strong inter-species interaction, one requires more orbitals to incorporate the accurate correlation energy. All in all, Fig. 1 exhibits that we require either additional orbitals, or higher-order excitations, or a combination of both to obtain the full correlation energy in the particle-imbalanced cases with repulsive intra-species interaction parameters.
So far, the CCSD-M approach has shown excellent performance for weakly interacting bosonic mixtures. Here, we check the applicability of the CCSD-M theory to a strongly interacting system with a large number of bosons in the two species. Fig. 2 exhibits (a) \(\Delta_{\rm cc}=\dfrac{E_{\rm cc}-E_{\rm exact}}{N}\) and (b) \(E_{\rm cor}(\%)\) for \(N_{1}=N_{2}=10000\) with various inter-species interaction parameters ranging from weak to medium, namely, 0.1, 1, 10, and 50, while the intra-species interaction parameters vary from weak, \(10^{-2}\), to strong, \(10^{4}\). For weak \(\Lambda_{12}=\Lambda_{21}=0.1\), \(\Delta_{\rm cc}\) increases with the values of \(\Lambda_{1}\) and \(\Lambda_{2}\). Interestingly, for the comparatively stronger values \(\Lambda_{12}=\Lambda_{21}=1\), 10, and 50, we observe that \(\Delta_{\rm cc}\) decreases at the beginning and then increases with the intra-species interaction parameter. This change in behavior and the crossing of the \(\Delta_{\rm cc}\) curves correlate with \(E_{\rm cc}-E_{\rm MF}\). On the whole, we observe that \(\Delta_{\rm cc}<10^{-4}\) even for strong intra- and inter-species interaction strengths with a large number of bosons in each species, which proves the remarkable success of the CCSD-M theory.
Now we discuss how much of the correlation energy is captured by CCSD-M with \(M_{1}=M_{2}=2\) orbitals, see Fig. 2 (b). When the inter-species interaction is weak, \(\Lambda_{12}=\Lambda_{21}=0.1\), the coupled-cluster theory can determine more than 98% of the correlation energy for weak to strong intra-species interaction parameters. The correlation energy deviates from the exact results as one increases the inter-species interaction. For inter-species interactions satisfying \(\Lambda_{12}=\Lambda_{21}\geq 1\), it is interesting to see that, when the inter- and intra-species interactions are of the same order, the CCSD-M theory with \(M_{1}=M_{2}=2\) orbitals captures more than about 85% of the correlation energy. Moreover, we observe that for strong values of the intra-species interactions \(\Lambda_{1}=\Lambda_{2}\), two orbitals for each species can produce more than 97.5% of the correlation energy, which exhibits the potential of coupled-cluster theory for mixtures.
## VI Conclusions
In this work, we present the theoretical development and implementation details of the coupled-cluster theory for a bosonic mixture of two species, with \(N_{1}\) and \(N_{2}\) bosons in species-1 and species-2, respectively, in external trap potentials. In the coupled-cluster theory for mixtures, the ansatz for the many-body wavefunction is obtained when the three exponential cluster operators, \(e^{T^{(1)}}\), \(e^{T^{(2)}}\), and \(e^{T^{(12)}}\), are applied to the ground configuration. Since \(T^{(1)}\), \(T^{(2)}\), and \(T^{(12)}\) commute with each other, their exponentials can be separated. Here, \(T^{(12)}=0\) when there is no inter-species interaction between the two bosonic species, in which case the theory derived here boils down to a single-species coupled-cluster ansatz for each of the species. \(T^{(1)}\) and \(T^{(2)}\) incorporate the single, double, triple,... excitations in each species, while \(T^{(12)}\) starts from double excitations, one in each species. By Bose statistics, there is no restriction on the number of bosons occupying a particular orbital. Our starting point for building up correlations is the standard mean field, for which \(N_{1}\) bosons occupy one orbital of species-1 and \(N_{2}\) bosons occupy another orbital of species-2. These orbitals are obtained by solving the coupled Gross-Pitaevskii equations of the mixture.
Next, we have derived the involved working equations for the unknown coefficients in the coupled-cluster theory for bosonic mixtures using arbitrary sets of orthonormal orbitals. A comparatively simplified version of the working equations is also derived for the coefficients using Fock-like operators. The working equations, with Fock-like operators as well as for arbitrary sets of orthonormal orbitals, consist of \(M_{1}-1\) and \(M_{2}-1\) equations for single excitations in species-1 and species-2, respectively, \(M_{1}(M_{1}-1)/2\) and \(M_{2}(M_{2}-1)/2\) equations for double excitations in species-1 and species-2, respectively, \((M_{1}-1)(M_{2}-1)\) equations for simultaneous excitations in the two species, and so on. Utilizing orthonormal orbitals, we find that the correlation energy for the mixture depends on the coefficients \(\{c_{s}\}\), \(\{d_{l}\}\), \(\{c_{rs}\}\), \(\{d_{kl}\}\), and \(\{e_{ls}\}\), which originate from the single and double excitations only.
Furthermore, we have applied our coupled-cluster theory to the harmonic-interaction model for mixtures and compared the results with the exact solution of this many-body model for different strengths of the inter- and intra-species interactions. The investigation serves to check the theory, its implementation, and its usage, that is, the truncation to CCSD-M with \(M_{1}=M_{2}=2\) orbitals, for the studied examples. We have calculated the energy for the mixture with \(N_{1}\) and \(N_{2}\) ranging from 100 to 10000 bosons, both for particle balance and imbalance between the two species. To check the performance of our theory, we calculated the difference between the coupled-cluster energy per particle and the corresponding analytical exact energy per particle, \(\Delta_{\rm cc}=\frac{E_{\rm cc}-E_{\rm exact}}{N}\). We have shown how much of the exact correlation energy can be captured by the coupled-cluster theory by calculating \(E_{\rm cor}(\%)=\frac{E_{\rm MF}-E_{\rm cc}}{E_{\rm MF}-E_{\rm exact}}\times 100\). We found that even for rather strong intra- and inter-species interactions and relatively large numbers of bosons in each species, \(N_{1}=N_{2}=10^{4}\), CCSD-M performs remarkably well. The quality of the coupled-cluster theory opens the way to investigating few- to many-boson binary mixtures up to fairly strong interaction strengths, where several orbitals, higher-order excitations, or both are required to describe the physics accurately.
All in all, the coupled-cluster theory for bosonic mixtures is found to be a promising many-body approach. As an outlook, it would be interesting to investigate various properties of bosonic mixtures for other external potentials and inter-boson interactions [28]. Based on our CCSD-M, one can anticipate the development of a time-dependent coupled-cluster theory for bosons first, and then expect to expand and explore the dynamics of bosonic mixtures using the time-dependent coupled-cluster theory. This is a challenge to be undertaken in future research. We believe that fermionic time-dependent coupled-cluster theory will be helpful in this direction [60; 61].
## Acknowledgement
This research was supported by the Israel Science Foundation (Grants No. 1516/19).
Figure 1: Variation of the difference between the coupled-cluster energy per particle and the corresponding analytical exact energy per particle, \(\Delta_{\rm cc}=\dfrac{E_{\rm cc}-E_{\rm exact}}{N}\), with respect to the intra-species interaction parameters \(\Lambda_{1}=\Lambda_{2}\) is shown in panels (a), (c), and (e). Panels (b), (d), and (f) present \(E_{\rm cor}(\%)=\dfrac{E_{\rm MF}-E_{\rm cc}}{E_{\rm MF}-E_{\rm exact}}\times 100\) as a function of the intra-species interaction parameters. The coupled-cluster energy is calculated for combinations of equal and unequal numbers of bosons of the two species: \(N_{1}=N_{2}=10000\), \(N_{1}=10000\) and \(N_{2}=1000\), \(N_{1}=10000\) and \(N_{2}=100\), and \(N_{1}=1000\) and \(N_{2}=100\). Legends are shown in panels (b) and (e). For the panels in the first row, the two species are non-interacting with each other. Panels (a) and (b) serve to benchmark the coupled-cluster theory for bosonic single species. In panels (c) to (f), \(\Lambda_{12}\) and \(\Lambda_{21}\) are indicated in the same color as their corresponding curves. Continuous curves are to guide the eye only. All the quantities are dimensionless.
Figure 2: Variation of (a) the difference between the coupled-cluster energy per particle and the corresponding analytical exact energy per particle, \(\Delta_{\rm cc}=\dfrac{E_{\rm cc}-E_{\rm exact}}{N}\), and (b) \(E_{\rm cor}(\%)=\dfrac{E_{\rm MF}-E_{\rm cc}}{E_{\rm MF}-E_{\rm exact}}\times 100\) as a function of the intra-species interaction parameters \(\Lambda_{1}=\Lambda_{2}\). The coupled-cluster energy is calculated for \(N_{1}=N_{2}=10000\) bosons. Results are presented for the inter-species interaction parameters \(\Lambda_{12}=\Lambda_{21}=\) 0.1, 1, 10, and 50. It is found that if we increase both intra- and inter-species interaction parameters, CCSD-M can capture higher fractions of the correlation energy. All the quantities are dimensionless.
## Appendix
### Relations between one-body and two-body terms
Using the eigenvalue equations of the Fock-like operators, \(F_{1}\varphi_{i}=\mu_{i}^{(1)}\varphi_{i}\) and \(F_{2}\psi_{i}=\mu_{i}^{(2)}\psi_{i}\), described in the main text [Eqs. 3.4 and 3.5], one can find a few relations between one-body and two-body terms which assist in simplifying the working equations of the coupled-cluster theory for bosonic mixtures. The relations are as follows
\[h_{1\bar{1}}^{(1)}+(N_{1}-1)V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(1)}+N_{2}V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(12)}=\mu_{1}^{(1)},\] \[h_{11}^{(2)}+(N_{2}-1)V_{1111}^{(2)}+N_{1}V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(12)}=\mu_{1}^{(2)},\] \[h_{\bar{s}\bar{s}}^{(1)}+(N_{1}-1)V_{\bar{s}\bar{1}\bar{s}\bar{1}}^{(1)}+N_{2}V_{\bar{s}\bar{1}\bar{s}\bar{1}}^{(12)}=\mu_{s}^{(1)},\qquad s=2,\ldots,M_{1},\] \[h_{ll}^{(2)}+(N_{2}-1)V_{\bar{l}\bar{l}\bar{1}\bar{1}}^{(2)}+N_{1}V_{\bar{l}\bar{1}\bar{l}\bar{l}}^{(12)}=\mu_{l}^{(2)},\qquad l=2,\ldots,M_{2},\] \[\sum_{s=2}^{M_{1}}\left[h_{1\bar{s}}^{(1)}+(N_{1}-1)V_{\bar{1}\bar{1}\bar{s}\bar{1}}^{(1)}+N_{2}V_{\bar{1}\bar{1}\bar{s}\bar{1}}^{(12)}\right]=0,\] \[\sum_{l=2}^{M_{2}}\left[h_{1\bar{l}}^{(2)}+(N_{2}-1)V_{11\bar{l}\bar{1}\bar{1}}^{(2)}+N_{1}V_{\bar{1}\bar{1}\bar{1}}^{(12)}\right]=0,\] \[\sum_{s=2}^{M_{1}}\left[h_{\bar{s}\bar{1}}^{(1)}+(N_{1}-1)V_{\bar{s}\bar{1}\bar{1}\bar{1}}^{(1)}+N_{2}V_{\bar{s}\bar{1}\bar{1}\bar{1}}^{(12)}\right]=0,\] \[\sum_{l=2}^{M_{2}}\left[h_{l1}^{(2)}+(N_{2}-1)V_{l111}^{(2)}+N_{1}V_{\bar{1}\bar{1}\bar{1}}^{(12)}\right]=0. \tag{A.1}\]
The first two relations are obtained by sandwiching the Fock operator in Eq. 3.3 between the respective ground orbitals. Similarly, the third and fourth relations are found when the same excited orbital acts on the Fock operator from the left and right sides. We get the other relations when ground and excited orbitals sandwich the Fock operator in Eq. 3.5. The first four relations involve the chemical potentials of the ground and virtual orbitals, as the Hamiltonian is sandwiched between the same two orbitals.
### Transformed creation and destruction operators
The derivation of the working equations of the coupled-cluster theory for bosonic mixtures demands the transformation of the creation operators of the ground orbitals, \(\dot{a}_{1}^{\dagger}=a_{1}^{\dagger}-{\cal K}_{1}\) and
\(\dot{b}_{1}^{\dagger}=b_{1}^{\dagger}-{\cal L}_{1}\), and of the destruction operators of the virtual orbitals, \(\dot{a}_{p}=a_{p}-{\cal K}_{p}\) and \(\dot{b}_{i}=b_{i}-{\cal L}_{i}\), for both species. The explicit expansions of the first few terms of \({\cal K}_{1}\), \({\cal K}_{p}\), \({\cal L}_{1}\), and \({\cal L}_{i}\) read
\[{\cal K}_{1} = \sum_{p=2}^{M_{1}}c_{p}a_{p}^{\dagger}+2\sum_{p,q=2}^{M_{1}}c_{pq}a_{p}^{\dagger}a_{q}^{\dagger}a_{1}+3\sum_{p,q,r=2}^{M_{1}}c_{pqr}a_{p}^{\dagger}a_{q}^{\dagger}a_{r}^{\dagger}a_{1}^{2}+... \tag{B.1}\] \[+ \sum_{p=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pi}a_{p}^{\dagger}b_{i}^{\dagger}b_{1}+2\sum_{p,q=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pqi}a_{p}^{\dagger}a_{q}^{\dagger}b_{i}^{\dagger}a_{1}b_{1}+\sum_{p=2}^{M_{1}}\sum_{i,j=2}^{M_{2}}e_{pij}a_{p}^{\dagger}b_{i}^{\dagger}b_{j}^{\dagger}b_{1}^{2}+...,\]
\[{\cal K}_{p} = c_{p}a_{1}+2\sum_{q=2}^{M_{1}}c_{pq}a_{q}^{\dagger}a_{1}^{2}+3\sum_{q,r=2}^{M_{1}}c_{pqr}a_{q}^{\dagger}a_{r}^{\dagger}a_{1}^{3}+... \tag{B.2}\] \[+ \sum_{i=2}^{M_{2}}e_{pi}b_{i}^{\dagger}a_{1}b_{1}+2\sum_{q=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pqi}a_{q}^{\dagger}b_{i}^{\dagger}a_{1}^{2}b_{1}+\sum_{i,j=2}^{M_{2}}e_{pij}b_{i}^{\dagger}b_{j}^{\dagger}a_{1}b_{1}^{2}+...,\]
\[{\cal L}_{1} = \sum_{i=2}^{M_{2}}d_{i}b_{i}^{\dagger}+2\sum_{i,j=2}^{M_{2}}d_{ij}b_{i}^{\dagger}b_{j}^{\dagger}b_{1}+3\sum_{i,j,k=2}^{M_{2}}d_{ijk}b_{i}^{\dagger}b_{j}^{\dagger}b_{k}^{\dagger}b_{1}^{2}+... \tag{B.3}\] \[+ \sum_{p=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pi}a_{p}^{\dagger}b_{i}^{\dagger}a_{1}+2\sum_{p=2}^{M_{1}}\sum_{i,j=2}^{M_{2}}e_{pij}a_{p}^{\dagger}b_{i}^{\dagger}b_{j}^{\dagger}a_{1}b_{1}+\sum_{p,q=2}^{M_{1}}\sum_{i=2}^{M_{2}}e_{pqi}a_{p}^{\dagger}a_{q}^{\dagger}b_{i}^{\dagger}a_{1}^{2}+...,\]
\[{\cal L}_{i} = d_{i}b_{1}+2\sum_{j=2}^{M_{2}}d_{ij}b_{j}^{\dagger}b_{1}^{2}+3\sum_{j,k=2}^{M_{2}}d_{ijk}b_{j}^{\dagger}b_{k}^{\dagger}b_{1}^{3}+... \tag{B.4}\] \[+ \sum_{p=2}^{M_{1}}e_{pi}a_{p}^{\dagger}a_{1}b_{1}+2\sum_{p=2}^{M_{1}}\sum_{j=2}^{M_{2}}e_{pij}a_{p}^{\dagger}b_{j}^{\dagger}a_{1}b_{1}^{2}+\sum_{p,q=2}^{M_{1}}e_{pqi}a_{p}^{\dagger}a_{q}^{\dagger}a_{1}^{2}b_{1}+....\]
The above-mentioned expansions are used to determine the unknown coefficients, \(c_{p_{1}p_{2}..p_{n}}\), \(d_{i_{1}i_{2}...i_{m}}\), and \(e_{p_{1}..p_{n^{\prime}}i_{1}...i_{m^{\prime}}}\).
### Transformed two-body operators
The transformed intra-species and inter-species two-body operators \(\dot{V}^{(1)}\), \(\dot{V}^{(2)}\), and \(\dot{V}^{(12)}\) are required to determine the coupled-cluster energy, see Sec. IVB, by finding the unknown coefficients. \(\dot{V}^{(1)}\) consists of the nine terms listed below, where each term is a combination of creation and destruction operators of species-1. The nine terms are as follows
\[\dot{V}^{(1)}=\sum_{w=1}^{9}\dot{V}^{(1)}(w),\] \[\dot{V}^{(1)}(1)=\frac{1}{2}V^{(1)}_{\bar{1}\bar{1}\bar{1}1}\dot{a}^{\dagger}_{1}\dot{a}^{\dagger}_{1}a_{1}a_{1}, \qquad \dot{V}^{(1)}(2)=\sum_{s=2}^{M_{1}}V^{(1)}_{\bar{1}\bar{1}\bar{1}s}\dot{a}^{\dagger}_{1}\dot{a}^{\dagger}_{1}a_{1}\dot{a}_{s},\] \[\dot{V}^{(1)}(3)=\sum_{s=2}^{M_{1}}V^{(1)}_{\bar{s}\bar{1}\bar{1}1}a^{\dagger}_{s}\dot{a}^{\dagger}_{1}a_{1}a_{1}, \qquad \dot{V}^{(1)}(4)=\frac{1}{2}\sum_{r,s=2}^{M_{1}}V^{(1)}_{\bar{1}\bar{1}\bar{r}\bar{s}}\dot{a}^{\dagger}_{1}\dot{a}^{\dagger}_{1}\dot{a}_{r}\dot{a}_{s},\] \[\dot{V}^{(1)}(5)=\frac{1}{2}\sum_{r,s=2}^{M_{1}}V^{(1)}_{\bar{r}\bar{s}\bar{1}1}a^{\dagger}_{r}a^{\dagger}_{s}a_{1}a_{1}, \qquad \dot{V}^{(1)}(6)=\sum_{r,s=2}^{M_{1}}(V^{(1)}_{\bar{1}\bar{r}\bar{1}\bar{s}}+V^{(1)}_{\bar{1}\bar{r}\bar{s}\bar{1}})\dot{a}^{\dagger}_{1}a^{\dagger}_{r}a_{1}\dot{a}_{s},\] \[\dot{V}^{(1)}(7)=\sum_{q,r,s=2}^{M_{1}}V^{(1)}_{\bar{1}q\bar{r}s}\dot{a}^{\dagger}_{1}a^{\dagger}_{q}\dot{a}_{r}\dot{a}_{s}, \qquad \dot{V}^{(1)}(8)=\sum_{q,r,s=2}^{M_{1}}V^{(1)}_{\bar{q}\bar{r}\bar{s}\bar{1}}a^{\dagger}_{q}a^{\dagger}_{r}\dot{a}_{s}a_{1},\] \[\dot{V}^{(1)}(9)=\frac{1}{2}\sum_{p,q,r,s=2}^{M_{1}}V^{(1)}_{\bar{p}\bar{q}\bar{r}s}a^{\dagger}_{p}a^{\dagger}_{q}\dot{a}_{r}\dot{a}_{s}.\] (C.1)
Similarly, \(\dot{V}^{(2)}\) has nine terms and each term carries the creation and destruction operators of species-2. The nine terms read
\[\dot{V}^{(2)}=\sum_{w=1}^{9}\dot{V}^{(2)}(w),\] \[\dot{V}^{(2)}(1)=\frac{1}{2}V^{(2)}_{1111}\dot{b}^{\dagger}_{1}\dot{b}^{\dagger}_{1}b_{1}b_{1}, \qquad \dot{V}^{(2)}(2)=\sum_{l=2}^{M_{2}}V^{(2)}_{111l}\dot{b}^{\dagger}_{1}\dot{b}^{\dagger}_{1}b_{1}\dot{b}_{l},\] \[\dot{V}^{(2)}(3)=\sum_{l=2}^{M_{2}}V^{(2)}_{l111}b^{\dagger}_{l}\dot{b}^{\dagger}_{1}b_{1}b_{1}, \qquad \dot{V}^{(2)}(4)=\frac{1}{2}\sum_{k,l=2}^{M_{2}}V^{(2)}_{11kl}\dot{b}^{\dagger}_{1}\dot{b}^{\dagger}_{1}\dot{b}_{k}\dot{b}_{l},\] \[\dot{V}^{(2)}(5)=\frac{1}{2}\sum_{k,l=2}^{M_{2}}V^{(2)}_{kl11}b^{\dagger}_{k}b^{\dagger}_{l}b_{1}b_{1}, \qquad \dot{V}^{(2)}(6)=\sum_{k,l=2}^{M_{2}}(V^{(2)}_{1k1l}+V^{(2)}_{1kl1})\dot{b}^{\dagger}_{1}b^{\dagger}_{k}b_{1}\dot{b}_{l},\] \[\dot{V}^{(2)}(7)=\sum_{j,k,l=2}^{M_{2}}V^{(2)}_{1jkl}\dot{b}^{\dagger}_{1}b^{\dagger}_{j}\dot{b}_{k}\dot{b}_{l}, \qquad \dot{V}^{(2)}(8)=\sum_{j,k,l=2}^{M_{2}}V^{(2)}_{jkl1}b^{\dagger}_{j}b^{\dagger}_{k}\dot{b}_{l}b_{1},\] \[\dot{V}^{(2)}(9)=\frac{1}{2}\sum_{i,j,k,l=2}^{M_{2}}V^{(2)}_{ijkl}b^{\dagger}_{i}b^{\dagger}_{j}\dot{b}_{k}\dot{b}_{l}.\] (C.2)
\(\dot{V}^{(12)}\) consists of sixteen terms, each of which is a mixture of creation and destruction operators of species-1 and species-2. All sixteen terms read
\[\dot{V}^{(12)}=\sum_{w=1}^{16}\dot{V}^{(12)}(w),\] \[\dot{V}^{(12)}(1)=V_{\bar{1}1\bar{1}1}^{(12)}\dot{a}_{1}^{\dagger}\dot{b}_{1}^{\dagger}a_{1}b_{1}, \qquad \dot{V}^{(12)}(2)=\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{1}l}^{(12)}\dot{a}_{1}^{\dagger}\dot{b}_{1}^{\dagger}a_{1}\dot{b}_{l},\] \[\dot{V}^{(12)}(3)=\sum_{s=2}^{M_{1}}V_{\bar{1}1\bar{s}1}^{(12)}\dot{a}_{1}^{\dagger}\dot{b}_{1}^{\dagger}\dot{a}_{s}b_{1}, \qquad \dot{V}^{(12)}(4)=\sum_{l=2}^{M_{2}}V_{\bar{1}l\bar{1}1}^{(12)}\dot{a}_{1}^{\dagger}b_{l}^{\dagger}a_{1}b_{1},\] \[\dot{V}^{(12)}(5)=\sum_{s=2}^{M_{1}}V_{\bar{s}1\bar{1}1}^{(12)}a_{s}^{\dagger}\dot{b}_{1}^{\dagger}a_{1}b_{1}, \qquad \dot{V}^{(12)}(6)=\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{s}l}^{(12)}\dot{a}_{1}^{\dagger}\dot{b}_{1}^{\dagger}\dot{a}_{s}\dot{b}_{l},\] \[\dot{V}^{(12)}(7)=\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{s}l\bar{1}1}^{(12)}a_{s}^{\dagger}b_{l}^{\dagger}a_{1}b_{1}, \qquad \dot{V}^{(12)}(8)=\sum_{r,s=2}^{M_{1}}V_{\bar{r}1\bar{s}1}^{(12)}a_{r}^{\dagger}\dot{b}_{1}^{\dagger}\dot{a}_{s}b_{1},\] \[\dot{V}^{(12)}(9)=\sum_{k,l=2}^{M_{2}}V_{\bar{1}k\bar{1}l}^{(12)}\dot{a}_{1}^{\dagger}b_{k}^{\dagger}a_{1}\dot{b}_{l}, \qquad \dot{V}^{(12)}(10)=\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{s}1\bar{1}l}^{(12)}a_{s}^{\dagger}\dot{b}_{1}^{\dagger}a_{1}\dot{b}_{l},\] \[\dot{V}^{(12)}(11)=\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}l\bar{s}1}^{(12)}\dot{a}_{1}^{\dagger}b_{l}^{\dagger}\dot{a}_{s}b_{1}, \qquad \dot{V}^{(12)}(12)=\sum_{s=2}^{M_{1}}\sum_{k,l=2}^{M_{2}}V_{\bar{1}k\bar{s}l}^{(12)}\dot{a}_{1}^{\dagger}b_{k}^{\dagger}\dot{a}_{s}\dot{b}_{l},\] \[\dot{V}^{(12)}(13)=\sum_{r,s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{r}1\bar{s}l}^{(12)}a_{r}^{\dagger}\dot{b}_{1}^{\dagger}\dot{a}_{s}\dot{b}_{l}, \qquad \dot{V}^{(12)}(14)=\sum_{s=2}^{M_{1}}\sum_{k,l=2}^{M_{2}}V_{\bar{s}k\bar{1}l}^{(12)}a_{s}^{\dagger}b_{k}^{\dagger}a_{1}\dot{b}_{l},\] \[\dot{V}^{(12)}(15)=\sum_{r,s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{r}l\bar{s}1}^{(12)}a_{r}^{\dagger}b_{l}^{\dagger}\dot{a}_{s}b_{1}, \qquad \dot{V}^{(12)}(16)=\sum_{r,s=2}^{M_{1}}\sum_{k,l=2}^{M_{2}}V_{\bar{r}k\bar{s}l}^{(12)}a_{r}^{\dagger}b_{k}^{\dagger}\dot{a}_{s}\dot{b}_{l}.\] (C.3)
The working equations of the coupled-cluster theory for bosonic mixtures are developed using the transformed two-body operators along with \(\dot{H}_{0}\), see Eq. 4.12.
### The general working equations
Here we present the general forms of the working equations of the coupled-cluster theory for bosonic mixtures. The derivation of the coupled working equations is very lengthy and involved. We start with the single excitations in both species by solving the equations \(\langle\phi_{0}|a_{1}^{\dagger}a_{\bar{i}}\dot{H}|\phi_{0}\rangle=0\) and \(\langle\phi_{0}|b_{1}^{\dagger}b_{i}\dot{H}|\phi_{0}\rangle=0\), where \(\bar{i}=2,3,...,M_{1}\) and \(i=2,3,...,M_{2}\). The general form of the single excitation in species-1 can be readily obtained as
\[0 = -h_{1\bar{1}}^{(1)}c_{\bar{i}}+(N_{1}-1)\sum_{s=2}^{M_{1}}h_{1\bar{ s}}^{(1)}(2c_{s\bar{i}})-\sum_{s=2}^{M_{1}}h_{1\bar{s}}^{(1)}(c_{\bar{i}}c_{s})+h_{ 1\bar{1}}^{(1)}+\sum_{s=2}^{M_{1}}h_{1\bar{s}}^{(1)}c_{s}+N_{2}\sum_{l=2}^{M_{2 }}h_{1l}^{(2)}e_{\bar{i}l}\] (D.1) \[-(N_{1}-1)V_{1\bar{1}1\bar{1}}^{(1)}c_{\bar{i}}+(N_{1}-1)(N_{1}- 2)\sum_{s=2}^{M_{1}}V_{1\bar{1}1\bar{s}}^{(1)}(2c_{s\bar{i}})-2(N_{1}-1)\sum_{ s=2}^{M_{1}}V_{1\bar{1}1\bar{s}}^{(1)}(c_{\bar{i}}c_{s})+(N_{1}-1)V_{i\bar{1}1 \bar{1}}^{(1)}\] \[+\sum_{r,s=2}^{M_{1}}V_{1\bar{1}r\bar{s}}^{(1)}\bar{\alpha}_{r \bar{s}i}+(N_{1}-1)\Bigg{[}\sum_{s=2}^{M_{1}}(V_{1\bar{1}i\bar{s}}^{(1)}+V_{1 \bar{1}i\bar{s}}^{(1)})c_{s}\Bigg{]}+(N_{1}-1)\Bigg{[}\sum_{r,s=2}^{M_{1}}V_{1 \bar{1}\bar{r}\bar{s}}^{(1)}(2c_{rs}+c_{r}c_{s})\Bigg{]}\] \[+N_{2}(N_{2}-1)\sum_{l=2}^{M_{2}}V_{111l}^{(2)}e_{\bar{i}l}+ \frac{1}{2}N_{2}(N_{2}-1)\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}(2e_{\bar{i}lk}+e_ {\bar{i}k}d_{l}+e_{\bar{i}l}d_{k})\] \[-N_{2}V_{1111}^{(12)}c_{\bar{i}}+(N_{1}-1)N_{2}\sum_{l=2}^{M_{2}} V_{11\bar{1}l}^{(12)}e_{\bar{i}l}-N_{2}\sum_{l=2}^{M_{2}}V_{1\bar{1}l\bar{l}}^{(12 )}c_{\bar{i}}d_{l}\] \[+N_{2}\Bigg{[}(N_{1}-1)\sum_{s=2}^{M_{1}}V_{11s1}^{(12)}(2c_{s \bar{i}})-\sum_{s=2}^{M_{1}}V_{11s1}^{(12)}(c_{i}c_{s})+V_{i111}^{(12)}\Bigg{]}\] \[+(N_{1}-1)N_{2}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{1\bar{1}l \bar{s}l}^{(12)}(2e_{\bar{i}sl}+c_{s}e_{\bar{i}l}+2c_{s}d_{l})-N_{2}\sum_{s=2}^ {M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{s}l}^{(12)}(c_{\bar{i}}e_{sl}+c_{\bar {i}}c_{s}d_{l})\] \[+N_{2}\Bigg{[}\sum_{s=2}^{M_{1}}V_{1\bar{1}s1}^{(12)}c_{s}+\sum_{ l=2}^{M_{2}}V_{1\bar{1}l\bar{l}}^{(12)}d_{l}+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{ \bar{1}l\bar{s}l}^{(12)}(e_{sl}+c_{s}d_{l})\Bigg{]}.\]
To calculate Eq. D.1, the contributions from the fifth, seventh, and eighth terms of \(\dot{H}_{0}\) in Eq. 4.12, from the second, third, fifth, sixth, seventh, eighth, and ninth terms of \(\dot{V}^{(2)}\) in Eq. C.2, and from the fourth, seventh, ninth, eleventh, twelfth, fourteenth, fifteenth, and sixteenth terms of \(\dot{V}^{(12)}\) in Eq. C.3 are zero due to the creation operators for virtual orbitals in species-2. Also, the fifth, eighth, and ninth terms of \(\dot{V}^{(1)}\) in Eq. C.1 do not contribute, as they all exhibit two creation operators for virtual orbitals of species-1.
The general form of the single excitation in species-2 reads
\[0 = -h_{11}^{(2)}d_{i}+(N_{2}-1)\sum_{l=2}^{M_{2}}h_{1l}^{(2)}(2d_{li})- \sum_{l=2}^{M_{2}}h_{1l}^{(2)}(d_{i}d_{l})+h_{i1}^{(2)}+\sum_{l=2}^{M_{2}}h_{il}^ {(2)}d_{l}+N_{1}\sum_{s=2}^{M_{1}}h_{1\bar{s}}^{(1)}e_{si}\] (D.2) \[-(N_{2}-1)V_{1111}^{(2)}d_{i}+(N_{2}-1)(N_{2}-2)\sum_{l=2}^{M_{2}} V_{111l}^{(2)}(2d_{li})-2(N_{2}-1)\sum_{l=2}^{M_{2}}V_{111l}^{(2)}(d_{i}d_{l})+(N_ {2}-1)V_{i111}^{(2)}\] \[+\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}\alpha_{kli}+(N_{2}-1)\Bigg{[} \sum_{l=2}^{M_{2}}(V_{11il}^{(2)}+V_{1il\bar{l}}^{(2)})d_{l}\Bigg{]}+(N_{2}-1) \Bigg{[}\sum_{k,l=2}^{M_{2}}V_{1ik\bar{l}}^{(2)}(2d_{lk}+d_{k}d_{l})\Bigg{]}\] \[+N_{1}(N_{1}-1)\sum_{s=2}^{M_{1}}V_{1\bar{1}\bar{1}\bar{s}}^{(1)} e_{si}+\frac{1}{2}N_{1}(N_{1}-1)\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{r} \bar{s}}^{(1)}(2e_{sri}+e_{ri}c_{s}+e_{si}c_{r})\] \[-N_{1}V_{1111}^{(12)}d_{i}+N_{1}(N_{2}-1)\sum_{s=2}^{M_{1}}V_{1 \bar{1}\bar{s}\bar{1}}^{(12)}e_{si}-N_{1}\sum_{s=2}^{M_{1}}V_{1\bar{1}\bar{s} \bar{1}}^{(12)}d_{i}c_{s}\] \[+N_{1}\Bigg{[}(N_{2}-1)\sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{1}\bar{ l}}^{(12)}(2d_{li})-\sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{1}\bar{l}}^{(12)}(d_{i}d_{l})+V _{\bar{1}\bar{i}\bar{1}\bar{1}}^{(12)}\Bigg{]}\] \[+N_{1}(N_{2}-1)\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1} \bar{1}\bar{s}\bar{l}}^{(12)}(2e_{sl}+2c_{s}d_{li}+d_{l}e_{si})-N_{1}\sum_{s=2}^ {M_{1}}\sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{s}\bar{l}}^{(12)}(d_{i}e_{sl}+d_{i}c_ {s}+d_{l})\] \[+N_{1}\Bigg{[}\sum_{s=2}^{M_{1}}V_{1\bar{s}\bar{1}}^{(12)}c_{s}+ \sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{i}\bar{l}}^{(12)}d_{l}+\sum_{s=2}^{M_{1}} \sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{s}\bar{l}}^{(12)}(e_{sl}+c_{s}d_{l})\Bigg{]}.\]
In order to determine Eq. D.2, the first, third, and fourth terms of \(\dot{H}_{0}\) in Eq. 4.12, the second, third, fifth, sixth, seventh, eighth, and ninth terms of \(\dot{V}^{(1)}\) in Eq. C.1, and the fifth, seventh, eighth, tenth, thirteenth, fourteenth, fifteenth, and sixteenth terms of \(\dot{V}^{(12)}\) in Eq. C.3 vanish due to the creation operators for virtual orbitals in species-1. In addition, the fifth, eighth, and ninth terms of \(\dot{V}^{(2)}\) in Eq. C.2 do not contribute due to the presence of two creation operators for virtual orbitals of species-2. Eqs. D.1 and D.2 include the coefficients \(\bar{\alpha}_{r\bar{s}\bar{i}}\) and \(\alpha_{kli}\), which are presented in the main text, see Eq. 4.28. For the calculation of CCSD-M, one should set the coefficients \(c_{\bar{s}\bar{i}r}\), \(d_{lik}\), \(e_{\bar{i}lk}\), \(e_{\bar{i}sl}\), \(e_{sri}\), and \(e_{sli}\) in Eqs. D.1 and D.2, which involve triple excitations, to zero.
To deduce the unknown coefficients of CCSD-M, it is required to solve the coupled equations resulting from \(\langle\phi_{0}|(a_{1}^{\dagger})^{2}a_{\bar{i}}a_{\bar{j}}\dot{H}|\phi_{0} \rangle=0\), \(\langle\phi_{0}|(b_{1}^{\dagger})^{2}b_{i}b_{j}\dot{H}|\phi_{0}\rangle=0\), and \(\langle\phi_{0}|a_{1}^{\dagger}b_{1}^{\dagger}a_{\bar{i}}b_{i}\dot{H}|\phi_{0} \rangle=0\), for the double excitations in the coupled-cluster theory. Substituting \(\dot{H}\) into the above mentioned coupled equations, one can find the rather lengthy general form for the coupled-cluster double excitations. The explicit form of \(\langle\phi_{0}|(a_{1}^{\dagger})^{2}a_{\bar{i}}a_{\bar{j}}\dot{H}|\phi_{0} \rangle=0\) is
\[0 = -4h_{\bar{1}\bar{1}}^{(1)}(c_{\bar{i}\bar{j}})-\sum_{s=2}^{M_{1}}h_{ \bar{1}\bar{s}}^{(1)}(2c_{\bar{i}}c_{s\bar{j}}+2c_{\bar{j}}c_{s\bar{i}}+4c_{s}c _{\bar{i}\bar{j}})+\sum_{s=2}^{M_{1}}\left[2h_{\bar{i}\bar{s}}^{(1)}c_{s\bar{j} }+2h_{\bar{j}\bar{s}}^{(1)}c_{s\bar{i}}\right]\] (D.3) \[-2(2N_{1}-3)V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(1)}c_{\bar{i}\bar{ j}}+V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(1)}c_{\bar{i}\bar{c}_{j}}-\left[V_{\bar{1} \bar{1}\bar{1}\bar{1}}^{(1)}c_{\bar{j}}+V_{\bar{j}\bar{1}\bar{1}\bar{1}}^{(1) }c_{\bar{i}}\right]\] \[-\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{1}\bar{s}}^{(1)}\Bigg{[} 4(N_{1}-2)(c_{\bar{i}}c_{s\bar{j}}+c_{\bar{j}}c_{s\bar{i}}+2c_{s}c_{\bar{i}\bar{ j}})+(4c_{s}c_{\bar{i}\bar{j}}-2c_{\bar{i}}c_{\bar{j}}c_{k})\Bigg{]}\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{r}\bar{s}}^{(1)} \tilde{\gamma}_{rs\bar{i}\bar{j}}+\sum_{s=2}^{M_{1}}\bigg{[}V_{\bar{1}\bar{1} \bar{1}\bar{s}}^{(1)}+V_{\bar{1}\bar{s}\bar{1}}^{(1)}\bigg{]}[2(N_{1}-2)c_{s \bar{j}}-c_{\bar{j}}c_{s}]\] \[+V_{\bar{i}\bar{j}\bar{1}}^{(1)}+\sum_{s=2}^{M_{1}}\bigg{[}V_{ \bar{1}\bar{j}\bar{1}\bar{s}}^{(1)}+V_{\bar{1}\bar{j}\bar{s}\bar{1}}^{(1)} \bigg{]}[2(N_{1}-2)c_{s\bar{i}}-c_{\bar{i}}c_{s}]\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{i}\bar{r}\bar{s}}^{(1)} \bigg{[}2(N_{1}-2)c_{r\bar{j}}c_{s}+2(N_{1}-2)c_{s\bar{j}}c_{r}-2c_{sr}c_{\bar{ j}}-c_{\bar{j}}c_{r}c_{s}\bigg{]}\] \[+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{j}\bar{r}\bar{s}}^{(1)} \bigg{[}2(N_{1}-2)c_{r\bar{i}}c_{s}+2(N_{1}-2)c_{s\bar{i}}c_{r}-2c_{sr}c_{\bar{ i}}-c_{\bar{i}}c_{r}c_{s}\bigg{]}\] \[+\sum_{s=2}^{M_{1}}V_{\bar{i}\bar{j}\bar{s}\bar{1}}^{(1)}c_{s}+ \sum_{s=2}^{M_{1}}V_{\bar{j}\bar{s}\bar{1}}^{(1)}c_{s}+\sum_{r,s=2}^{M_{1}}V_{ \bar{i}\bar{j}\bar{r}\bar{s}}^{(1)}[2c_{sr}+c_{r}c_{s}]\] \[+\frac{1}{2}N_{2}(N_{2}-1)\sum_{k,l=2}^{M_{2}}V_{1\bar{1}kl}^{(2) }[c_{\bar{i}k}e_{\bar{j}l}+e_{\bar{j}k}e_{\bar{i}l}]-4N_{2}V_{1\bar{1}\bar{1} \bar{1}}^{(12)}(c_{\bar{i}\bar{j}})\] \[-N_{2}\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{s}\bar{1}}^{(12)}(4 c_{\bar{i}\bar{j}}c_{s}+2c_{s\bar{j}}c_{\bar{i}}+2c_{s\bar{i}}c_{\bar{j}})\] \[-N_{2}\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{1}\bar{l}\bar{l}}^{(12)}(4 c_{\bar{i}\bar{j}}d_{l}+c_{\bar{i}}e_{\bar{j}l}+c_{\bar{j}}e_{\bar{i}l})+\sum_{s=2}^{M _{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{1}\bar{l}}^{(12)}\bar{\delta}_{s\bar{l} \bar{j}}\] \[+2N_{2}\sum_{s=2}^{M_{1}}\Bigg{[}V_{\bar{i}\bar{1}\bar{s}\bar{1} }^{(12)}c_{s\bar{j}}+V_{\bar{j}\bar{1}\bar{s}\bar{1}}^{(12)}c_{s\bar{i}}\Bigg{]} +N_{2}\sum_{l=2}^{M_{2}}\Bigg{[}V_{\bar{i}\bar{1}\bar{1}}^{(12)}e_{\bar{j}l}+V _{\bar{j}\bar{1}\bar{1}\bar{l}}^{(12)}e_{\bar{i}l}\Bigg{]}\] \[+N_{2}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}\Bigg{[}V_{\bar{i}\bar{1} \bar{s}\bar{l}}^{(12)}(2c_{s\bar{j}}d_{l}+c_{s}e_{\bar{j}l})+V_{\bar{j}\bar{1} \bar{s}\bar{l}}^{(12)}(2c_{s\bar{i}}d_{l}+c_{s}e_{\bar{i}l})\Bigg{]}.\]
Here, Eq. D.3 involves the first, second, and fourth terms of Eq. 4.12, all terms of Eq. C.1, only the fourth term of Eq. C.2, and the first, second, third, sixth, eighth, tenth, and thirteenth terms of Eq. C.3. The remaining terms do not contribute.
Now, writing down the transformed Hamiltonian in \(\langle\phi_{0}|(b_{1}^{\dagger})^{2}b_{i}b_{j}\dot{H}|\phi_{0}\rangle=0\), one can find for species-2
\[0 = -4h_{11}^{(2)}d_{ij}-\sum_{l=2}^{M_{2}}h_{1l}^{(2)}(2d_{i}d_{lj}+2d_ {j}d_{li}+4d_{l}d_{lj})+\sum_{l=2}^{M_{2}}\left[2h_{il}^{(2)}d_{lj}+2h_{jl}^{(2) }d_{li}\right]\] (D.4) \[-2(2N_{2}-3)V_{1111}^{(2)}d_{ij}+V_{1111}^{(2)}d_{i}d_{j}-\left[V_ {i111}^{(2)}d_{j}+V_{j111}^{(2)}d_{i}\right]\] \[-\sum_{l=2}^{M_{2}}V_{111l}^{(2)}\Bigg{[}4(N_{2}-2)(d_{i}d_{lj}+d _{j}d_{li}+2d_{l}d_{ij})+(4d_{l}d_{ij}-2d_{i}d_{j}d_{l})\Bigg{]}\] \[+\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}\gamma_{klij}+\sum_{l=2}^{M_{2 }}\bigg{[}V_{1i1l}^{(2)}+V_{1i1l}^{(2)}\bigg{]}[2(N_{2}-2)d_{lj}-d_{j}d_{l}]\] \[+V_{ij11}^{(2)}+\sum_{l=2}^{M_{2}}\bigg{[}V_{1j1l}^{(2)}+V_{1j1l }^{(2)}\bigg{]}[2(N_{2}-2)d_{li}-d_{i}d_{l}]\] \[+\sum_{k,l=2}^{M_{2}}V_{1ikl}^{(2)}\bigg{[}2(N_{2}-2)d_{kj}d_{l}+ 2(N_{2}-2)d_{lj}d_{k}-2d_{lk}d_{j}-d_{j}d_{k}d_{l}\bigg{]}\] \[+\sum_{k,l=2}^{M_{2}}V_{1jkl}^{(2)}\bigg{[}2(N_{2}-2)d_{ki}d_{l}+ 2(N_{2}-2)d_{li}d_{k}-2d_{lk}d_{i}-d_{i}d_{k}d_{l}\bigg{]}\] \[+\sum_{l=2}^{M_{2}}V_{ij1l}^{(2)}d_{l}+\sum_{l=2}^{M_{2}}V_{jil1} ^{(2)}d_{l}+\sum_{k,l=2}^{M_{2}}V_{ijkl}^{(2)}[2d_{lk}+d_{k}d_{l}]\] \[+\frac{1}{2}N_{1}(N_{1}-1)\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{r}s }^{(1)}[e_{ri}e_{sj}+e_{rj}e_{si}]-4N_{1}V_{\bar{1}1\bar{1}}^{(12)}d_{ij}\] \[-N_{1}\sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{1}}^{(12)}(4d_{ij}d_{l}+ 2d_{lj}d_{i}+2d_{li}d_{j})\] \[-N_{1}\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{s}\bar{1}}^{(12)}(4d_{ij}c _{s}+e_{sj}d_{i}+e_{si}d_{j})+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1} \bar{1}\bar{s}\bar{l}}^{(12)}\delta_{slij}\] \[+2N_{1}\sum_{l=2}^{M_{2}}\Bigg{[}V_{\bar{1}\bar{1}\bar{l}l}^{(12) }d_{lj}+V_{\bar{1}\bar{j}\bar{1}l}^{(12)}d_{li}\Bigg{]}+N_{1}\sum_{s=2}^{M_{1}} \Bigg{[}V_{\bar{1}i\bar{s}\bar{1}}^{(12)}e_{sj}+V_{\bar{1}\bar{j}\bar{s}\bar{ 1}}^{(12)}e_{si}\Bigg{]}\] \[+N_{1}\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}\Bigg{[}V_{\bar{1}i\bar {s}\bar{l}}^{(12)}(2d_{lj}c_{s}+e_{sj}d_{l})+V_{\bar{1}\bar{j}\bar{s}\bar{l}}^{ (12)}(2d_{li}c_{s}+e_{si}d_{l})\Bigg{]}.\]
For calculating Eq. D.4, the fifth, sixth, and eighth terms of Eq. 4.12, only the fourth term of Eq. C.1, all terms of Eq. C.2, and the first, second, third, sixth, ninth, eleventh, and twelfth terms of Eq. C.3 contribute.
Now, the general form of \(\langle\phi_{0}|a_{1}^{\dagger}b_{1}^{\dagger}a_{\bar{i}}b_{i}\dot{H}|\phi_{0}\rangle=0\) with general orbitals becomes
\[0 = -[h_{\bar{1}\bar{1}}^{(1)}+h_{11}^{(2)}]e_{\bar{i}i}-\sum_{s=2}^{M_{1} }h_{\bar{1}\bar{s}}^{(1)}(e_{\bar{i}i}c_{s}+e_{si}c_{\bar{i}})-\sum_{l=2}^{M_{2} }h_{1l}^{(2)}(e_{\bar{i}i}d_{l}+e_{\bar{i}l}d_{i})+\sum_{s=2}^{M_{1}}h_{\bar{i} \bar{s}}^{(1)}e_{si}+\sum_{l=2}^{M_{2}}h_{il}^{(2)}e_{\bar{i}l}\] (D.5) \[-[(N_{1}-1)V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(1)}+(N_{2}-1)V_{111 1}^{(2)}]e_{\bar{i}i}+\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{r}\bar{s}}^{(1)}\bar{ \xi}_{rs\bar{i}i}+\sum_{k,l=2}^{M_{2}}V_{11kl}^{(2)}\xi_{kl\bar{i}i}\] \[-(N_{1}-1)\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{s}}^{(1)}(2c_{ s}e\bar{i}i+2c_{\bar{i}}e_{si})-(N_{2}-1)\sum_{l=2}^{M_{2}}V_{11il}^{(2)}(2d_{ l}e_{\bar{i}i}+2d_{i}e_{\bar{i}l})\] \[+(N_{1}-1)\sum_{s=2}^{M_{1}}(V_{\bar{1}\bar{1}\bar{s}}^{(1)}+V_{ \bar{1}\bar{i}\bar{s}}^{(1)})e_{si}+(N_{2}-1)\sum_{l=2}^{M_{2}}(V_{11il}^{(2)} +V_{1i\bar{i}l}^{(2)})e_{\bar{i}l}\] \[+(N_{1}-1)\sum_{r,s=2}^{M_{1}}V_{\bar{1}\bar{i}r\bar{s}}^{(1)}(c _{s}e_{ri}+c_{r}e_{si})+(N_{2}-1)\sum_{k,l=2}^{M_{2}}V_{1ikl}^{(2)}(d_{l}e_{ \bar{i}k}+d_{k}e_{\bar{i}l})\] \[-V_{\bar{1}\bar{1}\bar{1}\bar{1}}^{(12)}[(N_{1}+N_{2}-1)e_{\bar{i }i}-c_{\bar{i}}d_{i}]-\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{1}\bar{s}}^{(12)}\bar{ \chi}_{s\bar{i}i}-\sum_{l=2}^{M_{2}}V_{1\bar{1}\bar{1}\bar{l}}^{(12)}\chi_{l \bar{i}i}\] \[-V_{\bar{1}\bar{i}\bar{1}}^{(12)}c_{\bar{i}}-V_{\bar{1}\bar{1} \bar{1}}^{(12)}d_{i}+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}1\bar{s}l}^ {(12)}\zeta_{sl\bar{i}i}+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{i}\bar{s} l}^{(12)}(e_{sl}+c_{s}d_{l})\] \[+V_{\bar{i}\bar{i}\bar{1}}^{(12)}+\sum_{s=2}^{M_{1}}V_{\bar{i} \bar{s}1}^{(12)}[(N_{2}-1)e_{si}-c_{s}d_{i}]+\sum_{l=2}^{M_{2}}V_{\bar{1}i\bar{ l}l}^{(12)}[(N_{1}-1)e_{\bar{i}l}-c_{\bar{i}}d_{l}]\] \[+\sum_{s=2}^{M_{1}}V_{\bar{1}\bar{s}l}^{(12)}[2(N_{1}-1)c_{s\bar{ i}}-c_{\bar{i}}c_{s}]+\sum_{l=2}^{M_{2}}V_{\bar{i}\bar{1}\bar{l}l}^{(12)}[2(N_{2}-1)d_{ li}-d_{i}d_{l}]\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{1}\bar{i}l}^{(12)}[( N_{1}-1)(2c_{si}d_{l}+c_{s}e_{\bar{i}l})-(c_{\bar{i}}e_{sl}+c_{\bar{i}}c_{s}d_{l})]+ \sum_{s=2}^{M_{1}}V_{\bar{i}\bar{i}\bar{s}1}^{(12)}c_{s}\] \[+\sum_{s=2}^{M_{1}}\sum_{l=2}^{M_{2}}V_{\bar{i}1\bar{s}l}^{(12)}[ (N_{2}-1)(2c_{s}d_{li}+d_{l}e_{si})-(d_{i}e_{sl}+c_{s}d_{i}d_{l})]+\sum_{l=2}^{ M_{2}}V_{\bar{i}\bar{i}\bar{l}}^{(12)}d_{l}.\]
In order to deduce Eq. D.5, the first, second, fourth, fifth, sixth, and eighth terms of Eq. 4.12, the first, second, fourth, sixth, and seventh terms of Eq. C.1 and of Eq. C.2, and all terms of Eq. C.3 contribute. The terms \(\bar{\gamma}_{rs\bar{i}\bar{j}}\) and \(\bar{\delta}_{sl\bar{i}\bar{j}}\) present in Eq. D.3, \(\gamma_{klij}\) and \(\delta_{slij}\) in Eq. D.4, and \(\bar{\xi}_{rs\bar{i}i}\), \(\xi_{kl\bar{i}i}\), \(\bar{\chi}_{s\bar{i}i}\), \(\chi_{l\bar{i}i}\), and \(\zeta_{sl\bar{i}i}\) in Eq. D.5 are displayed in the main text. Eqs. D.1 to D.5 are determined for arbitrary sets of orthonormal orbitals. Using the relations extracted from the Fock-like operators, see Eq. A.1, together with Eqs. D.1 to D.5, one can transform the one-body operators to two-body operators and find the working equations of the coupled-cluster theory for bosonic mixtures.
### Finding the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\)
Here we discuss the details of calculating the coupled-cluster coefficients, \(c_{22}\), \(d_{22}\), and \(e_{22}\), which are required to determine the energy of the bosonic mixture for the illustrative examples employing the harmonic-interaction model. This section explains how we deduce the unknown coefficients. To display the coefficients, we choose a particular set among all the results presented in the main text, where \(N_{1}=N_{2}=10000\) and \(\Lambda_{12}=\Lambda_{21}=0.5\), see Fig. 1 (e) and (f) of the main text. The coupled equations 5.2 to 5.4 derived in the main text give us eight families of solutions for the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\). Among them, four families are real and the other four are complex-valued. The real and complex-valued solutions of \(c_{22}\), \(d_{22}\), and \(e_{22}\) are presented in Fig. A1. For the complex-valued coefficients, we plot the absolute values. It is obvious that the complex-valued families of the coefficients, right column of Fig. A1, yield a complex-valued coupled-cluster energy and are therefore discarded. Among the four real-valued families of the coefficients, three families fluctuate as a function of the intra-species interaction parameters and are therefore also discarded. All in all, the blue curve with triangles in Fig. A1 (a), (c), and (e) is the true solution for the coefficients. We take only the true solutions of \(c_{22}\), \(d_{22}\), and \(e_{22}\) and present them in Fig. A2. It can be seen from Fig. A2 (a) that \(c_{22}\) and \(d_{22}\) vary monotonically when moving from repulsive to attractive intra-species interactions and cross zero at about \(\Lambda_{1}=\Lambda_{2}=-0.05\). Since the two species interact with each other, with \(\Lambda_{12}=\Lambda_{21}=0.5\), \(c_{22}\) and \(d_{22}\) cross zero at a finite value of the intra-species interaction parameter. At the same value of \(\Lambda_{1}\) and \(\Lambda_{2}\), i.e., \(-0.05\), \(e_{22}\) reaches its maximal value. Determining the true solution of the unknown coefficients therefore requires great care.
Figs. A3 (a) and (b) display \((E_{\rm MF}-E_{\rm cc})/N\) for all the real and complex families, respectively, of the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\). As discussed before, the energies in Fig. A3 (b) are complex and are discarded. Fig. A3 (a) shows that, among the four families of real energies, three families have values for which \((E_{\rm MF}-E_{\rm cc})/N\) is negative and can therefore not be solutions of the coupled-cluster theory. Moreover, only one solution (blue line) always gives a positive value of \((E_{\rm MF}-E_{\rm cc})/N\) and is closest to the zero line throughout the range of the intra-species interaction parameters. Therefore, the actual coefficients have the following properties: (i) the coefficients \(c_{22}\) and \(d_{22}\) vary monotonically with the intra-species interaction parameter and (ii) the coupled-cluster energy computed with those coefficients is always smaller than and closest to the corresponding mean-field energy.
Figure A1: Variation of the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\) as a function of the intra-species interaction parameters. Here, the numbers of bosons are \(N_{1}=N_{2}=10000\) and the inter-species interaction parameters are \(\Lambda_{12}=\Lambda_{21}=0.5\). Eight solutions for the coefficients \(c_{22}\), \(d_{22}\), and \(e_{22}\) are shown. Among the eight solutions, the four real and four complex-valued solutions are presented in the left and right columns, respectively. Absolute values are plotted in panels (b), (d), and (f). The blue line in panels (a), (c), and (e) corresponds to the true values of the coefficients by which the coupled-cluster energy is calculated. The rest are discarded for lacking various properties, see the text for further discussion. All the quantities are dimensionless.
Figure A3: Variation of \((E_{\rm MF}-E_{\rm cc})/N\) as a function of the intra-species interaction parameters. The numbers of bosons are \(N_{1}=N_{2}=10000\) and the inter-species interaction parameters are \(\Lambda_{12}=\Lambda_{21}=0.5\). The eight solutions for the energy are shown. The corresponding coefficients are presented in Fig. A1. Among the eight solutions, the four real and four complex-valued solutions are presented in panels (a) and (b), respectively. See the text for further details of the determination of the true solutions [marked by blue triangles in panel (a)]. All the quantities are dimensionless. |
2309.03631 | Insights Into the Inner Workings of Transformer Models for Protein
Function Prediction | Motivation: We explored how explainable artificial intelligence (XAI) can
help to shed light into the inner workings of neural networks for protein
function prediction, by extending the widely used XAI method of integrated
gradients such that latent representations inside of transformer models, which
were finetuned to Gene Ontology term and Enzyme Commission number prediction,
can be inspected too. Results: The approach enabled us to identify amino acids
in the sequences that the transformers pay particular attention to, and to show
that these relevant sequence parts reflect expectations from biology and
chemistry, both in the embedding layer and inside of the model, where we
identified transformer heads with a statistically significant correspondence of
attribution maps with ground truth sequence annotations (e.g. transmembrane
regions, active sites) across many proteins. Availability and Implementation:
Source code can be accessed at https://github.com/markuswenzel/xai-proteins . | Markus Wenzel, Erik Grüner, Nils Strodthoff | 2023-09-07T10:54:06Z | http://arxiv.org/abs/2309.03631v2 | # Insights Into the Inner Workings of Transformer Models for Protein Function Prediction
###### Abstract
**Motivation:** We explored how explainable AI (XAI) can help to shed light into the inner workings of neural networks for protein function prediction, by extending the widely used XAI method of integrated gradients such that latent representations inside of transformer models, which were finetuned to Gene Ontology term and Enzyme Commission number prediction, can be inspected too. **Results:** The approach enabled us to identify amino acids in the sequences that the transformers pay particular attention to, and to show that these relevant sequence parts reflect expectations from biology and chemistry, both in the embedding layer and inside of the model, where we identified transformer heads with a statistically significant correspondence of attribution maps with ground truth sequence annotations (e.g., transmembrane regions, active sites) across many proteins. **Availability and Implementation:** Source code can be accessed at https://github.com/markuswenzel/xai-proteins.
## 1 Introduction
### Protein function prediction
**Proteins - constituents of life**. Proteins are versatile molecular machines, performing various tasks in basically all cells of every organism, and are modularly constructed from chains of amino acids. Inferring the function of a given protein merely from its amino acid sequence is a particularly interesting problem in bioinformatics research.
Function prediction can help to rapidly provide valuable pointers in the face of so far unfamiliar proteins of understudied species, such as of emerging pathogens. Moreover, it makes the analysis of large, unlabeled protein data sets possible, which becomes more and more relevant against the backdrop of the massive and evermore growing databases of unlabeled nucleic acid sequences, which again can be translated into amino acid sequences. Next-generation DNA sequencers can read the nucleic acid sequences present in a sample or specimen at decreasing costs (Mardis, 2017; Shendure _et al._, 2017), much faster than experimenters can determine the function of the genes and corresponding proteins. Therefore, databases with genes and corresponding amino acid sequences grow much more rapidly than those of respective experimental gene and protein labels or annotations. Besides, gaining knowledge about the mapping between amino acid sequence and protein function can help to engineer proteins for dedicated purposes too (Alley _et al._, 2019; Yang _et al._, 2019; Hie and Yang, 2022; Ferruz _et al._, 2022; Madani _et al._, 2023).
**Machine learning approaches to protein function prediction** can include inferring enzymatic function (Dalkiran _et al._, 2018; Li _et al._, 2018; Zou _et al._, 2019; Yu _et al._, 2023), Gene Ontology (GO) terms (Kulmanov _et al._, 2017; Kulmanov and Hoehndorf, 2019, 2022; You _et al._, 2018, 2021; Strodthoff _et al._, 2020; Littmann _et al._, 2021), protein-protein/-drug interaction, remote homology, stability, sub-cellular location, and other properties (Bepler and Berger, 2021; Rao _et al._, 2019). For structure prediction, the objective is to infer how the amino
acid sequence folds into the secondary (Zhang _et al._, 2018; Rives _et al._, 2021) and tertiary protein structure (Torrisi _et al._, 2020; AlQuraishi, 2021; Jumper _et al._, 2021; Weissenow _et al._, 2022). Several of the prediction tasks can be approached as well by transferring labels from similar sequences obtained via multiple sequence alignment (MSA) (Buchfink _et al._, 2014; Gong _et al._, 2016). Protein prediction models are compared by the scientific community in systematic performance benchmarks, e.g., for function annotation (CAFA, Radivojac _et al._, 2013; Jiang _et al._, 2016; Zhou _et al._, 2019), for structure prediction (CASP, Kryshtafovych _et al._, 2019, 2021), or for several semi-supervised tasks (Rao _et al._, 2019; Fenoy _et al._, 2022). Machine learning methods continue to gain ground on MSA techniques with respect to performance, have a short inference time, and can process sequences from the so-called 'dark proteome' too, where alignments are not possible (Perdigao _et al._, 2015; Rao _et al._, 2019; Lin _et al._, 2023).
### Protein language modeling and transfer learning
**Relations to NLP**. Amino acid sequences share some similarities with the sequences of letters and words occurring in written language, in particular with respect to the complex interrelations between distant elements, which are arranged in one-dimensional chains. Thus, recent progress in research on natural language processing (NLP) employing language modeling in a transfer learning scheme (Howard and Ruder, 2018) has driven forward protein function prediction too (e.g., Strodthoff _et al._, 2020).
**Self-supervised pretraining**. Typically, a language model is first pretrained on large numbers of unlabeled sequences in an unsupervised fashion, e.g., by learning to predict masked tokens (cloze task) or the respective next token in the sequences (which is why this unsupervised approach is also dubbed self-supervised learning). In this way, the model learns useful representations of the sequence statistics (i.e. language). These statistics possibly arise because the amino acid chains need to be stable under physiological conditions and are subject to evolutionary pressure. The learned representations can be transferred to separate downstream tasks, where the pretrained model can be further finetuned in a supervised fashion on labeled data, which are usually available in smaller amounts, considering that sequence labelling by experimenters is costly and lengthy.
**Model architectures**. Transformer models (Vaswani _et al._, 2017) making use of the attention mechanism (Niu _et al._, 2021), such as bidirectional encoder representations from transformers (BERT; Devlin _et al._, 2018), are currently prevailing architectures in NLP. Transformers have been recently applied to the study of amino acid sequences too, pushing the state of the art in the field of proteomics as well (Rao _et al._, 2019, 2021; Nambiar _et al._, 2020; Bepler and Berger, 2021; Rives _et al._, 2021; Littmann _et al._, 2021; Elnaggar _et al._, 2022; Unsal _et al._, 2022; Brandes _et al._, 2022; Fenoy _et al._, 2022; Olenyi _et al._, 2023). Recurrent neural networks (RNNs) using long short-term memory (LSTM) cells are another model architecture that is particularly suited to process sequential data. RNNs have been successfully employed to protein (Strodthoff _et al._, 2020) and peptide (Vielhaben _et al._, 2020) property prediction as well, within the scheme of language modeling combined with transfer learning, as sketched out above.
### Explainable machine learning
**Need for explainability**. Transformers and other modern deep learning models are notorious for having often millions and sometimes billions of trainable parameters, and it can be very difficult to interpret the decision-making logic or strategy of such complex models. The research field of explainable machine learning (Lundberg and Lee, 2017; Montavon _et al._, 2018; Tjoa and Guan, 2020; Samek _et al._, 2021; Arrieta _et al._, 2020; Covert _et al._, 2021) aims at developing methods that enable humans to better interpret - or to a limited degree: understand - such 'opaque', complex models. In certain cases, it was demonstrated that the methods can even help to uncover flaws and unintended biases of the models, such as being misled by spurious correlations in the data (Lapuschkin _et al._, 2019).
**Attribution methods**, such as integrated gradients (IG) (Sundararajan _et al._, 2017), layerwise-relevance propagation (Bach _et al._, 2015; Binder _et al._, 2016) or gradient-weighted class activation mapping (Selvaraju
_et al._, 2017), make it possible to identify those features in the input space that the model apparently focuses on, because these features turn out to be particularly relevant for the final classification decision of the model. Further examples of model explainability methods include probing classifiers (Belinkov, 2022), testing with concept activation vectors (Kim _et al._, 2018), and studying the attention mechanism (Serrano and Smith, 2019; Bai _et al._, 2021; Niu _et al._, 2021; Jain and Wallace, 2019). Explainability methods have been employed in NLP too (Arras _et al._, 2019; Pascual _et al._, 2021; Manning _et al._, 2020; Chefer _et al._, 2021). Moreover, researchers have started to explore using explainability methods in the area of protein function prediction (Upmeier zu Belzen _et al._, 2019; Vig _et al._, 2021; Taujale _et al._, 2021; Vu _et al._, 2023; Zhou _et al._, 2023; Hou _et al._, 2023).
### Contributions of the paper
**Goal of the study**. Building upon this previous research on the interpretation of protein classification models, we aimed at exploring how explainability methods can further help to gain insights into the inner workings of the now often huge neural networks, and proceeded as follows.
**Specific contributions**. First, we finetuned pretrained transformers on selected prediction tasks and matched or advanced the state of the art. Then, we quantified the relevance of each amino acid of a protein for the function prediction model. Subsequently, we investigated if these relevant sequence regions match expectations informed by knowledge from biology or chemistry, by correlating the relevance attributions with annotations from sequence databases (see Figure 1). For instance, we addressed the question if a classification model that is able to infer if a protein is situated in the cell membrane does indeed focus systematically on transmembrane regions or not. We conducted this analysis on the embedding level and 'inside' of the model with a novel adaptation of IG. In this way, we identified transformer heads with a statistically significant correspondence of the attribution maps with ground truth annotations, across many proteins and thus going beyond anecdotes of few selected cases.
Figure 1: Illustration of the experimental design. Top: From the amino acid sequence, the finetuned transformer model infers the applicable Gene Ontology (GO) terms (represented as multi-label class membership vector). (The depicted exemplary ‘catalase-3’ should be labeled with the GO terms ‘catalase activity’ as ‘molecular function’, ‘response to hydrogen peroxide’ as ‘biological process’, ‘cytoplasm’ as ‘cellular component’ etc.; about 5K of about 45K GO terms were considered.) Center: Relevance indicative for a selected GO term was attributed to the amino acids per protein and correlated with corresponding annotations per amino acid. This correlation between relevance attributions and annotations was then statistically assessed across the test data set proteins. The analysis was conducted for the embedding layer and ‘inside’ of the model, for each head in each layer, and was repeated for different GO terms (see Section 2.1). Bottom: Specific amino acids of a protein are annotated in sequence databases like UniProt, because they serve as binding or active sites or are located in the cell membrane etc. Active sites can, e.g., be found at the histidine (‘H’ at position 65) and asparagine (‘N’ at position 138) of ‘catalase-3’ (structure created by AlphaFold (Jumper _et al._, 2021) under the CC-BY 4.0 licence).
System and methods
### Revealing insights into function prediction models
**The prediction tasks** of inferring Gene Ontology (GO) terms and Enzyme Commission (EC) numbers, that the proteins are labeled with, from their amino acid sequence are detailed in Appendix B. This supplementary material also explains the finetuning of the transformers 'ProtBert-BFD' and 'ProtT5-XL-UniRef50' (Elnaggar _et al._, 2022) on the GO and EC tasks, and contains statements about data availability and composition.
**Overall approach**. We investigated whether specific positions or areas on the amino acid sequence that had been annotated in sequence databases are particularly relevant for the classification decision of the model (see Figure 1). Annotations included UniProtKB/Swiss-Prot 'active' and 'binding sites', 'transmembrane regions', 'short sequence motifs', and PROSITE patterns related to a GO term and its children terms in the ontology. Definitions of the aforementioned UniProt annotations (per amino acid) and matching GO terms (class labels of proteins) are compiled in Table A.1. First, we attributed relevance indicative for a given class (either a selected GO term or EC number) to each amino acid of a protein. Then, we correlated the relevance heat map obtained for the amino acid chain of a protein with corresponding binary sequence annotations. To study the information representation within the model, the explainability analysis was conducted at the embedding layer and repeated 'inside' of the model, separately for its different heads and layers, using a novel method building upon IG, described below in Section 3.
Footnote 1: Tables/figures with prefix letters are shown in the supplementary material.
**Experimental setup**. For the experimental evaluation, we focus on the pretrained ProtBert model that was finetuned either to the multi-label GO-classification on the GO '2016' data set, or to the multi-class EC number classification on the 'EC50 level L1' data set. This decision is based on the computational benefit of a substantially smaller memory footprint of BERT in comparison to T5. We consider the comparatively narrow EC task in addition to the much more comprehensive GO prediction, because the test split of the EC data set contains a larger number of samples that are both labeled per protein and annotated per amino acid, which is beneficial for the conducted explainability analysis.
## 3 Algorithm
**Integrated gradients** (Sundararajan _et al._, 2017) represents a model-agnostic attribution method, which can be characterized as the unique attribution method satisfying a set of four axioms (Invariance, Sensitivity, Linearity, and Completeness). In this formalism, the attribution for feature \(i\) is defined via the line integral (along a path, parameterized as \(\gamma(\alpha)\) with \(\alpha\in[0,1]\), between some chosen baseline \(\gamma(0)=x^{\prime}\) and the sample to be explained \(\gamma(1)=x\)),
\[\text{IG}_{i}^{\gamma}=\int_{0}^{1}\text{d}\alpha\frac{\partial F(\gamma( \alpha))}{\partial\gamma_{i}}\frac{\text{d}\gamma_{i}}{\text{d}\alpha}\,, \tag{1}\]
where \(F\) is the function we aim to explain. Choosing \(\gamma\) as the straight line connecting \(x^{\prime}\) and \(x\) makes IG the unique method satisfying the four axioms from above and an additional symmetry axiom. This path is the typical choice when IG is applied directly to the input layer for computer vision or to the embedding layer for NLP. The approach can be generalized to arbitrary layers if one replaces \(x\) and \(x^{\prime}\) by the hidden feature representation of the network up to this layer (referred to as 'layer IG' in the popular 'Captum' library (Kokhlikyan _et al._, 2020)).
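To make Eq. (1) concrete, the sketch below approximates the line integral by a Riemann sum over interpolation steps along the straight-line path. It is a minimal illustration, not the exact setup of this work: `model_fn` (a function mapping a batch of inputs to the scalar output to be explained), the baseline, and the number of steps are placeholders.

```python
import torch

def integrated_gradients(model_fn, x, baseline, steps=64):
    """Riemann-sum approximation of Eq. (1) for the straight-line path
    gamma(alpha) = x' + alpha * (x - x')."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    # Interpolation points between the baseline (alpha = 0) and the sample (alpha = 1).
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    out = model_fn(path).sum()                  # scalar output F, summed over the batch
    grads = torch.autograd.grad(out, path)[0]   # dF/dgamma evaluated along the path
    # The path derivative dgamma/dalpha is constant and equals (x - x').
    return grads.mean(dim=0) * (x - baseline)
```

For 'layer IG', \(x\) and \(x^{\prime}\) would be replaced by the hidden representations up to the chosen layer, as described above.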
**Head-specific attribution maps**. To obtain attributions for individual heads, we have to target the output of the multi-head self-attention (MHSA) block of a particular layer; see Figure 2 for a visualization of the transformer architecture. Properly separating the attributions of the individual heads from the attribution contribution obtained from the skip connection requires targeting the output of the MHSA directly. Now, one cannot simply choose an integration path that connects baseline and sample as encoded by the MHSA block, because the input for the skip connection has to be varied consistently. To keep an identical path in all cases, we fix the integration path as a straight line in the embedding layer, which then gets encoded into a, in general, curvilinear path seen as input for some intermediate layer. Choosing a non-straight path only leads to the violation of the symmetry axiom, which is not of paramount practical importance in this application; see Sundararajan _et al._ (2017) and subsequent work
for other applications with IG applied to general paths. For every sample, this application of IG yields a relevance map of shape \(\text{seq}\times n_{\text{model}}\), where the first \(n_{\text{model}}/n_{\text{heads}}\) entries in the last dimension correspond to the first head, followed by the second head etc. By summing over \(n_{\text{model}}/n_{\text{heads}}\) entries in the last dimension, we can reduce the relevance map to a \(\text{seq}\times n_{\text{heads}}\) attribution map, i.e., one relevance sequence per head.
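This reduction is a plain reshape-and-sum, sketched here under the stated assumption that the channels are ordered head-major (first head first):

```python
import numpy as np

def per_head_attributions(rel_map, n_heads):
    """Reduce a (seq, n_model) relevance map to (seq, n_heads) by summing the
    n_model / n_heads channels belonging to each head (head-major ordering)."""
    seq_len, n_model = rel_map.shape
    return rel_map.reshape(seq_len, n_heads, n_model // n_heads).sum(axis=-1)
```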
**Correlation coefficients and statistical significance**. Each sequence of relevance attributions can then be correlated with sequence annotations to find out if the model focuses on the annotated amino acids. Coefficients of point biserial correlation (Kornbrot, 2005), which is equivalent to Pearson correlation, were calculated between the continuous relevance values and the corresponding binary annotations per amino acid. This correlation analysis was conducted separately for each head in each transformer layer. The resulting correlation coefficients were then assembled into a \(n_{\text{layer}}\times n_{\text{head}}\) matrix per protein, which entered the subsequent statistical analysis across proteins. Summary statistics over all proteins (which belong to the respective GO or EC class, and, which are part of the respective test data set split) were obtained by computing t-tests across the correlation coefficients. The resulting p-values were corrected for the multiple tests per condition (16 heads times 30 layers equals 480 hypothesis tests) by controlling the false discovery rate (Benjamini and Hochberg, 1995).
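A hedged sketch of this statistical pipeline follows; the array layout and helper names are our own, and the exact test options used in the paper may differ. It assumes every protein has at least one annotated position.

```python
import numpy as np
from scipy.stats import pointbiserialr, ttest_1samp
from statsmodels.stats.multitest import multipletests

def head_significance(relevance, annotations, n_layer=30, n_head=16):
    """relevance: per-protein arrays of shape (seq, n_layer, n_head);
    annotations: per-protein binary arrays of shape (seq,)."""
    # Point-biserial correlation per protein, head, and layer.
    r = np.array([[[pointbiserialr(ann, rel[:, l, h])[0]
                    for h in range(n_head)]
                   for l in range(n_layer)]
                  for rel, ann in zip(relevance, annotations)])
    # One one-sided t-test per head across proteins, then Benjamini/Hochberg
    # over all n_layer * n_head (= 480) hypothesis tests.
    pvals = ttest_1samp(r.reshape(len(r), -1), 0.0, alternative='greater').pvalue
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
    return p_adj.reshape(n_layer, n_head), reject.reshape(n_layer, n_head)
```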
**Summed attribution maps**. In parallel to the correlation analysis, we furthermore sum the aforementioned attribution map along the sequence dimension, and obtain \(n_{\text{heads}}\) entries that specify the relevance distribution onto the different heads. We can carry out the same procedure for every transformer layer and combine all results into an \(n_{\text{layer}}\times n_{\text{head}}\) relevance map of summed attributions. This map makes it possible to identify heads with a positive relevance with respect to the selected class. One map was obtained per protein. Heads with a significantly positive relevance were singled out by calculating a summary statistic across proteins with the Wilcoxon signed-rank test.
Finally, the two parallel analysis tracks were combined by identifying transformer heads that feature both a significantly positive (A) relevance-annotation-correlation and (B) relevance (this overlay is displayed in the figures by masking A with B).
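The relevance track and the final overlay can be sketched analogously (again with assumed array shapes):

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def relevant_heads(summed_maps, alpha=0.05):
    """summed_maps: (n_proteins, n_layer, n_head) attributions summed over the sequence."""
    n, L, H = summed_maps.shape
    pvals = np.array([wilcoxon(summed_maps[:, l, h], alternative='greater').pvalue
                      for l in range(L) for h in range(H)])
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method='fdr_bh')
    return reject.reshape(L, H)

# Heads reported in the figures satisfy both criteria (A masked with B):
# combined = correlation_reject & relevant_heads(summed_maps)
```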
## 4 Implementation
Appendix C shows implementation details as supplementary material.
## 5 Results and Discussion
### Predictive performance
The performance results for the ProtT5 and ProtBert transformers finetuned to the GO and EC protein function tasks are presented in Appendix D as supplementary material. In summary, we show that finetuning pretrained large transformer models leads to competitive results, in particular in the most relevant comparison in the single-model category, often on par with MSA-approaches. The approach of finetuning the entire model including the encoder shows its particular strength in the 'CAFA3' benchmark.
### Explainability analysis: embedding layer
**Research question**. Starting with embedding layer attribution maps, as the most widely considered type of attribution, we investigate whether there are significant correlations between attribution maps and sequence annotations from external sources (see Section 2.1). We aim to answer this question in a statistical fashion going beyond anecdotal evidence based on single examples, which can sometimes be encountered in the literature.
**GO prediction: GO 'membrane' attributions correlate in particular with UniProt 'transmembrane regions'**. Figure 4 shows the results of the explainability
Figure 2: Visualization of the explainability method based on IG that can attribute relevance to sequence tokens (here: amino acids) separately for each head and layer of the transformer (adapted from Vaswani _et al._, 2017).
analysis for the embedding layer of ProtBert finetuned to GO classification. The relevance of each amino acid indicative for selected GO terms was computed with IG, and then correlated with UniProt and PROSITE sequence annotations. Subsequently, it was tested whether the correlation coefficients across all annotated proteins from the test set were significantly positive (see Section 2.1). A significant correlation was observed when relevance attributions indicative for the GO label 'membrane' were correlated with UniProt 'transmembrane regions' (\(p\ll 0.05\)) and with 'active sites' (\(p<0.05\)). Correlation was not observed in the GO 'catalytic activity' and 'binding' cases.
**EC prediction: attributions correlate significantly with several types of sequence annotations**. Figure 5 shows the results of the explainability analysis for the embedding layer of ProtBert finetuned to EC number classification ('EC50 level L1' data set; i.e., the differentiation between the six main enzyme classes). Relevance per amino acid for each of the six EC classes was correlated with the UniProt annotations as 'active sites', 'binding sites', 'transmembrane regions', and 'short sequence motifs'. It can be observed that the relevance attributions correlated significantly (\(p<0.05\)) with 'active site' annotations for all six EC classes, with 'binding site' annotations for four EC classes, and with 'transmembrane
Figure 4: Attribution maps for the embedding layer of ProtBert finetuned to GO term classification were correlated with sequence annotations. Left: Relevance attributions indicative for the GO label ‘membrane’ correlate significantly (\(*\)) with UniProt annotations as ‘transmembrane regions’ (\(p\ll 0.05\)) and ‘active sites’ (\(p<0.05\)). Center and right: Attribution-annotation-correlation was not observed for GO ‘catalytic activity’ and ‘binding’. Numbers of test split samples both labeled with the GO term and annotated per amino acid are listed below the x-axis.
Figure 3: Inside ProtBert; GO ‘membrane’ (GO:0016020). Left: The relevance attribution (along the sequences) indicative for the GO term ‘membrane’ was correlated with UniProt annotations as ‘transmembrane regions’, for each transformer head and layer. Biserial correlation coefficients (r), obtained for each attribution-annotation-pair, were aggregated in population statistics with t-tests. The resulting p-values of the t-tests were adjusted with the Benjamini/Hochberg method for the multiple hypothesis tests conducted in order to limit the false discovery rate. A significance threshold was applied (family-wise error rate of 0.05). The negative logarithm of the corrected and thresholded p-values is displayed. All colored pixels indicate statistically significant results. Center: ProtBert heads with a sig. positive relevance (sum along the sequence; indicative for the GO term ‘membrane’) were singled out with the Wilcoxon signed-rank test. The matrix plots show the negative logarithm of the resulting p-values (adjusted with Benjamini/Hochberg and a threshold). Right: ProtBert heads with a sig. positive attribution-annotation-correlation (p-values from t-tests plotted) that are also characterized by a sig. positive relevance (the latter overlaid as mask). Only results for UniProt ‘transmembrane regions’ are shown, omitting the results for ‘active/binding sites’, ‘motifs’, and PROSITE patterns, which did not feature heads with both a sig. positive relevance and attribution-annotation-correlation.
regions' and 'short sequence motifs' for two EC classes.
**Discussion**. Attribution maps obtained for the embedding layer correlated with UniProt annotations on the amino acid level, in particular, in the EC case, but also for the GO term 'membrane'. To summarize, across two tasks, we provide the first quantitative evidence for the meaningfulness and specificity of attribution maps beyond anecdotal evidence. Note that the EC case has the benefit of often several hundred annotated samples contained in the test split (except for 'transmembrane regions' and 'motifs'; see right panel of Figure 5). In comparison, the GO case provides fewer samples in the test split of the data set that were also annotated on the amino acid level (see numbers in brackets below the x-axis in Figure 4).
### Explainability analysis: Peeking inside the transformer
**Research question**. Given the encouraging results presented in Section 5.2, we aim to go one step further and try to answer the more specific question if there are specialized heads inside of the model architecture for specific prediction tasks, using our IG variant that calculates relevance on the amino acid level per transformer head and layer (see Section 3).
**GO-prediction: membrane**. Figure 3 shows the results of the explainability analysis inspecting the latent representations inside of the ProtBert model focusing on the selected class of the GO term 'membrane' (GO:0016020). Relevance attributions indicative for GO 'membrane' per amino acid were correlated with the UniProt annotations as 'transmembrane regions' separately for each transformer head and layer (matrix plot pixels in Figure 3). In parallel, ProtBert heads were singled out with a significantly positive relevance (sum along the sequence) indicative for 'membrane' (see also Section 2.1 and Section 3). Both parallel analysis streams were combined by identifying ProtBert heads with both a significantly positive attribution-annotation-correlation and relevance. Several ProtBert heads in different layers feature a significantly positive correlation of relevance attributions per amino acid with the UniProt annotations as 'transmembrane regions', going along with a significantly positive relevance for the GO class 'membrane'. In contrast, correlation of relevance attributions with UniProt 'active' or 'binding sites' or 'motifs' or PROSITE patterns accompanied by a positive relevance was not observed (hence these cases were not included in Figure 3).
**GO prediction: catalytic activity**. Figure D.1 (in Appendix D as supplementary material) shows the results of the explainability analysis for the case where the GO term 'catalytic activity' was selected (GO:0003824). Different ProtBert heads stand out characterized by a positive relevance accompanied by a positive correlation of attributions with PROSITE patterns and with UniProt 'active sites' and 'transmembrane regions' (but neither with 'binding sites' nor 'motifs').
**GO-prediction: binding**. Figure D.2 (in Appendix D) repeats the explainability analysis inside ProtBert for the GO term 'binding' (GO:0005488). For several transformer heads and layers, a positive relevance went along with a correlation of relevance attributions with corresponding PROSITE patterns, and with UniProt 'transmembrane regions' (but neither with UniProt 'active' nor 'binding sites' nor 'motifs').
**EC-prediction**. Subsequently, we conducted the explainability analysis for the case where ProtBert had been finetuned to EC number classification on EC50 level L1. Here, the model had learned to differentiate between the six main enzyme classes. Figure D.3 (in Appendix D) identifies ProtBert heads characterized both by a positive
Figure 5: Attribution maps calculated for the embedding layer of ProtBert finetuned to ‘EC50 level L1’ classification were correlated with UniProt sequence annotations. Left: Relevance attributions correlated significantly (\(p<0.05\)) with ‘active sites’ for all six EC classes, with ‘binding sites’ for four EC classes, and with ‘transmembrane regions’ and ‘short sequence motifs’ for two EC classes each. Right: Numbers of annotated samples in the test split per annotation type and EC class.
relevance (sum along the sequence) with respect to the EC class, and by a positive attribution-annotation-correlation (on the amino acid level). The analysis was conducted separately for UniProt annotations as 'active'/'binding sites', 'transmembrane regions', and 'motifs'. (The absence of identified heads for EC4, EC5, and EC6 in the 'transmembrane regions' rows and for EC1 and EC5 in the 'motif' rows of Figure D.3 goes along with the availability of relatively few 'transmembrane' and 'motif' annotations for these EC classes; see histogram in Figure 5.)
**Discussion**. In summary, we propose a constructive method suited to identify heads inside of the transformer architecture that are specialized for specific protein function or property prediction tasks. The proposed method comprises a novel adaptation of the XAI method of IG combined with a subsequent statistical analysis. We first attributed relevance to the single amino acids per protein (per GO term or EC class), separately for each transformer head and layer. Then, we inspected the correlation between relevance attributions and annotations, in a statistical analysis across the annotated proteins from the test split of the respective data set. Apparently, different transformer heads are sensitive to different annotated and thus biologically or chemically 'meaningful' sites, regions, or patterns on the amino acid sequence. The method is complementary to probing approaches (Belinkov, 2022).
### Uncovering collective dynamics
Finally, we studied collective dynamics potentially emerging among the transformer heads (ProtBert, EC50, level L1) by a visualization of the originally high-dimensional, summed attribution maps in two dimensions, taking their similarities into account. For this purpose, the attribution maps that were summed along the amino acid sequence and represented as \(n_{\text{layer}}\times n_{\text{head}}\) matrices (see Section 3) were flattened, resulting in one vector per protein. The dimensionality of these vectors was then reduced with principal component analysis to 50 dimensions, and subsequently to two dimensions with t-distributed stochastic neighbor embedding (t-SNE; van der Maaten and Hinton, 2008), using the default t-SNE parameters. The resulting 2D points were visualized as scatter plot and colored according to the corresponding six main enzyme classes (Figure 6).
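A sketch of this dimensionality-reduction step (array names assumed):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# summed_maps: (n_proteins, n_layer, n_head) summed attribution maps
flat = summed_maps.reshape(len(summed_maps), -1)          # one flattened vector per protein
reduced = PCA(n_components=50).fit_transform(flat)        # first reduce to 50 dimensions
points_2d = TSNE(n_components=2).fit_transform(reduced)   # default t-SNE parameters
# scatter-plot points_2d, colored by the six main enzyme classes
```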
The points form distinctive clusters matching the EC labels. Apparently, a structure emerges in the attribution maps, that seems to indicate class-specific collective dynamics among several ProtBert heads. It is important to stress that the attribution map underlying the clustering no longer contains any reference to specific positions in the sequence but relies on the relevance distribution on the different heads through all layers of the model. The emergence of class-specific structures therefore indicates that there are specific combinations of heads that are relevant for a specific classification decision.
## 6 Conclusion
This work provides additional evidence for the effectiveness of the currently predominant paradigm in deep-learning-based protein analysis through the finetuning of large language models. For different protein function prediction tasks, this approach leads to best-performing models according to single-model performance. The performance level is in many cases on par with MSA-approaches. The proposed models can even be effectively combined with the latter through the formation of ensembles.
Considering the ever increasing model complexity, XAI has started to gain traction in the field of protein analysis too (Upmeier zu Belzen _et al._, 2019; Vig _et al._, 2021; Taujale _et al._, 2021; Vu _et al._, 2023; Zhou _et al._, 2023; Hou _et al._, 2023), but quantitative evidence for its appli
Figure 6: PCA and t-SNE visualization of summed attribution maps (ProtBert, EC50, L1).
cability beyond single examples was lacking up to now. We provide statistical evidence for the alignment of attribution maps with corresponding sequence annotations, both on the embedding level as well as for specific heads inside of the model architecture, which led to the identification of specialized heads for specific protein function prediction tasks. Emerging class-specific structures suggest that these specialized transformer heads act jointly to decide together in specific combinations.
XAI promises to tap into the presumably substantial knowledge contained in large models pretrained on massive data sets of amino and/or nucleic acid sequences [21, 22]. Therefore, we expect that XAI will play an increasingly important role in the future of bioinformatics research. We see potential applications of XAI methods for model validation and for scientific discovery (e.g., of novel discriminative sequence patterns or motifs that have not been identified by experiments or MSA so far). Identifying specialized heads might also help to prune or distill overly large models, making them smaller and more efficient.
## Funding
This work was supported by the Bundesministerium für Bildung und Forschung through the BIFOLD - Berlin Institute for the Foundations of Learning and Data [grant numbers 01IS18025A, 01IS18037A].
|
2309.07457 | Efficient circular Dyson Brownian motion algorithm | Circular Dyson Brownian motion describes the Brownian dynamics of particles
on a circle (periodic boundary conditions), interacting through a logarithmic,
long-range two-body potential. Within the log-gas picture of random matrix
theory, it describes the level dynamics of unitary ("circular") matrices. A
common scenario is that one wants to know about an initial configuration
evolved over a certain interval of time, without being interested in the
intermediate dynamics. Numerical evaluation of this is computationally
expensive as the time-evolution algorithm is accurate only on short time
intervals because of an underlying perturbative approximation. This work
proposes an efficient and easy-to-implement improved circular Dyson Brownian
motion algorithm for the unitary class (Dyson index $\beta = 2$, physically
corresponding to broken time-reversal symmetry). The algorithm allows one to
study time evolution over arbitrarily large intervals of time at a fixed
computational cost, with no approximations being involved. | Wouter Buijsman | 2023-09-14T06:34:10Z | http://arxiv.org/abs/2309.07457v2 | # Efficient circular Dyson Brownian motion algorithm
###### Abstract
Circular Dyson Brownian motion describes the equilibrium and non-equilibrium level dynamics of unitary ("circular") matrices within the log-gas picture of random matrix theory. A common scenario is that one wants to know about a spectrum evolved over a certain interval of time, without being interested in the intermediate dynamics. Numerical evaluation of this is computationally expensive as the time-evolution algorithm is accurate only on short time intervals because of an underlying perturbative approximation, leading to the need of extensively many intermediate evaluations. This work proposes an efficient and easy-to-implement improved circular Dyson Brownian motion algorithm for models with broken time-reversal symmetry. This algorithm can be seen as a generalization of a commonly used algorithm generating samples from the circular unitary ensemble. The algorithm allows one to study time-evolution over arbitrarily large intervals of time at a fixed computational cost, with no approximations being involved.
## I Introduction
The theory of random matrices plays a key role in the understanding of quantum statistical mechanics and quantum chaos [1; 2; 3]. The eigenvalue statistics of random matrices can typically be studied using the so-called _log-gas picture_[4; 5]. For this, the joint probability distribution \(P\) of the eigenvalues is written as a Boltzmann factor
\[P=\frac{1}{\mathcal{Z}}e^{-\beta H}, \tag{1}\]
where \(\mathcal{Z}\) is a normalization constant that has the interpretation of a partition function, \(\beta>0\) is a parameter known as the Dyson index that has the interpretation of an inverse temperature, and \(H\) is a function depending on the eigenvalues (or eigenphases when considering unitary matrices) that has the interpretation of a classical Hamiltonian. Within the log-gas picture, \(H\) describes a collection of massless particles repelling each other through a logarithmic pairwise potential, possibly (depending on the setting) confined by a one-body potential. The log-gas picture has been found, for example, to accurately describe the level statistics across the many-body localization transition [6]. As the Hamiltonian does not contain a kinetic term, it obeys non-trivial dynamics. The equilibrium as well as the non-equilibrium level dynamics are described by a phenomenon referred to as _Dyson Brownian motion_[7; 8]. For a recent review on the connections between level dynamics and quantum chaos, the reader is referred to Ref. [9].
Dyson Brownian motion (as well as the corresponding stochastic evolution of the eigenstates [10; 11]) has found applications in studies on, for example, disordered systems [12; 13; 14; 15], random matrix models [16; 17; 18; 19], many-body localization [20; 21], and cosmological inflation [22; 23; 24; 25]. A common scenario is that one wants to know about a spectrum evolved over a certain interval of time, without being interested in the intermediate dynamics. Dyson Brownian motion for Hermitian matrices can be evolved over a time interval of arbitrary length at a constant computational cost, with no approximations being involved (see below for a more detailed explanation). Circular Dyson Brownian motion (as it is typically named) for unitary ("circular") matrices, however, requires extensively many evaluations over small intermediate intervals because of a perturbative approximation underlying the time-evolution algorithm. Circular Dyson Brownian motion is thus a process that is computationally expensive to simulate, which moreover is subject to a loss of accuracy with progressing time. Despite significant recent [26] and less recent [27; 28; 29] analytical progress on circular Dyson Brownian motion out of equilibrium, improved numerical capabilities are desired.
This work proposes an improved, easy-to-implement circular Dyson Brownian algorithm for the unitary class (corresponding to systems with broken time-reversal symmetry). The algorithm does not require intermediate evaluations, and thus operates at dramatically lower computational cost compared to the currently used algorithm. Moreover, it does not involve approximations, and is thus not subject to a loss of accuracy with progressing time. In short, it generalizes a commonly used algorithm generating samples from the circular unitary ensemble from samples of the Ginibre unitary ensemble through QR-decompositions [30; 31]. The algorithm is obtained by including a term with a time-dependent magnitude to account for the initial condition to the matrix of which the QR-decomposition is taken.
## II Dyson Brownian motion for Hermitian and unitary matrices
One distinguishes between orthogonal (\(\beta=1\)), unitary (\(\beta=2\)), and symplectic (\(\beta=4\)) random matrix ensembles [4].
These names reflect the type of transformations under which the ensembles remain invariant. Physically, the type of invariance determines the behavior of a system under time reversal. For example, the orthogonal class corresponds to time-reversal invariant systems, whereas the unitary class corresponds to systems with broken time-reversal symmetry. Dyson Brownian motion and its circular analog are here introduced for the orthogonal and unitary classes. A slightly more involved discussion for the symplectic class, which is arguably less physically relevant in the current context, can be found for example in Ref. [5].
Let \(H(t)\) be an \(N\times N\) Hermitian matrix with elements depending on time \(t\)[7]. The initial condition \(H(0)\) can be either random or deterministic. Dyson Brownian motion for Hermitian matrices of the unitary class is a stochastic process described by
\[H(t+dt)=H(t)+\sqrt{dt}M, \tag{2}\]
where the time step \(dt\) is supposed to be small enough such that the eigenvalues of \(H(t+dt)\) can be obtained accurately by second-order perturbation theory. Here, \(M\) is a sample from the Gaussian unitary ensemble that is resampled at each evaluation. An \(N\times N\) matrix \(M\) sampled from the Gaussian unitary ensemble can be constructed as
\[M=\frac{1}{2}(A+A^{\dagger}), \tag{3}\]
where \(A\) is an \(N\times N\) matrix with complex-valued elements \(A_{nm}=u_{nm}+iv_{nm}\) with \(u_{nm}\) and \(v_{nm}\) sampled independently from the normal distribution with mean zero and variance \(1/2\).
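Eq. (3) translates directly into code; a minimal NumPy sketch:

```python
import numpy as np

def gue_sample(n, rng=None):
    """Sample an n x n matrix from the Gaussian unitary ensemble via Eq. (3)."""
    rng = np.random.default_rng() if rng is None else rng
    # Ginibre-type matrix A with u, v ~ N(0, 1/2), i.e., standard deviation sqrt(1/2).
    a = rng.normal(0.0, np.sqrt(0.5), (n, n)) + 1j * rng.normal(0.0, np.sqrt(0.5), (n, n))
    return 0.5 * (a + a.conj().T)
```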
Let \(\tilde{M}=U^{\dagger}(t)\,M\,U(t)\), where the time-dependent unitary matrix \(U(t)\) is chosen such that it diagonalizes \(H(t)\). Since the distribution of \(M\) is invariant under unitary transformations, \(\tilde{M}\) can be replaced by a new sample from the Gaussian unitary ensemble. The displacements \(d\lambda_{n}(t)=\lambda_{n}(t+dt)-\lambda_{n}(t)\) of the eigenvalues \(\lambda_{n}(t)\) when evolving from time \(t\) to \(t+dt\) obey
\[d\lambda_{n}(t)=\sqrt{dt}\tilde{M}_{nn}+\sum_{m\neq n}\frac{dt|\tilde{M}_{nm}|^{2}}{\lambda_{n}(t)-\lambda_{m}(t)}, \tag{4}\]
where terms of order three and higher have been ignored. It can be shown that this time-evolution indeed describes a Brownian motion process, for example by writing down the corresponding Fokker-Planck equation. For \(t\rightarrow\infty\), \(H(t)\) converges to a (scaled) sample from the Gaussian unitary ensemble irrespective of the initial condition \(H(0)\).
Dyson Brownian motion can also be considered for unitary matrices [7]. Let \(Q(t)\) be an \(N\times N\) unitary matrix with time-dependent elements. Similar to the above, the initial condition \(Q(0)\) can be either random or deterministic. Circular Dyson Brownian motion for the unitary class is generated by
\[Q(t+dt)=Q(t)e^{i\sqrt{dt}M}, \tag{5}\]
where again \(M\) is an \(N\times N\) sample from the Gaussian unitary ensemble that is re-sampled at each evaluation. For the dynamics to be Brownian, Eq. (5) needs to be accurate when being expanded up to first order, which requires \(dt\) to be small. Let \(\tilde{M}=U^{\dagger}(t)\,M\,U(t)\) with the time-dependent unitary matrix \(U(t)\) chosen such that it diagonalizes \(Q(t)\). As before, \(\tilde{M}\) can be replaced by a new sample from the Gaussian unitary ensemble. Brownian motion of the eigenvalues \(e^{i\theta_{1}(t)},e^{i\theta_{2}(t)},\ldots,e^{i\theta_{N}(t)}\) can be obtained perturbatively. The displacements \(d\theta_{n}(t)=\theta_{n}(t+dt)-\theta_{n}(t)\) of the eigenphases \(\theta_{n}(t)\) when evolving from time \(t\) to \(t+dt\) are given by
\[d\theta_{n}(t)=\sqrt{dt}\tilde{M}_{nn}+\sum_{m\neq n}\frac{dt|\tilde{M}_{nm}|^{2}}{2\tan\frac{1}{2}(\theta_{n}(t)-\theta_{m}(t))}, \tag{6}\]
where terms of order three and higher have been ignored. Observe that Eqs. (4) and (6) describe similar dynamics on microscopic scales since \(2\tan(x/2)=x+\mathcal{O}(x^{2})\). For \(t\rightarrow\infty\), \(Q(t)\) converges towards a sample from the circular unitary ensemble irrespective of the initial condition \(Q(0)\). Dyson Brownian motion and its circular analog for the orthogonal class can be obtained by setting the imaginary part of \(M\) to zero.
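For reference, the conventional small-step evolution of Eq. (5) can be sketched as follows (reusing `gue_sample` from above); note that many such steps are needed, since \(dt\) must be small:

```python
import numpy as np
from scipy.linalg import expm

def circular_dbm_step(q, dt, rng=None):
    """One step of Eq. (5); accurate only for small dt."""
    rng = np.random.default_rng() if rng is None else rng
    m = gue_sample(q.shape[0], rng)        # GUE sample, re-drawn at each evaluation
    return q @ expm(1j * np.sqrt(dt) * m)
```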
## III The algorithm
The Gaussian random matrix ensembles have the property that the sum of \(n\) independent samples is a sample again, although with a prefactor \(\sqrt{n}\), as can be shown in a rather straightforward way. Eq. (2) and its equivalents for the orthogonal and symplectic classes thus do not require the time step \(dt\) to be small. This implies that numerically obtaining \(H(T)\) from \(H(0)\) can be done in a single instance, at a computational cost independent of \(T\). Eq. (5) for the evolution of unitary matrices does not allow for a similar argument since \(e^{A}e^{B}\neq e^{A+B}\) when \(A\) and \(B\) do not commute. Thus, time-evolution for unitary matrices can naively only be accomplished by subsequently evolving over infinitesimal time intervals. Eq. (5) moreover is subject to a loss of accuracy with progressing time as it describes the desired dynamics only up to first order.
Let \(A\) be an \(N\times N\) matrix with elements \(A_{nm}=u_{nm}+iv_{nm}\) with \(u_{nm}\) and \(v_{nm}\) sampled independently from the normal distribution with mean zero and unit variance. Such a matrix is known as a sample from the Ginibre unitary ensemble [5; 32]. The QR-decomposition
\[A=QR \tag{7}\]
decomposes \(A\) in a unitary matrix \(Q\) and an upper-triangular matrix \(R\) with real-valued diagonal elements. Numerically, QR-decompositions can be obtained with a single line of code in all major scientific computing software packages. A QR-decomposition is not unique [30; 31]. It can be made unique by fixing the signs of the
diagonal elements of \(R\), for example, by requiring them to be non-negative. Following the approach outlined in Ref. [31], let \(\Lambda\) be the \(N\times N\) diagonal matrix
\[\Lambda=\text{diag}\left(\frac{R_{11}}{|R_{11}|},\frac{R_{22}}{|R_{22}|},\ldots, \frac{R_{NN}}{|R_{NN}|}\right). \tag{8}\]
One can prove that matrices \(\tilde{Q}=Q\Lambda\) obey the distribution of the circular unitary ensemble. For completeness, it is noted that various (arguably less popular) alternative algorithms generating samples from the circular unitary ensemble have been proposed [33]. When referring to QR-decompositions below, the diagonal elements of \(R\) are supposed to be non-negative.
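The resulting sampling procedure is a few lines of NumPy:

```python
import numpy as np

def cue_sample(n, rng=None):
    """Sample from the circular unitary ensemble via the QR trick of Refs. [30; 31]."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # Ginibre sample, Eq. (7)
    q, r = np.linalg.qr(a)
    lam = np.diag(r) / np.abs(np.diag(r))   # diagonal of Lambda, Eq. (8)
    return q * lam                          # Q-tilde = Q @ diag(lam), column-wise
```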
In what follows, it is shown how the algorithm generating samples from the circular unitary ensemble outlined above can be generalized to simulate circular Dyson Brownian motion for the unitary class. Let \(Q(t)\) be an \(N\times N\) unitary matrix with time-dependent elements. Next, let \(A(t)\) denote another, not necessarily unitary, time-dependent matrix initialized as \(A(0)=Q(0)\). The dynamics of \(Q(t)\) are inferred from the dynamics of \(A(t)\), which over (for now, infinitesimal) time intervals \(dt\) evolves stochastically as
\[A(t+dt) =Q(t)+\sqrt{dt}M \tag{9}\] \[=Q(t+dt)\,R(t+dt). \tag{10}\]
Here, \(M\) is a sample from the Ginibre unitary ensemble that is re-sampled at each evaluation. A QR-decomposition is applied in the second line.
The Ginibre unitary ensemble is invariant under transformations \(M\to U_{1}MU_{2}\) for unitary matrices \(U_{1}\) and \(U_{2}\). By taking \(U_{1}=Q^{\dagger}(t)\) and \(U_{2}=\mathbb{1}_{N}\), Eq. (9) can thus equivalently be written as
\[A(t+dt)=Q(t)(\mathbb{1}_{N}+\sqrt{dt}M). \tag{11}\]
The QR-decomposition of \(A(t+dt)\) can be obtained by applying a QR-decomposition on \(\mathbb{1}_{N}+\sqrt{dt}M\) and by multiplying the resulting unitary matrix by \(Q(t)\) from the left. Write \(Q(t+dt)=Q(t)\,U\), where \(U\) is the unitary matrix resulting from the QR-decomposition of \(\mathbb{1}_{N}+\sqrt{dt}M\). For infinitesimal \(dt\), one then obtains
\[Q(t+dt)=Q(t)(\mathbb{1}_{N}+i\sqrt{dt}H), \tag{12}\]
where \(H\) is a sample from the Gaussian unitary ensemble. Terms of order two and higher have been ignored. A comparison with Eq. (5) expanded up to first order confirms that the eigenvalue dynamics of \(Q(t)\) obey circular Dyson Brownian motion for the unitary class. It is easy to see from Eq. (9) that a sample from the circular unitary ensemble results for \(dt\rightarrow\infty\).
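Putting Eqs. (9) and (10) together, the proposed algorithm requires a single QR-decomposition per target time, regardless of the interval length; a minimal sketch:

```python
import numpy as np

def cdbm_evolve(q0, dt, rng=None):
    """Evolve circular Dyson Brownian motion over an interval dt of arbitrary
    length, following Eqs. (9) and (10)."""
    rng = np.random.default_rng() if rng is None else rng
    n = q0.shape[0]
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # Ginibre sample
    q, r = np.linalg.qr(q0 + np.sqrt(dt) * m)
    lam = np.diag(r) / np.abs(np.diag(r))   # fix the sign convention: diag(R) >= 0
    return q * lam
```

A single call with the full interval thus replaces the extensively many small-step evaluations of Eq. (5).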
## IV Evaluation over time intervals of arbitrary length
Above, it was shown that the circular Dyson Brownian motion algorithm of Eqs. (9) and (10) yields correct results for time steps \(dt\to 0\) and \(dt\rightarrow\infty\). Here, the focus is on the intermediate regime. For a state \(|\psi\rangle\), each evolution over an infinitesimal time interval \(dt\) gives an infinitesimal random rotation,
\[(\mathbb{1}_{N}+i\sqrt{dt}M)|\psi\rangle=\frac{1}{\mathcal{N}}\bigg{(}|\psi \rangle+\sqrt{dt}|R\rangle\bigg{)}, \tag{13}\]
where \(\mathcal{N}\) is a normalization constant and \(|R\rangle\) is an unnormalized random vector with independent elements \(R_{n}=u_{n}+iv_{n}\) with \(u_{n}\) and \(v_{n}\) sampled from the Gaussian distribution with mean zero and variance \(1/2\). Taking \(n\) of such infinitesimal time intervals gives a transformation
\[|\psi\rangle\rightarrow\frac{1}{\mathcal{N}}\bigg{(}|\psi\rangle+\sqrt{n\,dt }|R\rangle\bigg{)}. \tag{14}\]
Observe that, consistent with the remark made above, a random state results for \(n\rightarrow\infty\).
In order to understand how Eqs. (9) and (10) act when \(dt\) is not taken small, it is useful to see how a QR-decomposition is performed. For a QR-decomposition \(A=QR\), the columns \(q_{n}\) of the unitary matrix \(Q\) can be obtained by performing Gram-Schmidt orthonormalization on the columns \(a_{n}\) of \(A\). The columns \(q_{n}\) are then obtained from the columns \(a_{n}\) as
\[q_{n}=v_{n}/||v_{n}||, \tag{15}\]
\[v_{n}=a_{n}-\sum_{m=1}^{n-1}(q_{m}\cdot a_{n})q_{m}, \tag{16}\]
with \(q_{1}=a_{1}/||a_{1}||\) given by the first column of \(A\) normalized to unit length. In words, the \(n\)-th column of \(Q\) is obtained by orthonormalizing the \(n\)-th column of \(A\) to the first \(n-1\) columns of \(Q\).
From now on, suppose that \(dt\) is not necessarily small. Applying a QR-decomposition on the rightmost term \(\mathbb{1}_{N}+\sqrt{dt}M\) of Eq. (11) gives a unitary matrix \(U\) with columns \(u_{n}\) that, slightly abusing notation, can be denoted by
\[u_{n}=\frac{1}{\mathcal{N}}\bigg{(}|e_{n}\rangle+\sqrt{dt}|R\rangle\bigg{)}, \tag{17}\]
where \(\mathcal{N}\) is a normalization constant, \(|e_{n}\rangle\) is a basis vector with the \(n\)-th component equal to unity, and \(|R\rangle\) is a random vector as introduced in Eq. (13). The randomness is subject to the orthogonality constraint \(u_{n}\cdot u_{m}=\delta_{nm}\). For such matrices, when multiplied from the right by a state \(|\psi\rangle\), one gets
\[U|\psi\rangle=\frac{1}{\mathcal{N}}\bigg{(}|\psi\rangle+\sqrt{dt}|R\rangle \bigg{)}, \tag{18}\]
where \(|R\rangle\) and \(\mathcal{N}\) are typically different from the ones appearing in Eq. (17). A comparison with Eq. (14) shows that the algorithm of Eqs. (9) and (10) can be used without assuming \(dt\) to be small. Thus, it allows one to simulate circular Dyson Brownian motion for the unitary class over arbitrarily long time intervals in a single instance, independent of the length of the interval.
## V Numerical evaluation
Here, the use of the algorithm proposed above is verified and illustrated through a numerical evaluation. A second, more extensive numerical verification involving large-dimensional matrices (up to \(N=10^{4}\)) can be found in the upcoming Ref. [34] on the unitary equivalent of the Rosenzweig-Porter model. The focus is on the non-equilibrium dynamics of \(N=30\) eigenvalues \(e^{i\theta_{n}(t)}\) (\(n=1,2,\ldots,N\)) with initial conditions \(\theta_{n}(0)=0\) for all \(n\). More specifically, we consider a matrix \(Q(t)\) of dimension \(30\) which is initialized as
\[Q(0)=\text{diag}(1,1,\ldots,1). \tag{19}\]
The recent Ref. [26] presents a numerical evaluation of the same setting with the aim to evaluate the spectral form factor at different instances of time. The parameters are chosen here the same in order to allow for a direct comparison. The matrices \(Q(t)\) are obtained from Eqs. (9) and (10) by substituting \(t\to 0\) and \(dt\to t\).
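The setup of this section then amounts to the following (reusing `cdbm_evolve` from above; \(\tau=tN\) denotes the scaled time of Figure 1):

```python
import numpy as np

N = 30
for tau in (0.5, 1.0, 2.0):                 # some example values of the scaled time
    q_t = cdbm_evolve(np.eye(N, dtype=complex), dt=tau / N)  # t -> 0, dt -> t
    thetas = np.sort(np.angle(np.linalg.eigvals(q_t)))       # ascending eigenphases
    theta_min, theta_max = thetas[0], thetas[-1]              # extremes, cf. Figure 1
```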
Let the eigenphases \(\theta_{n}(t)\in[-\pi,\pi)\) of \(Q(t)\) be ordered ascendingly, \(\theta_{1}(t)\leq\theta_{2}(t)\leq\cdots\leq\theta_{N}(t)\). Eigenphases do not cross each other because of level repulsion, as can be inferred from the denominator of Eq. (6). Fig. 1 shows the probability distribution (obtained by averaging over a large number of samples) of the smallest and largest eigenphase as a function of time. Ref. [26] shows consistent results.
## VI Conclusions and outlook
In summary, this work proposed an easy-to-implement algorithm [Eqs. (9) and (10)] to simulate circular Dyson Brownian motion for the unitary class, physically corresponding to broken time-reversal symmetry. This algorithm is a generalization of a commonly used algorithm generating samples from the circular unitary ensemble proposed in Refs. [30; 31]. In contrast to the currently used circular Dyson Brownian motion algorithm [Eq. (5)], here the time step \(dt\) does not have to be small, and no approximations are involved. This allows one to study time-evolution over arbitrarily large time intervals at a computational cost independent of the length of the time interval, without loss of accuracy. In typical settings, this algorithm dramatically reduces the computational costs, thereby for example opening the possibility to perform large-scale simulations without the need for high-performance computing facilities.
An arguably interesting follow-up question would be how to modify the algorithm for the orthogonal and symplectic classes. From a sample \(Q\) of the circular unitary ensemble, a sample \(S\) from the circular orthogonal ensemble can be obtained as \(S=Q^{T}Q\)[35]. It is thus tempting to hypothesize that circular Dyson Brownian motion for the orthogonal class can be simulated by the algorithm of Eqs. (9) and (10), and by taking the product of the transpose of the resulting unitary matrix and the resulting unitary matrix itself as the output. It is noted that the algorithm can be adjusted such that for \(t\to\infty\) a sample from the (less well-known) circular real ensemble (see Ref. [36]) results by setting the imaginary components of \(M\) in Eq. (9) to zero.
Circular Dyson Brownian motion can be used to numerically generate non-ergodic unitary matrices ("unitaries") with fractal eigenstates and a tunable degree of complexity [19; 34]. Next to what is mentioned above, this work can thus be expected to be relevant for future studies on the emergence and breakdown of statistical mechanics in the context of unitary (periodically driven) systems. Dyson Brownian motion recently attracted a surge of interest in the context of the Brownian SYK model [37; 38; 39; 40; 41; 42]. Unitary Brownian quantum systems are of current interest in the context of Brownian quantum circuits [43; 44; 45; 46; 47; 48]. This work finally can be expected to provide new opportunities in the context of the non-trivial dynamics of Brownian quantum systems.
###### Acknowledgements.
The author acknowledges support from the Kreitman School of Advanced Graduate Studies at Ben-Gurion University.
Figure 1: The distribution (in arbitrary units) of the smallest and largest eigenphase of the dynamics of \(Q(0)=\text{diag}(1,1,\ldots,1)\) with \(N=30\) as a function of the scaled time \(\tau=tN\). The data is obtained using the algorithm of Eqs. (9) and (10) by substituting \(t\to 0\) and \(dt\to t\). Fig. 1 of Ref. [26], which considers a single realization of the same non-equilibrium setting, has a horizontal axis running from \(\tau=0\) to \(\tau=3.6\) in the units considered here. |
2309.06991 | Unsupervised Contrast-Consistent Ranking with Language Models | Language models contain ranking-based knowledge and are powerful solvers of
in-context ranking tasks. For instance, they may have parametric knowledge
about the ordering of countries by size or may be able to rank product reviews
by sentiment. We compare pairwise, pointwise and listwise prompting techniques
to elicit a language model's ranking knowledge. However, we find that even with
careful calibration and constrained decoding, prompting-based techniques may
not always be self-consistent in the rankings they produce. This motivates us
to explore an alternative approach that is inspired by an unsupervised probing
method called Contrast-Consistent Search (CCS). The idea is to train a probe
guided by a logical constraint: a language model's representation of a
statement and its negation must be mapped to contrastive true-false poles
consistently across multiple statements. We hypothesize that similar
constraints apply to ranking tasks where all items are related via consistent,
pairwise or listwise comparisons. To this end, we extend the binary CCS method
to Contrast-Consistent Ranking (CCR) by adapting existing ranking methods such
as the Max-Margin Loss, Triplet Loss and an Ordinal Regression objective.
Across different models and datasets, our results confirm that CCR probing
performs better or, at least, on a par with prompting. | Niklas Stoehr, Pengxiang Cheng, Jing Wang, Daniel Preotiuc-Pietro, Rajarshi Bhowmik | 2023-09-13T14:36:26Z | http://arxiv.org/abs/2309.06991v2 | # Unsupervised Contrast-Consistent Ranking with Language Models
###### Abstract
Language models contain ranking-based knowledge and are powerful solvers of in-context ranking tasks. For instance, they may have parametric knowledge about the ordering of countries by size or may be able to rank reviews by sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting techniques to elicit a language model's ranking knowledge. However, we find that even with careful calibration and constrained decoding, prompting-based techniques may not always be self-consistent in the rankings they produce. This motivates us to explore an alternative approach that is inspired by an unsupervised probing method called Contrast-Consistent Search (CCS). The idea is to train a probing model guided by a logical constraint: a model's representation of a statement and its negation must be mapped to contrastive true-false poles consistently across multiple statements. We hypothesize that similar constraints apply to ranking tasks where all items are related via consistent pairwise or listwise comparisons. To this end, we extend the binary CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression objective. Our results confirm that, for the same language model, CCR probing outperforms prompting and even performs on a par with prompting much larger language models.
## 1 Introduction
"What is the correct ordering of the following countries by size: [USA, China, Russia, Canada,...]?"
Language models have been shown to store plenty of facts and have powerful reasoning capacities (Petroni et al., 2019; Brown et al., 2020). Ranking tasks require both of these skills: multiple items have to be put in order based on a comparison criterion. We are posing the question: what is the best way to elicit a model's ranking knowledge and in-context ranking capacities without supervision? Knowing the answer to this question brings the following benefits: we can evaluate what ranking knowledge a language model contains and uncover knowledge gaps, outdated information, and existing biases before applying the model. Once we trust a model, we can then query ranking-based facts or apply the model to solve ranking tasks.
A natural starting point for unsupervised ranking is prompting (Li et al., 2022). In §2, we explore different task formulations: pairwise, pointwise, and listwise prompting, as outlined in Fig. 1. In the pairwise setting, any two items are compared and pairwise results are converted into a global ranking post-hoc. In pointwise prompting, the model assigns a score to each item individually. The listwise approach tasks the model to directly decode the entire ranking. For all of these approaches, constrained decoding is essential to ensure the output can be converted into a ranking including all items. Yet, even with constrained decoding and calibration, we find that prompting often leads to inconsistent rankings.
Figure 1: We study pairwise, pointwise, and listwise prompting and probing for unsupervised ranking.
For this reason, we turn to the model-internal representations of ranking tasks and their items in §3. We train a "probing model" with various unsupervised ranking objectives to find a latent ordering direction of the items' vector representations. Burns et al. (2023) recently proposed the Contrast-Consistent Search (CCS) method to find a direction in a language model's activation space that distinguishes truthful statements from false ones (Li et al., 2023). This is achieved with a loss that imposes a logical constraint: the representation of a statement and its negation must be mapped to opposite (contrasting) poles.
Ranking tasks share similar properties. We can convert a ranking task into multiple pairwise comparisons and train a probe to find a "ranking direction" that allows ranking one item higher than the other consistently across all pairs. This has one significant advantage over the original CCS method for factual statements--instead of requiring a training set of multiple yes-no questions, we can source all pairwise permutations from a list of items which allows training the probe on a single ranking task.
We translate the original binary CCS method into Contrast-Consistent Ranking (CCR) by exploring pairwise (§3.1), pointwise (§3.2), and listwise (§3.3) approaches as illustrated in Fig. 1. Pairing items in the prompt and obtaining the vector representations of all pairs is computationally expensive. Moreover, binary contrastive poles may not be ideally suited for ranking tasks where the distances between items are not unit-length. In line with the pointwise approach, we instead embed each item individually, e.g., "The size of the U.S. is [MASK],...". We then pair the items represented by the activations of the [MASK] tokens in the loss function. In particular, we propose variants of the well-known Max-Margin and Triplet Loss by including a _consistency_ and _confidence_ component. As a final adjustment, we mitigate the limitation that pairwise and pointwise objectives do not guarantee transitivity: item A may be ranked above B, B above C, but C above A, creating a circular contradiction. To address this, we introduce an unsupervised ordinal regression objective for listwise CCR probing.
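The concrete CCR objectives are developed in §3.1-3.3; purely as an illustration of the underlying idea, the original CCS loss carries over to a pair of opposite pairwise prompts roughly as follows, where the linear-sigmoid probe parameterization and the loss form are our own simplifying assumptions rather than the exact method of this paper.

```python
import torch

class Probe(torch.nn.Module):
    """Maps a hidden representation to a 'probability' in (0, 1)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = torch.nn.Linear(dim, 1)

    def forward(self, h):
        return torch.sigmoid(self.linear(h)).squeeze(-1)

def ccs_style_pair_loss(p_ij, p_ji):
    """p_ij / p_ji: probe outputs for the prompts 'i above j' and 'j above i'."""
    consistency = (p_ij - (1.0 - p_ji)) ** 2        # the two prompts must contradict
    confidence = torch.minimum(p_ij, p_ji) ** 2     # rule out the degenerate p = 0.5
    return (consistency + confidence).mean()
```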
Our experiments in §4 confirm that CCR probing outperforms prompting with DeBERTa (He et al., 2021) and GPT-2 (Jiang et al., 2021) models. Among the CCR probing methods, the Triplet Loss variant performs best on average. CCR probing with DeBERTa and GPT-2 even achieves similar performance to prompting a much larger MPT-7B (MosaicML, 2023) model across \(6\) datasets. In addition, CCR probing has the advantage of better control and interpretability as we discuss in §5.
## 2 Prompting for Rankings
Prompting is an accessible way to test a language model's ranking knowledge (Li et al., 2022). We experiment with three different prompt types outlined in Table 1: pairwise, pointwise, and listwise prompting (Qin et al., 2023). All prompt types contain at least one item to be ranked, a criterion to rank on, and what we refer to as a comparison token. In every setting, we rely on some form of "constrained decoding" or "constrained mask-filling" with encoder-only models. In essence, we restrict the vocabulary to a list of candidates and select the highest-scoring tokens.
Pairwise Prompting.ItemPair P: _Is [item A] more positive than [item B]? Yes / No_--Between any two items, the language model is tasked to make ranking decisions which are then converted into a ranking post-hoc as elaborated in §4.3.
| prompt type | template | prompting | CCR probing |
| --- | --- | --- | --- |
| ItemPair P | Is [item A] more positive than [item B]? \(X\) | constrain \(X\) to {Yes / No} | set \(X\) to {Yes / No} |
| ItemSingle S | (optional: On a scale from 0 to 10,) The stance of [item] is \(X\) | constrain \(X\) to {0, 1, ..., 10} | set \(X\) to {[MASK]} |
| ItemList L | (optional: context.) Order by stance. Options: "A" [item A], "B" [item B], ... The correct ordering is: \(X\) | constrain \(X\) to {A, B, ...} | embed via ItemSingle, then listwise loss |

Table 1: We consider three different prompt types, ItemPair P, ItemSingle S, and ItemList L, that all consist of a ranking criterion, a comparison token, and one or multiple items to be ranked. ItemPair and ItemSingle can be used for prompting and CCR probing in a similar fashion. To realize listwise CCR probing, we first obtain individual vector representations of items via ItemSingle and then connect all items through a listwise loss objective.
Without calibration (Zhao et al., 2021), the model tends to always output the token most frequently observed during training, basically disregarding the context. Following Burns et al. (2023), we compute the mean score of the "Yes" and "No" tokens in all pairwise prompts and then subtract the respective mean from each token score.
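To make the calibration step concrete, here is a minimal sketch in Python; the function and variable names are ours, not taken from any released implementation:

```python
import numpy as np

def calibrate_pairwise_scores(yes_scores, no_scores):
    """Mean-subtract the "Yes" and "No" token scores over all pairwise
    prompts, then compare the calibrated scores per prompt."""
    yes = np.asarray(yes_scores, dtype=float)
    no = np.asarray(no_scores, dtype=float)
    yes_cal = yes - yes.mean()  # remove the token's global frequency bias
    no_cal = no - no.mean()
    return yes_cal > no_cal  # True where "Yes" wins the comparison

# Toy example: raw scores always favor "Yes" before calibration.
print(calibrate_pairwise_scores([2.0, 1.2, 1.9], [1.0, 1.1, 0.8]))
# -> [ True False  True]
```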
Pointwise Prompting.ItemSingle S: _On a scale from \(0\) to \(10\), the stance of [item] is \(X\)_--In pointwise prompting, the language model ranks one item at a time. If two items are assigned the same rank (i.e., the same candidate token from the list \(X\in\{0,1,2,\dots,10\}\)), we break the tie by sorting tokens by their scores.
Listwise Prompting.ItemList L: _optional: context. Order by stance. Options: "A" [item A], "B" [item B]... The correct ordering is: \(X\)_--For listwise prompting, we apply a step-wise approach: we let the model select the highest-scoring item from the list of candidates \(X\in\{A,B,...\}\), remove this token from the list, and append it to the prompt. We repeat the process until the candidate list is exhausted. Importantly, the ordering of the candidate options in the prompt poses a "positional bias" (Han et al., 2023; Wang et al., 2023). Therefore, we randomly shuffle the ordering of the options and repeat the listwise prompting multiple times.
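The step-wise decoding loop with option shuffling can be sketched as follows; `score_fn` is a hypothetical stand-in for a constrained language-model call and not part of any specific library:

```python
import random
from collections import defaultdict

def stepwise_listwise_ranking(score_fn, options, n_shuffles=4, seed=0):
    """Greedy listwise decoding with option shuffling.

    `score_fn(remaining, chosen)` returns a dict mapping each remaining
    option letter to a score, given the options already appended to the
    prompt; it stands in for a constrained language-model call.
    """
    rng = random.Random(seed)
    position_sum = defaultdict(float)
    for _ in range(n_shuffles):
        remaining = list(options)
        rng.shuffle(remaining)  # mitigate positional bias
        chosen = []
        while remaining:
            scores = score_fn(remaining, chosen)
            best = max(remaining, key=scores.__getitem__)
            chosen.append(best)
            remaining.remove(best)
        for pos, opt in enumerate(chosen):
            position_sum[opt] += pos
    # Lower average position over shuffles means ranked higher.
    return sorted(options, key=lambda opt: position_sum[opt])
```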
## 3 Unsupervised Probing for Rankings
Querying a language model's knowledge via prompting, we limit ourselves to prompt design and evaluating token scores. In contrast, probing accesses the information contained within a language model more directly by operating on latent vector representations. Conventionally, probing involves training a "diagnostic classifier" to map the vector representations of an utterance to a target label of interest (e.g., tense, gender bias) in a supervised fashion (Pimentel et al., 2022). The goal typically is to measure what information is contained within a language model. While the motivation of this work is closely related, we focus on an unsupervised probing variant and consider supervised probing only as a performance upper bound for validation purposes in §4.5 and §5.
Contrast-Consistent Search (CCS).Burns et al. (2023) propose Contrast-Consistent Search (CCS), an unsupervised probing method which seeks to train a probe to satisfy logical constraints on the model's activations. Instead of labels, CCS requires paired prompts in the form of yes-no questions:
\[x_{i}^{+} =\text{``Are elephants mammals? Yes''} \tag{1}\] \[x_{i}^{-} =\text{``Are elephants mammals? No''}\]
Both statements \(x_{i}^{+}\) and \(x_{i}^{-}\) are fed into a language model and the activations of the model's last hidden layer corresponding to the "Yes" and "No" token, \(\mathbf{x}_{i}^{+}\) and \(\mathbf{x}_{i}^{-}\) (bolded), are considered in subsequent steps. First, the vector representations \(\mathbf{x}_{i}^{+}\) and \(\mathbf{x}_{i}^{-}\) from different yes-no questions have to be Z-score normalized to ensure they no longer form two distinct clusters of all "Yes" and "No" tokens. Next, the paired vectors are projected to a score value \(s_{i}\) via the probe \(f_{\theta}(\mathbf{x}_{i})=\sigma(\mathbf{\theta}^{T}\mathbf{x}_{i}+b)\), which is trained using the origCCS loss objective:
\[\text{origCCS}=\overbrace{\left(f_{\theta}(\mathbf{x}_{i}^{+})-\left(1-f_{\theta}(\mathbf{x}_{i}^{-})\right)\right)^{2}}^{consistency}+\underbrace{\min\left(f_{\theta}(\mathbf{x}_{i}^{+}),f_{\theta}(\mathbf{x}_{i}^{-})\right)^{2}}_{confidence} \tag{2}\]
origCCS comprises two terms: the _consistency_ term encourages \(f_{\theta}(\mathbf{x}_{i}^{+})\) and \(f_{\theta}(\mathbf{x}_{i}^{-})\) to sum up to \(1\). The _confidence_ term pushes the scalars away from a deficient \(f_{\theta}(\mathbf{x}_{i}^{+})=f_{\theta}(\mathbf{x}_{i}^{-})=0.5\) solution, and instead encourages one to be close to \(0\) and the other to be close to \(1\). This means the origCCS objective promotes mapping true and false statements to either \(0\) or \(1\) consistently when the probe is trained on multiple yes-no questions.
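A compact PyTorch rendering of Eq. (2) might look as follows; the probe architecture (one affine map plus a sigmoid) follows the text, while the tensor shapes and hidden size of 768 are illustrative assumptions:

```python
import torch

def orig_ccs_loss(p_pos, p_neg):
    """origCCS of Eq. (2): `p_pos`, `p_neg` are probe outputs
    f(x+) and f(x-) in [0, 1] for a batch of contrast pairs."""
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

# Probe: one affine map plus a sigmoid, as in the text.
probe = torch.nn.Sequential(torch.nn.Linear(768, 1), torch.nn.Sigmoid())
x_pos, x_neg = torch.randn(56, 768), torch.randn(56, 768)  # toy activations
loss = orig_ccs_loss(probe(x_pos).squeeze(-1), probe(x_neg).squeeze(-1))
loss.backward()
```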
| approach | prompt type | emb calls | loss / model | datapoints |
| --- | --- | --- | --- | --- |
| probing | ItemPair P | \(\mathcal{O}(N^{2})\) | origCCS | \(\mathcal{O}(N^{2})\) |
| probing | ItemSingle S | \(\mathcal{O}(N)\) | origCCS | \(\mathcal{O}(N^{2})\) |
| probing | ItemSingle S | \(\mathcal{O}(N)\) | MarginCCR | \(\mathcal{O}(N^{2})\) |
| probing | ItemSingle S | \(\mathcal{O}(N)\) | TripletCCR | \(\mathcal{O}(N^{3})\) |
| probing | ItemSingle S | \(\mathcal{O}(N)\) | OrdRegCCR | \(\mathcal{O}(N)\) |
| prompting | ItemPair P | \(\mathcal{O}(N^{2})\) | MLM / causal | \(\mathcal{O}(N)\) |
| prompting | ItemSingle S | \(\mathcal{O}(N)\) | MLM / causal | \(\mathcal{O}(N)\) |
| prompting | ItemList L | \(\mathcal{O}(N)\) | MLM / causal | \(\mathcal{O}(1)\) |

Table 2: Complexity of each approach as a factor of the number of items \(N\) per ranking task. We distinguish between the number of required calls of an “embedding function” (i.e., a language model) and the number of resulting data points to be considered in a subsequent loss objective. The asymptotic complexity of permutations and combinations is both \(\mathcal{O}(N^{2})\).
**From Yes-No Questions to Rankings.** On a more abstract level, origCCS relies on logical constraints to identify a true-false mapping in the models' activations. We argue that ranking properties can similarly be expressed as logical constraints which are discernible by a probing model. In fact, the pairing of yes-no statements in Eq. (1) resembles the ItemPair prompt type presented in Table 1. However, instead of true-false poles, ItemPair expresses an ordering relationship.
One advantage of ranking tasks is that we can source many pairwise comparisons from a single ranking task, which reduces the need for a training set of different yes-no questions. The original CCS paper showed that a training set of as few as \(8\) pairwise comparisons can be enough for good test set performance. A ranking task of \(8\) items allows for \(28\) comparisons when considering all pairwise combinations, and even \(56\) comparisons when considering all pairwise permutations.
We adapt binary CCS to CCR by gradually modifying three components of the original method: the prompt design, the loss function, and the probing model. In §3.1, we start by changing the binary prompt to the ItemPair prompt type. Next, we explore pointwise CCR probing in §3.2 and modify the prompt type and loss function. Finally, in §3.3, we alter prompt type, loss function, and probe model altogether to propose a transitivity-consistent listwise approach.
### Pairwise CCR Probing
Pairwise CCR probing for rankings is straightforward as we only need to change the binary prompt in Eq. (1) to the ItemPair P prompt type (Table 1), but apply the original origCCS objective (Eq. (2)), which we abbreviate as "origCCS (P)".
### Pointwise CCR Probing
We observe several methodological shortcomings of the pairwise CCR probing approach based on origCCS that we address in the following. We start with the observation that it is computationally expensive to "embed" all pairwise item permutations as depicted in Table 2. Instead, we propose to "embed" each item individually and to pair their representations in the subsequent loss objective. To this end, we consider the ItemSingle S prompt type for CCR probing which requires fewer "calls" of a language model:
\[x_{n}^{A} =\text{``The size of country A is [MASK]''} \tag{3}\] \[x_{n}^{B} =\text{``The size of country B is [MASK]''}\] \[x_{n}^{C} =\text{``The size of country C is [MASK]''}\] \[\ldots\]
In the original CCS approach, one data point \(i\) is given by a binary yes-no question. Adapted to ranking, we denote a ranking task with \(i\) and index its \(N\) items with \(n\). Since we never compare items between different ranking tasks, we omit the \(i\) index for simplicity. Now, the probing model \(f_{\theta}\) assigns a ranking score \(s_{n}=\sigma(\mathbf{\theta}^{T}\mathbf{x}_{n}+b)\) directly to each item \(x_{n}\). The scores \(s_{n}\) can then be paired with the origCCS objective resulting in "origCCS (S)".
However, the origCCS loss enforces a hard binary decision, while an important property of rankings is that the distances between items do not have unit length. This "ordinal property" is typically reflected by some notion of "margin" in existing ranking objectives such as the Max-Margin and Triplet Loss. To incorporate this, we propose the MarginCCR loss, a modification of the well-known Max-Margin loss:
\[\min\bigg{(} \max\Big{(}0,\big{(}f_{\theta}(\mathbf{x}_{n}^{A})-f_{\theta}(\mathbf{x}_ {n}^{B})\big{)}+m\Big{)}, \tag{4}\] \[\max\Big{(}0,\big{(}f_{\theta}(\mathbf{x}_{n}^{B})-f_{\theta}(\mathbf{x} _{n}^{A})\big{)}+m\Big{)}\bigg{)}\]
MarginCCR enforces that \(x_{n}^{A}\) ranks higher or lower than \(x_{n}^{B}\) by at least a margin \(m\), which can be seen as a _confidence_ property. However, since there are no labels, the probe has to figure out whether scoring \(x_{n}^{A}\) higher or lower than \(x_{n}^{B}\) yields better _consistency_ and reduces the loss across all item pair permutations.
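A possible PyTorch sketch of Eq. (4); the margin value and batch layout are our assumptions:

```python
import torch

def margin_ccr_loss(s_a, s_b, m=0.1):
    """MarginCCR of Eq. (4) for pointwise scores s_a = f(x^A),
    s_b = f(x^B) over a batch of item pairs from one ranking task."""
    zero = torch.zeros_like(s_a)
    # Zero when s_b exceeds s_a by at least the margin m, and vice versa.
    a_below_b = torch.maximum(zero, (s_a - s_b) + m)
    b_below_a = torch.maximum(zero, (s_b - s_a) + m)
    # The min lets the probe choose either direction, as long as it is
    # confident (by margin m) and consistent across all pairs.
    return torch.minimum(a_below_b, b_below_a).mean()
```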
In a similar style, we can adapt the popular Triplet Loss to TripletCCR. To simplify notation, we denote the distance \(|f_{\theta}(\mathbf{x}_{n}^{A})-f_{\theta}(\mathbf{x}_{n}^{B})|\) between two items \(x_{n}^{A}\) and \(x_{n}^{B}\) as \(d(x_{n}^{A},x_{n}^{B})\) and compute TripletCCR according to:
\[\min\Big{(}\max\big{(}0,d(x_{n}^{C},x_{n}^{A})-d(x_{n}^{C},x_{n}^{B})+m\big{)},\max\big{(}0,d(x_{n}^{C},x_{n}^{B})-d(x_{n}^{C},x_{n}^{A})+m\big{)}\Big{)}\]
In simple words, the objective forces the "positive item" to be closer to a third item \(x_{n}^{C}\), referred to as "anchor", than a "negative item", plus a _confidence_ margin \(m\). Yet, this is enforced without knowing which item should be labeled as "positive" and "negative". Instead, the probe is trained to make
this decision by being _consistent_ across all items in a given ranking task. In the following, we refer to both presented methods as "MarginCCR (S)" and "TripletCCR (S)" and provide further technical details on batching and normalization in App. A.
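TripletCCR admits an analogous sketch; as above, distances are absolute differences of probe scores, and the margin value is an assumed hyperparameter:

```python
import torch

def triplet_ccr_loss(s_a, s_b, s_c, m=0.1):
    """TripletCCR with anchor scores s_c = f(x^C); d(.,.) is the
    absolute difference of probe scores, as in the text."""
    d_ca = (s_c - s_a).abs()
    d_cb = (s_c - s_b).abs()
    zero = torch.zeros_like(s_a)
    # Either A or B may play the "positive" role; the min lets the
    # probe pick the assignment that is consistent across all triplets.
    a_positive = torch.maximum(zero, d_ca - d_cb + m)
    b_positive = torch.maximum(zero, d_cb - d_ca + m)
    return torch.minimum(a_positive, b_positive).mean()
```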
### Listwise CCR Probing
Pairwise and pointwise methods are not guaranteed to yield transitivity-consistent rankings: item A may win over B, B may win over C, yet C may win over A, creating a circular ordering (Cao et al., 2007). To tackle this shortcoming, we design a listwise probing method with a loss objective that considers all items at the same time. Various existing ordinal regression methods are based on binary classifiers (Li and Lin, 2006; Niu et al., 2016; Shi et al., 2021), making them a natural candidate for a CCS-style objective that does not require additional parameters. These methods often rely on the extended binary representation (Li and Lin, 2006) of ordered classes, where, for instance, rank \(k=3\) out of \(K=4\) would be represented as \([1,1,1,0]\), as illustrated on the right side of Fig. 2.
We first obtain a vector representation \(\mathbf{x}_{n}\) of item \(x_{n}\) using the ItemSingle prompt type. Next, we consider the Consistent Rank Logits (CORAL) model (Cao et al., 2020), which offers guarantees for rank-monotonicity by training a probe \(f_{\theta,k}\) to map \(\mathbf{x}_{n}\) to one of \(K\) ranks. The probe consists of the weight vector \(\mathbf{\theta}^{T}\) and \(K\) separate bias terms \(b_{k}\) to assign a rank score \(s_{n}^{k}\) according to \(s_{n}^{k}=f_{\theta,k}(\mathbf{x}_{n})=\sigma(\mathbf{\theta}^{T}\mathbf{x}_{n}+b_{k})\). In essence, for each item \(n\), the CORAL probe outputs a vector of \(K\) scores. Scores are monotonically decreasing because the bias terms \(b_{k}\) are clipped to be monotonically decreasing as \(k\) grows larger. Predicting a rank in the extended binary representation thus comes down to \(\hat{k}=1+\sum_{k=1}^{K}\mathds{1}[s_{n}^{k}>0.5]\).
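One way to realize such a probe is sketched below; note that we enforce decreasing biases by sorting, which is one possible reading of the "clipping" mentioned above rather than a definitive implementation:

```python
import torch

class CoralProbe(torch.nn.Module):
    """CORAL-style probe: one shared weight vector, K ordered biases."""

    def __init__(self, dim, num_ranks):
        super().__init__()
        self.theta = torch.nn.Linear(dim, 1, bias=False)
        self.bias = torch.nn.Parameter(torch.zeros(num_ranks))

    def forward(self, x):
        # Sorting the biases in decreasing order makes the K scores
        # monotonically decreasing in k (our stand-in for clipping).
        b = torch.sort(self.bias, descending=True).values
        return torch.sigmoid(self.theta(x) + b)  # shape (N, K)

def predict_rank(scores):
    """Extended binary decoding: k_hat = 1 + #(s_n^k > 0.5)."""
    return 1 + (scores > 0.5).sum(dim=-1)

probe = CoralProbe(dim=768, num_ranks=4)
print(predict_rank(probe(torch.randn(4, 768))))
```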
In a listwise approach, all \(N\) items are to be jointly considered and assigned a rank \(k\).2 The predicted scores can thus be represented as a square \(N\times K\) matrix as displayed in Fig. 2. We enforce a unique rank assignment via an unsupervised ordinal regression objective, which we term OrdRegCCR:
Footnote 2: We note that the number of ranks \(K\) equals the number of items \(N\), but keep both letters for notation simplicity.
\[\overbrace{\sum_{k}^{K-1}\Big{(}\big{(}K-(k-1)\big{)}-\sum_{n}^{N }s_{n}^{k}\Big{)}}^{consistency}+\] \[\underbrace{\sum_{n}^{N}\sum_{k}^{K}\min\Big{(}s_{n}^{k},\big{(}1 -s_{n}^{k}\big{)}\Big{)}}_{confidence} \tag{5}\]
For a ranking of \(K=4\) items, the _consistency_ term encourages each column to sum up to \(4\), \(3\),..., \(1\) respectively, as visualized in Fig. 2. Yet, to avoid a deficient solution, the _confidence_ term enforces each score towards either \(0\) or \(1\).
When applying this "OrdRegCCR (S)" approach, there are two difficulties to overcome. First, we require the number of parameters of the probing model to be the same across different approaches to ensure a fair comparison. Second, we prefer training a probing model whose parameters are independent of the number of items of a given ranking task. To mitigate both issues, we parametrize the \(K\) bias terms via a polynomial function. In turn, this function is parametrized by only two parameters, \(\alpha\) and \(\beta\), which are optimized during training.
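A sketch of Eq. (5) together with a two-parameter bias parametrization; we square the per-column consistency residual to keep the sketch's loss bounded below (Eq. (5) writes the raw difference), and the power-law form of the bias function is our guess, not the paper's exact choice:

```python
import torch

def ordreg_ccr_loss(scores):
    """OrdRegCCR sketch for an (N, K) score matrix with N == K.

    Column k should sum to K - (k - 1), i.e. K, K-1, ..., 1, and every
    score should sit near 0 or 1."""
    n, k = scores.shape
    targets = torch.arange(k, 0, -1, dtype=scores.dtype)  # K, K-1, ..., 1
    consistency = ((targets - scores.sum(dim=0)) ** 2).sum()
    confidence = torch.minimum(scores, 1.0 - scores).sum()
    return consistency + confidence

def polynomial_biases(alpha, beta, num_ranks):
    """Parametrize the K bias terms with two scalars; the monotone
    power-law form b_k = -alpha * k**beta is an assumption."""
    k = torch.arange(1, num_ranks + 1, dtype=torch.float32)
    return -alpha * k ** beta
```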
## 4 Experimental Design
### Language Models
We evaluate the prompting and CCR probing methods on an encoder-only and a decoder-only model. For the encoder-only model, we choose deberta-v1-base (He et al., 2021), which has \(100\) million parameters and is the best-performing encoder-only model for answering yes-no questions in the original CCS paper. For the decoder-only model, we consider GPT-2 (small) (Jiang et al., 2021), which has \(124\) million parameters. We compare these models against prompting results achieved with a much bigger, \(7\) billion parameter MPT-7B (MosaicML, 2023) model.
Figure 2: We translate the two aspects of _consistency_ and _confidence_ from the binary CCS objective to an ordinal multi-class setting resulting in OrdRegCCR.
### Ranking Task Datasets
We consider "fact-based" and "context-based" ranking tasks. Solving "fact-based" ranking tasks depends mostly on world knowledge, while "context-based" tasks require reasoning over information provided in the prompt. All datasets, displayed in Table 3, are publicly available and we discard all ranking tasks with fewer than four items and those that include ties between items.
Fact-based Ranking Tasks.SynthFacts: We manually conceive two synthetic ranking tasks with six items each. One task asks to rank the adjectives "horrible, bad, okay, good, great, awesome" based on sentiment, and the other to rank numbers based on their cardinality. ScalarAdj: We consider rankings of scalar adjectives based on de Melo and Bansal (2013) and curated by Gari Soler and Apidianaki (2020), i.e., adjectives that are ordered by their semantic intensity, e.g., "small, smaller, tiny". WikiLists: We manually assemble \(14\) rankings that pertain to constant (e.g., countries ordered by size) or changing (e.g., countries ordered by GDP) facts using Wikipedia as a reference.
In-Context Ranking Tasks.SynthContext: Analogously to SynthFacts, we design two synthetic in-context ranking tasks. The first concerns ranking colors by popularity, where the popularity is unambiguously stated in a prepended context. The second task is about ordering entities by their wealth, as described in context. EntSalience: As another in-context ranking task, we consider the Salient Entity Linking (SEL) task (Trani et al., 2016). Given a news passage, we ask the model to rank the mentioned entities by salience.
### Evaluation Metrics
We are considering pairwise, pointwise, and listwise approaches as displayed in Table 1. This means we need to convert pairwise results to a listwise ranking and vice versa and consider evaluation metrics for both pairwise and listwise results. Following the original CCS method, our evaluation is direction-invariant as further discussed in Appendix A. In essence, the ranking \(A>B>C\) is considered the same as \(C>B>A\).
Pairwise Metric and Ranking Conversion.We rely on accuracy to evaluate pairwise comparisons. To account for direction-invariance, we reverse the predicted order if the reverse order yields better results. This leads to a baseline accuracy of \(50\)%. For aggregating pairwise results into a listwise ranking, we follow Qin et al. (2023): if an item wins a pairwise comparison it gains a point and points are summed to obtain a ranking. If the sum of wins is tied between items, we break the tie by considering the sum of the items' scores for all comparisons.
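This win-counting aggregation with a score-based tie-break can be sketched as follows (the data layout is ours):

```python
from collections import defaultdict

def aggregate_pairwise(comparisons):
    """Convert pairwise results into a ranking.

    `comparisons` maps an item pair (a, b) to per-item scores
    (score_a, score_b); the higher-scoring item wins a point. Ties in
    points are broken by each item's total score over all comparisons.
    """
    points = defaultdict(int)
    total = defaultdict(float)
    for (a, b), (sa, sb) in comparisons.items():
        points[a if sa > sb else b] += 1
        total[a] += sa
        total[b] += sb
    items = set(points) | set(total)
    return sorted(items, key=lambda x: (points[x], total[x]), reverse=True)

ranking = aggregate_pairwise({
    ("Russia", "Canada"): (0.9, 0.4),
    ("Russia", "China"): (0.8, 0.5),
    ("Canada", "China"): (0.7, 0.6),
})
print(ranking)  # ['Russia', 'Canada', 'China']
```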
Ranking Metric and Pairwise Conversion.To evaluate rankings, we consider Kendall's tau correlation, which is independent of the number of items per ranking task and the directionality of the ordering. These desiderata are not met by other ranking and retrieval metrics such as the Normalized Discounted Cumulative Gain (NDCG) (Wang et al., 2013). We derive pairwise comparisons from a ranking by permuting any two items and labeling the pairs based on their position in the ranking.
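The direction-invariant evaluation can be realized by taking the absolute value of Kendall's tau, e.g.:

```python
from scipy.stats import kendalltau

def direction_invariant_tau(predicted, gold):
    """Kendall's tau between two rankings, ignoring direction:
    A>B>C is scored like C>B>A. Rankings are lists of item positions."""
    tau, _ = kendalltau(predicted, gold)
    return abs(tau)

print(direction_invariant_tau([1, 2, 3, 4], [4, 3, 2, 1]))  # 1.0
```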
| | dataset | tasks | items | ranking example |
| --- | --- | --- | --- | --- |
| fact-based | SynthFacts | 2 | 6.00 | criterion: order the numbers by cardinality; items: {1, 10, 100, 1000, ...} |
| fact-based | ScalarAdj | 38 | 4.47 | criterion: order the adjectives by semantic intensity; items: {small, smaller, tiny, microscopic, ...} |
| fact-based | WikiLists | 14 | 14.43 | criterion: order the countries by size; items: {Russia, Canada, China, United States, ...} |
| context-based | SynthContext | 2 | 6.00 | context: “Tom owns $100, Jenny has $1000, ...”; criterion: order the entities by wealth; items: {Tom, Jenny, Emily, Sam, ...} |
| context-based | EntSalience | 362 | 7.5 | context: “The UN secretary met with climate activists...”; criterion: order the entities by salience in the given text; items: {UN secretary, climate activists, US government, ...} |

Table 3: Overview of datasets, their number of ranking tasks, and the average number of items per task. We consider datasets that require knowledge of facts (fact-based) and in-context reasoning (context-based).
### Supervised Ceilings
The prompting as well as CCR probing approaches can all be applied zero-shot without a train-test split, which is infeasible for supervised probing. As an alternative, we simply use the same ranking task for training and testing a supervised probe. The performance of this probe indicates a performance upper bound of what can possibly be extracted given the difficulty of a task and the prompt design. For instance, if a prompt is entirely random, a supervised probe trained and tested on the same ranking task would not be able to discriminate between different items. We rely on the unaltered loss functions, e.g., Binary Cross-Entropy instead of origCCS, Max-Margin loss instead of MarginCCR, for training the supervised probes (see Fig. 5 for an overview).
### Results
Fig. 3 presents the mean results and their standard deviation over \(5\) runs, averaged over all datasets containing either fact-based or context-based ranking tasks. All individual results are provided in Fig. 6 from Appendix A. Most importantly, we find that CCR probing outperforms prompting for the same model. Among the CCR probing methods, TripletCCR is the best performing approach across all models and datasets. The orange dashed lines represent the supervised ceilings for each of the CCR probing approaches as motivated in §4.4. Interestingly, Triplet Loss does not have the highest upper bound. Moving from the fact-based to the context-based datasets, CCR probing and prompting performance drops overall, but more for the much smaller encoder-only DeBERTa than for the other models. When considering the listwise metric, our results confirm that listwise prompting is inferior to pairwise and, surprisingly, also to pointwise prompting (Qin et al., 2023; Liusie et al., 2023). However, pairwise methods (P) are also computationally more expensive, making CCR probing even more favorable. For pairwise methods, we observe a discrepancy between the pairwise and listwise results. This stems from the fact that pairwise methods are more fault-tolerant--some of the pairwise comparisons may be erroneous, but, in aggregate, the resulting ranking can still be correct. Similarly, we observe that listwise approaches (L) are generally more volatile, possibly due to more difficult calibration, positional biases (Han et al., 2023; Wang et al., 2023), and low fault-tolerance--listwise approaches directly return a single ranking.
## 5 Discussion
To scrutinize our results, we explore settings with a train-test split, and discuss interpretability considerations of CCR probing.
Figure 3: Pairwise and listwise results of the prompting and CCR probing methods for the DeBERTa, GPT-2, and MPT-7B model, averaged over all fact-based and context-based learning datasets. Results show mean and standard deviation over \(5\) runs. We find that CCR probing outperforms prompting for the same-size model. Among the CCR probing methods, TripletCCR is the best-performing. Orange bars represent ceilings of a supervised probe trained and tested on the same ranking task.
Ranking Direction across Tasks.Instead of training our probes on a single ranking task, we train them on a training set of multiple rankings and evaluate on a held-out set. To this end, we use 4-fold cross-validation, which allows comparing CCR probing against supervised probing in a fair setup. This setup is more similar to the experiments in the original CCS paper (Burns et al., 2023) and thus rests on a similar hypothesis: is there a more universal "ranking direction" in the activations of a language model that holds across ranking tasks? Fig. 5 in Appendix A presents the results of this k-fold validation experiment. First, our probes identify ranking properties that exist across different ranking tasks. This particularly holds for ranking tasks that resemble each other more closely as in ScalarAdj. Second, CCR probing does not fall far behind supervised probing. Since this is especially evident for datasets with fewer ranking tasks, we hypothesize that CCR probing is less likely to overfit and instead exploits general ranking properties.
Interpretability.Besides performance, another argument for CCR probing is control and post-hoc interpretability offered by the parametric probe. For instance, in Fig. 4 we plot the scores \(s_{n}=\sigma(\mathbf{\theta}^{T}\mathbf{x}_{n}+b)\) for each item yielded by the probe trained with TripletCCR. This allows us to inspect the distances between items projected onto the latent ranking scale.
## 6 Related Work
Pairwise and listwise prompting has been explored on different tasks (Ma et al., 2023; Lee and Lee, 2023; Liusie et al., 2023), but is most frequently focused on document retrieval (Ferraretto et al., 2023). Pairwise (RankNet; Burges et al., 2005) and listwise (ListNet; Cao et al., 2007) ranking approaches have also been compared outside of language model prompting. We additionally explore pointwise prompting and find that, contrary to expectations, pointwise often outperforms listwise prompting. To move beyond prompting, we propose an expansion of the Contrast-Consistent Search (CCS) method (Burns et al., 2023) to rankings. Recent work explores calibrated versions of CCS (Tao et al., 2023) and adapting CCS to order-invariant, multi-class settings (Zancaneli et al., 2023). Our CCR probing approach is strongly influenced by unsupervised ranking (Frydenlund et al., 2022) and probing of semantic axes (Gari Soler and Apidianaki, 2020; Li et al., 2022; Engler et al., 2022; Stoehr et al., 2023).
## 7 Conclusion
We analyze the ranking capabilities of language models by comparing pairwise, pointwise, and listwise prompting techniques and find that the latter is most susceptible to mistakes. We then propose an unsupervised probing method termed Contrast-Consistent Ranking (CCR) and find that, for the same model, CCR probing improves upon prompting. CCR learns an affine mapping between a language model's activations and a model-inherent ranking direction. On a more abstract level, we relate multiple language model queries through a surrogate model that projects the language model's outputs to a shared ranking scale.
The direction-invariance of both CCS and CCR poses a potential limitation that may be lifted by future work as further outlined in Appendix A. In particular for pointwise and listwise prompting, omitting the direction of a desired ranking can hurt performance. The language model may be confused about whether to rank the highest or lowest item first, leading the items' corresponding scores to cannibalize each other.
Since we do not consider a train-validation-test set split, we refrain from hyperparameter-tuning (e.g., margins, learning rate, sub-batching, probe initialization). However, based on initial prototyping, we see performance boosts for CCR when tuning these hyperparameters. Other promising directions for future work are testing CCR probing with larger language models, lifting the direction-invariance of both CCS and CCR (Appendix A), and experimenting with more expressive probes.
Figure 4: CCR probing offers interpretability benefits such as the post-hoc analysis of the probe’s parameters.
## Acknowledgments
We would like to thank Ozan Irsoy, Atharva Tendle, Faner Lin, Umut Topkara, Ziyun Zhang, Ashim Gupta, Suchin Gururangan, Nikita Soni and the entire Bloomberg AI Group for valuable discussions and feedback on the manuscript. Moreover, we would like to express special thanks to Kevin Du and Luca Beurer-Kellner from ETH Zurich for early-stage discussions and initial prototyping.
|
2309.06749 | A quorum sensing active matter in a confined geometry | Inspired by the problem of biofilm growth, we numerically investigate
clustering in a two-dimensional suspension of active (Janus) particles of
finite size confined in a circular cavity. Their dynamics is regulated by a
non-reciprocal mechanism that causes them to switch from active to passive
above a certain threshold of the perceived near-neighbor density (quorum
sensing). A variety of cluster phases -- glassy, solid (hexatic) and liquid --
is observed depending on the particle dynamics at the boundary, the quorum
sensing range, and the level of noise.
###### Abstract
Inspired by the problem of biofilm growth, we numerically investigate clustering in a two-dimensional suspension of active (Janus) particles of finite size confined in a circular cavity. Their dynamics is regulated by a non-reciprocal mechanism that causes them to switch from active to passive above a certain threshold of the perceived near-neighbor density (_quorum sensing_). A variety of cluster phases - glassy, solid (hexatic) and liquid - is observed depending on the particle dynamics at the boundary, the quorum sensing range, and the level of noise.
## I Introduction
Bacteria are capable of adjusting their motility to form large colonies, like biofilms. While motile bacteria have the advantage to swim efficiently towards food sources, biofilms aggregates are able to resist environmental threats such as antibacterial substances. Understanding the basic physical mechanisms of biofilm growth is a topic of ongoing research by many teams worldwide. Recent studies suggest that a motility-based clustering phenomenon is involved in the formation of bacterial swarms [1] and in the transition from bacterial swarms to biofilms [2]. Moreover, it is demonstrated that synthetic active materials, such as Janus colloids, can undergo motility-induced aggregation, not only via high-density steric mechanisms [3], or lower density mutual interactions [4], but also by simply adjusting their velocity according to the direction [5] and the local density of their peers [6], largely insensitive to pair interactions. These situations have been modeled theoretically using both particle-based models and field theoretical approaches [7]. In this context, it was shown that the active systems may exhibit motility-induced phase separation (MIPS), whereby self-propelled particles, with only repulsive interactions, form aggregates by reducing their swimming speed in response to a local density value greater than a given threshold (a mechanism called _quorum sensing_[8; 9]).
In its simplest form, MIPS has been shown to be analogous to a gas-liquid phase separation. However, recent non-equilibrium field theories have predicted intriguing behaviors, like microphase separation [10] and an active foam phase with slowly coalescing bubbles [11]. In fact, our understanding of how the microscopic details of the single-particle dynamics lead to different collective behaviors is presently far from satisfactory. Finally, it has been shown that motile _E. coli_ bacteria spontaneously aggregate within minutes when subject to controlled convective flows produced by a microfluidic device [12]. It is still unclear, however, which physical ingredients are required for a minimal active-particles model to reproduce such a behavior [13; 14].
On a closer look, it is apparent that, while the emergence of steady aggregates of motile particles is largely driven by the nature of their mutual interactions, which ultimately influence their motility, the properties of such aggregates are strongly determined by the combined action of spatial confinement and fluctuations of both the suspension fluid and the self-propulsion mechanism. In this Letter we revisit the model of non-reciprocal particle interaction proposed by Bechinger and coworkers [6] (see also Ref. [5]), by investigating the effects of the particle dynamics against the container walls at different noise levels. Contact and far-field reciprocal (pair) interactions have been neglected; no alignment mechanism has been invoked to trigger particle aggregation: Quorum sensing under spatial confinement is the one mechanism considered here. As a result, we observed a variety of cluster phases, glassy, solid (hexatic) and liquid, and determined the relevant model phase diagram.
## II Model
_Single particle dynamics_. We considered the simplest realization of a synthetic microswimmer, namely a two-dimensional (2D) Janus particle (JP) [15]. An active JP of label \(i\) gets a continuous push from the suspension fluid, which in the overdamped regime amounts to a self-propulsion velocity, \(\mathbf{v}_{0i}\), with constant modulus, \(v_{0}\), and orientation, \(\theta_{i}\), fluctuating with time constant, \(\tau_{\theta}\), under the combined action of thermal noise and the rotational fluctuations intrinsic to the specific self-propulsion mechanism. In 2D, its bulk dynamics obeys the Langevin equations [16]
\[\dot{x}_{i} = v_{0}\cos\theta_{i}+\xi_{xi}(t) \tag{1}\] \[\dot{y}_{i} = v_{0}\sin\theta_{i}+\xi_{yi}(t)\] \[\dot{\theta}_{i} = \xi_{\theta i}(t),\]
where \({\bf r}_{i}=(x_{i},y_{i})\) are the coordinates of the particle center of mass subject to the Gaussian noises \(\xi_{pi}(t)\), with \(\langle\xi_{qi}(t)\rangle=0\) and \(\langle\xi_{qi}(t)\xi_{pi}(0)\rangle=2D_{0}\delta_{qp}\delta(t)\) for \(q,p=x,y\), modeling the equilibrium thermal fluctuations in the suspension fluid. The orientational fluctuations of the propulsion velocity are modeled by the Gaussian noise \(\xi_{\theta i}(t)\) with \(\langle\xi_{\theta i}(t)\rangle=0\) and \(\langle\xi_{\theta i}(t)\xi_{\theta i}(0)\rangle=2D_{\theta}\delta(t)\), where \(D_{\theta}=1/\tau_{\theta}\) is the relaxation rate of the self-propulsion velocity.
The simplifications introduced in Eq. (1) are not limited to the reduced dimensionality of the system. All noise sources have been treated as independent, although, strictly speaking, spatial and orientational fluctuations are statistically correlated to some degree. Moreover, we ignored hydrodynamic effects which may favor the clustering of active particles at high packing fractions. However, we made sure that the parameters used in our simulations are experimentally accessible, as apparent on expressing times in seconds and lengths in microns. The stochastic differential Eqs. (1) were numerically integrated by means of a standard Euler-Maruyama scheme [17]. To ensure numerical stability, the numerical integrations have been performed using an appropriately short time step, \(10^{-3}\).
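For illustration, a single Euler-Maruyama update of Eqs. (1) could read as follows (a bare-bones sketch: hard-core repulsion, quorum sensing, and the boundary rule are handled separately; parameter values follow the figure captions):

```python
import numpy as np

def euler_maruyama_step(x, y, theta, v0=0.5, D0=0.01, Dtheta=0.001,
                        dt=1e-3, rng=None):
    """One integration step of Eqs. (1) for arrays of N particles."""
    rng = rng or np.random.default_rng()
    n = x.shape[0]
    x = x + v0 * np.cos(theta) * dt + np.sqrt(2 * D0 * dt) * rng.standard_normal(n)
    y = y + v0 * np.sin(theta) * dt + np.sqrt(2 * D0 * dt) * rng.standard_normal(n)
    theta = theta + np.sqrt(2 * Dtheta * dt) * rng.standard_normal(n)
    return x, y, theta
```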
_Boundary conditions_. In this study, the JPs are confined to a restricted area, say, a circular cavity of radius \(R\) [Fig. 1(a)]. One can think of motile bacteria spreading on a Petri dish. Equations (1) still apply away from the walls; we only need to set a prescription to treat the particle collisions with the boundaries. Following Refs. [18; 19], we assume that, upon hitting it, a JP is captured by the wall and immediately re-injected into the cavity, at an angle \(\phi\) with respect to the radius (i.e., the perpendicular) through the collision point [Fig. 1(a)]. A finite (short) trapping time does not affect the conclusions of the present work. A commonly accepted distribution for such a boundary scattering angle is [18]
\[p(\phi)=2\exp(\lambda\cos\phi)/[\pi I_{0}(\lambda)] \tag{2}\]
where \(I_{0}\) is the modified Bessel function of the first kind and the parameter \(\lambda\) depends on the temperature and the physio-chemical properties of the particle and the cavity wall. Notice that in the limit \(\lambda\rightarrow\infty\) we recover the reflecting boundary conditions adopted in Ref. [6], namely, \(p(\phi)=\delta(\phi)\). We considered other cavity geometries as well; an example is discussed at the bottom of the forthcoming section.
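Sampling the boundary scattering angle from Eq. (2) can be done, e.g., by rejection sampling; restricting \(\phi\) to \((-\pi/2,\pi/2)\) is our reading of the re-injection rule, since the re-injected particle must point back into the cavity:

```python
import numpy as np

def sample_scattering_angle(lam, rng=None):
    """Draw phi from p(phi) proportional to exp(lam * cos(phi))."""
    rng = rng or np.random.default_rng()
    while True:
        phi = rng.uniform(-np.pi / 2, np.pi / 2)
        # Accept with probability exp(lam*cos(phi)) / exp(lam) <= 1.
        if rng.uniform() < np.exp(lam * (np.cos(phi) - 1.0)):
            return phi

# lam -> infinity recovers re-injection along the radius, phi = 0.
print(sample_scattering_angle(100.0))
```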
_Non-reciprocal interaction (sensing)_. When \(N\) identical, independent active particles of Eq. (1) are confined into the cavity, interactions among them cannot be neglected. In our simulations we consider only two kinds of interactions: _(i) hard-core repulsion_, whereby the particles are modeled as hard discs of radius \(r_{0}\). Further
Figure 1: Schematics of our model. (a) Illustration of the non-reciprocal interaction mechanism for \(N\) active Janus particles with visual half-angle \(\alpha\) and horizon \(d_{c}\). The distribution, \(p(\phi)\), of the boundary scattering angle, Eq. (2), is plotted for \(\lambda=100\): the distance of a point on the chart line from the scattering point on the boundary is proportional to the probability that the particle gets scattered in the direction of that point. (b) Quorum sensing protocol: for values of the sensing function above the threshold \(P_{0}(\alpha)\), Eq. (4), the particle turns from active to passive. (c) Example of passive cluster formation for \(\lambda=100\), \(\alpha=(7/8)\pi\), \(d_{c}=16\), \(D_{\theta}=0.001\), \(v_{0}=0.5\), \(R=45\), \(r_{0}=1\), \(N=304\), and \(D_{0}=0.01\) (snapshot taken at \(t=2\cdot 10^{4}\)). Active/passive particles are represented by red/blue circles. (d) Passive (blue), active (red) and total (black) radial particle distributions for the parameters in (c). These distributions have been averaged over time (10 snapshots taken every 1,000 time units starting at \(t=10^{4}\)) and initial conditions (200 realizations).
Figure 2: Phase diagram in the space parameter \((\lambda,d_{c})\). Four distinct regions are distinguishable, namely, I (below the red curve): all particles remain active; II: a liquid condensate of passive particle coexists with a gas of active particles; III: the passive condensate solidifies into a steady-state hexatic structure; and IV: periodic formation of hexatic passive clusters (see also Fig. 3). Snapshots of the relevant suspension patterns taken at time \(t=2\cdot 10^{4}\) are shown as insets for different \((\lambda,d_{c})\); red and blue circles denote respectively active and passive particles.
reciprocal interactions have been discarded; _(ii) neighbor perception_, a mechanism governing the motility of each particle depending on the spatial distribution of its neighbors. In biological systems this process is mediated by some form of inter-particle communication (mostly chemical in bacteria colonies [8; 9]). On the other hand, the motility of artificial microswimmers grows less efficient with increasing their density [15]. Without entering the details of the specific perception mechanisms, we can define the sensing function of particle \(i\) as follows [6] (see also [20])
\[P_{i}(\alpha)=\sum_{j\in V_{i}^{\alpha}}\frac{1}{2\pi r_{ij}}, \tag{3}\]
where \(r_{ij}\) is the distance between particles \(i\) and \(j\) and \(V_{i}^{\alpha}\) denotes the visual cone of particle \(i\), centered around the direction of its self-propulsion velocity, \(\mathbf{v}_{0i}\), with finite horizon, \(r_{ij}\leq d_{c}\). This means that each particle senses the presence of other particles only within a restricted visual cone and a finite distance, \(d_{c}\). For a uniform active suspension of density, \(\rho_{0}=N/\pi R^{2}\), the sensing function of a particle placed at the center of the cavity reads [5]
\[P_{0}(\alpha)=(\alpha/\pi)\rho_{0}R \tag{4}\]
We assume now that the particle motility is governed by the following simple _quorum sensing protocol_ [Fig. 1(b)],
\[|\mathbf{v}_{0i}|=\begin{cases}v_{0}&P_{i}(\alpha)\leq P_{0}(\alpha)\\ 0&P_{i}(\alpha)>P_{0}(\alpha).\end{cases} \tag{5}\]
Clearly, this form of particle interaction is non-reciprocal, since \(j\) may be perceived by \(i\) and, therefore, influence its dynamics, without being itself affected by the presence of \(i\). The dynamical implications of the non-reciprocal interactions in biological matter are discussed at length by Bechinger and coworkers in Refs. [5; 6]. For an earlier and more elaborated quorum sensing model of synthetic active matter, the reader is referred to Ref. [21]. What matters here, is that for appropriate choices of the horizon range, \(d_{c}\), and the visual angle, \(\alpha\), clustering may occur, as illustrated in Fig. 1(c).
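A direct implementation of Eqs. (3)-(5) for a single particle might look as follows (a sketch; the array layout is ours, and we interpret \(\alpha\) as the visual half-angle, as in Fig. 1(a)):

```python
import numpy as np

def sensing_function(i, pos, theta, alpha, d_c):
    """P_i(alpha) of Eq. (3): sum 1/(2*pi*r_ij) over neighbors j within
    distance d_c and inside the visual cone of half-angle alpha around
    particle i's self-propulsion direction."""
    rel = pos - pos[i]                      # vectors from i to all j
    r = np.linalg.norm(rel, axis=1)
    heading = np.array([np.cos(theta[i]), np.sin(theta[i])])
    cos_ang = rel @ heading / np.where(r > 0, r, 1.0)
    in_cone = (r > 0) & (r <= d_c) & (cos_ang >= np.cos(alpha))
    return np.sum(1.0 / (2 * np.pi * r[in_cone]))

def quorum_speed(P_i, P0, v0=0.5):
    """Quorum sensing protocol of Eq. (5): active below threshold,
    passive above it."""
    return v0 if P_i <= P0 else 0.0
```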
## III Results
The number of tunable parameters of our model is quite large. In our simulations we kept the particle radius, \(r_{0}\), and self-propulsion speed, \(v_{0}\), fixed, which amounted to setting space and time units. The particle number, \(N\), and the cavity radius, \(R\), played no key role as long as the suspension packing fraction, \(\phi_{0}=N(r_{0}/R)^{2}\), was kept sufficiently small (typically \(\phi_{0}<0.2\)), to avoid steric clustering [3]. We remind that the active-passive transition threshold, \(P_{0}(\alpha)\), of Eq. (4) scales like \(N/R\). All remaining parameters, \(D_{0},D_{\theta}=1/\tau_{\theta},\alpha,d_{c}\) and \(\lambda\), were varied to shed light on the underlying collective dynamics.
Our main findings are summarized in Fig. 2 for the optimal case of full visual perception, \(\alpha=\pi\), persistence length, \(l_{\theta}=v_{0}\tau_{\theta}\), much larger than the cavity diameter, and small translational noise, \(D_{0}\) (whereby the time for a particle to diffuse a distance of the order of its diameter is much larger than to self-propel the same distance). The resulting 2D parameter space, \((\lambda,d_{c})\), is traversed by a continuous separatrix curve, \(d_{c}\) vs. \(\lambda\), whereby below (above) it all JPs retain (lose) their active nature. Since the packing fraction of the simulated active suspension is too small to trigger steric clustering, no active aggregates were detected (region I). For small \(\lambda\) values the suspension maintains its initial uniform distribution, whereas at large \(\lambda\), transient, short-lived active clusters form and dissolve (see Fig. 1 of the Supplemental Material [22]).
Above the separatrix curve, the active-passive transitions induced by quorum sensing, Eq. (4), sustain the formation of large clusters, with a core consisting of passive particles. For large \(d_{c}\) values, region II, one typically observes a large passive condensate surrounded by a low-density gas of active particles [see also the active and passive radial distributions of Fig. 1(d)], which bears a certain similarity with the situation analyzed in Ref. [23]. The passive constituents of the condensate keep
Figure 3: Cluster stability. Top panel: the fraction of passive particles, \(N_{p}(t)/N\), vs. \(t\) for different \(d_{c}\) (see legend). The scattering angle distribution here is \(p(\phi)=\delta(\phi)\), which corresponds to the limit \(\lambda\rightarrow\infty\) of Eq. (2). Inset: frequency spectrum of the subtracted function \([N_{p}(t)-N_{p}(\infty)]/N\) in the stationary regime for \(t\in(2\times 10^{3},2\times 10^{4})\). Bottom panel: snapshots showing the aggregation and evaporation of an hexatic cluster for \(d_{c}=10\). In both panels, active/passive particles are denoted by red/blue circles and the remaining simulation parameters are: \(\alpha=\pi\), \(D_{\theta}=0.001\), \(v_{0}=0.5\), \(R=45\), \(r_{0}=1\), \(N=304\), and \(D_{0}=0.01\).
fluctuating subject to thermal noise, as expected in a liquid phase. At large \(\lambda\), when the wall scatters the colliding particles mostly toward the center of the cavity, the clustering mechanism exhibits additional distinct features. Lowering \(d_{c}\), we detected two more regions, III and IV, characterized by very dense clusters made of a passive core surrounded by an active layer; in both, the particles are closely packed into hexatic structures [24]. The particles of the cluster's active layer show higher motility than those in the cluster core, but substantially lower than the surrounding active gas particles. As a major difference, in region III the clusters are stationary in time, whereas in region IV the clusters appear and disappear over time. The time-oscillating clustering process of region IV is further analyzed in Fig. 3. In the top panel there we plotted the fraction of passive particles, \(N_{p}/N\), versus time for \(\lambda\to\infty\). One notices immediately that for \(d_{c}\) values corresponding to regions I-III, this ratio approaches a steady-state value after a transient of the order of the ballistic cavity crossing time, \(R/v_{0}\). In passing, we also remark that \(N_{p}(t\to\infty)/N<1\) for all curves, no matter what \(d_{c}\), which suggests that a gas of active particles is always at work. Vice versa, as anticipated above, for system configurations in region IV, \(N_{p}/N\) appears to execute persistent irregular time oscillations. A spectral analysis of the time-dependent ratio \(N_{p}(t)/N\) confirms that: (i) \(N_{p}(t)\) fluctuates around a stationary asymptotic value \(N_{p}(\infty)\), with \(N_{p}(\infty)/N<1\); (ii) the spectral density of the subtracted ratio, \([N_{p}(t)-N_{p}(\infty)]/N\) (an example is shown in the inset of Fig. 3), peaks around a finite frequency of the order of \(D_{\theta}\). We further observed that, on increasing \(D_{\theta}\), region IV in Fig. 2 shrinks and finally disappears.
The numerical results of Figs. 2 and 3 demonstrate the role of the boundary in the cluster formation. At large \(\lambda\), the distribution of the scattering angle, \(p(\phi)\), is strongly peaked around \(\phi=0\); the boundary exerts a lensing effect on the active particles by re-directing them toward the center of the cavity. This clearly enhances the probability that the sensing function, \(P_{i}(\alpha)\), of a particle there overcomes the threshold value, \(P_{0}(\alpha)\), of Eq. (4), thus triggering the clustering process. This is the key "herding" function of the gas of active particles continuously bouncing between the cavity wall and the cluster border. Accordingly, we noticed that in region II the clusters tend to be denser along the border. In the opposite limit of a wide scattering angle distribution, i.e., for small \(\lambda\), the active suspension is no longer focused toward the cavity center and clustering is suppressed. The separatrix curve in Fig. 2 clearly shows that for \(\lambda\to 0\) clustering requires that \(d_{c}\sim R\), as implicit in the quorum sensing protocol adopted with Eqs. (3) and (4).
## IV Discussion and Conclusions
The phase diagram of Fig. 2 was obtained for a convenient choice of the tunable parameters \(D_{0},D_{\theta}\) and \(\alpha\). We now briefly discuss the role of these parameters in the clustering process.
_(i) role of spatial noise, \(D_{0}\)._ The additive noises \(\xi_{xi}(t)\) and \(\xi_{yi}(t)\) in Eq. (1) keep the particles diffusing even after they have undergone the active-to-passive transition. This has a twofold consequence. On one side, it hampers the cluster formation by delaying it in time and pushing the separatrix curve of Fig. 2 to higher \(d_{c}\) values [compare Figs. 4(a) and (b)]. On the other hand, for \(D_{0}=0\) the passive particles immediately come to rest after having reset their self-propulsion velocity to zero. This leads to the buildup of frozen clusters with an amorphous glassy structure [Fig. 4(c)]. Vice versa, adding a little amount of noise allows clusters to rearrange themselves into the denser hexatic structures of region III [Fig. 4(d)].
_(ii) role of rotational noise, \(D_{\theta}\)._ In Fig. 2 we assumed that the particle persistence length, \(l_{\theta}=v_{0}/D_{\theta}\), was much larger than the cavity diameter. That choice was convenient in that it enhanced the role of the boundary dynamics in the clustering process. Indeed, under this condition, active JPs may hit the cavity walls repeatedly before grouping at the center, where they eventually undergo the active-to-passive transition. To clarify the role of the persistence time, \(\tau_{\theta}=1/D_{\theta}\), we simulated the time evolution of the same suspension for increasing values of \(D_{\theta}\) and observed that cluster formation gets, indeed, progressively suppressed (see the Supplemental Material [22] for details). This comes as no surprise, since upon increasing \(D_{\theta}\), the persistence length, \(l_{\theta}\), decreases and the active particles' dynamics resembles more and more a standard Brownian motion with strength \(v_{0}^{2}/2D_{\theta}\).
Figure 4: Role of additive noise. Left panels: \(D_{0}=0\) and snapshot time \(t=10^{5}\); right panels: \(D_{0}=0.001\) and \(t=2\times 10^{4}\); top panels: \(d_{c}=24\) and \(D_{\theta}=0.01\); bottom panels: \(d_{c}=16\) and \(D_{\theta}=0.001\). The remaining simulation parameters are: \(\alpha=\pi\), \(v_{0}=0.5\), \(R=45\), \(r_{0}=1\), and \(N=304\). The \(\phi\) distribution and the particle color code are as in Fig. 3.
_(iii) role of the visual angle, \(\alpha\)_. We consider now cases when, contrary to Fig. 2, \(\alpha<\pi\). This means that the neighbor perception of particle \(i\) is restricted to a visual cone directed along its instantaneous self-propulsion velocity vector, \(\mathbf{v}_{0i}\) [5]. This enhances the non-reciprocal nature of the particle interactions. As a consequence, the active-passive transitions at the periphery of the forming clusters become more frequent. Indeed, an incoming particle perceives a comparatively much larger neighbor density than a particle moving outward. We remind here that all particles, active and passive alike, keep rotating randomly [third Eq. (1)] with correlation time \(\tau_{\theta}\). This mechanism tends to destabilize the forming clusters, so that one expects that shrinking the visual cone eventually suppresses clustering. Our simulations confirm this guess, even though the asymptotic value of the ratio \(N_{p}(t)/N\) exhibits a non-monotonic \(\alpha\) dependence with a maximum for \(\alpha/\pi\gtrsim(3/4)\) (Fig. 5) - compare the snapshots for \(d_{c}=16\) and \(\alpha=(7/8)\pi\) in Fig. 1(c) and \(\alpha=\pi\) in Fig. 2. We attribute this behavior to the combined effect of the mechanism above and the \(\alpha\) dependence of the sensing threshold \(P_{0}(\alpha)\) (see the Supplemental Material [22] for details).
We conclude this report by briefly discussing a limiting case, where the cavity wall is replaced by periodic boundaries. We considered a square 2D simulation box of size \(L\): particle dynamics and quorum sensing protocols are the same as for the circular cavity; as a difference, a particle \(i\) crossing a box side is re-injected into the box through the opposite side with the same self-propulsion vector \(\mathbf{v}_{0i}\). In this regard, periodic boundaries are reminiscent of the scattering wall of the initial model for \(\lambda\to 0\), in that the self-propulsion direction of the re-injected particle tends to be uniformly distributed. Similarly to Fig. 2, one then expects that clustering only occurs for \(d_{c}\sim L/2\), as a consequence of the very definition of the active-passive transition threshold, \(P_{0}(\alpha)\). Direct numerical simulations (not shown) confirm this expectation.
**Acknowledgement**: Y.L. is supported by the NSF China under grant No. 12375037 and No. 11935010.
|
2309.07443 | Learning Tube-Certified Neural Robust Contraction Metrics | Control design for general nonlinear robotic systems with guaranteed
stability and/or safety in the presence of model uncertainties is a challenging
problem. Recent efforts attempt to learn a controller and a certificate (e.g.,
a Lyapunov function or a contraction metric) jointly using neural networks
(NNs), in which model uncertainties are generally ignored during the learning
process. In this paper, for nonlinear systems subject to bounded disturbances,
we present a framework for jointly learning a robust nonlinear controller and a
contraction metric using a novel disturbance rejection objective that certifies
a tube bound using NNs for user-specified variables (e.g. control inputs). The
learned controller aims to minimize the effect of disturbances on the actual
trajectories of state and/or input variables from their nominal counterparts
while providing certificate tubes around nominal trajectories that are
guaranteed to contain actual trajectories in the presence of disturbances.
Experimental results demonstrate that our framework can generate tighter
(smaller) tubes and a controller that is computationally efficient to
implement. | Vivek Sharma, Pan Zhao, Naira Hovakimyan | 2023-09-14T05:53:22Z | http://arxiv.org/abs/2309.07443v2 | # Learning Tube-Certified Control Using Robust Contraction Metrics
###### Abstract
Control design for general nonlinear robotic systems with guaranteed stability and/or safety in the presence of model uncertainties is a challenging problem. Recent efforts attempt to learn a controller and a certificate (e.g., a Lyapunov function or a contraction metric) jointly using neural networks (NNs), in which model uncertainties are generally ignored during the learning process. In this paper, for nonlinear systems subject to bounded disturbances, we present a framework for jointly learning a robust nonlinear controller and a contraction metric using a novel disturbance rejection objective that certifies a universal \(\mathcal{L}_{\infty}\) gain bound using NNs for user-specified variables. The learned controller aims to minimize the effect of disturbances on the actual trajectories of state and/or input variables from their nominal counterparts while providing certificate tubes around nominal trajectories that are guaranteed to contain actual trajectories in the presence of disturbances. Experimental results demonstrate that our framework can generate tighter tubes and a controller that is computationally efficient to implement. Code is available at github.com/viveksharmaaa/NNRCCM.
Learning for control, robust control, robot safety
## I Introduction
Learning-enabled control has demonstrated impressive performance in solving challenging control problems in robotics. However, such performance often comes with a lack of stability and/or safety guarantees, which prevents the learned controllers from being deployed to safety-critical systems. To resolve this issue, researchers have attempted to additionally learn a certificate alongside a controller using neural networks (NNs). Such a certificate can be a Lyapunov function that certifies the stability of a fixed point [1, 2, 3, 4], a contraction metric that certifies incremental stability, i.e., convergence to desired trajectories [5, 6, 7, 8], or a barrier function that certifies set invariance [9, 10], among others. Traditional methods for synthesizing these certificates often resort to the special structures of dynamics, e.g., strict feedback forms, or sum of squares (SOS) programming [11, 12] that is only applicable to polynomial dynamical systems of low and medium dimensions. In contrast, NN-based certificate synthesis is generally applicable and scalable to high-dimensional systems. Nevertheless, prevailing methods for generating certificates using NNs typically assume precise knowledge of the dynamics and are susceptible to performance degradation when confronted with model uncertainties. There is a need to synthesize robust controllers that minimize the effect of disturbances and certificates that establish the performance guarantee [10].
**Related Work. Contraction metrics**: Contraction theory [13] provides a powerful tool for examining the incremental stability of nonlinear systems, i.e., the convergence between pairs of state trajectories towards each other, by analyzing the differential dynamics of the system. Recently, this theory has been extended for constructive control design through control contraction metrics (CCMs) [14]. In comparison to existing methods that rely on (incremental) Lyapunov functions, CCM offers a more versatile solution for designing nonlinear tracking control [15]. Moreover, the search for a suitable CCM can be transformed into a convex optimization problem.
Contraction-based adaptive and robust control has also been investigated for nonlinear systems subject to unknown parameters [16], state-dependent uncertainties [17, 18], and external disturbances [19, 20, 21]. In particular, [19] proposed a robust CCM to synthesize a robust nonlinear tracking controller that guarantees transient tracking performance via certificate tubes, which can be leveraged for safe planning under disturbances. However, to search for the (robust) CCMs, all the above approaches rely on SOS optimization, which does not scale well to high-dimensional systems [22]. Additionally, to construct the control law, all the aforementioned approaches involve solving a nonlinear programming problem (to get the minimum energy path) at each time step, which is computationally heavy.
**Control certificate learning**: Certificate-based learning control differs from traditional learning control approaches, such as most reinforcement learning methods, which primarily focus on searching for a control policy. Instead, certificate-based methods simultaneously search for a control policy and a certificate that validates the properties of the closed-loop system such as stability and safety. NNs have proven to be powerful function approximators in learning control certificates, including Lyapunov functions [1, 2, 3, 4], control barrier functions [9, 10], and contraction metrics [5, 6, 7, 8], among others. Lyapunov-based certificates guarantee the stabilizability of a control policy around a fixed point, while contraction-based certificates ensure stability along a trajectory. On the other hand, barrier-based certificates concentrate on guaranteeing the safety of a control policy through an invariant set. Recent advancements have introduced a new class of reinforcement learning methods that focus on jointly learning a control policy and certificate, as demonstrated in [23, 24]. It should be noted, however, that all these methods generally do not explicitly account for
disturbances during the learning process. For a comprehensive overview of these methods, readers can refer to [25, 26].
**Tube-based planning and predictive control**: Motion planning or predictive control for nonlinear uncertain systems with guaranteed safety is a challenging problem. Feedback motion planning (FMP) or tube model predictive control (MPC) aims to mitigate the effect of uncertainties through the use of an ancillary controller that tracks a nominal (or desired) trajectory. The ancillary controller typically provides tubes around nominal trajectories, which are guaranteed to contain actual trajectories despite uncertainties. Such tubes can be used to plan safety-guaranteed trajectories through constraint tightening. Different approaches have been proposed to synthesize the ancillary controller and its associated tube, such as local linearization [27, 28], sliding mode control [29, 30], LQR plus SOS verification [31], and incremental Lyapunov functions (iLFs) [32]. These approaches either need to re-compute the tube for each specific trajectory [27, 28, 31], apply only to a specific class of systems such as fully-actuated systems [29, 30], or necessitate the existence of a known iLF, which is challenging to find for general nonlinear systems. In contrast, the recently proposed contraction metrics-based approaches [19, 20] are applicable to general nonlinear control-affine systems, and the metrics can be systematically synthesized using semidefinite programming (SDP). However, these approaches still suffer from the scalability issue of SDP and the high computational cost of implementing the controller, which motivates [5] and this work, both of which aim to jointly learn the contraction metrics and controllers using NNs. NN-based approximation methods have also been employed for (robust) MPC [33, 34]. [35] outlines an approach to feedback-based motion planning for systems with unknown dynamics, leveraging deep control-affine approximations acquired from a dynamics dataset and optimizing a tracking-error bound while learning a controller. In contrast, our approach distinguishes itself by including a novel objective aimed at rejecting disturbances as part of the learning process.
Table I provides a summary of the key characteristics of our approach and existing relevant approaches mentioned above.
**Statement of Contributions**: For nonlinear control-affine systems subject to bounded disturbances, this work presents a novel approach to jointly learning a robust nonlinear controller and a contraction metric using NNs. The learned controller aims to minimize the effect of disturbances on the deviations of actual state and/or input trajectories from their nominal counterparts and provide certificate tubes around nominal trajectories where actual ones are guaranteed to remain in the presence of disturbances. Our approach is primarily motivated by [5] and [19]. Compared to [5], our approach explicitly considers the disturbance rejection objective in learning the metric and controller and allows for optimizing the tube size for user-specified states and/or inputs. Additionally, the controller yielded by our approach is computationally much cheaper to implement, compared to [19], which necessitates solving a nonlinear programming problem to compute the control signal at each time step and is only applicable for dynamical systems with lower-order polynomial approximations. To the best of our knowledge, this work represents the first attempt to use NNs to learn certified nonlinear robust controllers with explicit disturbance rejection properties.
_Notations_. Let \(\mathbb{R}^{n}\), \(\mathbb{R}^{n\times n}\), and \(\mathbb{R}^{+}\) denote the \(n\)-dimensional vector space, the space of \(n\times n\) matrices, and the set of positive real numbers, respectively. We use the notation \(A\succ 0\)\((A\prec 0)\) and \(A\succeq 0\)\((A\preceq 0)\) to denote positive definite (negative definite) and positive semi-definite (negative semi-definite) symmetric matrices, respectively. For a matrix-valued function \(M(x):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times n}\), its Lie derivative along a vector \(v\in\mathbb{R}^{n}\) is computed element-wise as \(\partial_{v}M(x):=\sum_{i}v^{i}\frac{\partial M}{\partial x^{i}}\). The notation \(v^{i}\) denotes the \(i\)-th element of a vector \(v\), and \(\langle A\rangle\) is shorthand for \(A+A^{T}\). Also, \(\left\lVert\cdot\right\rVert\) denotes the \(2\)-norm of a vector or matrix. The notation \(x\in\mathcal{L}_{\infty}\) indicates that \(\left\lVert x(t)\right\rVert\) is bounded for all \(t\geq 0\). The \(\mathcal{L}_{\infty}\) and truncated \(\mathcal{L}_{\infty}\) norms of a function \(x(t):\mathbb{R}^{+}\rightarrow\mathbb{R}^{n}\) are defined as \(\left\lVert x\right\rVert_{\mathcal{L}_{\infty}}\triangleq\sup_{t\geq 0}\left\lVert x(t)\right\rVert\) and \(\left\lVert x\right\rVert_{\mathcal{L}_{\infty}^{[0,T]}}\triangleq\sup_{0\leq t\leq T}\left\lVert x(t)\right\rVert\), respectively.
## II Problem Statement and Preliminaries
Consider a nonlinear control affine system of the form
\[\dot{x}(t) =f(x(t))+B(x(t))u(t)+B_{w}(x(t))w(t) \tag{1}\] \[z(t) =g(x(t),u(t)),\]
where \(x(t)\in\mathcal{X}\subseteq\mathbb{R}^{n}\), \(u(t)\in\mathcal{U}\subseteq\mathbb{R}^{m}\) and \(w(t)\in\mathcal{W}\subseteq\mathbb{R}^{l}\)\(\forall t\in\mathbb{R}^{+}\) are the vector of states, inputs and unknown disturbances, respectively. Here \(\mathcal{X}\), \(\mathcal{U}\), and \(\mathcal{W}\) are compact sets representing state space, input space, and disturbance space respectively. The vector/matrix-valued functions \(f(x)\), \(g(x)\), \(B(x)\), and \(B_{w}(x)\) are known smooth functions of appropriate dimensions. The output variable \(z(t)\in\mathbb{R}^{p}\) represents the variables whose deviation from the nominal value should be minimized. We use the notation \(b_{i}\) and \(b_{w,i}\) to represent _ith_ column of matrix \(B\) and \(B_{w}\) respectively.
For the system in (1), assume we have a nominal state trajectory \(x^{*}(t)\) and input trajectory \(u^{*}(t)\), satisfying the nominal dynamics
\[\dot{x}^{*}(t) =f(x^{*}(t))+B(x^{*}(t))u^{*}(t)+B_{w}(x^{*}(t))w^{*}(t) \tag{2}\] \[z^{*}(t) =g(x^{*}(t),u^{*}(t)),\]
where \(w^{*}(t)\) is a vector of nominal disturbances (with \(w^{*}(t)\equiv 0\) being a special case).
The goal of this paper is to learn a state-feedback controller for the system (1) of the form
\[u(t)=u^{*}(t)+k(x(t),x^{*}(t)) \tag{3}\]
that minimizes the gain from disturbance deviation (\(w-w^{*}\)) to output deviation (\(z-z^{*}\)) of the closed-loop system (obtained by applying the control (3) to the system (1)) given by
\[\dot{x}(t)= f(x(t))+B(x(t))(u^{*}(t)+k(x(t),x^{*}(t))) \tag{4}\] \[+B_{w}(x(t))w(t)\] \[z(t)= g(x(t),u^{*}(t)+k(x(t),x^{*}(t))).\]
Specifically, such gain is quantified through the concept of _universal \(\mathcal{L}_{\infty}\) gain_[19] as defined below.
**Definition 1**.: The control system in (4) achieves a universal \(\mathcal{L}_{\infty}\) gain bound of \(\alpha\), if for any target trajectory \(x^{*}\), \(w^{*}\), and \(z^{*}\) satisfying (2), any initial condition \(x(0)\), and any disturbance \(w\) such that \(w-w^{*}\in\mathcal{L}_{\infty}\), for any \(T\geq 0\), the condition
\[\|z-z^{*}\|^{2}_{\mathcal{L}^{[0,T]}_{\infty}}\leq\alpha^{2}\|w-w^{*}\|^{2}_{\mathcal{L}^{[0,T]}_{\infty}}+\beta(x(0),x^{*}(0)) \tag{5}\]
holds for a function \(\beta(x_{1},x_{2})\geq 0\) with \(\beta(x,x)=0\).
_Remark 1_.: The gain \(\alpha\) in Definition 1, in the similar spirit of tube size, is used to quantify the deviation of closed-loop trajectory \(z(\cdot)\) from the nominal \(z^{*}(\cdot)\) trajectory.
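As an illustration of Definition 1 (this sketch is ours, not part of the released code; the synthetic error curves and the values of \(\alpha\) and \(\beta\) are placeholder assumptions), the bound (5) can be checked on discretized rollouts by comparing truncated \(\mathcal{L}_{\infty}\) norms:

```python
import numpy as np

def truncated_linf_norm(traj):
    """Truncated L-infinity norm: the largest 2-norm among the samples of traj."""
    return np.max(np.linalg.norm(traj, axis=1))

def check_universal_gain(z, z_star, w, w_star, alpha, beta0):
    """Check inequality (5) for one rollout on [0, T]; beta0 = beta(x(0), x*(0)).
    z, z_star: (N, p) output samples; w, w_star: (N, l) disturbance samples."""
    lhs = truncated_linf_norm(z - z_star) ** 2
    rhs = alpha ** 2 * truncated_linf_norm(w - w_star) ** 2 + beta0
    return lhs <= rhs

# Synthetic example: a decaying output deviation under a bounded disturbance.
t = np.linspace(0.0, 10.0, 1000)
z_err = np.exp(-t)[:, None] * np.array([[0.5, 0.2]])   # (1000, 2) output deviation
w_err = 0.1 * np.sin(t)[:, None]                       # (1000, 1) disturbance deviation
print(check_universal_gain(z_err, np.zeros_like(z_err),
                           w_err, np.zeros_like(w_err), alpha=2.0, beta0=0.5))
```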
### _Robust Contraction Metrics_
Contraction theory [13] analyzes the incremental stability of a system by studying the evolution of distance between two arbitrarily close trajectories. This theory applies Lyapunov conditions for studying the stability of the differential version of system (1). The differential dynamics of the system (1) can be represented as:
\[\dot{\delta}_{x}=A(x,u,w)\delta_{x}+B(x)\delta_{u}+B_{w}(x)\delta_ {w} \tag{6}\] \[\delta_{z}=C(x,u)\delta_{x}+D(x,u)\delta_{u},\]
where \(A(x,u,w):=\frac{\partial f}{\partial x}+\sum_{i=1}^{m}\frac{\partial b_{i}}{\partial x}u_{i}+\sum_{i=1}^{l}\frac{\partial b_{w,i}}{\partial x}w_{i}\), \(C(x,u):=\frac{\partial g}{\partial x}\) and \(D(x,u):=\frac{\partial g}{\partial u}\). \(\delta_{x}\), \(\delta_{u}\) and \(\delta_{w}\) denote the infinitesimal displacement between a pair of state, control, and disturbance trajectories respectively.
Likewise, the differential dynamics of closed-loop system (4) can be obtained as:
\[\dot{\delta}_{x}=\mathcal{A}\delta_{x}+\mathcal{B}\delta_{u},\ \delta_{z}= \mathcal{C}\delta_{x}+\mathcal{D}\delta_{u}, \tag{7}\]
where \(\mathcal{A}\triangleq A+BK\), \(\mathcal{B}\triangleq B_{w}\), \(\mathcal{C}\triangleq C+DK\) and \(\mathcal{D}\triangleq 0\). Here \(K(x,x^{*})\triangleq\frac{\partial k}{\partial x}\) with \(k\) representing the state-feedback part of the controller as defined in (3).
Contraction theory introduces a method to quantify the virtual displacement (\(\delta_{x}\)) between two arbitrarily close trajectories using a positive definite metric denoted as \(M(x):\mathcal{X}\mapsto\mathbb{R}^{n\times n}\). This theory extends the principles of Lyapunov theory to study incremental stability by incorporating a differential analog of a Lyapunov function of the form \(V(x,\delta_{x})=\delta_{x}^{T}M(x)\delta_{x}\). By demonstrating that this function exponentially decreases, meaning \(\dot{V}(x,\delta_{x})\leq-2\lambda V(x,\delta_{x})\) for some positive constant \(\lambda\), incremental exponential stability of the system can be established.
In [14], the authors present an important theorem for calculating a CCM using matrix inequalities. The theorem states that if a positive-definite metric \(W(x)\) satisfies the following conditions for all \(x\) and some \(\lambda>0\)
\[B_{\perp}^{T}\left(-\partial_{f}W(x)+\langle\frac{\partial f(x)}{\partial x}W(x)\rangle+2\lambda W(x)\right)B_{\perp}\prec 0, \tag{8}\]
\[B_{\perp}^{T}\left(\partial_{b_{j}}W(x)-\langle\frac{\partial b_{j}(x)}{\partial x}W(x)\rangle\right)B_{\perp}=0,\ j=1,\ldots,m, \tag{9}\]
where \(B_{\perp}\) is a matrix such that \(B_{\perp}^{T}B=0\) and \(W(x)=M^{-1}(x)\) is the dual metric verifying \(\underline{w}I\preceq W(x)\preceq\overline{w}I\), with \(\underline{w}=1/\overline{m}\) and \(\overline{w}=1/\underline{m}\), then there exists a tracking controller \(k(x,x^{*})\) such that the closed-loop trajectory \(x(t)\) of the system (4) exponentially converges to the nominal trajectory \(x^{*}(t)\) of the system (2), with rate \(\lambda\).
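Conditions (8)-(9) can be spot-checked numerically at sampled states. The sketch below is only an illustration under toy assumptions (the pendulum-like drift `f`, the constant `B`, and the candidate dual metric `W` are stand-ins we invented, and derivatives are taken by finite differences); it evaluates the left-hand side of (8), whose eigenvalues would all be negative for a valid CCM:

```python
import numpy as np
from scipy.linalg import null_space

def f(x):                              # toy drift term f(x)
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

B = np.array([[0.0], [1.0]])           # toy constant actuation matrix B(x)

def W(x):                              # toy candidate dual metric W(x)
    return np.array([[2.0 + 0.1 * np.cos(x[0]), 0.2],
                     [0.2, 1.5]])

def jacobian(fun, x, eps=1e-6):
    """Jacobian of a vector-valued fun via central differences."""
    cols = []
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        cols.append((fun(x + e) - fun(x - e)) / (2 * eps))
    return np.stack(cols, axis=-1)

def lie_derivative_W(x, v, eps=1e-6):
    """Directional derivative of W along v, i.e. sum_i v_i dW/dx_i."""
    out = np.zeros_like(W(x))
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        out += v[i] * (W(x + e) - W(x - e)) / (2 * eps)
    return out

def ccm_lhs(x, lam=0.5):
    """Left-hand side of (8), projected by B_perp with B_perp^T B = 0."""
    Bp = null_space(B.T)
    A = jacobian(f, x)
    inner = -lie_derivative_W(x, f(x)) + A @ W(x) + W(x) @ A.T + 2 * lam * W(x)
    return Bp.T @ inner @ Bp

print(np.linalg.eigvalsh(ccm_lhs(np.array([0.3, -0.1]))))
```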
We next state an important lemma on _sufficient_ conditions for a closed-loop system (4) to admit a _guaranteed universal \(\mathcal{L}_{\infty}\) gain_.
**Lemma 1**.: _[_19_]_ _The closed-loop system (4) has a universal \(\mathcal{L}_{\infty}\) gain bound of \(\alpha>0\), if there exists a uniformly-bounded symmetric metric \(\underline{m}I\preceq M(x)\preceq\overline{m}I\) with \(0<\underline{m}\leq\overline{m}\) and positive constants \(\lambda\) and \(\mu\), such that \(\forall x,x^{*},w\), we have:_
\[\begin{bmatrix}\langle M\mathcal{A}\rangle+\dot{M}+\lambda M&M\mathcal{B}\\ \mathcal{B}^{T}M&-\mu I_{p}\end{bmatrix}\preceq 0 \tag{10}\]
\[\begin{bmatrix}\lambda M&0\\ 0&(\alpha-\mu)I_{p}\end{bmatrix}-\alpha^{-1}\begin{bmatrix}\mathcal{C}^{T}\\ \mathcal{D}^{T}\end{bmatrix}\begin{bmatrix}\mathcal{C}&\mathcal{D}\end{bmatrix}\succeq 0, \tag{11}\]
_where \(\dot{M}=\sum_{i=1}^{n}\frac{\partial M}{\partial x_{i}}\dot{x}_{i}\) and \(\dot{x}_{i}\) is given by (4). The matrices \(\mathcal{A}\), \(\mathcal{B}\), \(\mathcal{C}\) and \(\mathcal{D}\) are defined in (7)._
_Remark 2_.: The metric \(M(x)\) in Lemma 1 is termed as a robust CCM (RCCM) in [19].
## III Learning Robust Controller and Contraction Metrics
Inspired by [5], we use machine learning methods to jointly learn a robust controller and an RCCM for the system (1) while minimizing the universal \(\mathcal{L}_{\infty}\) gain. Both the controller and metric are parameterized as neural networks and the parameters are optimized using loss functions inspired by contraction theory and Lemma 1. The training data for learning is sampled independently from the dataset \(\{(x_{i},x_{i}^{*},u_{i}^{*},w_{i})\in\mathcal{X}\times\mathcal{X}\times \mathcal{U}\times\mathcal{W}\}_{i=1}^{N}\).
### _Joint learning of the controller and RCCM_
| **Controller** | **Disturbance Rejection** | **Minimizing Tube Size for User-Specified Variables** | **Tube for Inputs** | **Computational Cost (Online)** |
| --- | --- | --- | --- | --- |
| **NCM** [6] | No | Difficult | Unavailable | Low |
| **NN-CCM** [5] | No | Difficult | Unavailable | Low |
| **SOS-RCCM** [19] | Yes | Easy | Available | High |
| **NN-RCCM (ours)** | Yes | Easy | Available | Low |

TABLE I: Summary of key characteristics of our approach compared to existing approaches.

The controller \(u(x,x^{*},u^{*};\theta_{u})\) and the dual metric \(W(x;\theta_{w})\) are modeled as neural networks, parameterized by \(\theta_{u}\) and \(\theta_{w}\) respectively. The gain value of \(\alpha\) and the variable \(\mu\) defined in (10) and (11) are optimization variables. We want to learn a controller and a metric that
minimizes the \(\mathcal{L}_{\infty}\) gain, \(\alpha\). The gain quantifies the tube size in which the closed-loop system trajectories are bound to stay despite disturbances. Ideally, we would want the smallest tube size possible for the chosen state or input or a combination thereof with a given disturbance bound. We construct \(u(x,x^{*},u^{*};\theta_{u})\) to ensure that if \(x=x^{*}\) then \(u(x,x^{*},u^{*};\theta_{u})=u^{*}\) for all \(\theta_{u}\). Also, \(W(x,\theta_{w})\) is a symmetric matrix by construction and \(W(x,\theta_{w})\succeq\underline{w}I\) for all \(x\) and \(\theta_{w}\). Here, \(\underline{w}\) is a hyperparameter used to lower-bound the smallest eigenvalue of the dual metric.
We denote the _LHS_ of (10) and (11) from Lemma 1 by \(C_{1}(x,x^{*},u^{*},w;\theta_{u},\theta_{w},\mu)\) and \(C_{2}(x,x^{*},u^{*},w;\theta_{u},\theta_{w},\alpha,\mu)\) respectively. Let \(\rho(S)\) denote the uniform distribution over the set \(S\), where \(S:=\mathcal{X}\times\mathcal{X}\times\mathcal{U}\times\mathcal{W}\). The _robust contraction risk_ of the system is defined as follows:
\[\mathcal{L}_{C_{1}}(\theta_{w},\theta_{u},\mu)=\mathbb{E}_{(x,x^{*},u^{*},w) \sim\rho(S)}\ L_{PD}(-C_{1}(\cdot)) \tag{12}\]
\[\mathcal{L}_{C_{2}}(\theta_{w},\theta_{u},\alpha,\mu)=\mathbb{E}_{(x,x^{*},u^ {*},w)\sim\rho(S)}\ L_{PD}(C_{2}(\cdot)), \tag{13}\]
where \(L_{PD}(\cdot)\geq 0\) is a loss function used for penalizing the negative definiteness of its argument; \(L_{PD}(A)=0\) if and only if \(A\succeq 0\). The optimal values of \((\theta_{w}^{*},\theta_{u}^{*},\alpha^{*},\mu^{*})\) will ensure that the controller \(u(x,x^{*},u^{*};\theta_{u}^{*})\) and dual metric \(W(x;\theta_{w}^{*})\) satisfy (10) and (11) exactly, with \(\alpha^{*}\) being the optimal gain (or tube size).
To guide the optimization process, two auxiliary loss terms, inspired by the contraction conditions (8) and (9) that are sufficient for contraction, are used. Denoting the _LHS_ of (8) and (9) by \(C_{3}(x,\theta_{w})\) and \(\{C_{4}^{j}(x,\theta_{w})\}_{j=1}^{m}\) respectively, the following risk functions are used:
\[\mathcal{L}_{w_{1}}(\theta_{w}) =\mathbb{E}_{(x,x^{*},u^{*},w)\sim\rho(S)}\ L_{PD}(-C_{3}(\cdot)) \tag{14}\] \[\mathcal{L}_{w_{2}}(\theta_{w}) =\sum_{j=1}^{m}\mathbb{E}_{(x,x^{*},u^{*},w)\sim\rho(S)}\ \|C_{4}^{j}(\cdot)\|_{F}, \tag{15}\]
where \(\|\cdot\|_{F}\) is the Frobenius norm.
Putting everything together, we have the following loss function to train the neural network using sampled data
\[\mathcal{L}(\theta_{w},\theta_{u},\alpha,\mu)=\frac{1}{N}\sum_{i=1}^{N}\Big[L_{PD}(-C_{1}(\cdot))+L_{PD}(C_{2}(\cdot))+L_{PD}(-C_{3}(\cdot))+\sum_{j=1}^{m}\|C_{4}^{j}(\cdot)\|_{F}\Big]+\alpha, \tag{16}\]
where the training data \(\{x_{i},x_{i}^{*},u_{i}^{*},w_{i}\}_{i=1}^{N}\) is sampled independently from \(\rho(S)\). The arguments have been omitted for brevity. \(L_{PD}\) is defined as follows: Given a matrix \(X\in\mathbb{R}^{n\times n}\), \(\xi\) number of points are randomly sampled from a unit norm ball i.e. \(\{\eta_{j}\in\mathbb{R}^{n}\ |\ \|\eta_{j}\|_{2}=1\}_{j=1}^{\xi}\) and \(L_{PD}\) is calculated as \(L_{PD}(X)=\frac{1}{\xi}\sum_{j=1}^{\xi}\max\{0,-\eta_{j}^{T}X\eta_{j}\}\).
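The sampled penalty \(L_{PD}\) is straightforward to implement with automatic differentiation. Below is a minimal PyTorch sketch of it (ours, not the released code; the batching convention is an assumption):

```python
import torch

def l_pd(X, num_samples=32):
    """Sampled hinge penalty on negative definiteness: averages
    max(0, -eta^T X eta) over random unit-norm directions eta.
    X: batch of symmetric matrices with shape (b, n, n)."""
    b, n, _ = X.shape
    eta = torch.randn(num_samples, n, device=X.device)
    eta = eta / eta.norm(dim=1, keepdim=True)              # unit-norm directions
    quad = torch.einsum('jn,bnm,jm->bj', eta, X, eta)      # eta_j^T X_i eta_j
    return torch.relu(-quad).mean()

# A matrix with a negative eigenvalue incurs a positive penalty (with high
# probability over the sampled directions); a PSD matrix incurs exactly zero.
print(l_pd(torch.diag(torch.tensor([1.0, -0.5])).unsqueeze(0)))
print(l_pd(torch.eye(2).unsqueeze(0)))
```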
_Remark 3_.: \(\alpha\) and \(\mu\) are optimization variables that are constrained to be always positive during learning.
### _Refinement of state and input tubes_
When formulating the learning objective stated in (16), the primary focus is often on minimizing the universal \(\mathcal{L}_{\infty}\) gain for the vector \(z\) in (1). This vector \(z\) contains weighted states and inputs, and the goal is to strike a balance between tracking performance and control efforts. Specifically, the vector \(z\) can be represented as \(z=[(Qx)^{T},(Ru)^{T}]^{T}\), where \(Q\) and \(R\) are weighting matrices. Once the metric and controller have been learned, it is possible to obtain smaller tubes for various combinations of states, inputs, or both by appropriately selecting \(g(x,u)\) in (1) or matrices \(C\) and \(D\) in (6). The introduction of new matrices \(C\) and \(D\) does not violate the matrix inequalities established for a different variable, as demonstrated in [19]. This eliminates the need for retraining to optimize \((\theta_{w},\theta_{u})\) for the new variable \(z\). The primary objective of the refinement process is to minimize \(\alpha\) exclusively for the new \(z\), utilizing the specified cost functions given by (12) and (13), while maintaining the fixed values of the parameters \((\theta_{w},\theta_{u})\). The constraint of keeping the parameter \(\theta_{w}\) fixed removes reliance on the costs outlined in (14) and (15), which solely depend on \(W(x;\theta_{w})\). The optimization problem to refine \(\alpha\) is solved offline, using the same learning framework, by detaching (14) and (15) from the computation graph and fixing \((\theta_{w},\theta_{u})\).
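A compact sketch of this offline refinement loop is given below (our illustration: `risk_c1` and `risk_c2` are hypothetical callables evaluating (12) and (13) for the new output variable with the frozen networks baked in; only \(\alpha\) and \(\mu\) are optimized, kept positive via a softplus reparametrization):

```python
import torch
import torch.nn.functional as F

def refine_gain(risk_c1, risk_c2, batches, steps=2000, lr=1e-2):
    """Minimize alpha plus the risk terms over (alpha, mu) only, per Section III-B."""
    raw = torch.zeros(2, requires_grad=True)     # unconstrained parameters
    opt = torch.optim.Adam([raw], lr=lr)
    for step in range(steps):
        alpha, mu = F.softplus(raw[0]), F.softplus(raw[1])   # keep both positive
        batch = batches[step % len(batches)]
        loss = risk_c1(batch, mu) + risk_c2(batch, alpha, mu) + alpha
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.softplus(raw.detach())              # refined (alpha, mu)
```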
### _Verification of the metric_
Ensuring the stability and robustness of the closed-loop system can be achieved by finding a metric \(M(x)\) and controller gain \(K(x,x^{*})\) that satisfy the matrix inequalities presented in (10) and (11) for all points in the uncountable set \(S\). However, verifying the satisfaction of these inequalities at every point within the uncountable set poses a significant challenge. Existing methods for neural network verification, such as mixed-integer linear programming [36] and satisfiability modulo theories (SMT)-based methods [2], have been proposed but are currently limited to small neural networks or require restrictive assumptions on neural network architectures, such as specific choices of activation functions. Other techniques for verifying NN controllers include statistical validation [34, 37] and constrained learning [38, 39]. The problem of verifying whether a given network satisfies desired properties remains an open research question. While it can be challenging to verify the inequality for every single point in the state space, our empirical results below demonstrate promising robustness and tracking performance. It's also worth noting that recent advancements in the theory of almost Lyapunov functions [40] show that a system can still demonstrate stability even when the Lyapunov stability conditions are not satisfied at all points. Rigorous theoretical guarantees for the correctness of our learned metric through the satisfaction of matrix inequalities at every data point within the considered sets are indeed attainable. This can be achieved by computing the Lipschitz constants of the relevant variables and imposing stricter versions of the matrix inequalities to accommodate the gap between sampled points and an arbitrary point in the considered sets, as adopted in [5, Proposition 2].
## IV Evaluation of Performance
In order to evaluate the efficacy of our proposed framework, we test it on the four benchmark systems, namely, (1) a planar vertical takeoff and landing vehicle (**PVTOL**) [20], (2) a **Quadrotor**[20], (3) Neural Lander (**NL**) [41] and (4) Two-Link Planar Robotic Arm (**TLPRA**) [42]. The two latter benchmark systems, specifically **NL** and **TLPRA**, exhibit complex dynamics that cannot be effectively approximated by lower-degree polynomials, making it impossible to apply the SOS-based methods [19].
In order to highlight the benefits of our framework in selectively optimizing tube sizes for specific states, we focus on learning a controller that minimizes tube sizes exclusively for the _position states_. This approach is particularly suitable for motion planning tasks that prioritize obstacle avoidance. By reducing the tube sizes (and tracking errors) specifically for position states, collisions with obstacles can be effectively avoided. To achieve this, we introduce a controller referred to as **NN-RCCM-P**, where the function \(g(x,u)\) is set to \(p\), representing a vector that contains only the position states (e.g., \([p_{x},p_{z}]\) for PVTOL). For comparison, we designed a CCM-based neural controller following [5], which we refer to as **NN-CCM**. Additionally, for PVTOL, we also designed an RCCM-based controller using SOS programming that optimizes the tubes for the position states following the method described in [19], referred to as **SOS-RCCM-P**.
### _Implementation Details_
To ensure a fair comparison, we adopted the same neural network architecture and hyperparameter values as the CCM-based method in [5]. The dual metric \(W(x;\theta_{w})\) in our framework was modeled as \(C(x;\theta_{w})C^{T}(x;\theta_{w})+\underline{w}I\), where \(C(x;\theta_{w})\) is a two-layer neural network with 128 neurons in the hidden layer, and \(\underline{w}>0\) is a hyperparameter. The controller was constructed as \(u(x,x^{*},u^{*},\theta_{u})=u^{*}+\phi_{2}\cdot\tanh(\phi_{1}\cdot(x-x^{*}))\), where \(\phi_{1}\) and \(\phi_{2}\) are two two-layer neural networks with 128 neurons in their hidden layers, with parameters denoted as \(\theta_{u}=\{\theta_{u1},\theta_{u2}\}\), and \(\tanh(\cdot)\) is the hyperbolic tangent function. This construction bounds the smallest eigenvalue of the dual metric below by \(\underline{w}\). The optimization variables \(\alpha\) and \(\mu\) are initialized randomly and constrained to be positive. Both \(\lambda\) and \(\underline{w}\) are treated as hyperparameters.
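The sketch below (our reading of the construction above, not the released code; the hidden width `k` of the matrix-valued networks and their exact inputs are assumptions on our part) shows one way to realize the metric and controller parametrizations in PyTorch such that \(W\succeq\underline{w}I\) by construction and \(u=u^{*}\) whenever \(x=x^{*}\):

```python
import torch
import torch.nn as nn

class DualMetric(nn.Module):
    """W(x; theta_w) = C(x) C(x)^T + w_lb * I, so W >= w_lb * I by construction."""
    def __init__(self, n, hidden=128, w_lb=0.1):
        super().__init__()
        self.n, self.w_lb = n, w_lb
        self.C = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(),
                               nn.Linear(hidden, n * n))

    def forward(self, x):                          # x: (batch, n)
        C = self.C(x).view(-1, self.n, self.n)
        return C @ C.transpose(1, 2) + self.w_lb * torch.eye(self.n)

class Controller(nn.Module):
    """u = u* + phi2(x, x*) tanh(phi1(x, x*) (x - x*)); equals u* when x = x*."""
    def __init__(self, n, m, hidden=128, k=32):
        super().__init__()
        self.n, self.m, self.k = n, m, k
        self.phi1 = nn.Sequential(nn.Linear(2 * n, hidden), nn.Tanh(),
                                  nn.Linear(hidden, k * n))
        self.phi2 = nn.Sequential(nn.Linear(2 * n, hidden), nn.Tanh(),
                                  nn.Linear(hidden, m * k))

    def forward(self, x, x_star, u_star):
        z = torch.cat([x, x_star], dim=-1)
        W1 = self.phi1(z).view(-1, self.k, self.n)
        W2 = self.phi2(z).view(-1, self.m, self.k)
        e = (x - x_star).unsqueeze(-1)             # (batch, n, 1)
        return u_star + (W2 @ torch.tanh(W1 @ e)).squeeze(-1)

# Quick check of the anchoring property u(x*, x*, u*) = u*:
ctrl = Controller(n=6, m=2)
x = torch.randn(4, 6)
u_star = torch.randn(4, 2)
print(torch.allclose(ctrl(x, x, u_star), u_star))  # True, since tanh(0) = 0
```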
To ensure the effectiveness of our training data, we randomly sample data points from the set \(S\). Consequently, it becomes necessary to define the state space \(\mathcal{X}\), reference control space \(\mathcal{U}\), and disturbance space \(\mathcal{W}\). The training process is not limited by a specific structure for the nominal trajectory dataset, which means that our learned controller can track any nominal trajectory. Moreover, when simulating the nominal and closed-loop trajectories within the bounded time horizon \([0,T]\) for tracking error comparisons, we sampled the initial nominal state from the set \(\mathcal{X}_{0}\) and the error between the initial nominal and actual states from a set \(\mathcal{X}_{e0}\). We used the same set definitions as described in [5] for defining the sets \(\mathcal{X}\), \(\mathcal{U}\), \(\mathcal{X}_{0}\) and \(\mathcal{X}_{e0}\). Also, the disturbance vector \(w\) is sampled from a compact set represented as \(\mathcal{W}:=\{w(t)\in\mathbb{R}^{l}:\|w\|_{\mathcal{L}_{\infty}}\leq\sigma\}\), where \(\sigma\) is a constant denoting the bound of the disturbance. In simulations, we used a disturbance with a bound of \(1\), i.e., \(\sigma=1\).
#### IV-A1 Tracking error
The results are documented in Fig. 1 and Table II. In our closed-loop simulations, we utilize a piecewise constant function to simulate the disturbance. For each constant-disturbance interval, the length of the interval and the norm bound of the disturbance within that interval are uniformly sampled from the ranges \([0,1]\) seconds and \([0.1,\sigma]\), respectively. In order to evaluate the tracking performance, we employ a quality metric referred to as the total tracking error. This metric is defined as follows: when presented with the tracking error curve \(x_{e}(t)\) for \(t\in[0,T]\), and given a specific \(\sigma\) value as well as the initial condition \(x(0)=x^{*}(0)\), we standardize the error curve by dividing it by the time duration \(T\). The total tracking error is then represented by the area beneath this normalized error curve \(x_{e}(t)/T\).
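For reference, the total tracking error defined above is the area under \(x_{e}(t)/T\), which can be computed with the trapezoid rule (a sketch of ours with a synthetic error curve):

```python
import numpy as np

def total_tracking_error(t, x_err_norm):
    """Area under the normalized error curve x_e(t)/T over [0, T]."""
    T = t[-1] - t[0]
    return np.trapz(x_err_norm / T, t)

t = np.linspace(0.0, 10.0, 500)
err = 0.1 * np.abs(np.sin(t)) * np.exp(-0.2 * t)   # synthetic |x_e(t)|
print(total_tracking_error(t, err))
```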
From Fig. 1 and Table II, we can observe that the position tracking error for PVTOL, NL, and TLPRA is similar under both approaches. However, for Quadrotor, our **NN-RCCM-P** yields a position tracking error that is approximately half of the error obtained with **NN-CCM**. Furthermore, it is important to highlight that the tracking error remains within the pre-computed bounds determined by the tube sizes.
Fig. 1: Tracking error comparisons for the four benchmark systems under **NN-CCM** and **NN-RCCM-P** in the presence of a disturbance with \(\sigma=1\). The \(y\) axes are in log scale. The shaded regions are tracking errors between mean plus and minus one standard deviation over \(100\) trajectories with the same initial conditions, i.e. \(x(0)=x^{*}(0)\).
| **System** | **NN-CCM** | **NN-RCCM-P** |
| --- | --- | --- |
| PVTOL | \(0.074\pm 0.047\) | \(0.064\pm 0.042\) |
| Quadrotor | \(0.051\pm 0.026\) | \(0.032\pm 0.015\) |
| NL | \(0.008\pm 0.003\) | \(0.007\pm 0.002\) |
| TLPRA | \(0.034\pm 0.017\) | \(0.030\pm 0.013\) |

TABLE II: Total position tracking error. Mean \(\pm\) standard deviation over \(100\) trajectories with the same initial conditions, i.e., \(x(0)=x^{*}(0)\).
#### IV-A2 Tube sizes and execution times
Table III presents a comparison of tube sizes for position states yielded by the three methods for PVTOL and Quadrotor. In **NN-RCCM-P** and **SOS-RCCM-P**, the tube size is determined by the \(\mathcal{L}_{\infty}\)-gain bound, \(\alpha\). Conversely, for the CCM-based method described in [5], the tube size is calculated using ISS stability analysis. Notably, our **NN-RCCM-P** and **SOS-RCCM-P** yield similar tube sizes, while the tube size obtained for **NN-CCM** is much larger. Additionally, we provide the sizes of the control-input tubes in Table IV for PVTOL and Quadrotor, and it is evident that our framework offers control inputs with tighter tubes when compared to **SOS-RCCM**. Even when subjected to bounded disturbances (as shown in Appendix V-C), the control inputs stay within these tubes.
To assess the effectiveness of tubes in ensuring safe planning and control, we examine both our **NN-RCCM-P** and **SOS-RCCM-P**. We focus on the task of maneuvering the PVTOL while encountering obstacles and compare the tracking error, tube size, and time between the **NN-RCCM-P** (our approach) and the **SOS-RCCM-P** approaches. The motion planner generates a nominal trajectory that aims to minimize the control effort and travel time while considering tube size constraints. We conducted simulations to evaluate the performance of the two tracking controllers in the presence of a bounded wind disturbance.
The results are illustrated in Fig. 2. One can see that **SOS-RCCM-P** and **NN-RCCM-P** have similar tube sizes for position states, as described in Table III. In comparison to **SOS-RCCM-P**, our **NN-RCCM-P** achieved comparable tracking performance, as indicated by Fig. 2. However, Table III indicates that online execution of our **NN-RCCM-P** is computationally much cheaper than the execution of **SOS-RCCM-P**, which necessitates solving a nonlinear programming problem at each step [19]. It's worth mentioning that the execution time reported in Table III was obtained on MATLAB 2022b running on a PC equipped with an Intel(R) i7-1065G7 CPU and 16 GB RAM.
The online computational cost associated with our framework is \(10\) times lower than that associated with **SOS-RCCM-P**, as demonstrated in Table III, since the latter involves solving a nonlinear programming (NLP) problem at each time step to compute the geodesic for the control. This reduced computational complexity alleviates the burden of extensive computations, enabling agile and time-sensitive applications that were previously hindered by the computational demands of SOS-based methods. As an example, to facilitate real-world experiments on a Quadrotor system, the authors of [20], which used SOS optimization, had to approximate the geodesic with a straight line to avoid solving the NLP problem for online calculation of the control signal. As for performance, our framework achieves a much (over 15 times) tighter tube for position states compared to **NN-CCM**. Tighter tubes can facilitate more efficient planning in tight spaces that may be impossible otherwise. We conducted a statistical assessment of the correctness of our learned certificate, particularly focusing on the frequency of certificate breaches. Our findings indicate that by employing the loss terms, we managed to meet the matrix inequalities with minimal certificate breaches (details in Appendix V-D).
## V Conclusions
For nonlinear systems subject to bounded disturbances, we have introduced a novel framework for joint learning of a robust contraction metric and a tube-certified controller with explicit disturbance rejection capabilities using neural networks. Our framework falls into the category of robust control that considers worst-case scenarios, which often lead to conservative performance. One possible way to mitigate the conservatism is to combine the proposed method with uncertainty compensation-based methods [16, 43, 44], in which the matched uncertainties (that enter the system through the same channels as control inputs) can be estimated and compensated for while the effect of unmatched uncertainties can be minimized using the proposed framework.
Exploring how the safety guarantees hold in hardware experiments and investigating the impact of state estimation errors on tube size and the resulting robustness guarantees of the learned controller are potential avenues for future research.
| **System** | **NN-RCCM** | **SOS-RCCM** |
| --- | --- | --- |
| PVTOL | \(0.4\) | \(1.0\) |
| Quadrotor | \(18.7\) | \(28.2\) |

TABLE IV: Tube sizes for control inputs for PVTOL and Quadrotor.
Fig. 2: Trajectory planning and tracking for PVTOL under **NN-RCCM-P** and **SOS-RCCM-P**. Dotted lines denote planned trajectories. Shaded areas denote the tubes for the position states. Note that the tubes yielded by two controllers almost completely align with each other.
| **Controller** | **PVTOL** | **Quadrotor** | **Execution Time (ms)** |
| --- | --- | --- | --- |
| NN-CCM | 18.0 | 16.0 | \(1.0\sim 9.0\) |
| NN-RCCM-P | 0.70 | 0.6 | \(1.0\sim 9.0\) |
| SOS-RCCM-P | 0.71 | 0.7 | \(100.0\sim 150.0\) |

TABLE III: Tube sizes for position states and online execution times per step (ms).
## Appendix
### _Hyperparameters_
Consistent with the methodology outlined in [5], we use \(\lambda=0.5\) and \(\underline{w}=0.1\) as the hyperparameter values in our study for the PVTOL, Quadrotor, Neural Lander and TLPRA systems. Our methodology involves learning the parameters \((\theta_{w},\theta_{u})\) of the metric and controller, respectively, through joint minimization of pointwise violations of the matrix inequalities and of the \(\mathcal{L}_{\infty}\)-gain \(\alpha\). We trained the neural network for \(15\) epochs using the Adam optimizer on a training dataset of size \(N_{train}=130{,}000\) uniformly sampled from the set \(S\).
### _Ablation study for the initialization of \(\alpha\)_
Our loss function relies on the parameter \(\alpha\). During the learning phase, we aim to minimize \(\alpha\) while also penalizing violations of the robust contraction conditions to jointly learn the metric and controller. Consequently, the ultimate value that \(\alpha\) converges to may be influenced by its initial value. To investigate this further, we conducted an ablation study on various initial values of \(\alpha\) for the PVTOL system. The results, as depicted in Fig. 3, demonstrate that the convergent value of \(\alpha\) remains unaffected by its initial value as the learning process progresses.
### _Tube size minimization for control inputs_
Our framework also facilitates the computation of tube bounds for control inputs while concurrently learning the metric and controller. This can be accomplished by configuring the output variable as \(z=u\) and calculating the corresponding matrices \(C\) and \(D\). There is no need to learn the metric and controller from scratch, as the refinement approach elaborated in Section III-B can be employed. A comparison of tracking errors for control inputs is showcased in Fig. 4, which visualizes the control input tubes outlining the limits that confine control tracking errors. These control tube bounds can be seamlessly incorporated as constraints in motion planning applications. Furthermore, a quantitative contrast between our **NN-RCCM** and **SOS-RCCM** [19] is presented in Table IV. Notably, our approach yields considerably smaller control tubes in comparison to **SOS-RCCM**. **NN-CCM** [5] cannot provide such tube bounds for control inputs.
### _Statistical evaluation of the correctness of the learned metric_
While SOS-based methods [19] offer stringent guarantees regarding the validity of the stability certificate, our learning-based approach to stability certificates is established through the minimization of breaches in matrix inequalities at specific points. Our approach is suitable for generating stability certificates for systems where SOS methods would fail and showcases superior computational efficiency compared to SOS-RCCM [19]. To ensure a fair comparison, we provide an assessment of the validity of the learned certificate by quantifying the instances of violations in the four matrix inequalities employed for metric and controller learning within the PVTOL benchmark system. This evaluation is carried out at the final iteration of training, upon the convergence of our \(\mathcal{L}_{\infty}\)-gain \(\alpha\), as elaborated in Table V. We achieved a level of performance comparable to SOS-based certificates while experiencing minimal certificate breaches.
|
2309.08770 | Constrained Bimanual Planning with Analytic Inverse Kinematics | In order for a bimanual robot to manipulate an object that is held by both
hands, it must construct motion plans such that the transformation between its
end effectors remains fixed. This amounts to complicated nonlinear equality
constraints in the configuration space, which are difficult for trajectory
optimizers. In addition, the set of feasible configurations becomes a measure
zero set, which presents a challenge to sampling-based motion planners. We
leverage an analytic solution to the inverse kinematics problem to parametrize
the configuration space, resulting in a lower-dimensional representation where
the set of valid configurations has positive measure. We describe how to use
this parametrization with existing motion planning algorithms, including
sampling-based approaches, trajectory optimizers, and techniques that plan
through convex inner-approximations of collision-free space. | Thomas Cohn, Seiji Shaw, Max Simchowitz, Russ Tedrake | 2023-09-15T21:24:45Z | http://arxiv.org/abs/2309.08770v2 | # Constrained Bimanual Planning with Analytic Inverse Kinematics
###### Abstract
In order for a bimanual robot to manipulate an object that is held by both hands, it must construct motion plans such that the transformation between its end effectors remains fixed. This amounts to complicated nonlinear equality constraints in the configuration space, which are difficult for trajectory optimizers. In addition, the set of feasible configurations becomes a measure zero set, which presents a challenge to sampling-based motion planners. We leverage an analytic solution to the inverse kinematics problem to parametrize the configuration space, resulting in a lower-dimensional representation where the set of valid configurations has positive measure. We describe how to use this parametrization with existing algorithms for motion planning, including sampling-based approaches, trajectory optimizers, and techniques that plan through convex inner-approximations of collision-free space.
## I Introduction
Constrained bimanual planning presents a major challenge to traditional motion planning algorithms. When moving an object that is held by both hands, the robot must carefully move both arms in concert to ensure that the transformation between the end effectors remains constant. Such task space constraints appear as complicated nonlinear equalities in configuration space. In effect, the feasible set becomes measure zero, so samples must either be drawn directly from the constraint manifold or projected onto it. Furthermore, these implicit constraints do not have an obvious explicitization.
In the existing literature, there are general techniques for handling task-space constraints in configuration-space planning. Sampling-based planners can project samples onto the constraint manifold [1], or use numerical continuation [2] to construct piecewise-linear approximations. Constraints can also be relaxed [3] or enforced directly with trajectory optimization [4]. In the case of certain bimanual planning problems, there is additional structure that is not exploited by these general methods. For certain classes of robot arms, _analytic inverse kinematics_ (analytic IK) can be used to map an end-effector pose (along with additional parameters to resolve kinematic redundancy) to joint angles in closed form. Such solutions are specific to certain classes of robot arms, but are a powerful tool to be leveraged if available. Fortunately, analytic IK _is_ available for many popular robot arms available today, including the KUKA iiwa. See Figure 1.
If a robot must move an object that it is holding with both hands, we propose constructing a plan for one "controllable" arm, and then the other "subordinate" arm can be made to follow it via an analytic IK mapping. Configurations where the subordinate arm cannot reach the end-effector of the controllable arm, or where doing so would require violating joint limits, are treated as obstacles. In this way, we parametrize the constraint manifold, so that the feasible set has positive measure in the new planning space. This enables most standard motion planning algorithms to be applied with only slight modifications.
The remainder of this paper is organized as follows. First, we give an overview of the existing techniques used for constrained motion planning, and describe the available analytic IK solutions. Then, we present our parametrization of the constraint manifold for bimanual planning, and discuss its relevant geometric and topological properties. We describe the slight modifications which are necessary to adapt standard planning algorithms (including sampling-based planning and trajectory optimization) to operate in this framework. We then present a technique for generating convex sets in this new configuration space, such that every configuration within such a set is collision free and kinematically valid. These sets are essential for planning frameworks such as the Graph of Convex Sets (GCS) [5]. Finally, we present various experiments demonstrating the efficacy of these new techniques.
## II Related Work
Fig. 1: Hardware setup for our experiments. The two KUKA iiwa arms must work together to move an object between the shelves, avoiding collisions and respecting the kinematic constraint.

Motion planning with task constraints is a well-studied problem in robotics. Techniques for sampling-based planning can broadly be categorized by their methodologies:
* Relax the constraints (with real or simulated compliance) to give the feasible set nonzero volume [3, 6].
* Project samples to the constraint manifold [1, 7, 8].
* Construct piecewise-linear approximations of the constraint manifold [9, 10, 11, 12].
* Parametrize the constraint manifold to eliminate constraints [13, 14].
* Build offline approximations of the constraint manifold, to simplify online planning [15, 16].
See the survey paper [17] for an overview of these methods.
Beyond sampling-based planning, standard nonconvex trajectory optimization approaches can handle arbitrary constraints, although they will generally only converge to a feasible solution with good initialization [17]. [4] performed nonconvex trajectory optimization on manifolds.
Inverse kinematics (IK) - computing robot joint angles so as to place the end effector at a given configuration - is a powerful tool for handling certain kinematic constraints. IK can be leveraged to find stable, collision-free configurations for a humanoid robot, towards whole-body planning [18], to help a robot arm follow a prescribed task-space trajectory [19], and to satisfy the kinematic constraints that arise when manipulating articulated objects [20]. Differential IK techniques can be used to follow task space trajectories, while satisfying constraints [21, SS10.3], [22, SS3.10].
A key part of our work is to use IK to parametrize the constraint manifold, thus eliminating the nonlinear equality constraints. IK solutions are often computed by solving a nonconvex mathematical program. The tools of algebraic geometry can be used to reformulate certain IK problems as systems of polynomial equations, which can be solved as eigenvalue problems [23, 24, 25]. While simpler than a nonconvex optimization problem, we require a closed-form solution for our desired parametrization. For robot arms with six revolute joints with certain kinematic structure, closed-form geometric solutions can be found by dividing the joints into two sets of three joints, and treating each of these as "virtual" spherical joints [26, SS2.12]. IKFast [27] can be used to automatically construct analytic IK solutions for broad classes of robot arm kinematics, and is available as part of the OpenRAVE toolkit [28]. Some arms have geometric solutions, such as the Universal Robotics UR-5/UR-10 [29].
Robot arms with more than six degrees of freedom have kinematic redundancy - the arm can be moved while keeping its end effector fixed. This is called _self-motion_ and is useful for avoiding obstacles and joint limits, but implies the kinematic mapping cannot be bijective. [30] avoids this problem by computing a globally-consistent pseudoinverse, but this discards the redundancy, artificially restricting the configuration space. Other approaches characterize the redundancy as an additional parameter to be controlled in addition to the end-effector pose. [31] presents a strategy for treating specific joints in a 7DoF arm as free parameters, reducing the problem to that of a 6DoF arm with a structure amenable to a closed-form solution. IKFast can discretize any additional joints. Similar to the sphere-sphere 6DoF arms, certain 7DoF arms have a sphere-revolute-sphere kinematic structure, leading to elegant geometric solutions [32, 33]. Specific geometric solutions are available for many common robot arms, including the KUKA iiwa [34], Franka Emika Panda [35], and the Barrett WAM [36].
Our parametrization can be combined with many planning algorithms to form a complete system. In this paper, we specifically examine the canonical sampling-based planners: Rapidly-Exploring Random Trees (RRTs) [37] and Probabilistic Roadmaps (PRMs) [38]. Our contributions can also be used with the many extensions to these techniques [39, 40, 41, 42, 43, 44, 45]. We also describe how to use standard kinematic trajectory optimization techniques [22, SS7.2], [46, 47, 48]. Finally, we describe how to extend the IRIS-NP algorithm [49] for computing convex collision-free sets to use our parametrization of the configuration space; such sets can be planned across with the GCS planning framework [5]. (These sets can also be used with other "convex set planning algorithms" [50, 51, 52].)
## III Methodology
We introduce a bijective mapping between joint angles and end-effector pose for a single arm with analytic IK. We then use this mapping to parametrize the set of valid configurations for constrained bimanual manipulation. The joint angles of one arm are treated as free variables for the parametrized configuration space, and the aforementioned mapping is used to determine the joint angles for the other arm. Finally, we explain the modifications needed to adapt existing planning algorithms to utilize this parametrization.
### _Topology of Inverse Kinematics_
The topological and geometric properties of inverse kinematic mappings are a classic area of study in robotics [53, 54, 55]. For an arm with \(n\geq 6\) revolute joints, the configuration space is \(\mathcal{C}\subseteq\mathbb{T}^{n}\), where \(\mathbb{T}^{n}\) denotes the \(n\)-torus. The forward kinematic mapping \(f:\mathcal{C}\rightarrow\mathrm{SE}(3)\) computes the end-effector pose of the arm for a given choice of each joint angle. We define the reachable set \(\mathcal{X}=\{f(\theta):\theta\in\mathcal{C}\}\subseteq\mathrm{SE}(3)\). To construct a homeomorphism between subsets of \(\mathcal{C}\) and \(\mathcal{X}\), we must restrict our domain of attention to avoid singular configurations, and augment \(\mathcal{X}\) with additional degrees of freedom to match dimensions.
We give an overview of the terminology introduced in [54] for describing the global behavior of inverse kinematic mappings. A configuration for which the Jacobian of \(f\) is full-rank is called a _regular point_; otherwise, it is called a _critical point_. Because \(f\) is not injective, the preimage of a single end-effector pose may contain only critical points, only regular points, or some of both; it is respectively called a _critical value_, _regular value_, and _coregular value_. \(\mathcal{W}\)_-sheets_ are the connected components of regular values in \(\mathcal{X}\) whose boundaries are the coregular values of \(f\). The connected components of the preimages of \(\mathcal{W}\)-sheets are called \(\mathcal{C}\)-_bundles_ and form a partition of the regular points of \(\mathcal{C}\). For
a regular value \(x\in\mathcal{X}\), we have
\[f^{-1}(x)=\bigcup_{i=1}^{m}\mathcal{M}_{i}(x), \tag{1}\]
where the \(\mathcal{M}_{i}(x)\) are _self-motion manifolds_ of \(x\), so called because motion within them does not affect the end-effector pose. The label \(i\) is called the _global configuration parameter_, and a choice of \(\psi\in\mathcal{M}_{i}(x)\) is called the _redundancy parameter_. For robot arms in 3D space, the number of self-motion manifolds is at most 16; within a \(\mathcal{C}\)-bundle, the self-motion manifolds are homotopic; and if the arm has only revolute joints, then the self-motion manifolds are diffeomorphic to \(\mathbb{T}^{n-6}\)[54]. (If \(n=6\), then the \(\mathcal{M}_{i}\) are zero-dimensional, i.e., discrete points.) Examples of the continuous and discrete self motions for a 7DoF arm are shown in Figure 2.
The \(\mathcal{C}\)-bundle/\(\mathcal{W}\)-sheet machinery allows us to construct well-defined IK mappings. Let \(\mathcal{W}_{j}\subseteq\mathcal{X}\) be a \(\mathcal{W}\)-sheet, and let \(x_{0}\in\mathcal{W}_{j}\). Then there is a smooth injection \(g_{i,j}:\mathcal{W}_{j}\times\mathcal{M}_{i}(x_{0})\to\mathcal{C}\). Since the self-motion manifolds are homotopic within a \(\mathcal{C}\)-bundle, they are uniquely described in terms of their choice of \(\mathcal{C}\)-bundle and \(\mathcal{W}\)-sheet, so we use the shorthand \(\mathcal{M}_{i,j}\) in place of \(\mathcal{M}_{i}(x_{0})\). If we let \(h_{i,j}\) map joint angles to their corresponding redundancy parameter, then \((f,h_{i,j})\circ g_{i,j}\) is the identity mapping on \(\mathcal{W}_{j}\times\mathcal{M}_{i,j}\). Thus, with appropriate restrictions in domain and range, we have a bijection between the arm's joint angles and the product of its end-effector pose and redundancy parameters. The set \(\mathcal{C}_{i,j}\), defined as the image of \(g_{i,j}\), is the set of joint angles which can be handled by these mappings.
### _Parametrizing the Kinematically Constrained Space_
Now, we turn our attention to the bimanual case. We use an additional subscript to denote which arm the sets and maps correspond to; for example, \(\mathcal{X}_{\mathbf{L}}\) is the reachable set of the "left" arm, and \(g_{i,j,\mathbf{R}}\) denotes the inverse kinematic mapping for the "right" arm.
When a rigid object is held with both end effectors, a rigid transformation \(\mathcal{T}\in\mathrm{SE}(3)\) between them becomes fixed; we let \(\phi_{\mathcal{T}}:\mathcal{X}_{\mathbf{L}}\to\mathrm{SE}(3)\) take in an end-effector pose for the left arm (henceforth called the _controlled arm_), and output the target end-effector pose for the right arm (henceforth called the _subordinate arm_). We let \(\mathcal{X}_{\mathcal{T}}:=\{(x,\phi_{\mathcal{T}}(x)):x\in\mathcal{X}_{ \mathbf{L}}\}\subset\mathcal{X}_{\mathbf{L}}\times\mathrm{SE}(3)\) denote the space of end-effector poses which are feasible for the controlled arm and for which the pose of subordinate end-effector respects transformation \(\mathcal{T}\). Note that this latter pose may not be reachable for the subordinate arm, and a choice of redundancy parameter may require a violation of its joint limits. We treat both of these cases as abstract obstacles in the configuration space.
For the remainder of the paper, we fix the global configuration parameter \(i\) and choice of \(\mathcal{W}\)-sheet \(j\) for the second arm. Let \(\mathcal{T}\) be the desired end-effector transformation. We define a _parametrized_ configuration space \(\mathcal{Q}:=\mathcal{C}_{\mathbf{L}}\times\mathcal{M}_{i,j,\mathbf{R}}\). \(q\in\mathcal{Q}\) determines joint angles for both arms via the mapping
\[\xi:(\theta_{\mathbf{L}},\psi_{\mathbf{R}})\mapsto(\theta_{\mathbf{L}},g_{i,j,\mathbf{R}}(\phi_{\mathcal{T}}(f_{\mathbf{L}}(\theta_{\mathbf{L}})),\psi_{ \mathbf{R}})). \tag{2}\]
For more details on why we select this specific parametrization, see Section V. Let \(\theta_{\min}\) and \(\theta_{\max}\) be the lower and upper joint limits. A configuration \((\theta_{\mathbf{L}},\psi_{\mathbf{R}})\) is valid if:
\[\phi_{\mathcal{T}}(f_{\mathbf{L}}(\theta_{\mathbf{L}}))\in\mathcal{W}_{j,\mathbf{R}}\qquad\text{(Respect reachability.)}\tag{3a}\]
\[\theta_{\min}\leq\xi(\theta_{\mathbf{L}},\psi_{\mathbf{R}})\leq\theta_{\max}\qquad\text{(Respect joint limits.)}\tag{3b}\]
We call the set of configurations satisfying these constraints \(\mathcal{Q}_{\mathrm{VALID}}\). For \(q\in\mathcal{Q}\), if the robot is collision free for the joint angles \(\xi(q)\), we say \(q\in\mathcal{Q}_{\mathrm{FREE}}\).
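A minimal sketch of the parametrization is given below (ours, for illustration: `forward_kinematics` and `analytic_ik` are hypothetical stand-ins for \(f_{\mathbf{L}}\) and \(g_{i,j,\mathbf{R}}\), with `analytic_ik` returning `None` when the requested pose lies outside the chosen \(\mathcal{W}\)-sheet; poses are \(4\times 4\) homogeneous transforms):

```python
import numpy as np

def xi(q, T_grasp, forward_kinematics, analytic_ik):
    """Map a parametrized configuration q = (theta_L, psi_R) to joint angles
    for both arms, per equation (2)."""
    theta_L, psi_R = q
    X_L = forward_kinematics(theta_L)   # pose of the controlled end effector
    X_R = X_L @ T_grasp                 # phi_T: target pose for the subordinate arm
    theta_R = analytic_ik(X_R, psi_R)   # g_{i,j,R} within a fixed C-bundle
    return theta_L, theta_R

def is_valid(q, T_grasp, forward_kinematics, analytic_ik, theta_min, theta_max):
    """Check constraints (3a)-(3b): reachability and joint limits."""
    theta_L, theta_R = xi(q, T_grasp, forward_kinematics, analytic_ik)
    if theta_R is None:                 # (3a): pose outside the W-sheet
        return False
    theta = np.concatenate([theta_L, theta_R])
    return bool(np.all(theta_min <= theta) and np.all(theta <= theta_max))  # (3b)
```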
### _Reformulating the Motion Planning Problem_
Let \(\mathfrak{s},\mathfrak{t}\in\mathcal{C}_{\mathbf{L}}\times\mathcal{C}_{\mathbf{ R}}\) be the start and goal configurations. The _constrained motion planning problem_ requires finding a path \(\gamma=(\gamma_{\mathbf{L}},\gamma_{\mathbf{R}}):[0,1]\to\mathcal{C}_{\mathbf{L} }\times\mathcal{C}_{\mathbf{R}}\) by solving:
\[\begin{aligned}\operatorname*{argmin}_{\gamma}\quad&L(\gamma)&&&&\text{(4a)}\\ \text{s.t.}\quad&\gamma(t)\ \text{collision free}&&\forall t\in[0,1]&&\text{(4b)}\\ &\phi_{\mathcal{T}}(f_{\mathbf{L}}(\gamma_{\mathbf{L}}(t)))=f_{\mathbf{R}}(\gamma_{\mathbf{R}}(t))&&\forall t\in[0,1]&&\text{(4c)}\\ &\gamma(0)=\mathfrak{s},\ \gamma(1)=\mathfrak{t}.&&&&\text{(4d)}\end{aligned}\]
(\(L\) denotes the arc length functional, but can be replaced with another cost.) The main challenge this formulation presents is the nonlinear equality constraint (4c), as this requires \(\gamma\) lie along a measure-zero set. Trajectory optimizers may struggle with (4c), and sampling-based planners must use one of the techniques described in Section II.
Our _parametrized motion planning problem_ is written in terms of a trajectory \(\bar{\gamma}:[0,1]\to\mathcal{Q}\), with start \(\bar{\mathfrak{s}}\) and goal \(\bar{\mathfrak{t}}\) satisfying \(\xi(\bar{\mathfrak{s}})=\mathfrak{s}\) and \(\xi(\bar{\mathfrak{t}})=\mathfrak{t}\):
\[\begin{aligned}\operatorname*{argmin}_{\bar{\gamma}}\quad&L(\xi\circ\bar{\gamma})&&&&\text{(5a)}\\ \text{s.t.}\quad&(\xi\circ\bar{\gamma})(t)\ \text{collision free}&&\forall t\in[0,1]&&\text{(5b)}\\ &\bar{\gamma}(t)\in\mathcal{Q}_{\mathrm{VALID}}&&\forall t\in[0,1]&&\text{(5c)}\\ &\bar{\gamma}(0)=\bar{\mathfrak{s}},\ \bar{\gamma}(1)=\bar{\mathfrak{t}}.&&&&\text{(5d)}\end{aligned}\]
This formulation includes the implicit requirement that the entire planned trajectory be within a single \(\mathcal{C}\)-bundle, due to the restricted domain of \(\xi\). In Section IV, we demonstrate that this theoretical limitation is not a major roadblock to our framework's efficacy. A major advantage of our parametrization (and of parametrization methods in general) is that by construction, the end-effector poses \((f_{\mathbf{L}},f_{\mathbf{R}})\circ\xi(\bar{\gamma}(t))\) are _guaranteed_ to be related by transformation \(\mathcal{T}\). For other methodologies, the constraints are only satisfied at discrete points along the trajectory. (See Figure 3.)
Fig. 2: Continuous (left) and discrete (right) self motions of a 7DoF arm.
### _Motion Planning with the Parametrization_
Constraint (5c) is a nonlinear _inequality_ constraint, so feasible trajectories are constrained to lie in a positive volume set \(\mathcal{Q}_{\mathrm{VALID}}\cap\mathcal{Q}_{\mathrm{FREE}}\). This enables standard, unconstrained motion planning algorithms to function with only slight modifications.
#### III-D1 Sampling-Based Planning
The changes required for sampling-based planners can be summarized as treating points outside \(\mathcal{Q}_{\mathrm{VALID}}\) as being in collision. Because \(\mathcal{Q}_{\mathrm{VALID}}\cap\mathcal{Q}_{\mathrm{FREE}}\) has positive measure, rejection sampling can be used to draw valid samples. When connecting samples (as in the "Extend" procedure of an RRT or "Connect" procedure of a PRM), the frequency with which collisions are checked must be adjusted, since distance in the parametrized space \(\mathcal{Q}\) differs from distance in the full configuration space \(\mathcal{C}_{\mathbf{L}}\times\mathcal{C}_{\mathbf{R}}\). In particular, a small motion in \(\mathcal{Q}\) can lead to a relatively large motion in \(\mathcal{C}_{\mathbf{L}}\times\mathcal{C}_{\mathbf{R}}\), so collision checking must be done more frequently (or at a varying scale).
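Concretely, the two modifications amount to rejection sampling over \(\mathcal{Q}\) and a finer edge-checking resolution, as in the sketch below (ours; `is_valid` is assumed to fold together constraints (3) and a collision check through \(\xi\), and `q` is treated as a flat vector):

```python
import numpy as np

def sample_valid(rng, lower, upper, is_valid, max_tries=1000):
    """Rejection sampling: draw q uniformly in Q until it is valid."""
    for _ in range(max_tries):
        q = rng.uniform(lower, upper)
        if is_valid(q):
            return q
    raise RuntimeError("no valid sample found")

def edge_is_valid(q_a, q_b, is_valid, step=1e-2):
    """Check a straight segment in Q at a fine resolution; the step must be
    small because a short motion in Q can move the subordinate arm a lot."""
    n_checks = int(np.ceil(np.linalg.norm(q_b - q_a) / step)) + 1
    for s in np.linspace(0.0, 1.0, n_checks):
        if not is_valid((1 - s) * q_a + s * q_b):
            return False
    return True
```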
#### Iii-D2 Trajectory Optimization
Trajectory optimization in configuration space is already nonconvex, so implementing constraints (5b) and (5c) requires no algorithmic changes. As with sampling-based planning, collision avoidance (and other constraints applied to the full configuration space) must be enforced at a finer resolution.
#### Iii-D3 Graph of Convex Sets
Let \(\mathcal{U}\subseteq\mathcal{Q}_{\mathrm{VALID}}\cap\mathcal{Q}_{\mathrm{FREE}}\) be convex. Then the kinematic validity (and collision-free nature) of a linear path through \(\mathcal{U}\) is guaranteed if its endpoints are contained in \(\mathcal{U}\). Thus, the Graph of Convex Sets Planner (GCS) can function as expected with two small modifications. We minimize the arc length in the parametrized space \(L(\bar{\gamma})\), as this objective provides a useful convex surrogate for the true (nonconvex) objective (5a). Also, for robot arms composed of revolute joints, the self-motion parameters are angle-valued, so one can either make cuts to the configuration space and treat it as Euclidean, or use the extension _Graphs of Geodesically-Convex Sets_ (GGCS) [56]. The product of the angle-valued self-motion parameters will be a circle or n-torus, both of which admit a flat metric [57, p.345]. If we plan across geodesically convex (g-convex) subsets of \(\mathcal{Q}_{\mathrm{VALID}}\cap\mathcal{Q}_{\mathrm{FREE}}\), then the problem satisfies the sufficient conditions presented in Assumptions 1 and 2 of [56]. These assumptions guarantee that the resulting path will be kinematically valid and collision-free at all times.
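As a small illustration of the flat metric on the angle-valued factors (this is not part of GGCS itself, only a sketch of the geometry it relies on), geodesic interpolation on the \(n\)-torus simply moves along the shorter arc of each circle:

```
import numpy as np

def torus_lerp(a, b, t):
    """Geodesic interpolation between angle vectors a and b on the flat
    n-torus: interpolate along the shorter arc of each circle factor."""
    delta = np.arctan2(np.sin(b - a), np.cos(b - a))  # wrapped difference in (-pi, pi]
    return np.mod(a + t * delta + np.pi, 2 * np.pi) - np.pi
```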
### _Constructing Convex Valid Sets_
To use (G)GCS, one must construct (g-)convex subsets of \(\mathcal{Q}_{\mathrm{VALID}}\cap\mathcal{Q}_{\mathrm{FREE}}\). Our approach is based on the IRIS-NP algorithm [49], which uses a counterexample search to find configurations where the robot is in collision, and adds hyperplanes to exclude such configurations. Given a hyperellipsoid \(\mathcal{E}(C,d)=\{q:\left\|q-d\right\|_{C}^{2}\leq 1\}\) (using the notation \(\left\|q-d\right\|_{C}^{2}=(q-d)^{T}C^{T}C(q-d)\)), a halfspace intersection \(\mathcal{H}(A,b)=\{q:Aq\leq b\}\), and a _constraint set_ \(\mathsf{CS}\), the _generalized counterexample search program_ is
\[\min_{q} \left\|q-d\right\|_{C}^{2}\] (6a) s.t. \[Aq\leq b \tag{6b}\] \[q\not\in\mathsf{CS}. \tag{6c}\]
Given a bounding box \(\mathcal{H}_{0}(A_{0},b_{0})\), a hyperellipsoid \(\mathcal{E}(C,d)\) with \(d\in\mathcal{H}_{0}(A_{0},b_{0})\), and a list of configuration-space constraints \(\mathsf{CS}_{1},\ldots,\mathsf{CS}_{k}\) to enforce, Algorithm 1 produces a halfspace intersection \(\mathcal{H}(A,b)\subseteq\mathcal{H}_{0}(A_{0},b_{0})\) such that every point in \(\mathcal{H}(A,b)\) satisfies the constraints.
```
Require: Bounding Box \(\mathcal{H}_{0}(A_{0},b_{0})\)
Require: Hyperellipsoid \(\mathcal{E}(C,d)\) s.t. \(d\in\mathcal{H}_{0}(A_{0},b_{0})\)
Require: Constraint Sets \(\mathsf{CS}_{1},\ldots,\mathsf{CS}_{k}\)
Ensure: Halfspace Intersection \(\mathcal{H}(A,b)\)
1: \(A\gets A_{0}\), \(b\gets b_{0}\)
2: for \(\mathsf{CS}=\mathsf{CS}_{1},\ldots,\mathsf{CS}_{k}\) do
3:   repeat
4:     \((a^{*},b^{*})\leftarrow\textsc{Solve}[(6),\{A,b,C,d,\mathsf{CS}\}]\)
5:     \(A\leftarrow\textsc{VStack}(A,a^{*})\), \(b\leftarrow\textsc{VStack}(b,b^{*})\)
6:   until Infeasible
7: return \(\mathcal{H}(A,b)\)
```
**Algorithm 1** Constrained IRIS (Single Iteration)
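A minimal Python transcription of Algorithm 1 might look as follows, with scipy's SLSQP standing in for SNOPT. Each constraint set \(\mathsf{CS}\) is represented by a hypothetical callable `violation`, with `violation(q) >= 0` exactly when \(q\notin\mathsf{CS}\) (a small margin baked in); the separating hyperplane is taken tangent, at the counterexample, to the uniformly scaled ellipsoid through it, as in IRIS-NP. This is a sketch under those assumptions, not the implementation evaluated in Section IV.

```
import numpy as np
from scipy.optimize import minimize

def counterexample(A, b, C, d, violation, rng, n_restarts=10):
    """Solve program (6): find the q in H(A, b) closest to d in the ||.||_C
    metric with violation(q) >= 0 (i.e. q outside the constraint set)."""
    CtC = C.T @ C
    cons = [{"type": "ineq", "fun": lambda q: b - A @ q},  # (6b): q in H(A, b)
            {"type": "ineq", "fun": violation}]            # (6c): q not in CS
    for _ in range(n_restarts):                            # random initializations
        q0 = d + 0.1 * rng.standard_normal(len(d))
        res = minimize(lambda q: (q - d) @ CtC @ (q - d), q0,
                       method="SLSQP", constraints=cons)
        if res.success and violation(res.x) >= 0 and np.all(A @ res.x <= b + 1e-8):
            return res.x
    return None                                            # treated as "Infeasible"

def constrained_iris_iteration(A0, b0, C, d, violations, rng):
    """One iteration of Algorithm 1: for each constraint set, add separating
    hyperplanes until the counterexample search fails; returns (A, b)."""
    A, b = A0.copy(), b0.copy()
    CtC = C.T @ C
    for violation in violations:        # one callable per CS_1, ..., CS_k
        while True:
            q_star = counterexample(A, b, C, d, violation, rng)
            if q_star is None:
                break
            a_star = CtC @ (q_star - d)  # plane tangent (at q_star) to the
            b_star = a_star @ q_star     # scaled ellipsoid through q_star
            A = np.vstack([A, a_star])
            b = np.append(b, b_star)
    return A, b
```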
First, we require that any inverse trigonometric functions used in the analytic IK mapping \(g_{i,j,\mathbf{R}}\) do not violate their domains. Although this constraint would be enforced by the later constraints, specifically handling this case first greatly improves the performance of the later counterexample searches. For example, [34, Eq. 4] takes the \(\arccos\) of an argument \(w\), so we encode (6c) as \(\left|w\right|\geq 1+\epsilon\). When using the analytic IK solution for the KUKA iiwa, we enforce this constraint for equations (4), (6), (18), and (23) of [34].
Next, we check the joint limits (3b), encoded for (6c) as
\[\max(\xi(q)-\theta_{\max},\theta_{\min}-\xi(q))\geq\epsilon.\]
For reachability counterexamples (3a), we compute the squared Frobenius norm of the difference between the desired and realized end-effector poses, encoding (6c) as

\[||\phi_{\mathcal{T}}(f_{\mathbf{L}}(\theta_{\mathbf{L}}))-f_{\mathbf{R}}(\theta_{\mathbf{R}})||_{F}^{2}\geq\epsilon.\]

Fig. 3: Most existing planners can only enforce constraints at discrete points along the trajectory. Parametrization-based planners (including our approach) satisfy constraints at all points by construction.
These three constraints ensure \(\mathcal{H}(A,b)\subseteq\mathcal{Q}_{\mathrm{VALID}}\). To also enforce \(\mathcal{H}(A,b)\subseteq\mathcal{Q}_{\mathrm{FREE}}\), we search for configurations \(q\) for which the robot is in collision. We separately find counterexamples for each pair of collision bodies, using equation (2) of [49]. Note that this equation operates on the full configuration \((\theta_{\mathbf{L}},\theta_{\mathbf{R}})\), as obtained from the parametrized configuration with \(\xi\). Because (6) is a nonlinear program, we solve it using SNOPT [58] with random initializations until a solution is obtained or a predefined number of consecutive failures is reached (in which case we declare the search infeasible).
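For concreteness, the three validity constraints just described could be encoded as violation callables for the sketch following Algorithm 1. Here `w`, `xi`, `fk_left`, `fk_right`, and `phi_T` are hypothetical stand-ins for the analytic-IK machinery, and the 7-DoF split of the full configuration is an assumption matching the iiwa setup; in practice each function would be bound to its extra arguments (e.g. with `functools.partial`) before being handed to the counterexample search.

```
import numpy as np

def domain_violation(q, w, eps=1e-3):
    """Inverse-trig domain: the arccos argument w(q) leaves [-1, 1],
    encoded as |w(q)| >= 1 + eps."""
    return abs(w(q)) - (1.0 + eps)

def joint_limit_violation(q, xi, theta_min, theta_max, eps=1e-3):
    """Some joint of the full configuration xi(q) exceeds its limits (3b)."""
    theta = xi(q)
    return np.max(np.maximum(theta - theta_max, theta_min - theta)) - eps

def reachability_violation(q, xi, fk_left, fk_right, phi_T, eps=1e-3):
    """Squared Frobenius norm of the pose error (3a) exceeds eps."""
    theta = xi(q)
    theta_L, theta_R = theta[:7], theta[7:]   # split assuming two 7-DoF arms
    err = phi_T(fk_left(theta_L)) - fk_right(theta_R)
    return float(np.sum(err ** 2)) - eps
```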
## IV Results
We demonstrate our new constrained planning framework using a bimanual manipulation setup with two _KUKA iiwa_ 7DoF arms. Interactive recordings of all trajectories are available online at [https://cohnt.github.io/Bimanual-Web/](https://cohnt.github.io/Bimanual-Web/). We compute the analytic IK maps according to the methodology presented in [34]. To evaluate the merits of our IK parametrization for constrained planning, we consider a task where the two arms must move an object around a set of shelves, while avoiding collisions. We test four approaches under our parametrization:
1. _IK-BiRRT._ We use the single-query bidirectional RRT (BiRRT) algorithm [39].
2. _IK-Trajopt._ We directly solve (5) with kinematic trajectory optimization [59, §10.3], using the Drake modeling toolbox [60]. We use the output of the BiRRT planner as the initial guess for the trajectory optimizer.
3. _IK-PRM._ We use the multi-query PRM algorithm [38], initialized with nodes from multiple BiRRTs to ensure connectivity, as in [5, §C].
4. _IK-GCS._ We use the GCS planner [5] with \(19\) regions, constructed from hand-selected seed points.
For both the BiRRT and PRM plans, we use short-cutting to post-process the paths [61]. We solved the GCS problems with Mosek [62]. We compare these parametrized planners with constrained planning baselines.
1. _Constrained Trajectory Optimization._ We solve (4) with kinematic trajectory optimization, using the IK-BiRRT plan as the initial guess to compare with IK-Trajopt.
2. _Sampling-Based Planning._ For sampling-based planners, we use the single-query Atlas-BiRRT and multi-query Atlas-PRM algorithms [12], as implemented in the Open Motion Planning Library [63]. The atlas and PRM are initialized from multiple Atlas-BiRRT runs.
We do not compare to any GCS baseline without IK, as the constraint manifold is inherently nonconvex; IK-GCS is the first proposal for extending GCS to this class of problems.
**Constraint Violations:** Because the baseline methods can only enforce the kinematic constraint at discrete points, the constraint violation can be significant between such points. The OMPL planners experienced a maximum constraint violation of 6.76 cm, and the trajectory optimization baseline experienced a maximum constraint violation of 7.00 cm. In comparison, our parametrization methods maintained all constraints within 0.001 cm. Plans from the trajectory optimization baseline also collided with obstacles.
**Path Length & Planning Time:** Across all methods, for various start and goal configurations, we compare path length in Table I and online planning time in Table II. We ran both BiRRT methods 10 times for each plan, and report the average path length and planning time. We set a maximum planning time of 10 minutes for Atlas-BiRRT, and omit these from the averaging. Out of the 30 runs used for the table, Atlas-BiRRT timed out 22 times. IK-BiRRT never timed out; the longest plan took 319.12 seconds to compute. In Figure 4, we visualize several plans produced by the various constrained planning algorithms.
Table II does not include offline compute time. The time to construct the Atlas-PRM varies greatly; with three random seeds, it took 326.30, 1878.30, and 5405.54 seconds. Constructing the IK-PRM took 12124.21 seconds, and constructing the IRIS regions for GCS took 18361.36 seconds (966.39 seconds per region on average). The IRIS region construction can also be parallelized, improving runtime.
Overall, GCS is consistently able to achieve the fastest online runtimes, as a result of the offline precomputation of IRIS regions. IK-Trajopt is sometimes able to find shorter paths than GCS, since it has fewer constraints, but it can get stuck in local minima (see Figure 4). Although the atlas methodologies may find shorter paths than their IK counterparts, this is at the cost of significantly higher runtimes and potentially large kinematic constraint violations.
**Task Space Coverage of IRIS Regions.** In Figure 5, we superimpose the end-effector poses from many sampled bimanual configurations within individual IRIS regions. Despite the complicated nonlinear mapping, these convex sets
TABLE I: Path lengths (measured in configuration space) for each method with various start and goal configurations. Paths marked with an asterisk were not collision-free.

| Method | Top to Middle | Middle to Bottom | Bottom to Top |
| --- | --- | --- | --- |
| Trajopt | 4.24* | 2.66* | 6.10* |
| Atlas-BiRRT | 5.57 | 5.60 | 6.76 |
| Atlas-PRM | 7.24 | 7.09 | 8.56 |
| IK-Trajopt | 2.64 | **3.00** | **4.75** |
| IK-BiRRT | 9.19 | 10.63 | 18.36 |
| IK-PRM | 4.13 | 13.75 | 13.65 |
| IK-GCS | **2.09** | 3.32 | 5.62 |

TABLE II: Online planning time (in seconds) for each method with various start and goal configurations. Atlas-BiRRT runtimes were only averaged over successful runs (not including timeouts).

| Method | Top to Middle | Middle to Bottom | Bottom to Top |
| --- | --- | --- | --- |
| Trajopt | 43.78 | 48.03 | 61.50 |
| Atlas-BiRRT | 118.66 | 253.34 | 421.30 |
| Atlas-PRM | 7.54 | 14.71 | 17.28 |
| IK-Trajopt | 63.92 | 94.92 | 109.84 |
| IK-BiRRT | 57.10 | 70.50 | 97.08 |
| IK-PRM | 31.38 | 45.88 | 32.36 |
| IK-GCS | **3.41** | **2.32** | **3.32** |
are able to cover large swaths of task space, as shown in Figure 5 (a). In Figure 5 (b), we demonstrate that IRIS regions can reliably encompass the motions required to reach into and out of a shelf. In Figure 5 (c), we visualize an IRIS region that allows the grasp distance to vary. GCS can use such regions to plan motions for objects of different sizes; we include hardware demonstrations in our results video.
## V Discussion
We presented a novel parametrization of the constrained configuration space that arises in bimanual manipulation, which can be leveraged by both sampling-based planners and trajectory optimizers for more efficient planning. Our parametrization can be used to find shorter paths more quickly than existing approaches, and these paths will satisfy the kinematic constraints at all points along the trajectory. This parametrization also enables the use of planners such as GCS, which previously could not be applied to configuration spaces with nonlinear equality constraints.
Our parametrization is inherently asymmetric. Other choices of parametrization may seem more natural, such as:
1. Treating the end-effector configuration and redundancy parameters for both arms as the free variables, and using analytic IK for both arms.
2. Treating the first four joints of each arm as free variables, and solving IK for the remaining six joints as a virtual 6DoF arm whose middle link is represented by the object held by both end-effectors.
For the first option, we would have to choose global configuration parameters for both arms; in the case of the KUKA iiwa, this involves 64 choices (instead of the 8 options for our parametrization). Also, the shortest paths for the end effector may lead to very inefficient paths in joint space. (Our parametrization can at least minimize the work for one of the arms.) Finally, it requires planning over \(\mathrm{SO}(3)\), which cannot be used with GCS (see [56, Thm. 5]).
For the second option, the choice of end-effector transformation \(\mathcal{T}\) determines the kinematic structure of the virtual arm, so different grasps would require different analytic IK solutions. Constructing such solutions would be time-consuming, and they may not always exist.
Fig. 4: Planned trajectories for reaching into shelves. The paths denote the end effector’s motion, and are colored by method.
Fig. 5: Robot configurations sampled from various IRIS regions. |
# Hardness Transitions of Star Colouring and Restricted Star Colouring

Shalu M. A., Cyriac Antony
###### Abstract
We study how the complexity of the graph colouring problems star colouring and restricted star colouring varies with the maximum degree of the graph. Restricted star colouring (in short, rs colouring) is a variant of star colouring, as the name implies. For \(k\in\mathbb{N}\), a \(k\)-colouring of a graph \(G\) is a function \(f\colon V(G)\to\mathbb{Z}_{k}\) such that \(f(u)\neq f(v)\) for every edge \(uv\) of \(G\). A \(k\)-colouring of \(G\) is called a \(k\)-star colouring of \(G\) if there is no path \(u,v,w,x\) in \(G\) with \(f(u)=f(w)\) and \(f(v)=f(x)\). A \(k\)-colouring of \(G\) is called a \(k\)-rs colouring of \(G\) if there is no path \(u,v,w\) in \(G\) with \(f(v)>f(u)=f(w)\). For \(k\in\mathbb{N}\), the problem \(k\)-Star Colourability takes a graph \(G\) as input and asks whether \(G\) admits a \(k\)-star colouring. The problem \(k\)-RS Colourability is defined similarly. Recently, Brause et al. (Electron. J. Comb., 2022) investigated the complexity of \(3\)-star colouring with respect to the graph diameter. We study the complexity of \(k\)-star colouring and \(k\)-rs colouring with respect to the maximum degree for all \(k\geq 3\). For \(k\geq 3\), let us denote the least integer \(d\) such that \(k\)-Star Colourability (resp. \(k\)-RS Colourability) is NP-complete for graphs of maximum degree \(d\) by \(L_{s}^{(k)}\) (resp. \(L_{rs}^{(k)}\)).
We prove that for \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\) (i.e., \(L_{s}^{(k)}\leq k-1\)). We also show that \(4\)-RS Colourability is NP-complete for planar \(3\)-regular graphs of girth \(5\) and \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\) for \(k\geq 5\) (i.e., \(L_{rs}^{(k)}\leq k-1\)). Using these results, we prove the following: (i) for \(k\geq 4\) and \(d\leq k-1\), \(k\)-Star Colourability is NP-complete for \(d\)-regular graphs if and only if \(d\geq L_{s}^{(k)}\); and (ii) for \(k\geq 4\), \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs if and only if \(L_{rs}^{(k)}\leq d\leq k-1\). It is not known whether result (ii) has a star colouring analogue.
## 1 Introduction and Definitions
Star colouring is a colouring variant introduced by Grünbaum [2], which is used in the estimation of sparse Hessian matrices [3]. Restricted star colouring (abbreviated rs colouring) is a variant of star colouring first introduced specifically for this application [4]. It was later introduced independently under the names independent set star partition [5], and with the order of colours reversed, unique-superior colouring [6] and \(2\)-ranking [7, 8]. The complexity of star colouring is studied in various graph classes [1, 9, 10, 11, 20, 21]. Although rs colouring is not as widely known, several papers already focus on rs colouring [5, 6, 7, 8]. Interestingly, rs colouring can be defined in terms of locally constrained graph homomorphisms (see Section 3.2 for details).
Brause et al. [20] investigated the complexity of \(3\)-star colouring with respect to the graph diameter. For \(k\geq 3\), we study the complexity of \(k\)-star colouring and \(k\)-rs colouring with respect to the maximum degree, focusing on graphs of maximum degree \(d\) and on \(d\)-regular graphs. Our interest is in the values of \(d\) for which the complexity of \(k\)-star colouring in graphs of maximum degree \(d\) (resp. \(d\)-regular graphs) differs drastically from that in graphs of maximum degree \(d-1\) (resp. \((d-1)\)-regular graphs); we call such values of \(d\) _hardness transitions_ of \(k\)-star colouring with respect to the maximum degree for the class of graphs of maximum degree \(d\) (resp. \(d\)-regular graphs); see Section 1.2 for details. Hardness transitions of rs colouring are defined similarly.
This paper is organised as follows. Subsections 1.1, 1.2 and 1.3 present basic definitions, definitions related to hardness transitions, and an overview of our results, respectively. This is followed by Section 2, devoted to star colouring and Section 3, devoted to rs colouring.
Section 2 is subdivided into subsections 2.1, 2.2 and 2.3, which present (i) an introduction and literature survey on star colouring, (ii) details of our results on hardness transitions of star colouring and (iii) consequences of our results on the values of \(L_{s}^{(k)}\) and two similar parameters, respectively.
Similarly, Section 3 is subdivided into subsections 3.1, 3.2, 3.3 and 3.4, which present (i) an introduction and literature survey on rs colouring, (ii) characterisation of rs colouring in terms of graph homomorphisms, (iii) details of our results on hardness transitions of rs colouring and (iv) consequences of our results on the value of \(L_{rs}^{(k)}\), respectively. We conclude with Section 4.
### Basic Definitions
All graphs considered in this paper are finite, simple and undirected unless otherwise specified. We follow West [22] for graph theory terminology and notation. When the graph is clear from the context, we denote the number of edges of the graph by \(m\) and the number of vertices by \(n\). For a graph \(G\), we denote the maximum degree of \(G\) by \(\Delta(G)\). For a subset \(S\) of the vertex set of \(G\), the _subgraph of \(G\) induced by \(S\)_ is denoted by \(G[S]\). An _orientation_ \(\vec{G}\) of a graph \(G\) is the directed graph obtained by assigning a direction to each edge of \(G\); that is, if \(uv\) is an edge in \(G\), then either \((u,v)\) or \((v,u)\) is an arc in \(\vec{G}\). An orientation is also called an _oriented graph_. If \((u,v)\) is an arc in an oriented graph \(\vec{G}\), then \(u\) is an _in-neighbour_ of \(v\) in \(\vec{G}\). We denote the neighbourhood of a vertex \(v\) in a graph \(G\) by \(N_{G}(v)\), and the in-neighbourhood of a vertex \(v\) in an oriented graph \(\vec{G}\) by \(N^{-}_{\vec{G}}(v)\). The _girth_ of a graph with a cycle is the length of its shortest cycle. The treewidth of \(G\) is denoted as \(\operatorname{tw}(G)\). A 3-regular graph is also called a _cubic graph_, and a graph of maximum degree 3 is called a _subcubic graph_. A graph \(G\) is _2-degenerate_ if there exists a left-to-right ordering of its vertices such that every vertex has at most two neighbours to its left.
If \(G\) is a graph and \(u,v\) are non-adjacent vertices in \(G\), the operation of _identifying_ vertices \(u\) and \(v\) in \(G\) involves (i) introducing a new vertex \(w^{*}\), (ii) joining \(w^{*}\) to each vertex in \(N_{G}(u)\cup N_{G}(v)\), and (iii) removing vertices \(u\) and \(v\)[22]. That is, in the resultant graph, say \(G^{*}\), the neighbourhood of \(w^{*}\) is \(N_{G^{*}}(w^{*})=N_{G}(u)\cup N_{G}(v)\). Since vertices \(u\) and \(v\) are removed, the new vertex \(w^{*}\) in \(G^{*}\) can be unambiguously called as \(u\) (or \(v\) for that matter). Observe that by definition, if vertices \(u\) and \(v\) have a common neighbour \(x\) in \(G\), then there is only one edge from \(w^{*}\) to \(x\) in \(G^{*}\) (i.e., no parallel edges in \(G^{*}\)).
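As a concrete illustration (used again for the chain gadget in Section 2.2), here is a small sketch of vertex identification on a networkx graph; the built-in `nx.contracted_nodes(G, u, v, self_loops=False)` performs the same operation for non-adjacent \(u\) and \(v\).

```
import networkx as nx

def identify(G, u, v):
    """Identify non-adjacent vertices u and v of G in place; the merged
    vertex keeps the name u and has neighbourhood N(u) union N(v)."""
    assert not G.has_edge(u, v)
    G.add_edges_from((u, x) for x in list(G.neighbors(v)))
    G.remove_node(v)
    return G
```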
A \(k\)-colouring of a graph \(G\) is a function \(f\) from the vertex set of \(G\) to a set of \(k\) colours, say \(\{0,1,\ldots,k-1\}\), such that \(f\) maps every pair of adjacent vertices to different colours. A \(k\)-star colouring of \(G\) is a \(k\)-colouring \(f\) of \(G\) such that there is no bicoloured \(P_{4}\) in \(G\) (i.e., there is no path \(u,v,w,x\) in \(G\) with \(f(u)=f(w)\) and \(f(v)=f(x)\)). A \(k\)-rs colouring of \(G\) is a \(k\)-colouring \(f\) of \(G\) such that there is no bicoloured \(P_{3}\) in \(G\) with higher colour on its middle vertex (i.e., there is no path \(u,v,w\) in \(G\) with \(f(v)>f(u)=f(w)\)). See Figure 1 for examples.
Figure 1: (a) a 4-colouring of a graph, which is not a 4-star colouring (bicoloured \(P_{4}\) highlighted), (b) a 3-star colouring of a graph, which is not a 3-rs colouring (bicoloured \(P_{3}\) with higher colour on middle highlighted), and (c) a 4-rs colouring of a graph.
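These conditions are easy to check mechanically. The following sketch (using networkx graphs, with a colouring given as a dict `f` from vertices to integers) tests whether a colouring is a star colouring or an rs colouring; it is provided for illustration only.

```
import networkx as nx

def is_colouring(G, f):
    """Proper colouring: adjacent vertices get different colours."""
    return all(f[u] != f[v] for u, v in G.edges)

def is_star_colouring(G, f):
    """No path u,v,w,x in G with f(u) = f(w) and f(v) = f(x)."""
    if not is_colouring(G, f):
        return False
    for v, w in G.edges:                    # middle edge of a candidate P4
        for u in set(G[v]) - {w}:
            for x in set(G[w]) - {v, u}:
                if f[u] == f[w] and f[v] == f[x]:
                    return False
    return True

def is_rs_colouring(G, f):
    """No path u,v,w in G with f(v) > f(u) = f(w)."""
    if not is_colouring(G, f):
        return False
    for v in G:
        nbrs = list(G[v])                   # v is the middle vertex
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if f[u] == f[w] and f[v] > f[u]:
                    return False
    return True
```

For instance, on `nx.path_graph(3)` the map `{0: 0, 1: 1, 2: 0}` passes `is_star_colouring` but fails `is_rs_colouring`, mirroring the situation in Figure 1(b).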
The _star chromatic number_ \(\chi_{s}(G)\) of a graph \(G\) is the least integer \(k\) such that \(G\) is \(k\)-star colourable. The _rs chromatic number_ \(\chi_{rs}(G)\) is defined similarly. The star chromatic number of the line graph of \(G\) is called the _star chromatic index_ of \(G\). The problem Star Colourability takes a graph \(G\) and a positive integer \(k\) as input and asks whether \(G\) is \(k\)-star colourable. For \(k\in\mathbb{N}\), the decision problem \(k\)-Colourability takes a graph \(G\) as input and asks whether \(G\) is \(k\)-colourable. The problems \(k\)-Star Colourability and \(k\)-RS Colourability are defined analogously. To denote the restriction of a decision problem, we write the conditions in parentheses. For instance, \(4\)-Star Colourability(bipartite, \(\Delta=5\)) denotes the problem \(4\)-Star Colourability restricted to the class of bipartite graphs \(G\) with \(\Delta(G)=5\).
A _homomorphism_ from a graph \(G\) to a graph \(H\) is a function \(\psi\colon V(G)\to V(H)\) such that \(\psi(u)\psi(v)\) is an edge in \(H\) whenever \(uv\) is an edge in \(G\). A _homomorphism_ from an oriented graph \(\vec{G}\) to an oriented graph \(\vec{H}\) is a function \(\psi\colon V(\vec{G})\to V(\vec{H})\) such that \((\psi(u),\psi(v))\) is an arc in \(\vec{H}\) whenever \((u,v)\) is an arc in \(\vec{G}\). A homomorphism \(\psi\) from an oriented graph \(\vec{G}\) to an oriented graph \(\vec{H}\) is _neighbourhood injective_ if for every vertex \(v\) of \(\vec{G}\), the restriction of \(\psi\) to the in-neighbourhood \(N^{-}_{\vec{G}}(v)\) is an injective function from \(N^{-}_{\vec{G}}(v)\) to \(N^{-}_{\vec{H}}(\psi(v))\) (i.e., \(\psi\) maps distinct in-neighbours of \(v\) to distinct in-neighbours of \(\psi(v)\)). An _automorphism_ of a graph \(G\) is a bijective function \(\psi\colon V(G)\to V(G)\) such that \(\psi(u)\psi(v)\) is an edge in \(G\) if and only if \(uv\) is an edge in \(G\).
### Hardness Transitions
Analysing the boundary between easy (i.e., polynomial-time solvable) and hard (e.g., NP-complete) problems is a common theme in complexity theory [23]. Studying the change in the complexity of a problem in response to a change in a single parameter falls in this category. Brause et al. [20] studied the complexity of \(3\)-Star Colourability with the diameter of the graph as the parameter. For \(k\geq 3\), we study the complexity of \(k\)-star colouring and \(k\)-rs colouring with the maximum degree of the graph as the parameter. Recall that we write the conditions in parentheses to denote the restriction of a decision problem; e.g.: \(4\)-Star Colourability(bipartite, \(\Delta=5\)) denotes the problem \(4\)-Star Colourability restricted to the class of bipartite graphs \(G\) with \(\Delta(G)=5\). We assume \(\mathrm{P}\neq\mathrm{NP}\) throughout this paper; thus, \(\mathrm{NP}\) is partitioned into three classes: \(\mathrm{P}\), \(\mathrm{NPC}\) and \(\mathrm{NPI}\) [24]. We emphasise that our interest is in the classification of NP-problems with respect to the \(\mathrm{P}\) vs. \(\mathrm{NPC}\) vs. \(\mathrm{NPI}\) trichotomy: that is, the complexity classes dealt with in this paper are only \(\mathrm{P}\), \(\mathrm{NPC}\) and \(\mathrm{NPI}\).
A decision problem \(\Pi\) in \(\mathrm{NP}\) has a _hardness transition_ with respect to a discrete parameter \(d\) at a point \(d=x\) if \(\Pi(d=x)\) and \(\Pi(d=x-1)\) belong to different complexity classes among \(\mathrm{P}\), \(\mathrm{NPC}\) and \(\mathrm{NPI}\) (e.g.: \(\Pi(d=x)\in\mathrm{NPC}\) whereas \(\Pi(d=x-1)\in\mathrm{P}\); see [25] for a discussion). For example, \(3\)-Colourability of a graph of maximum degree \(d\) is polynomial-time solvable for \(d=3\) (due to Brooks' theorem) and \(\mathrm{NP}\)-complete for \(d=4\) [23]. That is, \(3\)-Colourability(\(\Delta=3\)) \(\in\mathrm{P}\) and \(3\)-Colourability(\(\Delta=4\)) \(\in\mathrm{NPC}\). Hence, \(3\)-Colourability has a hardness transition with respect to the maximum degree \(d\) at the point \(d=4\). Note that each hardness transition presumably deals with the \(\mathrm{P}\) vs. \(\mathrm{NPC}\) boundary since no 'natural' problem is known to be \(\mathrm{NP}\)-intermediate [26].
The number of hardness transitions depends on the problem as well as the parameter under consideration. Interestingly, a decision problem can have infinitely many hardness transitions. Cseh and Kavitha [27] proved that the popular matching problem on complete graph \(K_{n}\) is in \(\mathrm{P}\) for odd \(n\) whereas it is \(\mathrm{NP}\)-complete for even \(n\). Therefore, the popular matching problem on complete graph with respect to the number of vertices \(n\) has infinitely many hardness transitions.
Let us consider the complexity of \(k\)-colouring in bounded degree graphs for fixed \(k\geq 3\). Emden-Weinert et al. [28] proved that \(k\)-Colourability is NP-complete for graphs of maximum degree \(k-1+\left\lceil\sqrt{k}\right\rceil\). Observe that if \(k\)-Colourability is \(\mathrm{NP}\)-complete for graphs of maximum degree \(d\), then it is \(\mathrm{NP}\)-complete for graphs of maximum degree \(d+1\) (to produce a reduction, it suffices to add a disjoint copy of \(K_{1,d+1}\)). This suggests the following problem.
**Problem 1**.: _For \(k\geq 3\), what is the least integer \(d\) such that \(k\)-Colourability is \(\mathrm{NP}\)-complete for graphs of maximum degree \(d\)?_
Observe that Problem 1 deals with locating a point of hardness transition. By the same argument, if \(k\)-Star Colourability is \(\mathrm{NP}\)-complete for graphs of maximum degree \(d\), then \(k\)-Star Colourability is \(\mathrm{NP}\)-complete for graphs of maximum degree \(d+1\). The same is true of \(\mathrm{rs}\) colouring. For \(k\geq 3\), \(k\)-Star Colourability and \(k\)-RS Colourability are \(\mathrm{NP}\)-complete for graphs of maximum degree \(k\)[18, 19]. Therefore, for each \(k\geq 3\), there exists a unique integer \(d^{*}\) such that
\(k\)-Colourability (resp. \(k\)-Star Colourability or \(k\)-RS Colourability) in graphs of maximum degree \(d\) is NP-complete if and only if \(d\geq d^{*}\). Thus, one can ask the counterpart of Problem 1 for star colouring and rs colouring. Let \(L^{(k)}\), \(L^{(k)}_{s}\) and \(L^{(k)}_{rs}\) denote the answers to Problem 1 and its counterparts for star colouring and rs colouring; that is, \(L^{(k)}\) (resp. \(L^{(k)}_{s}\) or \(L^{(k)}_{rs}\)) is the least integer \(d\) such that \(k\)-Colourability (resp. \(k\)-Star Colourability or \(k\)-RS Colourability) is NP-complete for graphs of maximum degree \(d\).
Due to Brooks' theorem, \(k\)-Colourability is polynomial-time solvable for graphs of maximum degree \(k\), and thus \(L^{(k)}\geq k+1\). For \(k\geq 3\), \(k\)-Colourability is NP-complete for graphs of maximum degree \(k-1+\left\lceil\sqrt{k}\right\rceil\) [28], and thus \(k+1\leq L^{(k)}\leq k-1+\left\lceil\sqrt{k}\right\rceil\). Hence, \(L^{(3)}=4\), \(L^{(4)}=5\), \(6\leq L^{(5)}\leq 7\), and so on. For sufficiently large \(k\) and \(d<k-1+\left\lceil\sqrt{k}\right\rceil\), the problem \(k\)-Colourability is in P for graphs of maximum degree \(d\) [29, Theorem 43]. Therefore, \(L^{(k)}=k-1+\left\lceil\sqrt{k}\right\rceil\) for sufficiently large \(k\). Yet, the exact value of \(L^{(k)}\) is unknown for small values of \(k\) such as \(k=5\) even though we know that \(L^{(5)}\in\{6,7\}\) (the complexity of \(5\)-Colourability in graphs of maximum degree \(6\) is open [30]).
### Our Results
For \(k\geq 3\), \(k\)-Star Colourability and \(k\)-RS Colourability are NP-complete for graphs of maximum degree \(k\)[18, 19]. For \(k\geq 4\), we improve the maximum degree in these NP-completeness results to \(k-1\), except for \(k\)-Star Colourability with \(k\in\{4,6\}\).
We show that \(4\)-RS Colourability is NP-complete for planar \(3\)-regular graphs of girth \(5\), and \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\) for \(k\geq 5\). We also prove that for \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\). In contrast, \(k\)-Star Colourability (resp. \(k\)-RS Colourability) is polynomial-time solvable for graphs of maximum degree at most \(0.33\,k^{2/3}\) (resp. \(\sqrt{k}\)). Hence, for \(k\geq 4\), we have \(0.33\,k^{2/3}<L^{(k)}_{s}\leq k\) and \(\sqrt{k}<L^{(k)}_{rs}\leq k-1\).
The slight improvement of the maximum degree in the NP-completeness results of [18, 19] (see the first paragraph in this subsection) allows us to prove the following.
* For \(k\geq 4\) and \(d\leq k-1\), \(k\)-Star Colourability is NP-complete for \(d\)-regular graphs if and only if \(d\geq L^{(k)}_{s}\).
* For \(k\geq 4\), \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs if and only if \(L^{(k)}_{rs}\leq d\leq k-1\).
It is not known whether the preceding result has a star colouring analogue (i.e. a result of the following form: for \(k\geq 4\), there exist integers \(\ell_{k}\) and \(h_{k}\) such that \(k\)-Star Colourability is NP-complete for \(d\)-regular graphs if and only if \(\ell_{k}\leq d\leq h_{k}\)).
## 2 Star Colouring
### Introduction and Literature Survey
Star colouring is studied in various graph classes such as planar graphs, bipartite graphs, regular graphs, sparse graphs and line graphs. For surveys on star colouring of planar graphs and line graphs, see [31, Section 14] and [32] respectively.
Albertson et al. [10] proved that the maximum among star chromatic numbers of planar graphs is between \(10\) and \(20\). Kierstead et al. [33] proved that the maximum among star chromatic numbers of planar bipartite graphs is between \(8\) and \(14\). Fertin et al. [34] proved that \(\chi_{s}(G)\leq\binom{\text{tw}(G)+2}{2}\). They also proved that \(\chi_{s}(G)=O(d^{\frac{3}{2}})\) where \(d=\Delta(G)\); and that there exist graphs \(G\) with \(\chi_{s}(G)=\Omega(d^{\frac{3}{2}}/(\log d)^{\frac{1}{2}})\). The star chromatic number of the \(d\)-dimensional hypercube is at most \(d+1\) [34]. Nešetřil and Ossona de Mendez [35] related the star chromatic number of a graph to the chromatic numbers of its minors. Albertson et al. [10] and independently Nešetřil and Ossona de Mendez [35] found that star colourings of a graph \(G\) are associated with orientations of \(G\). For every \(3\)-regular graph \(G\), we have \(4\leq\chi_{s}(G)\leq 6\) [36, 37].
Lyons [11] proved that \(\chi_{s}(G)=\text{tw}(G)+1\) for every cograph \(G\). Linhares-Sales et al. [12] designed a linear-time algorithm to compute the star chromatic number for two superclasses of cographs called \(P_{4}\)-tidy graphs and \((q,q-4)\)-graphs (for each fixed \(q\)). Omoomi et al. [16] designed a polynomial-time algorithm to compute the star chromatic number of line graph of trees.
Coleman and Moré [9] proved that for \(k\geq 3\), \(k\)-Star Colourability is NP-complete for (2-degenerate) bipartite graphs. Albertson et al. [10] proved that 3-Star Colourability is NP-complete for planar bipartite graphs. Lei et al. [15] proved that 3-Star Colourability is NP-complete for line graphs of subcubic graphs. Bok et al. [17] provided complexity dichotomy results on Star Colourability and \(k\)-Star Colourability in \(H\)-free graphs except for one open case, namely Star Colourability in \(2K_{2}\)-free graphs. Bok et al. [38] and independently Shalu and Antony [18] proved that Star Colourability is NP-complete for co-bipartite graphs. Brause et al. [20] proved that 3-Star Colourability in graphs of diameter at most \(d\) is polynomial-time solvable for \(d\leq 3\), but NP-complete for \(d\geq 8\). Bok et al. [17] and independently Shalu and Antony [19] proved that 3-Star Colourability is NP-complete for planar (bipartite) graphs of maximum degree 3 and arbitrarily large girth.
Gebremedhin et al. [4] proved that for all \(\epsilon>0\), it is NP-hard to approximate the star chromatic number of a (2-degenerate) bipartite graph within \(n^{\frac{1}{3}-\epsilon}\). In contrast, every 2-degenerate graph admits a star colouring with \(n^{\frac{1}{3}}\) colours [6, Theorem 6.2] (recall that every unique superior colouring is an rs colouring with colours reversed, and thus a star colouring). Hence, the star chromatic number of a 2-degenerate graph is approximable within \(n^{\frac{1}{3}}\).
For \(k\in\mathbb{N}\), \(k\)-Star Colourability can be expressed in Monadic Second-Order logic without edge set quantification (i.e., \(\text{MSO}_{1}\)) [19], and thus admits FPT algorithms with parameter either treewidth or cliquewidth by Courcelle's theorem [39, 40]. It is easy to observe that the transformation from \(k\)-Colourability to \(k\)-Star Colourability in [9] is a Polynomial Parameter Transformation (PPT) [41] when both problems are parameterized by treewidth (e.g. see [19]). As a result, for \(k\geq 3\), \(k\)-Star Colourability parameterized by treewidth does not admit a polynomial kernel unless \(\text{NP}\subseteq\text{coNP}/\text{poly}\). Bhyravarapu and Reddy [21] proved that Star Colourability is fixed-parameter tractable when parameterized by (i) neighbourhood diversity, (ii) twin-cover, and (iii) the combined parameters cliquewidth and the number of colours.
### Hardness Transitions
We show that for \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\). First, we deal with \(k\geq 7\) (smaller values of \(k\) are discussed later). We employ Construction 1 below to prove that for every \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\). Fix an integer \(k\geq 7\).
The graph in Figure 2(a), called the gadget component, is used to build the main gadget in Construction 1. In the gadget component, \(w_{i}\) is adjacent to \(x_{j}\) for all \(i,j\in\{1,2,\ldots,k-2\}\). Consider the \(k\)-colouring of the gadget component exhibited in Figure 2(b). Under this colouring, (i) every vertex with a binary colour (i.e., colour 0 or 1) has at most one neighbour with a binary colour (thus ruling out the possibility of a 4-vertex path coloured using only colours 0 and 1), and (ii) for each non-binary colour \(c\), exactly one vertex in the gadget has colour \(c\) (thus ruling out the possibility of colour \(c\) appearing in a bicoloured \(P_{4}\)). Hence, the \(k\)-colouring exhibited is a \(k\)-star colouring of the gadget component. Clearly, the graph in Figure 3 is a subgraph of the gadget component. Since the gadget component is \(k\)-star colourable, so is the graph in Figure 3. Let \(U=\{u_{1},u_{2},\ldots,u_{k-4}\}\), \(W=\{w_{1},w_{2},\ldots,w_{k-2}\}\), \(X=\{x_{1},x_{2},\ldots,x_{k-2}\}\), and \(Y=\{y_{1},y_{2},\ldots,y_{k-4}\}\). The next lemma shows that \(U\cup W\) or \(X\cup Y\) is bicoloured by each \(k\)-star colouring of the graph in Figure 3. This is later used to prove that \(U\cup W\) is bicoloured by each \(k\)-star colouring of the gadget component.

Figure 2: (a) The gadget component, and (b) a \(k\)-star colouring of it.
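The edge set of the gadget component is only partly spelled out in the text (the \(W\)–\(X\) edges); the remaining edges in the sketch below (\(u_{i}w_{i}\), \(w_{k-3}w_{k-2}\), \(y_{j}x_{j}\), \(x_{k-3}x_{k-2}\) and \(y_{1}y_{2}\)) are our reading of Figures 2–3, Lemma 1 and the \((U\cup W)\)/\((X\cup Y)\) symmetry, so the following should be treated as a reconstruction rather than the authors' exact definition.

```
import networkx as nx

def gadget_component(k):
    """Reconstruction of the gadget component for a fixed k >= 7
    (the exact edge list is inferred, as noted above)."""
    G = nx.Graph()
    U = [("u", i) for i in range(1, k - 3)]          # u_1, ..., u_{k-4}
    W = [("w", i) for i in range(1, k - 1)]          # w_1, ..., w_{k-2}
    X = [("x", i) for i in range(1, k - 1)]          # x_1, ..., x_{k-2}
    Y = [("y", i) for i in range(1, k - 3)]          # y_1, ..., y_{k-4}
    G.add_edges_from((w, x) for w in W for x in X)   # complete bipartite W-X
    G.add_edges_from(zip(U, W))                      # u_i w_i for i <= k-4
    G.add_edge(("w", k - 3), ("w", k - 2))
    G.add_edges_from(zip(Y, X))                      # y_j x_j (by symmetry)
    G.add_edge(("x", k - 3), ("x", k - 2))           # mirror of w_{k-3} w_{k-2}
    G.add_edge(("y", 1), ("y", 2))                   # the edge missing from Figure 3
    return G
```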
**Lemma 1**.: _At least \(k\) colours \((k\geq 7)\) are required to star colour the graph in Figure 3. Moreover, for every \(k\)-star colouring \(f\) of the graph in Figure 3, there exist distinct colours \(c_{1}\) and \(c_{2}\) such that either \(f(U\cup W)\subseteq\{c_{1},c_{2}\}\) or \(f(X\cup Y)\subseteq\{c_{1},c_{2}\}\)._
Proof.: Let \(f\) be a star colouring of the graph in Figure 3. Note that repetition of colours can occur in at most one of the sets \(W\) and \(X\) (if \(f(w_{i})=f(w_{j})\) for distinct \(w_{i},w_{j}\in W\) and \(f(x_{p})=f(x_{q})\) for distinct \(x_{p},x_{q}\in X\), then \(w_{i},x_{p},w_{j},x_{q}\) is a bicoloured \(P_{4}\)). Hence, vertices in \(W\) have pairwise distinct colours, or vertices in \(X\) have pairwise distinct colours. If vertices in \(X\) have pairwise distinct colours, then \(k-2\) colours are required for vertices in \(X\) and two new colours are required for \(w_{k-3}\) and \(w_{k-2}\) since \(w_{k-3}w_{k-2}\) is an edge. Similarly, if vertices in \(W\) have pairwise distinct colours, then at least \(k\) colours are required to star colour the graph in Figure 3. Hence, at least \(k\) colours are required to star colour the graph in Figure 3.
Suppose that \(f\) is a \(k\)-star colouring of the graph in Figure 3. We know that vertices in \(W\) have pairwise distinct colours, or vertices in \(X\) have pairwise distinct colours. Note that both cannot occur at the same time (if vertices in \(W\) have pairwise distinct colours (i.e., \(|f(W)|=k-2\)), then only two colours are available for each vertex in \(X\) under \(f\), and thus at least two vertices in \(X\) have the same colour by the pigeonhole principle). Thus, exactly one of the two holds: either vertices in \(W\) have pairwise distinct colours, or vertices in \(X\) have pairwise distinct colours. We prove that (i) if vertices in \(X\) have pairwise distinct colours, then \(U\cup W\) is bicoloured by \(f\), and (ii) if vertices in \(W\) have pairwise distinct colours, then \(X\cup Y\) is bicoloured by \(f\). By symmetry, it suffices to prove (i).
Suppose that the vertices in \(X\) have pairwise distinct colours. That is, \(k-2\) colours are used in set \(X\), and thus only two colours, say colour \(0\) and colour \(1\), are not used in \(X\). Without loss of generality, assume that \(f(x_{j})=j+1\) for all \(x_{j}\in X\). For every vertex \(w_{i}\in W\), only two colours are available, namely colour \(0\) and colour \(1\) (i.e., \(f(w_{i})\in\{0,1\}\)).
Since \(w_{k-3}w_{k-2}\) is an edge and \(f(w_{k-3}),f(w_{k-2})\in\{0,1\}\), we can assume without loss of generality that \(f(w_{k-3})=0\) and \(f(w_{k-2})=1\). Consider the colour at vertices \(w_{1}\) and \(u_{1}\). If \(f(w_{1})=0\), then \(f(u_{1})\neq f(x_{j})\) for any \(x_{j}\in X\) (otherwise, \(u_{1},w_{1},x_{j},w_{k-3}\) is a bicoloured \(P_{4}\)). Similarly, if \(f(w_{1})=1\), then \(f(u_{1})\neq f(x_{j})\) for any \(x_{j}\in X\) (otherwise, \(u_{1},w_{1},x_{j},w_{k-2}\) is a bicoloured \(P_{4}\)). Thus, in both cases, \(f(u_{1})\notin\{f(x_{1}),f(x_{2}),\ldots,f(x_{k-2})\}=\{2,3,\ldots,k-1\}\). That is, \(f(u_{1})\in\{0,1\}\). Applying the same argument to vertices \(w_{j}\) and \(u_{j}\) reveals that \(f(u_{j})\in\{0,1\}\) for \(1\leq j\leq k-4\). We know that \(f(w_{i})\in\{0,1\}\) for \(1\leq i\leq k-2\), and thus \(f(U\cup W)\subseteq\{0,1\}\). Since the colours \(0\) and \(1\) are chosen arbitrarily, there exist distinct colours \(c_{1}\) and \(c_{2}\) such that \(f(U\cup W)\subseteq\{c_{1},c_{2}\}\).
Thanks to Figure 2(b), the gadget component admits a \(k\)-star colouring. Let \(f\) be a \(k\)-star colouring of the gadget component. Then, \(f\) is a \(k\)-star colouring of its subgraph displayed in Figure 3 as well, and thus either \(U\cup W\) or \(X\cup Y\) is bicoloured by \(f\) by Lemma 1. If \(X\cup Y\) is bicoloured by \(f\), then \(x_{1},y_{1},y_{2},x_{2}\) is a path in the gadget component bicoloured by \(f\), a contradiction. Hence, \(U\cup W\) is bicoloured by \(f\). Thus, we have the following lemma.
**Lemma 2**.: _For every \(k\)-star colouring \(f\) of the gadget component (with \(k\geq 7\)), there exist distinct colours \(c_{1}\) and \(c_{2}\) such that \(f(U\cup W)\subseteq\{c_{1},c_{2}\}\). _
Figure 3: A subgraph of the gadget component (only the edge \(y_{1}y_{2}\) is missing).
Using the gadget component, we construct a gadget called the chain gadget. To construct a chain gadget, first introduce \(t\) copies of the gadget component (for some \(t\in\mathbb{N}\)). Let us refer to the vertex \(w_{1}\) (resp. \(u_{1}\)) in the first copy of the gadget component as \(w_{1,1}\) (resp. \(u_{1,1}\)), the vertex \(w_{1}\) in the second copy of the gadget component as \(w_{1,2}\), and so on (see Figure 4). The vertices \(u_{2,1},\ldots,u_{k-5,1},\ldots,u_{2,t},\ldots,u_{k-5,t}\) are marked as terminals; thus, we have \(k-6\) (\(\geq 1\)) terminals per gadget component. Let \(U_{1}=\{u_{1,1},u_{2,1},\ldots,u_{k-4,1}\}\), \(W_{1}=\{w_{1,1},w_{2,1},\ldots,w_{k-2,1}\}\), \(X_{1}=\{x_{1,1},x_{2,1},\ldots,x_{k-2,1}\}\), \(Y_{1}=\{y_{1,1},y_{2,1},\ldots,y_{k-4,1}\}\), and so on. Next, we perform a sequence of vertex identification operations (see Section 1.1 for the definition). Identify the vertex \(w_{k-4,1}\) with \(u_{1,2}\) and identify the vertex \(u_{k-4,1}\) with \(w_{1,2}\). This operation in this context is the same as deleting vertices \(u_{k-4,1}\) and \(u_{1,2}\), and adding the edge \(w_{k-4,1}w_{1,2}\); the small technical difference is that with vertex identification, the vertex \(w_{k-4,1}\) can also be referred to as \(u_{1,2}\) and the vertex \(w_{1,2}\) can also be referred to as \(u_{k-4,1}\). This small technical difference is the reason we prefer to present the operation as vertex identification. In general, for \(\ell\in\{1,2,\ldots,t-1\}\), identify the vertex \(w_{k-4,\ell}\) with \(u_{1,\ell+1}\) and identify the vertex \(u_{k-4,\ell}\) with \(w_{1,\ell+1}\) (compare Figure 4 with Figure 5). Observe that in the chain gadget (see Figure 5), the set \(U_{1}\cup W_{1}\cup X_{1}\cup Y_{1}\) induces a copy of the gadget component, which we shall call the first copy of the gadget component in the chain gadget. The fact that the vertex \(w_{k-4,1}(=u_{1,2})\) also belongs to \(U_{2}\) and the vertex \(u_{k-4,1}(=w_{1,2})\) also belongs to \(W_{2}\) does not cause us trouble. Similarly, for \(1\leq\ell\leq t\), the subgraph of the chain gadget induced by \(U_{\ell}\cup W_{\ell}\cup X_{\ell}\cup Y_{\ell}\) is the \(\ell\)th copy of the gadget component in the chain gadget.
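Continuing the earlier sketches (and still under the caveat that the component's edge list is our reconstruction), the chain gadget can be assembled from \(t\) relabelled copies via the identification step, reusing `gadget_component` and `identify` from above:

```
def chain_gadget(k, t):
    """t copies of the gadget component chained by vertex identification;
    the vertex named (kind, i, ell) plays the role of kind_{i, ell}."""
    H = nx.Graph()
    for ell in range(1, t + 1):
        H.update(nx.relabel_nodes(gadget_component(k),
                                  lambda v, ell=ell: (v[0], v[1], ell)))
    for ell in range(1, t):
        identify(H, ("w", k - 4, ell), ("u", 1, ell + 1))
        identify(H, ("u", k - 4, ell), ("w", 1, ell + 1))
    return H
```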
A \(k\)-star colouring of the chain gadget is exhibited in Figure 6.
**Lemma 3**.: _For every \(k\)-star colouring of the chain gadget (with \(k\geq 7\)), there exist distinct colours \(c_{1}\) and \(c_{2}\) such that the terminals of the gadget and their neighbours within the gadget are coloured either \(c_{1}\) or \(c_{2}\)._
Proof.: Let \(f\) be a \(k\)-star colouring of the chain gadget. Applying Lemma 2 to the first (resp. second) copy of the gadget component in the chain gadget reveals that \(U_{1}\cup W_{1}\) (resp. \(U_{2}\cup W_{2}\)) is bicoloured
Figure 4: \(t\) copies of the gadget component, where \(k\geq 7\).
Figure 5: Chain gadget (with \(t\) copies of the gadget component), where \(k\geq 7\).
Figure 6: A \(k\)-star colouring of the chain gadget, where \(k\geq 7\).
by \(f\). Suppose that \(U_{1}\cup W_{1}\) is bicoloured by \(f\) using two colours \(c_{1}\) and \(c_{2}\) (i.e., \(f(U_{1}\cup W_{1})\subseteq\{c_{1},c_{2}\}\)). Since \(u_{k-4,1}w_{k-4,1}\) is an edge in the chain gadget, \(\{f(w_{k-4,1}),f(u_{k-4,1})\}=\{c_{1},c_{2}\}\). Since \(w_{k-4,1}=u_{1,2}\) and \(u_{k-4,1}=w_{1,2}\), we have \(\{f(w_{1,2}),f(u_{1,2})\}=\{c_{1},c_{2}\}\). Hence, \(U_{2}\cup W_{2}\) is bicoloured by \(f\) using colours \(c_{1}\) and \(c_{2}\). By repeating the same argument, we can show that \(U_{\ell}\cup W_{\ell}\) is bicoloured by \(f\) using colours \(c_{1}\) and \(c_{2}\) for \(1\leq\ell\leq t\). Since every terminal of the chain gadget is a vertex of the form \(u_{i,\ell}\in U_{\ell}\) and the neighbour of a terminal is of the form \(w_{i,\ell}\in W_{\ell}\), the lemma is proved.
The next construction is employed to show that \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\) (where \(k\geq 7\)).
**Construction 1**.:
_Parameter:_ An integer \(k\geq 7\).
_Input:_ A \((k-2)\)-regular graph \(G\).
_Output:_ A graph \(G^{\prime}\) of maximum degree \(k-1\).
_Guarantee:_\(G\) is \((k-2)\)-edge colourable if and only if \(G^{\prime}\) is \(k\)-star colourable.
_Steps:_
Let \(v_{1},v_{2},\ldots,v_{n}\) be the vertices and \(e_{1},e_{2},\ldots,e_{m}\) be the edges in \(G\). Introduce a chain gadget \(H\) (see Figure 5) with \(q\) copies of the gadget component (i.e., use \(t=q\)), where \(q=\lceil\frac{3n}{k-6}\rceil\). We know that there are exactly \(k-6\) terminals in each gadget component of the chain gadget. Thus, the choice of \(q\) ensures that the chain gadget has at least \(3n\) terminals. For each vertex \(v_{i}\) of \(G\), choose three terminals of the chain gadget \(H\) which are not already chosen, and label them \(v_{i,1}\), \(v_{i,2}\) and \(v_{i,3}\), respectively. For each edge \(e_{\ell}=v_{i}v_{j}\) of \(G\), introduce a new vertex \(e_{\ell}\) in \(G^{\prime}\) and join it to the vertices \(v_{i,1},v_{i,2},v_{i,3},v_{j,1},v_{j,2}\) and \(v_{j,3}\). To clarify, \(V(G^{\prime})=V(H)\cup E(G)\) and \(E(G^{\prime})=E(H)\cup\{v_{i,t}e_{\ell}\colon 1\leq t\leq 3,\ v_{i}\in V(G),\ e_{\ell}\in E(G),\) and \(v_{i}\) is incident on \(e_{\ell}\) in \(G\}\). Moreover, the subgraph of \(G^{\prime}\) induced by \(\{v_{i,j}\colon 1\leq i\leq n,\,1\leq j\leq 3\}\bigcup\{e_{\ell}\colon 1\leq \ell\leq m\}\) is a bipartite graph with degree \(\deg_{G}(v_{i})\) for each vertex \(v_{i,j}\) and degree \(6\) for each vertex \(e_{\ell}\).
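For illustration, the steps above can be sketched as follows, reusing `chain_gadget` from Section 2.2; the terminal bookkeeping follows our reconstruction of the chain gadget, and the input `G` is assumed to be a \((k-2)\)-regular networkx graph with \(k\geq 7\).

```
import math
import networkx as nx

def construction_1(G, k):
    """Sketch of Construction 1: from a (k-2)-regular graph G (k >= 7),
    build the graph G' of maximum degree k-1."""
    n = G.number_of_nodes()
    q = math.ceil(3 * n / (k - 6))
    Gp = chain_gadget(k, q)
    terminals = [("u", i, ell) for ell in range(1, q + 1)
                 for i in range(2, k - 4)]     # u_{2,ell}, ..., u_{k-5,ell}
    it = iter(terminals)                       # at least 3n terminals available
    label = {v: [next(it) for _ in range(3)] for v in G.nodes}
    for idx, (a, b) in enumerate(G.edges):
        e = ("e", idx)                         # one new vertex per edge of G
        Gp.add_edges_from((e, t) for t in label[a] + label[b])
    return Gp
```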
_Proof of guarantee._ Suppose that \(G\) admits a \((k-2)\)-edge colouring \(f\colon E(G)\to\{2,3,\ldots,k-1\}\). Note that colours \(0\) and \(1\) are not used by \(f\). We use \(f\) to obtain a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\). Consider the function \(f^{\prime}\colon V(G^{\prime})\to\{0,1,\ldots,k-1\}\) obtained by employing the colouring scheme in Figure 6 on the chain gadget \(H\), and by assigning \(f^{\prime}(e_{\ell})=f(e_{\ell})\) for each \(e_{\ell}\in E(G)\). We know that \(f^{\prime}(e_{\ell})=f(e_{\ell})\geq 2\) for \(1\leq\ell\leq m\), whereas each terminal in \(G^{\prime}\) is coloured \(0\) or \(1\) by \(f^{\prime}\) (see Figure 6). Since vertices of the form \(e_{\ell}\) are adjacent only to terminals in \(G^{\prime}\), \(f^{\prime}\) is a \(k\)-colouring of \(G^{\prime}\).
**Claim 1:**\(f^{\prime}\) is a \(k\)-star colouring of \(G^{\prime}\).
Contrary to the claim, assume that there is a \(4\)-vertex path \(Q\) in \(G^{\prime}\) bicoloured by \(f^{\prime}\). Observe that \(f^{\prime}\) employs a \(k\)-star colouring scheme on the chain gadget \(H\) (see Figure 6). Moreover, the restriction of \(f^{\prime}\) to \(E(G)\) is a star colouring of \(G^{\prime}[E(G)]\) since \(E(G)\) is an independent set in \(G^{\prime}\). Hence, the bicoloured \(4\)-vertex path \(Q\) contains an edge of the form \(v_{i,j}e_{\ell}\), where \(v_{i,j}\in V(H)\) and \(e_{\ell}\in E(G)\). We have two cases: either (i) \(Q\) does not contain any edge from the chain gadget \(H\) (i.e., \(E(Q)\cap E(H)=\emptyset\)), or (ii) \(Q\) contains an edge from \(H\). In Case (i), \(Q\) is of the form \(e_{s},v_{i,j},e_{t},v_{p,q}\) where \(f^{\prime}(e_{s})=f^{\prime}(e_{t})\) and \(f^{\prime}(v_{i,j})=f^{\prime}(v_{p,q})\). By the definition of \(G^{\prime}\), \(e_{s}\) and \(e_{t}\) are edges of \(G\) incident on the vertex \(v_{i}\) of \(G\), and thus \(f(e_{s})\neq f(e_{t})\) (because \(f\) is an edge colouring of \(G\)). Since \(f(e_{s})=f^{\prime}(e_{s})=f^{\prime}(e_{t})=f(e_{t})\), we have a contradiction. This rules out Case (i). Consider Case (ii); that is, \(Q\) contains an edge from the chain gadget \(H\). Since the path \(Q\) contains an edge from \(H\) as well as an edge of the form \(v_{i,j}e_{\ell}\), the path \(Q\) contains a \(3\)-vertex path segment of the form \(w,v_{i,j},e_{\ell}\), where \(w\) is the neighbour of the terminal \(v_{i,j}\) within the chain gadget \(H\). By the colouring scheme employed on the chain gadget (namely Figure 6), \(f^{\prime}(v_{i,j}),f^{\prime}(w)\in\{0,1\}\). Since \(v_{i,j}w\) is an edge in \(G^{\prime}\), there is a binary colour \(b\in\{0,1\}\) such that \(f^{\prime}(v_{i,j})=b\) and \(f^{\prime}(w)=1-b\). Since \(f^{\prime}(e_{\ell})=f(e_{\ell})\geq 2\), the segment \(w,v_{i,j},e_{\ell}\) of the path \(Q\) is tricoloured by \(f^{\prime}\). Hence, \(f^{\prime}\) uses at least three colours on the path \(Q\), a contradiction. This rules out Case (ii). Since both Case (i) and Case (ii) are ruled out, there is no \(4\)-vertex path in \(G^{\prime}\) bicoloured by \(f^{\prime}\). That is, \(f^{\prime}\) is indeed a \(k\)-star colouring of \(G^{\prime}\). This proves Claim 1.
Conversely, suppose that \(G^{\prime}\) admits a \(k\)-star colouring \(f^{\prime}:V(G^{\prime})\to\{0,1,\ldots,k-1\}\). By Lemma 3, there exist distinct colours \(c_{1}\) and \(c_{2}\) such that the terminals of the chain gadget and their neighbours within the chain gadget are coloured either \(c_{1}\) or \(c_{2}\). Without loss of generality, assume that \(c_{1}=0\) and \(c_{2}=1\). Thus, we have the following claim.
**Claim 2:** All terminals of the chain gadget and their neighbours within the gadget have binary colours (i.e., colour \(0\) or colour \(1\)).
**Claim 3:** For each \(e_{\ell}\in E(G)\), the vertex \(e_{\ell}\) of \(G^{\prime}\) has a non-binary colour under \(f^{\prime}\) (i.e., \(f^{\prime}(e_{\ell})\geq 2\)).
On the contrary, assume that \(f^{\prime}(e_{\ell})=b\) for some \(b\in\{0,1\}\), where \(e_{\ell}\in E(G)\). Let \(v_{i}\) be a vertex incident on the edge \(e_{\ell}\) in \(G\). In \(G^{\prime}\), the vertex \(e_{\ell}\) is adjacent to \(v_{i,1}\), \(v_{i,2}\) and \(v_{i,3}\), and thus \(f^{\prime}(v_{i,1})=f^{\prime}(v_{i,2})=f^{\prime}(v_{i,3})=1-b\) (because \(f^{\prime}(v_{i,j})\in\{0,1\}\) by Claim 2). For \(1\leq j\leq 3\), the neighbour of the terminal \(v_{i,j}\) in the chain gadget has a binary colour by Claim 2. As shown in Figure 7, this yields a 4-vertex path in \(G^{\prime}\) bicoloured by \(f^{\prime}\). This contradiction proves Claim 3.
Let \(f\) be the restriction of \(f^{\prime}\) to \(E(G)\). Due to Claim 3, \(f\) uses only colours \(2,3,\ldots,k-1\). Hence, let us view \(f\) as a function from \(E(G)\) to \(\{2,3,\ldots,k-1\}\).
**Claim 4:**\(f\) is a \((k-2)\)-edge colouring of \(G\).
On the contrary, assume that \(f(e_{s})=f(e_{t})\) for two edges \(e_{s}\) and \(e_{t}\) of \(G\) incident on a common vertex \(v_{i}\) in \(G\). By the definition of \(G^{\prime}\), both vertices \(e_{s}\) and \(e_{t}\) of \(G^{\prime}\) are adjacent to vertices \(v_{i,1}\), \(v_{i,2}\) and \(v_{i,3}\) in \(G^{\prime}\). Recall that \(f^{\prime}(v_{i,1}),f^{\prime}(v_{i,2}),f^{\prime}(v_{i,3})\in\{0,1\}\) by Claim 2. Hence, by pigeonhole principle, at least two of these vertices have the same colour, say \(f^{\prime}(v_{i,1})=f^{\prime}(v_{i,2})\). As a result, \(v_{i,1},e_{s},v_{i,2},e_{t}\) is a 4-vertex path in \(G^{\prime}\) bicoloured by \(f^{\prime}\); a contradiction. Therefore, \(f\) is indeed a \((k-2)\)-edge colouring of \(G\). This proves Claim 4.
Note that the chain gadget has \((4k-14)q+2\) vertices and \((k^{2}-2k-2)q+1\) edges. Hence, \(G^{\prime}\) has \((4k-14)q+2+m=O(m+n)\) vertices and \((k^{2}-2k-2)q+1+6m=O(m+n)\) edges, where \(m=|E(G)|\) and \(n=|V(G)|\) (because \(q=O(n)\)). Thus, Construction 1 requires only time polynomial in \(m+n\). Leven and Galil [42] proved that for all \(k\geq 3\), Edge \(k\)-Colourability is NP-complete for \(k\)-regular graphs. Thus, we have the following theorem by Construction 1.
**Theorem 1**.: _For \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\). _
Next, let us deal with smaller values of \(k\). For \(k\leq 3\), \(k\)-Star Colourability in graphs of maximum degree \(k-1\) is polynomial-time solvable. The status is open for \(k=4\) (i.e., for graphs of maximum degree 3). Using Construction 2 below, we show that 4-Star Colourability is NP-complete for graphs of maximum degree 4. Interestingly, the graph used as the gadget component in Construction 2, which is the Petersen graph minus one vertex, has maximum degree 3. We suspect that 4-Star Colourability is NP-complete for graphs of maximum degree 3, and the Petersen graph minus one vertex might be useful in producing an NP-completeness reduction.
We use Petersen graph minus one vertex as the gadget component to build gadgets in Construction 2. See Figure 8 for a diagram of the gadget component. Clearly, the gadget component has girth five. The following lemma explains why it is interesting for 4-star colouring.
**Lemma 4**.: _Every 4-star colouring of the Petersen graph minus one vertex must assign the same colour to all three degree-2 vertices of the graph (namely, \(w_{1}\), \(w_{4}\) and \(v_{5}\))._
Proof.: We fix a drawing of the Petersen graph, and assume that the vertex removed is from the outer \(C_{5}\) (see Figure 8). To star colour \(C_{5}\), four colours are needed. Moreover, in every 4-star colouring of \(C_{5}\), exactly one colour should repeat. Hence, without loss of generality, we assume that the inner \(C_{5}\) is coloured in the pattern 1,0,1,2,3. So, exactly one of the following holds: (i) \(f(v_{1})=0\), (ii) \(f(v_{2})=0\), (iii) \(f(v_{3})=0\), (iv) \(f(v_{4})=0\), or (v) \(f(v_{5})=0\). Up to symmetry and swapping of colours 2 and 3, we have only the three cases displayed in Figure 8 (note that \(f(v_{4})=0\) is symmetric to Case 2, and \(f(v_{2})=0\) is symmetric to Case 3). Let \(f\) be a 4-star colouring of Petersen graph minus one vertex. We need to prove that \(f(w_{1})=f(w_{4})=f(v_{5})\).
Case 1:
If \(f(w_{1})=1\), then the path \(w_{1},w_{2},v_{2}\) will be part of a bicoloured \(P_{4}\) irrespective of the colour at \(w_{2}\) (see Figure 10). Hence, \(f(w_{1})\neq 1\). Similarly, \(f(w_{4})\neq 1\). We show that \(f(w_{1})=0\). On the contrary, assume that \(f(w_{1})\neq 0\). Note that \(f(w_{2})\neq 0\) (if not, the path \(w_{2},v_{2},v_{5},v_{3}\) is a bicoloured \(P_{4}\)). So, \(f(w_{1}),f(w_{2})\in\{2,3\}\) (because colours \(0\) and \(1\) are ruled out). Therefore, the path \(w_{2},w_{1},v_{1},v_{4}\) is a \(P_{4}\) coloured with only the two colours \(2\) and \(3\), a contradiction. This proves that \(f(w_{1})=0\). By symmetry, \(f(w_{4})=0\) as well. So, \(f(w_{1})=f(w_{4})=f(v_{5})=0\) in Case 1.

Figure 7: A binary colour at \(e_{\ell}\) implies a bicoloured \(P_{4}\).
Case 2:
If \(f(w_{3})=0\), then \(w_{3},v_{3},v_{1},v_{4}\) is a bicoloured \(P_{4}\). So, \(f(w_{3})\in\{2,3\}\). We show that \(f(w_{3})=2\). On the contrary, assume that \(f(w_{3})=3\). Then, \(f(w_{2})\in\{0,1\}\). If \(f(w_{2})=1\), then \(w_{2},w_{3},v_{3},v_{5}\) is a bicoloured \(P_{4}\). So, \(f(w_{2})=0\). This leads to a contradiction as the path \(w_{2},w_{1},v_{1}\) will be part of a bicoloured \(P_{4}\) irrespective of the colour at \(w_{1}\) (see Figure 10). Thus, by contradiction, \(f(w_{3})=2\). So, \(f(w_{4})\in\{0,3\}\). If \(f(w_{4})=0\), then \(w_{4},v_{4},v_{1},v_{3}\) is a bicoloured \(P_{4}\). Hence, \(f(w_{4})=3\). As a result, \(f(w_{2})\notin\{1,3\}\) (if not, either path \(w_{3},w_{2},v_{2},v_{4}\) or path \(w_{4},w_{3},w_{2},v_{2}\) is a bicoloured \(P_{4}\)). So, \(f(w_{2})=0\). This in turn forces \(f(w_{1})=3\) (if \(f(w_{1})\in\{1,2\}\), then either \(w_{2},w_{1},v_{1},v_{4}\) or \(w_{3},w_{2},w_{1},v_{1}\) is a bicoloured \(P_{4}\)). So, \(f(w_{1})=f(w_{4})=f(v_{5})=3\) in Case 2.
Case 3:
If \(f(w_{3})=2\), then the path \(w_{3},w_{2},v_{2}\) will be part of a bicoloured \(P_{4}\) irrespective of the colour at \(w_{2}\) (see Figure 10(a)). If \(f(w_{3})=3\), then the path \(w_{3},w_{4},v_{4}\) will be part of a bicoloured \(P_{4}\) irrespective of the colour at \(w_{4}\) (see Figure 10(b)). So, \(f(w_{3})=1\). Clearly, \(f(w_{2})\in\{0,3\}\) and \(f(w_{4})\in\{0,2\}\). Observe that \(f(w_{2})\neq 0\) and \(f(w_{4})\neq 0\) (if not, either \(v_{1},v_{3},w_{3},w_{2}\) or \(v_{1},v_{3},w_{3},w_{4}\) is a bicoloured \(P_{4}\)). So, \(f(w_{2})=3\) and \(f(w_{4})=2\). But, then path \(w_{2},v_{2},v_{4},w_{4}\) is a bicoloured \(P_{4}\). This contradiction rules out Case 3.
Since Case 3 is ruled out by contradiction, we have \(f(w_{1})=f(w_{4})=f(v_{5})\) by Cases 1 and 2. This completes the proof.
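The case analysis above can also be checked mechanically. The following Python sketch (ours, not part of the original argument) brute-forces all \(4^{9}\) colourings; the adjacency list encodes our reading of the labelling in Figure 8 (outer path \(w_{1},\ldots,w_{4}\), spokes \(w_{i}v_{i}\), inner pentagram), which should be treated as an assumption.

```python
from itertools import product

# Petersen graph minus one outer vertex, labelled as in the proof of Lemma 4:
# outer path w1-w2-w3-w4, spokes wi-vi, inner pentagram v1-v3-v5-v2-v4-v1.
V = ["w1", "w2", "w3", "w4", "v1", "v2", "v3", "v4", "v5"]
E = [("w1", "w2"), ("w2", "w3"), ("w3", "w4"),
     ("w1", "v1"), ("w2", "v2"), ("w3", "v3"), ("w4", "v4"),
     ("v1", "v3"), ("v3", "v5"), ("v5", "v2"), ("v2", "v4"), ("v4", "v1")]
adj = {v: set() for v in V}
for u, v in E:
    adj[u].add(v)
    adj[v].add(u)

def is_star_colouring(f):
    # a star colouring is proper and has no bicoloured 4-vertex path x,u,v,y
    if any(f[u] == f[v] for u, v in E):
        return False
    for u, v in E:
        for x in adj[u] - {v}:
            for y in adj[v] - {u, x}:
                if f[x] == f[v] and f[y] == f[u]:
                    return False
    return True

for colours in product(range(4), repeat=len(V)):
    f = dict(zip(V, colours))
    if is_star_colouring(f):
        assert f["w1"] == f["w4"] == f["v5"]  # the statement of Lemma 4
print("Lemma 4 verified over all 4^9 colourings")
```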
**Construction 2**.:
_Input:_ A \(4\)-regular graph \(G\).
_Output:_ A graph \(G^{\prime}\) of maximum degree four and girth five.
_Guarantee:_\(G\) is \(3\)-colourable if and only if \(G^{\prime}\) is \(4\)-star colourable.
_Steps:_
Figure 8: Possible ways of \(4\)-star colouring inner \(C_{5}\).

Let \(v_{1},v_{2},\ldots,v_{n}\) be the vertices in \(G\). First, replace each vertex of \(G\) by a vertex gadget as shown in Figure 12. The vertex gadget for \(v_{i}\) has five terminals, and the terminals \(v_{i,1},v_{i,2},v_{i,3},v_{i,4}\) accommodate the four edges incident on \(v_{i}\) in \(G\) in a one-to-one fashion (order does not matter). So, corresponding to each edge \(v_{i}v_{j}\) in \(G\), there is an edge \(v_{i,k}v_{j,\ell}\) in \(G^{\prime}\) for some \(k,\ell\in\{1,2,3,4\}\). Finally, introduce the chain gadget displayed in Figure 13, and join \(v_{i,0}\) to \(v_{i}^{*}\) for \(i=1,2,\ldots,n\).
Proof of Guarantee.: Suppose that \(G\) admits a \(3\)-colouring \(f\colon V(G)\to\{1,2,3\}\). A \(4\)-star colouring \(f^{\prime}\) of \(G^{\prime}\) is constructed as follows. Assign \(f^{\prime}(v_{i,j})=f(v_{i})\) for \(1\leq i\leq n\) and \(0\leq j\leq 4\). This partial colouring can be extended into a \(4\)-star colouring of each vertex gadget by the scheme in Figure 14 (if terminals of the gadget are coloured \(c\in\{2,3\}\), swap colour \(1\) with colour \(c\)). Also, assign \(f^{\prime}(v_{i}^{*})=0\) for \(1\leq i\leq n\). This can be extended into a \(4\)-star colouring of the chain gadget; for instance, use a scheme similar to the one in Figure 14 (it does not matter which \(4\)-star colouring extension is used).
Note that for each \(3\)-vertex path \(Q\) in a vertex/chain gadget with a terminal of the gadget as an endpoint, \(Q\) is not bicoloured by \(f^{\prime}\). Hence, there is no bicoloured \(P_{4}\) in \(G^{\prime}\) with three vertices from one gadget and one vertex from another. To prove that \(f^{\prime}\) is a \(4\)-star colouring, it suffices to show that there is no bicoloured \(P_{4}\) in \(G^{\prime}\) with two vertices from one gadget and two vertices from another gadget. Observe that for \(1\leq i\leq n\) and \(1\leq k\leq 4\), neighbours of \(v_{i,k}\) within the vertex gadget for \(v_{i}\) are coloured \(0\). So, there is no bicoloured \(P_{4}\) in \(G^{\prime}\) containing an edge of the form \(v_{i,k}v_{j,\ell}\) as its middle edge. Moreover, for \(1\leq i\leq n\), neighbours of \(v_{i,0}\) within the vertex gadget are not coloured \(0\). Hence, there is no bicoloured \(P_{4}\) in \(G^{\prime}\) containing an edge of the form \(v_{i,0}v_{i}^{*}\) as its middle edge (recall that \(f^{\prime}(v_{i}^{*})=0\)). Therefore, there is no bicoloured \(P_{4}\) in \(G^{\prime}\) containing two vertices from one gadget and two vertices from another gadget. This proves that \(f^{\prime}\) is a \(4\)-star colouring of \(G^{\prime}\).
Conversely, suppose that \(G^{\prime}\) admits a \(4\)-star colouring \(f^{\prime}\). Thanks to Lemma 4, terminals of a vertex/chain gadget should get the same colour. That is, \(f^{\prime}(v_{i,0})=f^{\prime}(v_{i,1})=f^{\prime}(v_{i,2})=f^{\prime}(v_{i,3})=f^{\prime}(v_{i,4})\) for all \(v_{i}\in V(G)\), and \(f^{\prime}(v_{1}^{*})=f^{\prime}(v_{2}^{*})=\cdots=f^{\prime}(v_{n}^{*})\). Without loss of generality, assume that \(f^{\prime}(v_{i}^{*})=0\) for \(i=1,2,\ldots,n\). Since \(v_{i,0}v_{i}^{*}\) is an edge for \(1\leq i\leq n\), the chain gadget forbids colour \(0\) at terminals \(v_{i,j}\) for \(1\leq i\leq n\) and \(0\leq j\leq 4\). Consider the function \(f\colon V(G)\to\{1,2,3\}\) defined as \(f(v_{i})=f^{\prime}(v_{i,0})\) for \(1\leq i\leq n\). For each edge \(v_{i}v_{j}\) of \(G\), there exists an edge in \(G^{\prime}\) between terminals \(v_{i,k}\) and \(v_{j,\ell}\) for some \(k,\ell\in\{1,2,3,4\}\), and thus \(f^{\prime}(v_{i,k})\neq f^{\prime}(v_{j,\ell})\). So, \(f(v_{i})\neq f(v_{j})\) for each edge \(v_{i}v_{j}\) of \(G\) (due to Lemma 4, \(f(v_{i})=f^{\prime}(v_{i,0})=f^{\prime}(v_{i,k})\) and \(f(v_{j})=f^{\prime}(v_{j,0})=f^{\prime}(v_{j,\ell})\)). Therefore, \(f\) is a 3-colouring of \(G\). This proves the converse part.

Figure 11: \(f(w_{3})=1\) in Case 3.

Figure 12: Replacement of vertex by vertex gadget.

Figure 13: Chain gadget in Construction 2.
Construction 2 establishes a reduction from 3-Colourability(4-regular) to 4-Star Colourability(\(\Delta=4\), girth = 5). Note that Construction 2 requires only time polynomial in \(m+n\) because \(|E(G^{\prime})|=61n+m\) and \(|V(G^{\prime})|=41n+1\) (where \(m=|E(G)|\) and \(n=|V(G)|\)). Thus, we have the following theorem.
**Theorem 2**.: \(4\)-Star Colourability _is NP-complete for graphs of maximum degree four and girth five. _
Next, we show that 5-Star Colourability is NP-complete for graphs of maximum degree 4. Construction 3 below is employed to establish a reduction from 3-Colourability(4-regular) to 5-Star Colourability(triangle-free, 4-regular). Construction 3 is similar to Construction 2, albeit a bit more complicated. For instance, we will need two chain gadgets this time because two colours should be forbidden. The gadgets used in the construction are made of two gadgets called 2-in-2-out gadget and not-equal gadget. These are in turn made of one fixed graph, namely Grötzsch graph minus one vertex; we call it the gadget component (in Construction 3) for obvious reasons. The gadget component is displayed in Figure 15(a). The following lemma explains why it is interesting for 5-star colouring.
**Lemma 5**.: _Under every 5-star colouring of the gadget component, the degree-2 vertices of the graph should get pairwise distinct colours. Moreover, every 5-star colouring of the gadget component must be of the form displayed in Figure 15(b) or Figure 15(c) up to colour swaps._
Proof.: Let \(f\) be a 5-star colouring of the gadget component that uses colours 0,1,2,3 and 4.
**Claim 1:**\(f\) must use all five colours on the 5-vertex cycle \((v_{1},v_{2},v_{3},v_{4},v_{5})\).
On the contrary, assume that two vertices of the 5-vertex cycle are assigned the same colour by \(f\). Without loss of generality, assume that \(f(v_{1})=f(v_{3})=1\) and \(f(v_{2})=0\). Since \(v_{1},v_{2},v_{3},v_{4}\) and \(v_{5},v_{1},v_{2},v_{3}\) are \(P_{4}\)'s, \(f(v_{4})\neq 0\) and \(f(v_{5})\neq 0\); hence, new colours must be assigned at \(v_{4}\) and \(v_{5}\). Without loss of generality, assume that \(f(v_{4})=2\) and \(f(v_{5})=3\). If \(f(w_{2})\in\{0,2,3\}\), then one of the three paths (i) \(v_{1},w_{2},v_{3},v_{2}\), (ii) \(v_{1},w_{2},v_{3},v_{4}\), or (iii) \(v_{5},v_{1},w_{2},v_{3}\) is a bicoloured \(P_{4}\). So, \(f(w_{2})=4\). If \(f(w_{4})=0\) or \(f(w_{4})=4\), then either \(v_{1},v_{2},v_{3},w_{4}\) or \(v_{1},w_{2},v_{3},w_{4}\) is a bicoloured \(P_{4}\). Hence, \(f(w_{4})\notin\{0,4\}\) and thus \(f(w_{4})=2\). Similarly, \(f(w_{5})\notin\{0,4\}\) and thus \(f(w_{5})=3\) (if \(f(w_{5})\in\{0,4\}\), then either \(w_{5},v_{1},v_{2},v_{3}\) or \(w_{5},v_{1},w_{2},v_{3}\) is a bicoloured \(P_{4}\)). But, then path \(w_{4},v_{5},v_{4},w_{5}\) is a bicoloured \(P_{4}\). This contradiction proves Claim 1.
Thanks to Claim 1, we assume without loss of generality that \(f(v_{i})=i-1\) for \(1\leq i\leq 5\).
**Claim 2:** For \(1\leq i\leq 5\), \(f(w_{i})\neq f(v_{i})\).
Assume the contrary, say for \(i=1\), i.e., \(f(w_{1})=f(v_{1})=0\). If \(f(w_{2})=1\) or \(4\), then either \(w_{1},v_{2},v_{1},w_{2}\) or \(w_{1},v_{5},v_{1},w_{2}\) is a bicoloured \(P_{4}\). So, \(f(w_{2})=3\). If \(f(w_{5})=1\) or \(4\), then either \(w_{1},v_{2},v_{1},w_{5}\) or \(w_{1},v_{5},v_{1},w_{5}\) is a bicoloured \(P_{4}\). So, \(f(w_{5})=2\). But, then path \(w_{2},v_{3},v_{4},w_{5}\) is a bicoloured \(P_{4}\). This contradiction proves Claim 2.
Due to Claim 2, \(f(w_{1})\in\{2,3\}\). Similarly, \(f(w_{2})\in\{3,4\}\), \(f(w_{3})\in\{4,0\}\), \(f(w_{4})\in\{0,1\}\) and \(f(w_{5})\in\{1,2\}\).
Figure 14: A 4-star colouring of the vertex gadget with colour 1 at terminals.
Case 1: \(f(w_{1})=2\).
This forces colour \(0\) at \(w_{4}\) (if \(f(w_{4})=1\), then \(w_{1},v_{2},v_{3},w_{4}\) is a bicoloured \(P_{4}\)). This in turn forces colour \(3\) at \(w_{2}\) similarly. By repeating this argument, we can show that \(f\) must be of the form displayed in Figure 15(b) up to colour swaps.
Case 2: \(f(w_{1})=3\).
This forces colour \(0\) at \(w_{3}\) (if \(f(w_{3})=4\), then \(w_{1},v_{5},v_{4},w_{3}\) is a bicoloured \(P_{4}\)). This in turn forces colour \(2\) at \(w_{5}\) similarly. By repeating this argument, we can show that \(f\) must be of the form displayed in Figure 15(c) up to colour swaps.
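The distinctness claim in Lemma 5 is likewise amenable to exhaustive verification. In the sketch below, the adjacency \(w_{i}\sim v_{i-1},v_{i+1}\) (indices mod 5) is our reading of Figure 15(a) and hence an assumption; fixing \(f(v_{1})=0\) and \(f(v_{2})=1\) is harmless up to a permutation of colours.

```python
from itertools import product

# Grötzsch graph minus its hub, labelled as in the proof of Lemma 5
V = [f"v{i}" for i in range(1, 6)] + [f"w{i}" for i in range(1, 6)]
E = [(f"v{i}", f"v{i % 5 + 1}") for i in range(1, 6)]          # cycle v1..v5
E += [(f"w{i}", f"v{(i - 2) % 5 + 1}") for i in range(1, 6)]   # w_i ~ v_{i-1}
E += [(f"w{i}", f"v{i % 5 + 1}") for i in range(1, 6)]         # w_i ~ v_{i+1}
adj = {u: set() for u in V}
for a, b in E:
    adj[a].add(b)
    adj[b].add(a)

def is_star_colouring(f):
    if any(f[a] == f[b] for a, b in E):   # must be proper
        return False
    for a, b in E:                        # no bicoloured P4 x,a,b,y
        for x in adj[a] - {b}:
            for y in adj[b] - {a, x}:
                if f[x] == f[b] and f[y] == f[a]:
                    return False
    return True

rest = V[2:]  # fix f(v1)=0 and f(v2)=1, valid up to a colour permutation
for colours in product(range(5), repeat=len(rest)):
    f = dict(zip(rest, colours))
    f["v1"], f["v2"] = 0, 1
    if is_star_colouring(f):
        assert len({f[f"w{i}"] for i in range(1, 6)}) == 5
print("degree-2 vertices always receive pairwise distinct colours")
```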
The 2-in-2-out gadget is displayed in Figure 16. Observe that two copies of the gadget component are part of this gadget. The following lemma shows why 5-star colouring of this gadget is interesting.
Figure 16: (a) The 2-in-2-out gadget, (b) its symbolic representation, and (c) a 5-star colouring of the gadget.
Figure 15: (a) Gadget component, (b,c) General form of 5-star colouring of it.
**Lemma 6**.: _For every 5-star colouring \(f\) of the 2-in-2-out gadget, there exist two distinct colours \(c_{1}\) and \(c_{2}\) such that \(f(y_{1})=f(y_{2})=f(z_{1}^{*})=f(z_{2}^{*})=c_{1}\) and \(f(y_{1}^{*})=f(y_{2}^{*})=f(z_{1})=f(z_{2})=c_{2}\). Moreover, every 3-vertex path containing one of the pendant edges of the gadget is tricoloured by \(f\)._
Proof.: The 2-in-2-out gadget is displayed in Figure 16. Let \(f\) be a 5-star colouring of the 2-in-2-out gadget that uses colours 0,1,2,3 and 4. We prove Lemma 6 for the case when the bottom copy of the gadget component in the 2-in-2-out gadget is coloured by the scheme in Figure 15(b); the proof is similar when the scheme in Figure 15(c) is used instead. That is, vertices \(v_{1},v_{2},v_{3},v_{4},v_{5}\) and vertices \(w_{4},w_{5},w_{1},x_{2},x_{3}\) are coloured \(0,1,2,3,4\), respectively. Clearly, \(f(z_{1}),f(z_{2})\in\{3,4\}\). Note that \(f(z_{1})\neq 4\) (if not, \(v_{1},v_{5},w_{4},z_{1}\) is a bicoloured \(P_{4}\)). Hence, \(f(z_{1})=3\). Similarly, \(f(z_{2})=3\). Since \(z_{1}\) and \(z_{2}\) have common neighbours coloured 0,1 and 2, both \(z_{1}^{*}\) and \(z_{2}^{*}\) must be coloured 4 by \(f\) (e.g., if \(f(z_{1}^{*})=0\), then \(z_{1}^{*},z_{1},w_{4},z_{2}\) is a bicoloured \(P_{4}\)).
Observe that \(f(u_{2})\neq 3\) (if not, \(v_{5},v_{4},x_{3},u_{2}\) is a bicoloured \(P_{4}\)). Similarly, \(f(u_{4})\neq 3\). By Claim 1 in the proof of Lemma 5, all five colours must be used by \(f\) on the inner \(C_{5}\) of a gadget component. So, colour 3 must be used on the cycle \((u_{1},u_{2},u_{3},u_{4},u_{5})\), and thus \(f(u_{5})=3\) (see Figure 17).
By Lemma 5, vertices \(x_{1}\), \(x_{2}\), \(x_{3}\), \(x_{4}\), \(x_{5}\) and vertices \(u_{1}\), \(u_{2}\), \(u_{3}\), \(u_{4}\), \(u_{5}\) must be coloured by the same cyclic order of colours (see Figure 15). So, \(f(u_{1})=4\). For the same reason, \(f(x_{4})=f(u_{2})\), \(f(x_{5})=f(u_{3})\) and \(f(x_{1})=f(u_{4})\). Since all five colours must be used on the cycle \((u_{1},u_{2},u_{3},u_{4},u_{5})\), we have \(\{f(x_{4}),f(x_{5}),f(x_{1})\}=\{f(u_{2}),f(u_{3}),f(u_{4})\}=\{c_{3},c_{4},c_{5}\}\) where \(\{c_{3},c_{4},c_{5}\}\) is a permutation of \(\{0,1,2\}\). So, \(f(y_{1}),f(y_{2})\in\{3,4\}\). If \(f(y_{1})=3\), then \(u_{4},u_{5},x_{1},y_{1}\) is a bicoloured \(P_{4}\). Hence, \(f(y_{1})=4\). Similarly, \(f(y_{2})=4\). Since \(y_{1}\) and \(y_{2}\) have common neighbours coloured 0,1 and 2, both \(y_{1}^{*}\) and \(y_{2}^{*}\) must be coloured 3 by \(f\). That is, the colouring \(f\) is as shown in Figure 17. Clearly, there exist distinct colours \(c_{1}\) and \(c_{2}\) such that \(f(y_{1})=f(y_{2})=f(z_{1}^{*})=f(z_{2}^{*})=c_{1}\) and \(f(y_{1}^{*})=f(y_{2}^{*})=f(z_{1})=f(z_{2})=c_{2}\) (here, \(c_{1}=4\) and \(c_{2}=3\)). Also, every 3-vertex path containing a pendant edge of the gadget is tricoloured by \(f\). This completes the proof.
The not-equal gadget is the graph displayed in Figure 18. The not-equal gadget is made from one 2-in-2-out gadget by identifying vertex \(y_{1}^{*}\) of the 2-in-2-out gadget with vertex \(y_{2}^{*}\) and identifying vertex \(z_{1}^{*}\) with vertex \(z_{2}^{*}\). Hence, the next lemma follows from Lemma 6 (note that \(c_{1}\neq c_{2}\) in Lemma 6).

Figure 17: (a) A partial 5-star colouring of the 2-in-2-out gadget, (b) a 5-star colouring of the 2-in-2-out gadget where \(\{c_{3},c_{4},c_{5}\}\) is a permutation of \(\{0,1,2\}\), and (c) the symbolic representation of the colouring in (b).
**Lemma 7**.: _The terminals of the not-equal gadget should get different colours under each 5-star colouring \(f\). Moreover, every 3-vertex path within the gadget with a terminal as one endpoint is tricoloured by \(f\)._
We are now ready to present the construction.
**Construction 3**.:
_Input:_ A 4-regular graph \(G\).
_Output:_ A triangle-free graph \(G^{\prime}\) of maximum degree four.
_Guarantee:_\(G\) is 3-colourable if and only if \(G^{\prime}\) is 5-star colourable.
_Steps:_
Let \(v_{1},v_{2},\ldots,v_{n}\) be the vertices in \(G\). First, replace each vertex \(v_{i}\) of \(G\) by a vertex gadget as shown in Figure 19. The vertex gadget for \(v_{i}\) has six terminals, namely \(v_{i,0},v_{i,1},v_{i,2},v_{i,3},v_{i,4}\) and \(v_{i,5}\). The terminals \(v_{i,1},v_{i,2},v_{i,3},v_{i,4}\) accommodate the edges incident on \(v_{i}\) in \(G\). The replacement of vertices by vertex gadgets converts each edge \(v_{i}v_{j}\) of \(G\) to an edge between terminals \(v_{i,k}\) and \(v_{j,\ell}\) for some \(k,\ell\in\{1,2,3,4\}\).
Next, replace each edge \(v_{i,k}\,v_{j,\ell}\) between terminals by a not-equal gadget between \(v_{i,k}\) and \(v_{j,\ell}\) (that is, introduce a not-equal gadget, identify one terminal of the gadget with vertex \(v_{i,k}\), and identify the other terminal with the vertex \(v_{j,\ell}\)). Next, introduce two chain gadgets. The chain gadget is displayed in Figure 20.

Figure 18: (a) A not-equal gadget between terminals \(y\) and \(z\) (it is made of one 2-in-2-out gadget), and (b) its symbolic representation.

Figure 19: Replacement of vertex by vertex gadget.
Next, add a not-equal gadget between \(v_{i,0}\) and \(v_{i,1}^{*}\) for \(1\leq i\leq n\). Similarly, introduce a not-equal gadget between \(v_{i,5}\) and \(v_{i,2}^{*}\) for \(1\leq i\leq n\). Finally, add a not-equal gadget between \(x_{1,1}\) and \(x_{1,2}\).
Proof of Guarantee.: For convenience, let us call the edges \(y_{1}y_{1}^{*},y_{2}y_{2}^{*}\) of a 2-in-2-out gadget (see Figure 16) the in-edges of the 2-in-2-out gadget, the edges \(z_{1}z_{1}^{*},z_{2}z_{2}^{*}\) the out-edges of the 2-in-2-out gadget, the vertices \(y_{1}^{*},y_{2}^{*}\) the in-vertices of the 2-in-2-out gadget, and the vertices \(z_{1}^{*}\), \(z_{2}^{*}\) the out-vertices of the 2-in-2-out gadget.
The next claim follows from Lemma 6.
**Claim 1:** If an in-edge of a 2-in-2-out gadget is an out-edge of another 2-in-2-out gadget, the colour of the out-vertices of both gadgets must be the same.
Next, we point out a property of the vertex gadget and the chain gadget.
**Claim 2:** All terminals of a vertex gadget (resp. chain gadget) should get the same colour under a 5-star colouring.
Repeated application of Claim 1 proves Claim 2 (see supplement for a detailed proof).
We are now ready to prove the guarantee. Suppose that \(G\) admits a 3-colouring \(f\colon V(G)\to\{2,3,4\}\). A 5-colouring \(f^{\prime}:V(G^{\prime})\to\{0,1,2,3,4\}\) of \(G^{\prime}\) is constructed as follows. First, assign \(f^{\prime}(v_{i,j})=f(v_{i})\) for \(1\leq i\leq n\) and \(0\leq j\leq 5\). Extend this into a 5-star colouring of the vertex gadget by using the scheme in Figure 17(c) on each 2-in-2-out gadget within the vertex gadget (use the scheme in Figure 17(c) as-is if \(f^{\prime}(v_{i,j})=4\); suitably swap colours in other cases). To colour the first chain gadget, colour each 2-in-2-out gadget within this chain gadget using the scheme obtained from Figure 17(c) by swapping colour 4 with colour 0. Similarly, for the second chain gadget, colour each 2-in-2-out gadget within the chain gadget using the scheme obtained from Figure 17(c) by swapping colour 4 with colour 1. To complete the colouring, it suffices to extend the partial colouring to not-equal gadgets. For each not-equal gadget between two terminals, say terminal \(y\) and terminal \(z\), colour the 2-in-2-out gadget within the not-equal gadget using the scheme obtained from Figure 17(c) by swapping colour 3 with colour \(f^{\prime}(y)\) and swapping colour 4 with colour \(f^{\prime}(z)\).
By Lemma 6 and Lemma 7 (see the second statements in both lemmas), every 3-vertex path in any gadget in \(G^{\prime}\) containing a terminal of the gadget as an endpoint is tricoloured by \(f^{\prime}\). In addition, the construction of the graph \(G^{\prime}\) is merely gluing together terminals of different gadgets. Therefore, there is no \(P_{4}\) in \(G^{\prime}\) bicoloured by \(f^{\prime}\); that is, \(f^{\prime}\) is a \(5\)-star colouring of \(G^{\prime}\).

Figure 20: \(t\)-th chain gadget in Construction 3 if \(n\) is even, where \(t=1\) or \(2\). If \(n\) is odd, the \(t\)-th chain gadget is the same except that it has only \(n+1\) terminals \(v_{1,t}^{*},v_{2,t}^{*},\ldots,v_{n,t}^{*}\) and \(x_{1,t}\). A chain gadget is similar to a vertex gadget; the only difference is that it has more levels and terminals.
Conversely, suppose that \(G^{\prime}\) admits a \(5\)-star colouring \(f^{\prime}\colon V(G^{\prime})\to\{0,1,2,3,4\}\). By Claim 2, all terminals of a vertex/chain gadget should have the same colour under \(f^{\prime}\). As there is a not-equal gadget between \(x_{1,1}\) and \(x_{1,2}\), \(f^{\prime}(x_{1,1})\neq f^{\prime}(x_{1,2})\) (by Lemma 7). Without loss of generality, assume that \(f^{\prime}(x_{1,1})=0\) and \(f^{\prime}(x_{1,2})=1\). By Claim 2, all terminals of the first chain gadget have colour \(0\); that is, \(f^{\prime}(x_{1,1})=f^{\prime}(x_{2,1})=f^{\prime}(v_{i,1}^{*})=0\) for \(1\leq i\leq n\). Similarly, all terminals of the second chain gadget have colour \(1\); that is, \(f^{\prime}(x_{1,2})=f^{\prime}(x_{2,2})=f^{\prime}(v_{i,2}^{*})=1\) for \(1\leq i\leq n\). By Claim 2, all terminals of the vertex gadget for \(v_{1}\) have the same colour under \(f^{\prime}\), say colour \(c\). Since there is a not-equal gadget between \(v_{1,0}\) and \(v_{1,1}^{*}\), we have \(c=f^{\prime}(v_{1,0})\neq f^{\prime}(v_{1,1}^{*})=0\). Since there is a not-equal gadget between \(v_{1,5}\) and \(v_{1,2}^{*}\), we have \(c=f^{\prime}(v_{1,5})\neq f^{\prime}(v_{1,2}^{*})=1\). So, \(c\in\{2,3,4\}\). Hence, for \(0\leq j\leq 5\), \(f^{\prime}(v_{1,j})\in\{2,3,4\}\). Similarly, for \(1\leq i\leq n\) and \(0\leq j\leq 5\), \(f^{\prime}(v_{i,j})\in\{2,3,4\}\). Moreover, whenever \(v_{i}v_{j}\) is an edge in \(G\), there is a not-equal gadget between terminals \(v_{i,k}\) and \(v_{j,\ell}\) in \(G^{\prime}\) for some \(k,\ell\in\{1,2,3,4\}\) and hence \(f^{\prime}(v_{i,k})\neq f^{\prime}(v_{j,\ell})\). Therefore, the function \(f\colon V(G)\to\{2,3,4\}\) defined as \(f(v_{i})=f^{\prime}(v_{i,0})\) is indeed a \(3\)-colouring of \(G\). This proves the converse part and thus the guarantee.
**Theorem 3**.: \(5\)-Star Colourability _is NP-complete for triangle-free graphs of maximum degree four._
Proof.: We employ Construction 3 to establish a reduction from \(3\)-Colourability(\(4\)-regular) to \(5\)-Star Colourability(triangle-free, \(\Delta=4\)). Let \(G\) be an instance of \(3\)-Colourability(\(4\)-regular). From \(G\), construct an instance \(G^{\prime}\) of \(5\)-Star Colourability(triangle-free, \(\Delta=4\)) by Construction 3.
Let \(m=|E(G)|\) and \(n=|V(G)|\). In \(G^{\prime}\), there are at most \(6n+m+2(1+2+\cdots+\lceil(n+1)/2\rceil+n)+1\leq\frac{1}{4}(n^{2}+46n+12)\)\(2\)-in-2-out gadgets and in addition at most \(16n+8\) vertices and \(32n+12\) edges. So, \(G^{\prime}\) can be constructed in time polynomial in \(n\). By the guarantee in Construction 3, \(G\) is \(3\)-colourable if and only if \(G^{\prime}\) is \(5\)-star colourable.
We have the following theorem by combining Theorem 1 and Theorem 3.
**Theorem 4**.: _For \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\). _
The complexity status of \(k\)-Star Colourability in graphs of maximum degree \(k-1\) is open for \(k=4\) and \(k=6\).
Next, let us shift our attention to regular graphs. We prove that for all \(k\geq 3\) and \(d<k\), the complexity of \(k\)-Star Colourability is the same for graphs of maximum degree \(d\) and \(d\)-regular graphs. That is, for all \(k\geq 3\) and \(d<k\), \(k\)-Star Colourability restricted to graphs of maximum degree \(d\) is in P (resp. NP-complete) if and only if \(k\)-Star Colourability restricted to \(d\)-regular graphs is in P (resp. NP-complete). First, we show that for all \(k\geq 3\), the complexity of \(k\)-Star Colourability is the same for graphs of maximum degree \(k-1\) and \((k-1)\)-regular graphs.
**Construction 4**.:
_Parameter:_ An integer \(k\geq 3\).
_Input:_ A graph \(G\) of maximum degree \(k-1\).
_Output:_ A \((k-1)\)-regular graph \(G^{\prime}\).
_Guarantee 1:_ \(G\) is \(k\)-star colourable if and only if \(G^{\prime}\) is \(k\)-star colourable.
_Guarantee 2:_ If \(G\) is triangle-free (resp. bipartite), then \(G^{\prime}\) is triangle-free (resp. bipartite).
_Steps:_
Introduce two copies of \(G\). For each vertex \(v\) of \(G\), introduce \((k-1)-\deg_{G}(v)\) filler gadgets (see Figure 21) between the two copies of \(v\).
Figure 21: A filler gadget for \(v\in V(G)\).
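For concreteness, the doubling step of Construction 4 can be sketched as follows. Here `filler_gadget` is a hypothetical helper that returns the gadget of Figure 21 on fresh vertices together with its two attachment terminals, one for each copy of \(v\); this interface is an assumption made for illustration, not part of the construction itself.

```python
def construction_4(G, k, filler_gadget):
    """Sketch of Construction 4. G maps each vertex to its set of
    neighbours (maximum degree k-1); returns the edge list of G'."""
    edges = []
    # two disjoint copies of G, tagged 1 and 2
    for copy in (1, 2):
        for u in G:
            for v in G[u]:
                if u < v:  # assumes comparable labels; adds each edge once
                    edges.append(((u, copy), (v, copy)))
    # (k-1) - deg(v) filler gadgets between the two copies of v
    for v in G:
        for i in range(k - 1 - len(G[v])):
            gadget_edges, t1, t2 = filler_gadget((v, i))
            edges.extend(gadget_edges)
            edges.append(((v, 1), t1))  # attach to the first copy of v
            edges.append(((v, 2), t2))  # attach to the second copy of v
    return edges
```

Every vertex of a copy of \(G\) then has degree \(\deg_{G}(v)+\big((k-1)-\deg_{G}(v)\big)=k-1\), as required.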
Proof of Guarantee 1.: If \(G^{\prime}\) is \(k\)-star colourable, then \(G\) is \(k\)-star colourable because \(G\) is a subgraph of \(G^{\prime}\). Conversely, suppose that \(G\) admits a \(k\)-star colouring \(f\colon V(G)\to\{0,1,\ldots,k-1\}\). We produce a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\) as follows. Colour both copies of \(G\) by \(f\) (i.e., for each \(v\in V(G)\), assign the colour \(f(v)\) to both copies of \(v\)). For each vertex \(v\) of \(G\), consider each filler gadget for \(v\) one by one and do the following for each filler gadget under consideration: (i) choose a colour \(c\) not yet used in the closed neighbourhoods of the two copies of \(v\) in \(G^{\prime}\) (the scheme we employ ensures that the colours already used are exactly the same for the first copy of \(v\) and the second copy of \(v\)), and (ii) colour the filler gadget by the \(k\)-star colouring scheme obtained from Figure 22 by swapping colours in the filler gadget suitably so that the copies of \(v\) get colour \(f(v)\) and their neighbours in the filler gadget get colour \(c\).
See Figure 23 for an example. Clearly, \(f^{\prime}\) is a \(k\)-colouring of \(G^{\prime}\).
**Claim 1:**\(f^{\prime}\) is a \(k\)-star colouring of \(G^{\prime}\).
Assume that there is a \(4\)-vertex path \(u,v,w,x\) in \(G^{\prime}\) bicoloured by \(f^{\prime}\) (i.e., \(f^{\prime}(u)=f^{\prime}(w)\) and \(f^{\prime}(v)=f^{\prime}(x)\)). We know that \(f^{\prime}\) employs a \(k\)-star colouring scheme on both copies of \(G\) and each filler gadget. So, path \(u,v,w,x\) must contain vertices from one filler gadget as well as vertices from a copy of \(G\) or another filler gadget. In both cases, one of the two middle vertices in path \(u,v,w,x\) must be a terminal of a filler gadget. Suppose that \(v\) is a terminal of a filler gadget \(G_{1}\), \(w\) is a vertex in \(G_{1}\), and \(u\) is either in a copy of \(G\) or in another filler gadget \(G_{2}\). If \(u\) is in another filler gadget \(G_{2}\), we may assume without loss of generality that the filler gadget \(G_{1}\) is coloured after the filler gadget \(G_{2}\) is coloured. When the filler gadget \(G_{1}\) was coloured, a colour not yet used in the closed neighbourhood of \(v\) in \(G^{\prime}\) was chosen as the colour of \(w\); this contradicts \(f^{\prime}(w)=f^{\prime}(u)\) (because \(u\) was already coloured, and thus \(f^{\prime}(u)\) was already present in the closed neighbourhood of \(v\) in \(G^{\prime}\)). This proves the claim by contradiction. This completes the proof of the converse part.
Proof of Guarantee 2.: Note that the filler gadget is a bipartite graph. Suppose that \(G\) is triangle-free (resp. bipartite). Then, the graph with two disjoint copies of \(G\) (i.e., \(2G\)) is also triangle-free (resp. bipartite). Moreover, for each \(v\in V(G)\), the operation of adding a filler gadget between the two copies of \(v\) preserves the triangle-free property (resp. bipartiteness).
For \(k\geq 3\), Construction 4 establishes a reduction from \(k\)-Star Colourability(\(\Delta=k-1\)) to \(k\)-Star Colourability(\((k-1)\)-regular). Hence, for \(k\geq 3\), if \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\), then \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs. Clearly, if \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs, then \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\). Thus, we have the following theorem.

Figure 22: A \(k\)-star colouring scheme for the filler gadget for \(v\) (also, swap colour \(0\) with \(f(v)\) and colour \(k-1\) with the chosen colour \(c\)).

Figure 23: (a) A \(4\)-star colouring \(f\) of \(G\), and (b) the corresponding \(4\)-star colouring \(f^{\prime}\) of \(G^{\prime}\).
**Theorem 5**.: _For all \(k\geq 3\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\) if and only if \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs. In addition, for \(k\geq 3\), \(k\)-Star Colourability is NP-complete for triangle-free \((\)resp. bipartite\()\) graphs of maximum degree \(k-1\) if and only if \(k\)-Star Colourability is NP-complete for triangle-free \((\)resp. bipartite\()\)\((k-1)\)-regular graphs. _
Therefore, we have the following by Theorem 1 and Theorem 3.
**Theorem 6**.: _For \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs. Moreover, 5-Star Colourability is NP-complete for triangle-free 4-regular graphs. _
**Construction 5**.:
_Parameters:_ Integers \(k\geq 3\) and \(d\leq k-1\).
_Input:_ A graph \(G\) of maximum degree \(d\).
_Output:_ A \(d\)-regular graph \(G^{*}\).
_Guarantee:_\(G\) is \(k\)-star colourable if and only if \(G^{*}\) is \(k\)-star colourable.
_Steps:_
Introduce two copies of \(G\). For each vertex \(v\) of \(G\), introduce \(d-\deg_{G}(v)\) filler gadgets (see Figure 24) between the two copies of \(v\).
To prove the guarantee, observe that \(G\) is a subgraph of \(G^{*}\) and \(G^{*}\) is a subgraph of \(G^{\prime}\) (the output graph in Construction 4). Since \(k\)-star colourability is preserved under taking subgraphs, the guarantee follows from Guarantee 1 of Construction 4.
Thanks to Construction 5, we have the following theorem.
**Theorem 7**.: _For all \(k\geq 3\) and \(d\leq k-1\), \(k\)-Star Colourability is NP-complete for \((\)triangle-free/bipartite\()\) graphs of maximum degree \(d\) if and only if \(k\)-Star Colourability is NP-complete for \((\)triangle-free/bipartite\()\)\(d\)-regular graphs. _
We have the following corollary since \(L_{s}^{(k)}\) is the least integer \(d\) such that \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(d\) (where \(k\geq 3\)).
**Corollary 1**.: _For \(k\geq 4\) and \(d\leq k-1\), \(k\)-Star Colourability is NP-complete for \(d\)-regular graphs if and only if \(d\geq L_{s}^{(k)}\). _
### On Values of \(L_{s}^{(k)}\) and Two Similar Parameters
Recall that for \(k\geq 3\), \(L_{s}^{(k)}\) is the least integer \(d\) such that \(k\)-Star Colourability in graphs of maximum degree \(d\) is NP-complete. Bear in mind that we assume \(\mathrm{P}\neq\mathrm{NP}\) throughout this paper; thus, \(\mathrm{NP}\) is partitioned into three classes: \(\mathrm{P}\), \(\mathrm{NPC}\) and \(\mathrm{NPI}\)[43]. If a problem in \(\mathrm{NP}\) is not NP-complete (i.e., not in \(\mathrm{NPC}\)), then it is either in \(\mathrm{P}\) or in \(\mathrm{NPI}\). By the definition of \(L_{s}^{(k)}\), \(k\)-Star Colourability\((\Delta=d)\) is not NP-complete for \(d<L_{s}^{(k)}\), which means that the problem is either in \(\mathrm{P}\) or in \(\mathrm{NPI}\) (we do not know which is the case).
Clearly, the star chromatic number of a graph of maximum degree \(d\) can be computed in polynomial time if \(d\leq 2\). Hence, \(L_{s}^{(k)}\geq 3\) for \(k\geq 3\). For \(k\geq 3\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k\)[19, Theorems 10 and 16], and thus \(L_{s}^{(k)}\leq k\). Next, we show that \(L_{s}^{(k)}=\Omega(k^{2/3})\) for all \(k\geq 3\).
Figure 24: A filler gadget for \(v\in V(G)\) in Construction 5.
**Observation 1**.: _For \(d\leq 0.33\,k^{2/3}\), \(k\)-Star Colourability is polynomial-time solvable for graphs of maximum degree \(d\). Hence, \(L_{s}^{(k)}>0.33\,k^{2/3}\) for all \(k\geq 3\)._
Proof.: The observation is trivially true for \(d\leq 2\). It suffices to prove the observation for \(d\geq 3\). Suppose that \(d\geq 3\). Ndreca et al. [44] proved that \(\chi_{s}(G)<4.34\,d^{3/2}+1.5\,d\) for every graph \(G\) of maximum degree \(d\). Since \(d\geq 3\), we have \(d^{1/2}\geq 3^{1/2}>1/0.58\), and thus \(d<0.58\,d^{3/2}\). Thus, \(\chi_{s}(G)<(4.34+1.5\times 0.58)d^{3/2}=5.21\,d^{3/2}\) for every graph \(G\) of maximum degree \(d\). Hence, when \(k\geq 5.21\,d^{3/2}\), every graph of maximum degree \(d\) is \(k\)-star colourable. In other words, if \(d\leq(5.21)^{-2/3}k^{2/3}\), then every graph of maximum degree \(d\) is \(k\)-star colourable. Note that \(0.33<(5.21)^{-2/3}\). Hence, if \(d\leq 0.33\,k^{2/3}\), then \(d\leq(5.21)^{-2/3}k^{2/3}\). Therefore, for \(d\leq 0.33\,k^{2/3}\), every graph of maximum degree \(d\) is \(k\)-star colourable, and thus \(k\)-Star Colourability is polynomial-time solvable for graphs of maximum degree \(d\). As a result, \(L_{s}^{(k)}>0.33\,k^{2/3}\) for \(k\geq 3\).
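The numeric inequalities invoked in the proof can be verified mechanically; a minimal sanity check (illustration only, not part of the proof):

```python
import math

assert math.sqrt(3) > 1 / 0.58          # hence d < 0.58 d^{3/2} for d >= 3
assert 4.34 + 1.5 * 0.58 <= 5.21        # the combined coefficient
assert 0.33 < 5.21 ** (-2 / 3)          # so 0.33 k^{2/3} is below the threshold
print("constants in Observation 1 check out")
```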
Theorem 4 proved that for \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for graphs of maximum degree \(k-1\), and thus \(L_{s}^{(k)}\leq k-1\).
Next, let us consider regular graphs. By Theorem 6, \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs for \(k=5\) and \(k\geq 7\). Also, \(4\)-Star Colourability is NP-complete for \(4\)-regular graphs [45, Corollary 5.1].
For \(d\geq 3\), at least \(\lceil(d+4)/2\rceil\) colours are required to star colour a \(d\)-regular graph [19]. If \(k\geq 3\) and \(d\geq 2k-3\), then \(\lceil(d+4)/2\rceil\geq\lceil(2k+1)/2\rceil=k+1>k\), and thus no \(d\)-regular graph is \(k\)-star colourable. Therefore, for \(k\geq 3\), \(k\)-Star Colourability in \(d\)-regular graphs is polynomial-time solvable for each \(d\geq 2k-3\) (because the answer is always 'no'). By Observation 1, for \(k\geq 3\), \(k\)-Star Colourability in \(d\)-regular graphs is polynomial-time solvable for \(d\leq\max\{2,0.33k^{2/3}\}\). In particular, \(3\)-Star Colourability in \(d\)-regular graphs is polynomial-time solvable for all \(d\in\mathbb{N}\). In contrast, for \(k\in\{4,5,7,8,\dots\}\), there exists an integer \(d\) such that \(k\)-Star Colourability in \(d\)-regular graphs is NP-complete (see the preceding paragraph). Hence, for \(k\in\{4,5,7,8,\dots\}\), we are interested in the least (resp. greatest) integer \(d\) such that \(k\)-Star Colourability in \(d\)-regular graphs is NP-complete, and we denote it by \(\widetilde{L}_{s}^{(k)}\) (resp. \(\widetilde{H}_{s}^{(k)}\)). By the definitions, \(L_{s}^{(k)}\leq\widetilde{L}_{s}^{(k)}\leq\widetilde{H}_{s}^{(k)}\) for \(k\in\{4,5,7,8,\dots\}\). We have \(\widetilde{L}_{s}^{(4)}\leq 4\) since \(4\)-Star Colourability is NP-complete for \(4\)-regular graphs [45]. Similarly, for \(k=5\) and \(k\geq 7\), \(\widetilde{L}_{s}^{(k)}\leq k-1\) since \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs (see Theorem 6).
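The arithmetic behind the first sentence can be spot-checked as well (illustration only):

```python
import math

# for d >= 2k-3, the lower bound ceil((d+4)/2) already exceeds k
for k in range(3, 1000):
    d = 2 * k - 3
    assert math.ceil((d + 4) / 2) == k + 1
```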
Theorem 7 proved that for \(k\geq 3\) and \(d\leq k-1\), \(k\)-Star Colourability in graphs of maximum degree \(d\) is NP-complete if and only if \(k\)-Star Colourability in \(d\)-regular graphs is NP-complete. By the definition of \(L_{s}^{(k)}\), for \(k\geq 3\), \(k\)-Star Colourability in graphs of maximum degree \(d\) is NP-complete for \(d=L_{s}^{(k)}\), and not NP-complete for \(d<L_{s}^{(k)}\). Hence, for \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability in \(d\)-regular graphs is NP-complete for \(d=L_{s}^{(k)}\), and not NP-complete for \(d<L_{s}^{(k)}\) by Theorem 7 (applicable because \(d\leq L_{s}^{(k)}\leq k-1\)). This proves that for \(k=5\) and \(k\geq 7\), \(L_{s}^{(k)}\) is the least integer \(d\) such that \(k\)-Star Colourability in \(d\)-regular graphs is NP-complete; that is, \(\widetilde{L}_{s}^{(k)}=L_{s}^{(k)}\).
**Theorem 8**.: _For \(k=5\) and \(k\geq 7\), we have \(\widetilde{L}_{s}^{(k)}=L_{s}^{(k)}\). _
As mentioned above, for \(k\geq 3\), \(k\)-Star Colourability in \(d\)-regular graphs is polynomial-time solvable for each \(d\geq 2k-3\). Hence, for \(k\in\{4,5,7,8,\dots\}\), we have \(\widetilde{H}_{s}^{(k)}\leq 2k-4\), and the same bound holds whenever \(\widetilde{H}_{s}^{(k)}\) can be defined (i.e., \(\exists d\in\mathbb{N}\) such that \(k\)-Star Colourability(\(d\)-regular) \(\in\) NPC). For \(k=5\) and \(k\geq 7\), \(k\)-Star Colourability is NP-complete for \((k-1)\)-regular graphs by Theorem 6, and thus \(L_{s}^{(k)}=\widetilde{L}_{s}^{(k)}\leq k-1\leq\widetilde{H}_{s}^{(k)}\leq 2k-4\).
See the concluding section (Section 4) for a discussion of the open problems.
## 3 Restricted Star Colouring
### Introduction and Literature Survey
Restricted star colouring is a variant of star colouring as well as a generalisation of vertex ranking. Therefore, the restricted star chromatic number \(\chi_{rs}(G)\) of a graph \(G\) is bounded from below by the star chromatic number and bounded from above by the ranking number, better known as the treedepth [46]. The treedepth is in turn bounded from above by vertex cover number plus one [47].
For complete \(r\)-partite graphs and split graphs, the rs chromatic number is equal to the vertex cover number plus one [18, 34].
It is easy to observe that for \(k\in\mathbb{N}\), a \(k\)-rs colourable graph is \((k-1)\)-degenerate [7], and hence no \(d\)-regular graph is \(d\)-rs colourable. Almeter et al. [7] proved that \(\chi_{rs}(G)\leq 7\) for every subcubic graph \(G\). They also proved that the rs chromatic number of the hypercube \(Q_{d}\) is exactly \(d+1\). For every \(d\), there exists a graph \(G\) with maximum degree \(d\) such that \(\chi_{rs}(G)\geq\Omega(d^{2}/\log d)\)[7]. Karpas et al. [6] proved that (i) \(\chi_{rs}(T)=O(\log n/\log\log n)\) for every tree \(T\), and this bound is tight, and (ii) \(\chi_{rs}(G)=O(r\sqrt{n})\) for every \(r\)-degenerate graph \(G\). For every \(n\), there exists a \(2\)-degenerate \(3\)-regular graph \(G\) with \(\chi_{rs}(G)>n^{1/3}\)[6]. Also, \(\chi_{rs}(G)=O(\log n)\) for every planar graph \(G\), and this result holds for every graph class excluding a fixed minor [6]. Shalu and Sandhya [5] proved that \(\chi_{rs}(G)\leq 4\alpha(G)\) for every graph \(G\) of girth at least \(5\).
For \(k\geq 3\), \(k\)-RS Colourability is NP-complete for (\(2\)-degenerate) planar bipartite graphs of maximum degree \(k\) and arbitrarily large girth [18]. In addition, it is NP-complete to test whether a \(3\)-star colourable graph admits a \(3\)-rs colouring [18]. The optimization version of rs colouring is NP-hard to approximate within \(n^{\frac{1}{3}-\epsilon}\) for all \(\epsilon>0\) in the class of \(2\)-degenerate bipartite graphs [18]; in contrast, every \(2\)-degenerate graph admits an rs colouring with \(n^{\frac{1}{2}}\) colours [6, Theorem 6.2], and thus the optimization version of rs colouring is approximable within \(n^{\frac{1}{2}}\) for \(2\)-degenerate graphs.
On the positive side, for \(3\)-RS Colourability, [18] gives a linear-time algorithm for the class of trees and a polynomial-time algorithm for the class of chordal graphs. The complexity of \(k\)-RS Colourability in chordal graphs is open for \(k\geq 4\). For each \(k\in\mathbb{N}\), \(k\)-RS Colourability can be expressed in \(\mathrm{MSO}_{1}\)[18], and thus admits FPT algorithms parameterized by either treewidth or cliquewidth by Courcelle's theorem [39, 40]. Thanks to Observation 2, \(k\)-RS Colourability can be expressed in the Locally Checkable Vertex Subset and Partitioning problems (LC-VSP) framework of Telle and Proskurowski [48] (see supplement for details). This implies the existence of practically fast FPT algorithms for the problem [49, 50].
### RS Colouring in Terms of Homomorphisms
Let \(\vec{K_{q}}\) denote the tournament with vertex set \(\mathbb{Z}_{q}\) and edge set \(\{(i,j)\colon i,j\in\mathbb{Z}_{q}\text{ and }i<j\}\). Observe that a homomorphism \(\psi\) from an oriented graph \(\vec{H}\) to \(\vec{K_{q}}\) is in-neighbourhood injective if and only if no vertex \(v\) of \(\vec{H}\) has two in-neighbours \(u\) and \(w\) with \(\psi(v)>\psi(u)=\psi(w)\). Hence, an in-neighbourhood injective homomorphism from an orientation of a graph \(G\) to \(\vec{K_{q}}\) is a \(q\)-rs colouring of \(G\). Moreover, if \(f\) is a \(q\)-rs colouring of \(G\), then orienting each edge of \(G\) as an arc from the lower-coloured vertex to the higher-coloured vertex gives an (acyclic) orientation \(\vec{G}\) of \(G\) such that \(f\) is an in-neighbourhood injective homomorphism from \(\vec{G}\) to \(\vec{K_{q}}\). In short, a \(q\)-rs colouring of a graph \(G\) is precisely an in-neighbourhood injective homomorphism from an orientation of \(G\) to \(\vec{K_{q}}\). Thus, we have the following (since every transitive tournament on \(q\) vertices is isomorphic to \(\vec{K_{q}}\) as a digraph).
**Observation 2**.: _A graph \(G\) admits a \(q\)-rs colouring if and only if \(G\) has an orientation that admits an in-neighbourhood injective homomorphism to a transitive tournament on \(q\) vertices. _
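Observation 2 is easy to make executable. The sketch below (an illustration of ours, not taken from the cited literature) tests the equivalent neighbourhood condition of a \(q\)-rs colouring — proper, with no vertex having two neighbours that share a colour smaller than its own — and produces the low-to-high orientation described above.

```python
def is_rs_colouring(adj, f):
    """adj: dict vertex -> iterable of neighbours; f: dict vertex -> colour."""
    for v, nbrs in adj.items():
        seen = set()  # colours smaller than f[v] seen among neighbours of v
        for u in nbrs:
            if f[u] == f[v]:      # not even a proper colouring
                return False
            if f[u] < f[v]:
                if f[u] in seen:  # two in-neighbours of v would collide
                    return False
                seen.add(f[u])
    return True

def orient_by_colour(edges, f):
    """The orientation of Observation 2: each edge points from the
    lower-coloured endpoint to the higher-coloured one."""
    return [(u, v) if f[u] < f[v] else (v, u) for u, v in edges]
```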
To study minor-closed classes, Nešetřil and Ossona de Mendez [51] introduced a generalisation of in-neighbourhood injective homomorphism, called folding. The complexity of in-neighbourhood injective homomorphisms to (reflexive) tournaments is studied by MacGillivray and Swarts [52]. Given an orientation \(\vec{G}\) of a graph \(G\), one can test in polynomial time whether \(\vec{G}\) admits an in-neighbourhood injective homomorphism to \(\vec{K_{3}}\)[52]. On the other hand, it is NP-complete to test whether an input graph \(G\) has an orientation that admits an in-neighbourhood injective homomorphism to \(\vec{K_{3}}\) (by Observation 2 and [18, Theorem 1]).
### Hardness Transitions
For all \(k\geq 3\), \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(k\)[18, Theorem 3]. In this section, we lower the maximum degree in this hardness result from \(k\) to \(k-1\) except for \(k=3\) (for \(k=3\), the problem is polynomial-time solvable in graphs of maximum degree \(k-1\)). We show that for all \(k\geq 4\), \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\). First, we prove this for \(k=4\). In fact, we show that \(4\)-RS Colourability is NP-complete for planar \(3\)-regular graphs of girth \(5\). Construction 7 below is employed to this end. Construction 7 makes use
of the following observation and Construction 6. Construction 6 was used in Theorem 1 of [18] to show that 3-RS Colourability is NP-complete for planar graphs of maximum degree 3.
**Observation 3**.: _Let \(f\) be an rs colouring of a graph \(G\). If \(u\) and \(v\) are two vertices in \(G\) that are within distance two in \(G\), then \(f(u)\neq 0\) or \(f(v)\neq 0\)\((\)or both\()\)._ Indeed, if \(u\) and \(v\) are adjacent, then properness forces \(f(u)\neq f(v)\); and if they have a common neighbour \(x\), then \(f(x)>0\), so \(x\) cannot have two neighbours that share the colour \(0\).
**Construction 6** ([18]).: _Input:_ A positive boolean formula \(B=(X,C)\) such that the graph of \(B\) is a planar 3-regular graph. _Output:_ A planar graph \(G\) of maximum degree 3 and girth 6.
_Guarantee [18]:_\(B\) has a 1-in-3 satisfying truth assignment if and only if \(G\) is 3-rs colourable.
_Steps:_
Let \(X=\{x_{1},x_{2},\ldots,x_{m}\}\) and \(C=\{c_{1},c_{2},\ldots,c_{m}\}\) (note that \(|X|=|C|\) since the graph of \(B\) is 3-regular). Since \(B\) is a positive formula, each clause \(c_{j}\) is a 3-element subset of \(X\). Recall that the graph of \(B\), denoted by \(G_{B}\), is the graph with vertex set \(X\cup C\) and edge set \(\{x_{i}c_{j}\ :\ x_{i}\in c_{j}\}\). To construct \(G\) from \(G_{B}\), first replace each vertex \(c_{j}\) of \(G_{B}\) by a triangle \((c_{j1},c_{j2},c_{j3})\), and then subdivide every edge of the resultant graph exactly once (see Figure 25 for an example).
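These steps can be rendered in code as follows; the assumption that the three clause edges attach to the three triangle vertices in a one-to-one fashion is our reading of Figure 25.

```python
def construction_6(clauses):
    """Sketch of Construction 6. clauses: list of 3-element tuples of
    variable names; returns the edge list of G."""
    edges = []
    for j, clause in enumerate(clauses):
        tri = [("c", j, 1), ("c", j, 2), ("c", j, 3)]
        # triangle replacing the clause vertex c_j
        edges += [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
        # one triangle vertex per variable of the clause
        for corner, x in zip(tri, clause):
            edges.append((("x", x), corner))
    # subdivide every edge exactly once (this raises the girth from 3 to 6)
    subdivided = []
    for i, (u, v) in enumerate(edges):
        s = ("s", i)
        subdivided += [(u, s), (s, v)]
    return subdivided
```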
We employ Construction 7 below to prove that 4-RS Colourability is NP-complete. A gadget called _colour forcing gadget_ is employed in the construction. The graph displayed in Figure 26(a) is the main component of the colour forcing gadget; let us call it the gadget component.
**Lemma 8**.: _The gadget component has rs chromatic number 4. Besides, for every 4-rs colouring \(f\) of the gadget component, \(f(u_{3})=0\) or \(f(u_{5})=0\)._
Figure 26: Gadget component in Construction 7, and a 4-rs colouring of it.
Figure 25: Example of Construction 6.
Proof.: Observe that at least 4 colours are needed to rs colour a 5-vertex cycle (in fact, at least 4 colours are needed to star colour a 5-vertex cycle [34]). Hence, an rs colouring of the gadget component requires at least four colours. In addition, each 4-rs colouring \(f\) of the gadget component must use all four colours on each 5-vertex cycle in it. Thus, colour 0 has to occur on all four 5-vertex cycles in the gadget component. Since no vertex of the gadget component is in all of those four 5-vertex cycles, colour 0 has to occur at least twice in the gadget component.
Thanks to Observation 3, no two vertices within distance two can both get colour 0 under \(f\). Except for \(\{u_{3},u_{8}\}\) and \(\{u_{5},u_{6}\}\), all pairs of vertices from the gadget component are within distance two in the gadget component. Since colour 0 has to occur at least twice in the gadget component (see the last paragraph), the 0-th colour class \(f^{-1}(0)\) is either \(\{u_{3},u_{8}\}\) or \(\{u_{5},u_{6}\}\). Hence, \(f(u_{3})=0\) or \(f(u_{5})=0\).
For every construction in this paper, only selected vertices within each gadget are allowed to have neighbours outside the gadget. We call such vertices _the terminals_ of the gadget, and highlight them in diagrams by drawing a circle around them.
The colour forcing gadget is displayed in Figure 27. Consider the 4-colouring of the gadget displayed in Figure 28. It is a 4-rs colouring of the gadget because (i) no vertex coloured 1 has two neighbours coloured 0, (ii) no vertex coloured 2 has two neighbours of the same colour, except for the vertex \(u_{3}\) coloured 2 and its two neighbours coloured 3 (which is allowed since \(3>2\)), and (iii) no vertex coloured 3 has two neighbours of the same colour.
The colour forcing gadget is named so because the terminal of the gadget must be coloured 0 under each 4-rs colouring of the gadget (see Lemma 10 below). Since the colour forcing gadget is 4-rs colourable, its subgraph \(H\) shown in Figure 29 is 4-rs colourable as well.
**Lemma 9**.: \(f(u_{5}^{\prime\prime})=0\) _for every 4-rs colouring \(f\) of the graph \(H\) in Figure 29._
Figure 28: A 4-rs colouring of the colour forcing gadget.
Figure 27: The colour forcing gadget.
Proof.: Let \(f\) be a 4-rs colouring of \(H\). Note that there are three copies of the gadget component in \(H\). By Lemma 8, \(f(u_{3})=0\) or \(f(u_{5})=0\). For the same reason, \(f(u_{3}^{\prime})=0\) or \(f(u_{5}^{\prime})=0\). Similarly, \(f(u_{3}^{\prime\prime})=0\) or \(f(u_{5}^{\prime\prime})=0\). We claim that \(f(u_{3}^{\prime\prime})\neq 0\). On the contrary, assume that \(f(u_{3}^{\prime\prime})=0\). This implies that \(f(u_{3})\neq 0\) and \(f(u_{5}^{\prime})\neq 0\) by Observation 3. Since \(u_{3}\) or \(u_{5}\) must be coloured \(0\), we have \(f(u_{5})=0\). Similarly, \(u_{3}^{\prime}\) or \(u_{5}^{\prime}\) must be coloured \(0\) and thus \(f(u_{3}^{\prime})=0\). We have a contradiction since \(f(u_{5})=0=f(u_{3}^{\prime})\) and \(u_{5}u_{3}^{\prime}\) is an edge. Therefore, \(f(u_{3}^{\prime\prime})\neq 0\) by contradiction. Hence, \(f(u_{5}^{\prime\prime})=0\) because \(u_{3}^{\prime\prime}\) or \(u_{5}^{\prime\prime}\) must be coloured \(0\).
Since \(H\) is a subgraph of the colour forcing gadget, every 4-rs colouring \(f\) of the gadget is a 4-rs colouring of its subgraph \(H\), and thus \(f(u_{5}^{\prime\prime})=0\) by Lemma 9. Hence, we have the following.
**Lemma 10**.: _Every 4-rs colouring \(f\) of the colour forcing gadget must assign colour 0 on its terminal (that is, \(f(u_{5}^{\prime\prime})=0\)). _
We employ the graph in Figure 27 as the colour forcing gadget rather than the graph in Figure 29 to ensure that the output graph is 3-regular. With the help of the next construction, we prove that 4-RS Colourability is NP-complete for planar 3-regular graphs.
**Construction 7**.: _Input:_ A positive boolean formula \(B=(X,C)\) such that the graph of \(B\) is a planar 3-regular graph.
_Output:_ A planar 3-regular graph \(G^{\prime}\) of girth five.
_Guarantee:_\(B\) has a 1-in-3 satisfying truth assignment if and only if \(G^{\prime}\) is 4-rs colourable.
_Steps:_
First, construct a graph \(G\) from formula \(B\) by Construction 6 above. Then, for every degree-2 vertex \(v\) of \(G\), introduce a colour forcing gadget (see Figure 27) and join the terminal of the gadget to \(v\) by an edge.
Clearly, \(G^{\prime}\) is a planar 3-regular graph. Since \(G\) has girth 6 (see Construction 6) and the colour forcing gadget has girth 5, the graph \(G^{\prime}\) has girth 5.
Proof of guarantee.: Suppose that the formula \(B\) has a 1-in-3 satisfying truth assignment. By the guarantee in Construction 6, \(G\) admits a 3-rs colouring \(f\colon V(G)\to\{1,2,3\}\). Observe that \(f(v)>0\) for all \(v\in V(G)\). Extend \(f\) into a 4-colouring \(f^{\prime}\) of \(G^{\prime}\) by applying the 4-rs colouring scheme in Figure 28 on each colour forcing gadget.
**Claim 1:**\(f^{\prime}\) is a 4-rs colouring of \(G^{\prime}\).
Assume the contrary. That is, there is a path \(Q=x,y,z\) in \(G^{\prime}\) with \(f^{\prime}(y)>f^{\prime}(x)=f^{\prime}(z)\). Since the copy of \(G\) in \(G^{\prime}\) and the colour forcing gadgets are coloured by rs-colouring schemes (namely, \(f\) and Figure 28), \(Q\) contains an edge \(u_{5}^{\prime\prime}v\), where \(u_{5}^{\prime\prime}\) is the terminal of a colour forcing gadget and \(v\in V(G)\). By symmetry, we assume without loss of generality that \(xy\) is the edge \(u_{5}^{\prime\prime}v\). Hence, either (i) \(x=u_{5}^{\prime\prime}\), \(y=v\) (and \(z\in V(G)\)); or (ii) \(y=u_{5}^{\prime\prime}\), \(x=v\) (and \(z\) is in a colour forcing gadget). Note that \(f^{\prime}(u_{5}^{\prime\prime})=0\) (see Figure 28) and \(f^{\prime}(v)=f(v)>0\). Since \(f^{\prime}(y)>f^{\prime}(x)\), we have \(x=u_{5}^{\prime\prime}\) and \(y=v\) (i.e., Case (i) occurs). Thus, \(x\) is the terminal of the colour forcing gadget attached at \(y\) (\(=v\)) and \(f^{\prime}(x)=f^{\prime}(u_{5}^{\prime\prime})=0\). Since Case (i) occurs, \(z\in V(G)\). Hence, \(f^{\prime}(z)=f(z)>0\). This is a contradiction since \(f^{\prime}(z)=f^{\prime}(x)=0\). This proves Claim 1. Therefore, \(G^{\prime}\) is 4-rs colourable.
Figure 29: The subgraph \(H\) of the colour forcing gadget.
Conversely, suppose that \(G^{\prime}\) admits a \(4\)-rs colouring \(f^{\prime}\). By Lemma 10, terminals of all colour forcing gadgets must be coloured \(0\) by \(f^{\prime}\). Note that every vertex in \(G\) is either a degree-\(2\) vertex or adjacent to a degree-\(2\) vertex. Since a colour forcing gadget is attached to each degree-\(2\) vertex of \(G\), every vertex \(v\in V(G)\) is within distance two from a terminal in \(G^{\prime}\). Thanks to Observation 3, this means that no vertex \(v\in V(G)\) is coloured \(0\) by \(f^{\prime}\). Since \(f^{\prime}\) restricted to \(V(G)\) uses only colours \(1\),\(2\) and \(3\), the restriction is indeed a \(3\)-rs colouring of \(G\). By the guarantee in Construction 6, this implies that \(B\) has a \(1\)-in-\(3\) satisfying truth assignment.
We know that the construction of graph \(G\) (i.e., Construction 6) requires only time polynomial in the input size. Construction 7 requires only time polynomial in the input size because (i) the colour forcing gadget is a fixed graph, and (ii) at most \(|V(G)|\) colour forcing gadgets are introduced in Construction 7. Given a positive boolean formula \(B=(X,C)\) such that \(G_{B}\) is a planar \(3\)-regular graph, it is NP-complete to test whether \(B\) has a \(1\)-in-\(3\) satisfying truth assignment [53]. Thus, Construction 7 gives the following result.
**Theorem 9**.: \(4\)-RS Colourability _is NP-complete for planar 3-regular graphs of girth 5. _
**Corollary 2**.: \(4\)-RS Colourability _is NP-complete for triangle-free graphs \(G\) of maximum degree 3. _
Next, we generalise Corollary 2 as follows: for \(k\geq 4\), \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\). For \(k\geq 5\), we employ Construction 8 below to establish a reduction from \((k-2)\)-RS Colourability of graphs of maximum degree \(k-2\) to \(k\)-RS Colourability of graphs of maximum degree \(k-1\).
The important gadget in this construction is the _colour blocking gadget_ shown in Figure 30 (note that vertex \(u_{2}\) is adjacent to vertices \(u_{1},u_{3},y_{2},y_{3},\ldots,y_{k-2}\), vertex \(v_{2}\) is adjacent to vertices \(v_{1},v_{3},x_{2},x_{3},\ldots,x_{k-2}\), and for each \(i,j\in\{1,2,\ldots,k-1\}\), \(x_{i}\) is adjacent to \(y_{j}\) except for \(j=i\)). Observe that the colour blocking gadget has maximum degree \(k-1\). Lemma 11 attests that the name of the gadget is meaningful.
**Lemma 11**.: _Let \(k\geq 5\). Let \(f\) be a \(k\)-rs colouring of the colour blocking gadget (displayed in Figure 30). Then, \(f(u_{2})\neq 0\), \(f(u_{3})\neq 0\), \(f(v_{2})\neq 0\), and \(f(v_{3})\neq 0\). In particular, \(f\) must assign a non-zero colour on the terminal of the gadget and the neighbour of the terminal in the gadget._
Proof.: To prove the lemma, let us discuss an observation. By the definition of rs colouring, two vertices coloured \(c\) cannot have a common neighbour of higher colour. Besides, two vertices \(w_{1}\) and \(w_{2}\) both coloured \(c\) cannot have two common neighbours \(w_{1}^{\prime}\) and \(w_{2}^{\prime}\) both coloured \(c^{\prime}\) (otherwise, path \(w_{1},w_{1}^{\prime},w_{2},w_{2}^{\prime}\) is a bicoloured \(P_{4}\), and hence \(f\) is not even a star colouring, let alone a restricted star colouring). Therefore, we have the following.
Figure 30: The colour blocking gadget
**Claim 1:** If two vertices \(x_{i}\) and \(x_{j}\) have colour \(c\), then their common neighbours get pairwise distinct colours less than \(c\).
Recall that \(X=\{x_{1},x_{2},\ldots,x_{k-1}\}\) and \(Y=\{y_{1},y_{2},\ldots,y_{k-1}\}\). For convenience, we call the sets \(X\) and \(Y\) the two sides. Observe that the sides \(X\) and \(Y\) are symmetric because 'rotating' the gadget by \(180^{\circ}\) gives an automorphism \(\psi\) of the gadget that maps \(X\) to \(Y\) and vice versa (define \(\psi\) as \(\psi(x_{p})=y_{k-p}\) and \(\psi(y_{p})=x_{k-p}\) for \(1\leq p\leq k-1\), and \(\psi(u_{q})=v_{q}\) and \(\psi(v_{q})=u_{q}\) for \(1\leq q\leq 3\)). We consider two cases: Case 1, when a colour repeats on side \(X\), and Case 2, when it does not (i.e., no colour repeats on side \(X\)); Case 2 is treated after Case 1.
_Case 1:_ A colour repeats on side \(X\) (i.e., \(f(x_{i})=f(x_{j})=c\), where \(1\leq i<j\leq k-1\)).
Each \(y\in Y\) is adjacent to \(x_{i}\) or \(x_{j}\) (or both), and thus \(f(y)\neq c\). We consider various subcases depending on the values of \(i\) and \(j\).
Subcase 1.1\((i=1)\colon f(x_{1})=f(x_{j})=c\) where \(j>1\).
Since \(u_{1}\) is adjacent to \(x_{1}\), we have \(f(u_{1})\neq c\). Hence, no vertex in \(\{u_{1}\}\cup Y\) is coloured \(c\) (i.e., \(c\notin f(\{u_{1}\}\cup Y)\)). Vertices in \(Y\setminus\{y_{1},y_{j}\}\) are common neighbours of \(x_{1}\) and \(x_{j}\). Thus, by Claim 1, vertices in \(Y\setminus\{y_{1},y_{j}\}\) have pairwise distinct colours less than \(c\). Hence, for each vertex \(y_{p}\in Y\setminus\{y_{1},y_{j}\}\), we have \(f(y_{p})<c\) and thus \(f(y_{1})\neq f(y_{p})\) (if not, the bicoloured path \(y_{1},x_{j},y_{p}\) has a higher colour on its middle vertex). That is, \(y_{1}\) cannot get a colour used in \(Y\setminus\{y_{1},y_{j}\}\) (i.e., \(f(y_{1})\notin f(Y\setminus\{y_{1},y_{j}\})\)). Similarly, if \(f(u_{1})=f(y_{p})\) (resp. \(f(y_{j})=f(y_{p})\)) for some \(y_{p}\in Y\setminus\{y_{1},y_{j}\}\), then the bicoloured path \(u_{1},x_{1},y_{p}\) (resp. \(y_{j},x_{1},y_{p}\)) has the higher colour on its middle vertex; a contradiction. Hence, \(u_{1}\) (resp. \(y_{j}\)) cannot get a colour used in \(Y\setminus\{y_{1},y_{j}\}\). That is, \(f(u_{1})\notin f(Y\setminus\{y_{1},y_{j}\})\) and \(f(y_{j})\notin f(Y\setminus\{y_{1},y_{j}\})\). Hence, we have the following.
**Claim 2 (of Subcase 1.1):**\(f(y_{1})\notin f(Y\setminus\{y_{1},y_{j}\})\), \(f(u_{1})\notin f(Y\setminus\{y_{1},y_{j}\})\) and \(f(y_{j})\notin f(Y\setminus\{y_{1},y_{j}\})\).
Since \(u_{1}y_{1}\) is an edge, \(f(u_{1})\neq f(y_{1})\). We also know that \(f(y_{1})\notin f(Y\setminus\{y_{1},y_{j}\})\), \(f(u_{1})\notin f(Y\setminus\{y_{1},y_{j}\})\), and vertices in \(Y\setminus\{y_{1},y_{j}\}\) have pairwise distinct colours. Hence, vertices in \(\{u_{1}\}\cup Y\setminus\{y_{j}\}\) have pairwise distinct colours. Besides, we know that \(|\{u_{1}\}\cup Y\setminus\{y_{j}\}|=k-1\) and \(c\notin f(\{u_{1}\}\cup Y)\). Thus, we have the following claim.
**Claim 3 (of Subcase 1.1):** Vertices in \(\{u_{1}\}\cup Y\setminus\{y_{j}\}\) get a permutation of the \(k-1\) colours \(0,\ldots,c-1,c+1,\ldots,k-1\).
Since \(f(y_{j})\neq c,\ f(y_{j})\in\{0,\ldots,c-1,c+1,\ldots,k-1\}=f(\{u_{1}\}\cup Y \setminus\{y_{j}\})\) (see Claim 3). Since \(f(y_{j})\notin f(Y\setminus\{y_{1},y_{j}\})\) (see Claim 2) and \(f(y_{j})\in f(\{u_{1}\}\cup Y\setminus\{y_{j}\})\), we have the following claim.
Figure 31: A \(k\)-rs colouring of the colour blocking gadget.
**Claim 4 (of Subcase 1.1):**\(f(y_{j})\in\{f(u_{1}),f(y_{1})\}\).
Let us consider the colour of an arbitrary vertex \(x_{p}\in X\setminus\{x_{1},x_{j}\}\). By Claim 3, all \(k\) colours except \(c\) are used in \(\{u_{1}\}\cup Y\setminus\{y_{j}\}\). In particular, all \(k\) colours except \(c\) are used in \(\{u_{1}\}\cup Y\), and thus \(f(x_{p})\in\{c,f(u_{1})\}\cup f(Y)\). Since \(x_{p}\) is adjacent to every vertex in \(Y\setminus\{y_{p}\}\), we have \(f(x_{p})\notin f(Y\setminus\{y_{p}\})\). Therefore, \(f(x_{p})\in\{c,f(u_{1}),f(y_{p})\}\).
We show that \(f(x_{p})=f(u_{1})\) leads to a contradiction. Suppose that \(f(x_{p})=f(u_{1})\). Then, \(f(u_{1})=f(x_{p})\neq f(y_{j})\) (because \(x_{p}y_{j}\) is an edge). Since \(f(y_{j})\neq f(u_{1})\), \(f(y_{j})=f(y_{1})\) by Claim 4. Hence, path \(u_{1},y_{1},x_{p},y_{j}\) is a bicoloured \(P_{4}\), a contradiction. Thus, by contradiction, \(f(x_{p})\neq f(u_{1})\).
Next, we show that \(f(x_{p})=c\) leads to a contradiction. Suppose that \(f(x_{p})=c\). That is, \(f(x_{p})=c=f(x_{1})=f(x_{j})\); in particular, \(f(x_{p})=f(x_{1})\) and \(f(x_{p})=f(x_{j})\). By Claim 4, either \(f(y_{j})=f(u_{1})\) or \(f(y_{j})=f(y_{1})\). As a result, either path \(u_{1},x_{1},y_{j},x_{p}\) or path \(x_{j},y_{1},x_{p},y_{j}\) is a bicoloured \(P_{4}\). Thus, by contradiction, \(f(x_{p})\neq c\).
Therefore, the only possibility is \(f(x_{p})=f(y_{p})\). Since \(x_{p}\) is arbitrary, we have the following claim.
**Claim 5 (of Subcase 1.1):**\(f(x_{p})=f(y_{p})\) for each \(p\in\{1,2,\ldots,k-1\}\setminus\{1,j\}\).
We consider two subcases based on the value of \(j\).
Subcase 1.1.1: \(j\neq k-1\).
By Claim 5, \(f(x_{k-1})=f(y_{k-1})\) (note that \(j\neq k-1\)).
Consider the colour at \(v_{1}\). Since all \(k\) colours except \(c\) are used in \(\{u_{1}\}\cup Y\) (see Claim 3), \(f(v_{1})\in\{c,f(u_{1})\}\cup f(Y)\). Since \(v_{1}y_{k-1}\) is an edge, \(f(v_{1})\neq f(y_{k-1})\). For \(1\leq p\leq k-2,\ f(v_{1})\neq f(y_{p})\) (otherwise, path \(y_{p},x_{k-1},v_{1},y_{k-1}\) is a bicoloured \(P_{4}\)). Thus, \(f(v_{1})\notin f(Y)\). Hence, \(f(v_{1})\in\{c,f(u_{1})\}\).
We show by contradiction that \(f(v_{1})\neq c\). Suppose that \(f(v_{1})=c\). As a result, \(f(v_{1})=f(x_{1})\). Since \(f(x_{k-1})=f(y_{k-1})\), path \(x_{k-1},v_{1},y_{k-1},x_{1}\) is a bicoloured \(P_{4}\), a contradiction. Thus, by contradiction, \(f(v_{1})\neq c\). Since \(f(v_{1})\in\{c,f(u_{1})\}\), it follows that \(f(v_{1})=f(u_{1})\).
Observe that \(f(y_{j})\neq f(v_{1})\) (otherwise, path \(y_{j},x_{k-1},v_{1},y_{k-1}\) is a bicoloured \(P_{4}\)). Since \(f(v_{1})=f(u_{1})\), this implies that \(f(y_{j})\neq f(u_{1})\). Since \(f(y_{j})\in\{f(u_{1}),f(y_{1})\}\) (see Claim 4) and \(f(y_{j})\neq f(u_{1})\), we have \(f(y_{j})=f(y_{1})\). Since vertices \(y_{1}\) and \(y_{j}\) have the same colour and they are at distance two from each other, \(f(y_{1})=f(y_{j})\neq 0\) by Observation 3. Similarly, \(f(x_{k-1})=f(y_{k-1})\neq 0\). By Claim 3, vertices in \(\{u_{1}\}\cup Y\setminus\{y_{j}\}\) get a permutation of colours \(0,\ldots,c-1,c+1,\ldots,k-1\). In particular, one of the vertices in \(\{u_{1}\}\cup Y\setminus\{y_{j}\}\) is coloured \(0\). Since \(f(y_{1})\neq 0\) and \(f(y_{k-1})\neq 0\), one of the vertices in \(\{u_{1}\}\cup Y\setminus\{y_{1},y_{j},y_{k-1}\}=\{u_{1},y_{2},\ldots,y_{j-1},y_ {j+1},\ldots,y_{k-2}\}\) is coloured \(0\). Since \(u_{2}\) and \(u_{3}\) are within distance two from each of these vertices, \(f(u_{2})\neq 0\) and \(f(u_{3})\neq 0\) (by Observation 3). Since \(f(v_{1})=f(u_{1})\) (see previous paragraph) and \(f(x_{p})=f(y_{p})\) for each \(p\in\{1,2,\ldots,k-1\}\setminus\{1,j\}\) (see Claim 5), one of the vertices \(v_{1},x_{2},\ldots,x_{j-1},x_{j+1},\ldots,x_{k-2}\) is coloured \(0\). Since \(v_{2}\) and \(v_{3}\) are within distance two from each of these vertices, \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\) (by Observation 3). Therefore, \(f(u_{2})\neq 0\), \(f(u_{3})\neq 0\), \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\).
Subcase 1.1.2: \(j=k-1\).
Due to Claim 3, all \(k\) colours except \(c\) are used in \(\{u_{1}\}\cup Y\). Consider the colour at \(v_{1}\). Clearly, \(f(v_{1})\in f(\{u_{1}\}\cup Y)\) and \(f(v_{1})\neq f(y_{k-1})\). For each \(y_{p}\in Y\setminus\{y_{1},y_{k-1}\}\), we know that \(f(y_{p})<c\) (due to Claim 1), and hence \(f(v_{1})\neq f(y_{p})\) (otherwise, the bicoloured path \(v_{1},x_{k-1},y_{p}\) has a higher colour on its middle vertex). Hence, \(f(v_{1})\in\{f(u_{1}),f(y_{1})\}\). By Claim 4, \(f(y_{k-1})=f(y_{j})\in\{f(u_{1}),f(y_{1})\}\). Thus, \(f(v_{1}),f(y_{k-1})\in\{f(u_{1}),f(y_{1})\}\). Since \(v_{1}y_{k-1}\) is an edge, we have the following.
**Claim 6 (of Subcase 1.1.2):**\(\{f(v_{1}),f(y_{k-1})\}=\{f(u_{1}),f(y_{1})\}\).
Since \(f(y_{k-1})\in\{f(u_{1}),f(y_{1})\}\) and \(u_{1}\) (resp. \(y_{1}\)) is at distance two from \(y_{k-1}\), we have \(f(y_{k-1})\neq 0\) by Observation 3. Similarly, \(y_{1}\) is at distance two from both \(v_{1}\) and \(y_{k-1}\), and \(f(y_{1})\in\{f(v_{1}),f(y_{k-1})\}\); hence, \(f(y_{1})\neq 0\). Since \(f(y_{1})\neq 0\) and \(f(y_{k-1})\neq 0\), one of the vertices \(u_{1},y_{2},y_{3},\ldots,y_{k-2}\) is coloured \(0\) (thanks to Claim 3). Since \(u_{2}\) and \(u_{3}\) are within distance two from each of these vertices, \(f(u_{2})\neq 0\) and \(f(u_{3})\neq 0\) (by Observation 3). We need to prove that \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\) as well.
Since \(f(x_{p})=f(y_{p})\) for each \(p\notin\{1,k-1\}\) (see Claim 5), one of the vertices \(u_{1},x_{2},x_{3},\ldots,x_{k-2}\) is coloured \(0\). If some vertex \(x_{p}\) with \(2\leq p\leq k-2\) is coloured \(0\), then \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\) by Observation 3, since \(v_{2}\) and \(v_{3}\) are within distance two from each such vertex. Otherwise, \(f(u_{1})=0\). Then, by Claim 6, either \(f(v_{1})=0\) or \(f(y_{k-1})=0\); since \(f(y_{k-1})\neq 0\), we have \(f(v_{1})=0\). Since \(v_{2}\) and \(v_{3}\) are within distance two from \(v_{1}\), we have \(f(v_{2})\neq 0\) and
\(f(v_{3})\neq 0\) by Observation 3. This proves that \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\) when \(f(u_{1})=0\). Therefore, \(f(u_{2})\neq 0\), \(f(u_{3})\neq 0\), \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\).
Subcase 1.2: \(f(x_{i})=f(x_{j})=c\) where \(2\leq i<j\leq k-2\).
Vertices in \(\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\) are common neighbours of \(x_{i}\) and \(x_{j}\). Thus, by Claim 1, vertices in \(\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\) get pairwise distinct colours less than \(c\). For each \(w\in\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\), we have \(f(y_{i})\neq f(w)\) (if not, path \(x_{i},w,x_{j},y_{i}\) is a bicoloured \(P_{4}\)). We also know that \(f(y_{i})\neq c\). Therefore, vertices in \(\{v_{2}\}\cup Y\setminus\{y_{j}\}\) get a permutation of the \(k-1\) colours \(0,\ldots,c-1,c+1,\ldots,k-1\). Similarly, \(f(y_{j})\notin\{c,f(w)\}\) for each \(w\in\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\) and hence vertices in \(\{v_{2}\}\cup Y\setminus\{y_{i}\}\) get a permutation of colours \(0,\ldots,c-1,c+1,\ldots,k-1\). Since \(f(\{v_{2}\}\cup Y\setminus\{y_{j}\})\) and \(f(\{v_{2}\}\cup Y\setminus\{y_{i}\})\) are both permutations of \(\{0,\ldots,c-1,c+1,\ldots,k-1\}\), we have \(f(y_{i})=f(y_{j})\). That is, \(f(x_{i})=f(x_{j})=c\) and \(f(y_{i})=f(y_{j})\neq c\).
By symmetry of sides \(X\) and \(Y\), we assume without loss of generality that \(c<f(y_{i})\). Since \(c<f(y_{i})\leq k-1\), we have \(c\leq k-2\). Since \(x_{i}\) and \(x_{j}\) have \(k-2\) common neighbours (namely, vertices in \(\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\)) and they require pairwise distinct colours less than \(c\) (see Claim 1), \(c=k-2\) and vertices in \(\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\) get a permutation of colours \(0,1,\ldots,k-3\). Since \(f(y_{i})=f(y_{j})>c=k-2\), we have \(f(y_{i})=f(y_{j})=k-1\).
**Claim 7 (of Subcase 1.2):**\(f(x_{i})=f(x_{j})=c=k-2\) and \(f(y_{i})=f(y_{j})=k-1\).
Since vertices in \(\{v_{2}\}\cup Y\setminus\{y_{i},y_{j}\}\) get a permutation of colours \(0,1,\ldots,k-3\) (see previous paragraph) and \(f(y_{i})=f(y_{j})=k-1\), we have the following claim.
**Claim 8 (of Subcase 1.2):** Colours \(0,1,\ldots,k-3,k-1\) are used in \(\{v_{2}\}\cup Y\).
Consider the colour at \(x_{1}\). Due to Claim 8, \(f(x_{1})\in\{k-2\}\cup f(\{v_{2}\}\cup Y)\). Since \(f(x_{1})\in\{k-2\}\cup f(\{v_{2}\}\cup Y)\) and \(x_{1}\) is adjacent to every vertex in \(Y\) except \(y_{1}\), we have \(f(x_{1})\in\{k-2,f(v_{2}),f(y_{1})\}\). Observe that \(f(x_{1})\neq k-2\) (if not, the bicoloured path \(x_{1},y_{j},x_{i}\) has the higher colour on its middle vertex). Hence, \(f(x_{1})\in\{f(v_{2}),f(y_{1})\}\). Similarly, \(f(x_{k-1})\in\{f(v_{2}),f(y_{k-1})\}\).
**Claim 9 (of Subcase 1.2):**\(f(x_{1})\in\{f(v_{2}),f(y_{1})\}\) and \(f(x_{k-1})\in\{f(v_{2}),f(y_{k-1})\}\).
We show that \(f(x_{k-1})\neq f(v_{2})\). We prove this by considering two scenarios: (i) \(f(x_{1})\neq f(y_{1})\), and (ii) \(f(x_{1})=f(y_{1})\).
Suppose that \(f(x_{1})\neq f(y_{1})\). Since \(f(x_{1})\in\{f(v_{2}),f(y_{1})\}\), this implies that \(f(x_{1})=f(v_{2})\). Hence, \(f(x_{k-1})\neq f(v_{2})\) (if \(f(x_{k-1})=f(v_{2})=f(x_{1})\), then path \(x_{1},y_{i},x_{k-1},y_{j}\) is a bicoloured \(P_{4}\), and thus \(f\) is not even a star colouring). Thus, by contradiction, \(f(x_{1})\neq f(y_{1})\) implies that \(f(x_{k-1})\neq f(v_{2})\).
Suppose that \(f(x_{1})=f(y_{1})\). Consider the colour at \(u_{1}\). By Claim 8, colours \(0,1,\ldots,k-3,k-1\) are used in \(\{v_{2}\}\cup Y\). Hence, \(f(u_{1})\in\{k-2\}\cup f(\{v_{2}\}\cup Y)\). Observe that \(f(u_{1})\neq k-2\) (otherwise, \(x_{1},u_{1},y_{1},x_{i}\) is a bicoloured \(P_{4}\)), and thus \(f(u_{1})\in f(\{v_{2}\}\cup Y)\). Since \(u_{1}y_{1}\) is an edge, \(f(u_{1})\neq f(y_{1})\). For \(2\leq p\leq k-1\), \(f(u_{1})\neq f(y_{p})\) (otherwise, \(y_{p},x_{1},u_{1},y_{1}\) is a bicoloured \(P_{4}\)). Thus, \(f(u_{1})=f(v_{2})\). As a result, \(f(x_{k-1})\neq f(v_{2})\) (if \(f(x_{k-1})=f(v_{2})=f(u_{1})\), then path \(x_{1},u_{1},y_{1},x_{k-1}\) is a bicoloured \(P_{4}\)). Thus, by contradiction, \(f(x_{1})=f(y_{1})\) implies that \(f(x_{k-1})\neq f(v_{2})\).
Hence, whether \(f(x_{1})=f(y_{1})\) or not, \(f(x_{k-1})\neq f(v_{2})\). Since \(f(x_{k-1})\neq f(v_{2})\) and \(f(x_{k-1})\in\{f(v_{2}),f(y_{k-1})\}\) (see Claim 9), we have \(f(x_{k-1})=f(y_{k-1})\). Consider the colour at \(v_{1}\). Due to Claim 8, \(f(v_{1})\in\{k-2\}\cup f(\{v_{2}\}\cup Y)\). Observe that \(f(v_{1})\neq k-2\) (otherwise, \(x_{k-1},v_{1},y_{k-1},x_{i}\) is a bicoloured \(P_{4}\)), and thus \(f(v_{1})\in f(\{v_{2}\}\cup Y)\). Clearly, \(f(v_{1})\neq f(v_{2})\) and \(f(v_{1})\neq f(y_{k-1})\). For \(1\leq p\leq k-2\), \(f(v_{1})\neq f(y_{p})\) (otherwise, \(y_{p},x_{k-1},v_{1},y_{k-1}\) is a bicoloured \(P_{4}\)). Therefore, no colour is available for vertex \(v_{1}\), and thus Subcase 1.2 leads to a contradiction.
Subcase 1.3: \(f(x_{i})=f(x_{k-1})\) where \(2\leq i\leq k-2\).
We know that rotating the gadget by \(180^{\circ}\) gives an automorphism \(\psi\) of the gadget such that \(\psi(x_{p})=y_{k-p}\) and \(\psi(y_{p})=x_{k-p}\) for \(1\leq p\leq k-1\). Hence, it suffices to consider the case \(f(y_{1})=f(y_{k-i})=c\) where \(2\leq i\leq k-2\). In other words, it suffices to consider the case \(f(y_{1})=f(y_{j})=c\) where \(2\leq j\leq k-2\). We can use arguments similar to those in Subcase 1.1 to prove that \(f(u_{2})\neq 0\), \(f(u_{3})\neq 0\), \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\); an alternate argument is given below for completeness.
Suppose that \(f(y_{1})=f(y_{j})=c\) where \(2\leq j\leq k-2\). Each \(x\in X\) is adjacent to \(y_{1}\) or \(y_{j}\), and thus \(f(x)\neq c\). Vertices in \(X\setminus\{x_{1},x_{j}\}\) are common neighbours of \(y_{1}\) and \(y_{j}\), and thus, by Claim 1, they have pairwise distinct colours less than \(c\). Arguing as in Subcase 1.1, the colours \(f(x_{1})\) and \(f(x_{j})\) are distinct from each other and from the colours used in \(X\setminus\{x_{1},x_{j}\}\). Since no vertex in \(X\) is coloured \(c\) and vertices
in \(X\setminus\{x_{1},x_{j}\}\) have pairwise distinct colours different from \(f(x_{1})\) and \(f(x_{j})\), vertices \(x_{1},x_{2},\ldots,x_{k-1}\) get a permutation of colours \(0,\ldots,c-1,c+1,\ldots,k-1\) (i.e., \(f(X)=\{0,\ldots,c-1,c+1,\ldots,k-1\}\)). Hence, \(f(u_{1})\in\{c\}\cup f(X)\). Since \(f(u_{1})\neq f(y_{1})=c\) and \(f(u_{1})\neq f(x_{1})\), \(f(u_{1})\in f(X\setminus\{x_{1}\})\). Since \(f(u_{1})\neq f(x_{p})\) for each \(x_{p}\in X\setminus\{x_{1},x_{j}\}\), we have \(f(u_{1})=f(x_{j})\). For each \(y_{p}\in Y\setminus\{y_{1},y_{j}\}\), we have \(f(y_{p})\in\{c\}\cup f(X)\) (because \(f(X)=\{0,\ldots,c-1,c+1,\ldots,k-1\}\)) and \(f(y_{p})\neq c\) (if not, path \(u_{1},y_{1},x_{j},y_{p}\) is a bicoloured \(P_{4}\)); that is, \(f(y_{p})\in f(X)\). Since \(y_{p}\) is adjacent to all vertices in \(X\setminus\{x_{p}\}\), we have \(f(y_{p})=f(x_{p})\) for all \(p\notin\{1,j\}\). In particular, since \(j\neq k-1\), we have \(f(x_{k-1})=f(y_{k-1})\). Clearly, \(f(v_{1})\in\{c\}\cup f(X)\) and \(f(v_{1})\neq f(x_{k-1})\). If \(f(v_{1})=f(x_{p})\) for \(p<k-1\) (resp. \(f(v_{1})=c\)), then path \(x_{k-1},v_{1},y_{k-1},x_{p}\) (resp. path \(y_{1},x_{k-1},v_{1},y_{k-1}\)) is a bicoloured \(P_{4}\). Consequently, no colour is available for vertex \(v_{1}\), a contradiction.
_Case 2:_ No colour repeats on side \(X\).
By symmetry of sides \(X\) and \(Y\), we may assume that no colour repeats on side \(Y\) either. Clearly, there exists a colour \(c\) such that vertices \(x_{1},x_{2},\ldots,x_{k-1}\) get a permutation of colours \(0,\ldots,c-1,c+1,\ldots,k-1\) (i.e., \(f(X)=\{0,\ldots,c-1,c+1,\ldots,k-1\}\)). For \(1\leq p\leq k-1\), since \(f(y_{p})\in\{c\}\cup f(X)\) and \(y_{p}\) is adjacent to every vertex in \(X\setminus\{x_{p}\}\), we have \(f(y_{p})\in\{c,f(x_{p})\}\).
**Claim 10 (of Case 2):**\(f(y_{p})\in\{c,f(x_{p})\}\) for \(1\leq p\leq k-1\). In particular, we have \(f(y_{1})\in\{c,f(x_{1})\}\) and \(f(y_{k-1})\in\{c,f(x_{k-1})\}\).
We show that if \(f(y_{1})=f(x_{1})\), then \(f(u_{1})=c\). Suppose that \(f(y_{1})=f(x_{1})\). For \(2\leq p\leq k-1\), we have \(f(u_{1})\neq f(x_{p})\) (if not, \(x_{1},u_{1},y_{1},x_{p}\) is a bicoloured \(P_{4}\)). Hence, \(f(u_{1})\notin f(X)=\{0,\ldots,c-1,c+1,\ldots,k-1\}\); i.e., \(f(u_{1})=c\). This proves that if \(f(y_{1})=f(x_{1})\), then \(f(u_{1})=c\). Similarly, if \(f(y_{k-1})=f(x_{k-1})\), then \(f(v_{1})=c\).
**Claim 11 (of Case 2):** If \(f(y_{1})=f(x_{1})\), then \(f(u_{1})=c\). If \(f(y_{k-1})=f(x_{k-1})\), then \(f(v_{1})=c\).
Next, we show that \(f(y_{1})=c\) leads to a contradiction. Suppose that \(f(y_{1})=c\). Since no colour repeats on side \(Y\), \(f(y_{k-1})\neq c\), and thus \(f(y_{k-1})=f(x_{k-1})\) by Claim 10. By Claim 11, this implies that \(f(v_{1})=c\). Thus, the path \(y_{1},x_{k-1},v_{1},y_{k-1}\) is a bicoloured \(P_{4}\), a contradiction.
Since \(f(y_{1})=c\) leads to a contradiction, \(f(y_{1})=f(x_{1})\) by Claim 10. Similarly, \(f(y_{k-1})=c\) leads to a contradiction, and thus \(f(y_{k-1})=f(x_{k-1})\). Since \(f(y_{1})=f(x_{1})\) and \(f(y_{k-1})=f(x_{k-1})\), we have \(f(u_{1})=f(v_{1})=c\) by Claim 11. Let \(c_{1}=f(y_{1})\) and \(c_{2}=f(y_{k-1})\). Clearly, \(f(x_{1})=f(y_{1})=c_{1}\) and \(f(x_{k-1})=f(y_{k-1})=c_{2}\). We also know that \(f(X)=\{0,\ldots,c-1,c+1,\ldots,k-1\}\) and \(f(u_{1})=f(v_{1})=c\). Hence, vertices in \(\{v_{1}\}\cup X\setminus\{x_{1},x_{k-1}\}\) get a permutation of colours \(\{0,1,\ldots,k-1\}\setminus\{c_{1},c_{2}\}\). For \(2\leq p\leq k-2\), we have \(f(y_{p})\neq c\) (if not, path \(y_{p},x_{1},u_{1},y_{1}\) is a bicoloured \(P_{4}\)) and thus \(f(y_{p})=f(x_{p})\) by Claim 10. Hence, vertices in \(\{u_{1}\}\cup Y\setminus\{y_{1},y_{k-1}\}\) get a permutation of colours \(\{0,1,\ldots,k-1\}\setminus\{c_{1},c_{2}\}\). Applying Observation 3 on the path \(x_{1},u_{1},y_{1}\) reveals that \(c_{1}\neq 0\). Similarly, \(c_{2}\neq 0\) (consider path \(x_{k-1},v_{1},y_{k-1}\)). As a result, colour \(0\) is assigned to some vertex in \(\{v_{1}\}\cup X\setminus\{x_{1},x_{k-1}\}\). Since \(v_{2}\) and \(v_{3}\) are within distance two from each vertex in \(\{v_{1}\}\cup X\setminus\{x_{1},x_{k-1}\}\), \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\) by Observation 3. Similarly, colour \(0\) is assigned to some vertex in \(\{u_{1}\}\cup Y\setminus\{y_{1},y_{k-1}\}\), and thus \(f(u_{2})\neq 0\) and \(f(u_{3})\neq 0\) by Observation 3. Thus, \(f(u_{2})\neq 0\), \(f(u_{3})\neq 0\), \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\) in Case 2 as well.
Therefore, in both cases, \(f(u_{2})\neq 0\), \(f(u_{3})\neq 0\), \(f(v_{2})\neq 0\) and \(f(v_{3})\neq 0\).
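Throughout this case analysis, only two constraints on a colouring \(f\) are ever used: properness, and the absence of a bicoloured 3-vertex path with the higher colour on its middle vertex. The following sketch makes this condition checkable on small instances; the function name and the adjacency-dict encoding are our own choices, not from the paper.

```python
from itertools import combinations

def is_rs_colouring(adj, f):
    """f is an rs colouring of the graph given by adjacency dict `adj` iff
    it is proper and no 3-vertex path x,y,z is bicoloured with the higher
    colour on its middle vertex, i.e. no x,z in N(y) with f(x)=f(z)<f(y)."""
    # properness: adjacent vertices receive distinct colours
    if any(f[x] == f[y] for y in adj for x in adj[y]):
        return False
    # no vertex has two equally coloured neighbours of smaller colour
    return all(not (f[x] == f[z] and f[x] < f[y])
               for y in adj for x, z in combinations(adj[y], 2))

# On the path P4 = a-b-c-d: colouring 1,0,1,2 is rs (b's equally coloured
# neighbours carry the *larger* colour 1 > f(b)=0), while 0,1,0,2 is not
# (the path a,b,c is bicoloured with the higher colour on its middle vertex).
P4 = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
assert is_rs_colouring(P4, {'a': 1, 'b': 0, 'c': 1, 'd': 2})
assert not is_rs_colouring(P4, {'a': 0, 'b': 1, 'c': 0, 'd': 2})
```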
**Construction 8**.:
_Parameter:_ An integer \(k\geq 5\).
_Input:_ A triangle-free graph \(G\) of maximum degree \(k-2\).
_Output:_ A triangle-free graph \(G^{\prime}\) of maximum degree \(k-1\).
_Guarantee:_\(G\) is \((k-2)\)-rs colourable if and only if \(G^{\prime}\) is \(k\)-rs colourable.
_Steps:_
Introduce a copy of \(G\). For each vertex \(w\) in the copy of \(G\), attach \(k-1-\deg_{G}(w)\) colour blocking gadgets one by one at \(w\) (a colour blocking gadget is attached at \(w\) by identifying the terminal of the gadget with \(w\); see Section 1.1 for the definition of vertex identification).
Observe that \(\deg_{G^{\prime}}(w)=k-1\) for each \(w\in V(G)\) because \(w\) has exactly \(\deg_{G}(w)\) neighbours within the copy of \(G\), and \(w\) has exactly one neighbour in each of the \(k-1-\deg_{G}(w)\) colour blocking gadgets attached at \(w\) in \(G^{\prime}\). Moreover, each non-terminal vertex in a colour blocking gadget has degree at most \(k-1\) (in \(G^{\prime}\)). Hence, \(G^{\prime}\) has maximum degree \(k-1\). It is easy to observe that the colour blocking gadget is triangle-free. Since \(G\) is triangle-free and \(G^{\prime}\) is obtained from \(G\) by attaching copies of colour blocking gadgets, \(G^{\prime}\) is triangle-free as well.
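The attachment step of Construction 8 can be sketched as follows. The internal structure of the colour blocking gadget is given by Figure 31 and is treated here as a black box: `gadget(k)` is a hypothetical helper assumed to return the adjacency dict of one gadget together with its terminal vertex (vertex \(u_{3}\)); `attach_gadgets` is our own name for the reduction wrapper.

```python
def attach_gadgets(G, k, gadget):
    """Construction 8 (sketch): G is an adjacency dict (vertex -> set of
    neighbours) of a triangle-free graph with maximum degree k-2; attach
    k-1-deg_G(w) colour blocking gadgets at each vertex w by identifying
    each gadget's terminal with w."""
    Gp = {v: set(nbrs) for v, nbrs in G.items()}      # copy of G
    for w in list(G):
        for i in range(k - 1 - len(G[w])):            # k-1-deg_G(w) gadgets
            H, t = gadget(k)                          # gadget and its terminal
            # give gadget vertices fresh names, identifying t with w
            name = lambda x: w if x == t else (w, i, x)
            for x, nbrs in H.items():
                Gp.setdefault(name(x), set()).update(name(u) for u in nbrs)
    return Gp
```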
Proof of guarantee.: First, we prove that if \(G\) is \((k-2)\)-rs colourable, then \(G^{\prime}\) is \(k\)-rs colourable. Suppose that \(G\) admits a \((k-2)\)-rs colouring \(f\colon V(G)\to\{1,2,\ldots,k-2\}\). Extend \(f\) into a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\) by using the scheme in Figure 32 on each colour blocking gadget. Observe that each bicoloured \(P_{3}\) in Figure 32 has colour \(0\) on its middle vertex or colour \(k-1\) on its endvertices. Thus, in Figure 32, there is no bicoloured \(P_{3}\) with the higher colour on its middle vertex; i.e., the colouring scheme in Figure 32 is a \(k\)-rs colouring of the gadget.
**Claim 1:**\(f^{\prime}\) is a \(k\)-rs colouring of \(G^{\prime}\).
On the contrary, assume that there is a bicoloured \(3\)-vertex path \(Q=x,y,z\) in \(G^{\prime}\) with the higher colour on its middle vertex (i.e., \(f^{\prime}(y)>f^{\prime}(x)=f^{\prime}(z)\)). We know that \(f^{\prime}\) employs a \(k\)-rs colouring scheme (namely Figure 32) on the colour blocking gadget. We also know that the restriction of \(f^{\prime}\) to \(V(G)\) is an rs colouring of \(G\) (namely \(f\)). Hence, either (i) \(Q\) contains edges from two colour blocking gadgets, or (ii) \(Q\) contains an edge from a colour blocking gadget and an edge from the copy of \(G\) in \(G^{\prime}\). Since \(Q\) is a \(3\)-vertex path, in both cases, the middle vertex \(y\) of \(Q\) is a terminal of some colour blocking gadget, and \(Q\) contains an edge of the form \(u_{2}u_{3}\) from that colour blocking gadget. Without loss of generality, assume that \(xy\) is the edge of the form \(u_{2}u_{3}\). That is, \(y\) is the terminal (i.e., vertex \(u_{3}\)) of some colour blocking gadget, and \(x\) is the neighbour of the terminal (i.e., vertex \(u_{2}\)) in that colour blocking gadget. Due to the colouring scheme used on colour blocking gadgets, \(f^{\prime}(y)=f(y)<k-1\) and \(f^{\prime}(x)=k-1\). Thus, \(f^{\prime}(x)>f^{\prime}(y)\), which is a contradiction to the assumption that \(f^{\prime}(y)>f^{\prime}(x)=f^{\prime}(z)\). This proves Claim 1, and thus \(G^{\prime}\) is \(k\)-rs colourable.
Conversely, suppose that \(G^{\prime}\) admits a \(k\)-rs colouring \(f^{\prime}:V(G^{\prime})\to\{0,1,\ldots,k-1\}\). Consider an arbitrary vertex \(w\in V(G)\).
Note that \(w\) is the terminal of at least one colour blocking gadget attached at \(w\) in \(G^{\prime}\) (because \(\Delta(G)=k-2\)). By Lemma 11, terminals of colour blocking gadgets cannot get colour \(0\). Hence, \(f^{\prime}(w)\neq 0\). Since \(w\in V(G)\) is arbitrary, no vertex in \(V(G)\) is coloured \(0\) by \(f^{\prime}\).
We claim that \(f^{\prime}(w)\neq k-1\). On the contrary, assume that \(f^{\prime}(w)=k-1\). We know that
Figure 32: A \(k\)-rs colouring scheme for the colour blocking gadget, where \(c=f(w)\). Note that \(0<c<k-1\) because \(f\) uses only colours \(1,2,\ldots,k-2\). To be clear, if \(c=1\), then \(f(x_{i})=f(y_{i})=i\) for \(2\leq i\leq k-2\) (shown in Figure 31). Similarly, if \(c=k-2\), then \(f(x_{i})=f(y_{i})=i-1\) for \(2\leq i\leq k-2\).
\(\deg_{G^{\prime}}(w)=k-1\). Owing to the definition of rs colouring, if a vertex \(v\) of degree \(k-1\) in a graph \(H\) is coloured \(k-1\) under a \(k\)-rs colouring of \(H\), then \(v\) has a neighbour coloured \(0\), a neighbour coloured \(1\),..., a neighbour coloured \(k-2\) in \(H\). Since \(f^{\prime}(w)=k-1\) and \(\deg_{G^{\prime}}(w)=k-1\), the vertex \(w\) has a neighbour coloured \(0\), a neighbour coloured \(1\),..., a neighbour coloured \(k-2\) in \(G^{\prime}\). In particular, \(w\) has a neighbour \(w^{\prime}\) in \(G^{\prime}\) coloured \(0\) under \(f^{\prime}\). Since \(w^{\prime}\) is a neighbour of \(w\) in \(G^{\prime}\), \(w^{\prime}\) is either from the copy of \(G\) (i.e., \(w^{\prime}\in V(G)\)) or from a colour blocking gadget. But, \(w^{\prime}\notin V(G)\) since \(f^{\prime}(w^{\prime})=0\) and no vertex in \(V(G)\) is coloured \(0\) by \(f^{\prime}\). Since \(w^{\prime}\notin V(G)\), the vertex \(w^{\prime}\) is in some colour blocking gadget. Moreover, \(w\) is the terminal of a colour blocking gadget and \(w^{\prime}\) is the neighbour of the terminal in that colour blocking gadget. By Lemma 11, the neighbour of the terminal is not coloured \(0\) by \(f^{\prime}\) contradicting the assumption that \(f^{\prime}(w^{\prime})=0\). Thus, \(f^{\prime}(w)\neq k-1\) by contradiction.
Since \(w\in V(G)\) is arbitrary, \(f^{\prime}\) uses only colours \(1,2,\ldots,k-2\) in \(V(G)\). Therefore, the restriction of \(f^{\prime}\) to \(V(G)\) is a \((k-2)\)-rs colouring of \(G\). Hence, \(G\) is \((k-2)\)-rs colourable.
Note that a colour blocking gadget has only \(2k+3\) non-terminal vertices and \((k-1)(k-2)+2(k-3)+8\leq k^{2}\) edges. Hence, \(G^{\prime}\) has at most \(((k-1)(2k+3)+1)\,n=O(n)\) vertices and at most \(m+(k-1)k^{2}n=O(m+n)\) edges, where \(n=|V(G)|\) and \(m=|E(G)|\). Hence, Construction 8 requires only time polynomial in the input size.
For all \(k\geq 5\), Construction 8 establishes a reduction from \((k-2)\)-RS Colourability of triangle-free graphs of maximum degree \(k-2\) to \(k\)-RS Colourability of triangle-free graphs of maximum degree \(k-1\). Since \(k\)-RS Colourability of triangle-free graphs of maximum degree \(k\) is NP-complete for \(k\geq 3\)[18, Theorem 3], \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\) for \(k\geq 5\).
**Theorem 10**.: _For \(k\geq 5\), \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\). _
By Corollary 2 and Theorem 10, we have the following.
**Theorem 11**.: _For \(k\geq 4\), \(k\)-RS Colourability is NP-complete for triangle-free graphs of maximum degree \(k-1\). _
Next, we prove that for all \(k\geq 4\) and \(d<k\), the complexity of \(k\)-RS Colourability is the same for graphs of maximum degree \(d\) and \(d\)-regular graphs. First, we show this for \(d=k-1\).
**Construction 9**.:
_Parameter:_ An integer \(k\geq 4\).
_Input:_ A graph \(G\) of maximum degree \(k-1\).
_Output:_ A \((k-1)\)-regular graph \(G^{\prime}\).
_Guarantee:_\(G\) is \(k\)-rs colourable if and only if \(G^{\prime}\) is \(k\)-rs colourable.
_Steps:_
Introduce two copies of \(G\). For each vertex \(v\) of \(G\), introduce \((k-1)-\deg_{G}(v)\) filler gadgets (see Figure 33) between the two copies of \(v\); see Figure 34 for an example.
Each non-terminal vertex of a filler gadget has degree \(k-1\). For each \(v\in V(G)\), both copies of \(v\) in \(G^{\prime}\) have degree \(k-1\) (because there are exactly \((k-1)-\deg_{G}(v)\) filler gadgets between the two copies of \(v\)). Therefore, \(G^{\prime}\) is \((k-1)\)-regular.
Proof of guarantee.: If \(G^{\prime}\) is \(k\)-rs colourable, then \(G\) is \(k\)-rs colourable (because \(G\) is a subgraph of \(G^{\prime}\)). Conversely, suppose that \(G\) admits a \(k\)-rs colouring \(f:V(G)\rightarrow\{0,1,\ldots,k-1\}\). We produce a \(k\)-colouring \(f^{\prime}\) of \(G^{\prime}\) as follows. The copies of \(G\) are coloured first, followed by the filler gadgets. Colour both copies of \(G\) using \(f\). For each vertex \(v\) of \(G\), the filler gadgets for \(v\) are coloured by various \(k\)-rs colouring schemes depending on the colour of \(v\) under \(f\). If \(f(v)<k-1\), we employ the following
Figure 33: Filler gadget for \(v\in V(G)\).
\(k\)-rs colouring scheme on each filler gadget for \(v\), which ensures that the neighbour of the terminal in the gadget has a higher colour compared to the terminal \(v\): (i) if \(f(v)=0\), colour the filler gadgets for \(v\) by the \(k\)-rs colouring scheme in Figure 35a, (ii) if \(0<f(v)<k-1\), colour the filler gadgets for \(v\) by the \(k\)-rs colouring scheme in Figure 35b. If \(f(v)=k-1\), colour each filler gadget for \(v\), one by one, as follows: choose a colour \(j\) not yet used in the neighbourhood of (the copy of) \(v\) in \(G^{\prime}\), and colour the filler gadget by the \(k\)-rs colouring scheme in Figure 36 (note that by the colouring scheme used on the filler gadgets, the colours present on the neighbourhood of the first copy of \(v\) in \(G^{\prime}\) are the same as the colours present on the neighbourhood of the second copy of \(v\) in \(G^{\prime}\)). See Figure 37 for an example.
Clearly, \(f^{\prime}\) is a \(k\)-colouring of \(G^{\prime}\).
**Claim 1:**\(f^{\prime}\) is a \(k\)-rs colouring of \(G^{\prime}\).
We know that the copies of \(G\) and the filler gadgets in \(G^{\prime}\) are coloured by \(k\)-rs colouring schemes. Hence, to prove Claim 1, it suffices to show that no terminal \(y\) in \(G^{\prime}\) has two neighbours \(x\) and \(z\) such that \(f^{\prime}(y)>f^{\prime}(x)=f^{\prime}(z)\). On the contrary, assume that there exists a terminal \(y\) with neighbours \(x\) and \(z\) in \(G^{\prime}\) such that \(\boldsymbol{f^{\prime}(y)>f^{\prime}(x)=f^{\prime}(z)}\).
Obviously, \(y\) is a vertex in a copy of \(G\) (in \(G^{\prime}\)). Since \(f^{\prime}\) restricted to this copy of \(G\) is a \(k\)-rs colouring (namely \(f\)), \(x\) and/or \(z\) must be in a filler gadget. Without loss of generality, assume that \(z\) is in a filler gadget \(F_{z}\). Clearly, \(y\) is the terminal of the filler gadget \(F_{z}\) and \(z\) is the neighbour of the terminal in the filler gadget \(F_{z}\). Recall that unless \(f^{\prime}(y)=k-1\), the colouring scheme used on the filler gadget \(F_{z}\) ensures that \(f^{\prime}(z)>f^{\prime}(y)\) (i.e., the neighbour of the terminal in the gadget has a higher colour compared to the terminal). Since \(f^{\prime}(y)>f^{\prime}(z)\), we have \(f^{\prime}(y)=k-1\). As a result, the colouring scheme in Figure 36 is used on the filler gadgets attached at \(y\) and in particular on \(F_{z}\). When the filler gadget \(F_{z}\) was coloured, a colour \(j\) not yet present in the neighbourhood of \(y\) in \(G^{\prime}\) was chosen, and then the colouring scheme in Figure 36 was applied on \(F_{z}\). This means that \(j=f^{\prime}(z)\). We have two cases.
_Case 1:_\(x\) is in a copy of \(G\) in \(G^{\prime}\).
Clearly, \(x\) was coloured before the filler gadget \(F_{z}\) was coloured. Hence, the colour \(f^{\prime}(x)\) was present in the neighbourhood of \(y\) in \(G^{\prime}\) before \(F_{z}\) was coloured. As a result, \(j\neq f^{\prime}(x)\) by the choice of colour \(j\). This is a contradiction since \(j=f^{\prime}(z)=f^{\prime}(x)\).
_Case 2:_\(x\) is in a filler gadget, say \(F_{x}\).
Without loss of generality, assume that the gadget \(F_{x}\) was coloured first and the gadget \(F_{z}\) was coloured later. Consequently, \(x\) was coloured before the filler gadget \(F_{z}\) was coloured. Thus, the
Figure 34: Example of Construction 9 (here, \(k=4\)).
colour \(f^{\prime}(x)\) was present in the neighbourhood of \(y\) in \(G^{\prime}\) before \(F_{z}\) was coloured, and thus \(j\neq f^{\prime}(x)\) by the choice of colour \(j\). This is a contradiction since \(j=f^{\prime}(z)=f^{\prime}(x)\).
Since we have a contradiction in both cases, Claim 1 is proved. Therefore, \(G^{\prime}\) is \(k\)-rs colourable.
Next, we generalise Construction 9.
**Construction 10**.:
_Parameters:_ Integers \(k\geq 4\) and \(d\leq k-1\).
_Input:_ A graph \(G\) of maximum degree \(d\).
_Output:_ A \(d\)-regular graph \(G^{*}\).
_Guarantee:_\(G\) is \(k\)-rs colourable if and only if \(G^{*}\) is \(k\)-rs colourable.
_Steps:_
Introduce two copies of \(G\). For each vertex \(v\) of \(G\), introduce \(d-\deg_{G}(v)\) filler gadgets (see Figure 38) between the two copies of \(v\).
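The padding step shared by Constructions 9 (the case \(d=k-1\)) and 10 can be sketched as follows. The internals of the filler gadget (Figures 33 and 38) are again treated as a black box: `filler(d)` is a hypothetical helper assumed to return one gadget's adjacency dict together with its two terminals, and `make_regular` is our own name for the wrapper.

```python
def make_regular(G, d, filler):
    """Constructions 9/10 (sketch): pad a graph of maximum degree d to a
    d-regular graph using two copies of G joined by filler gadgets."""
    # two disjoint copies of G, vertices renamed (0, v) and (1, v)
    Gs = {(c, v): {(c, u) for u in G[v]} for c in (0, 1) for v in G}
    for v in G:
        for i in range(d - len(G[v])):        # d - deg_G(v) filler gadgets
            H, t1, t2 = filler(d)             # gadget and its two terminals
            def name(x, v=v, i=i, t1=t1, t2=t2):
                if x == t1: return (0, v)     # identify t1 with first copy
                if x == t2: return (1, v)     # identify t2 with second copy
                return (v, i, x)              # fresh name for inner vertices
            for x, nbrs in H.items():
                Gs.setdefault(name(x), set()).update(name(u) for u in nbrs)
    return Gs
```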
Proof of guarantee.: Observe that since \(d\leq k-1\), the filler gadget in Construction 10 (i.e., Figure 38) is a subgraph of the filler gadget in Construction 9 (i.e., Figure 33). Hence, \(G^{*}\) is a subgraph of the output graph \(G^{\prime}\) of Construction 9. Since \(G\) is a subgraph of \(G^{*}\), one direction is obvious. To prove the other direction, assume that \(G\) admits a \(k\)-rs colouring \(f\colon V(G)\to\{0,1,\ldots,k-1\}\). By the guarantee in Construction 9, \(G^{\prime}\) is \(k\)-rs colourable. Since \(G^{*}\) is a subgraph of \(G^{\prime}\), \(G^{*}\) is \(k\)-rs colourable as well. This completes the proof of the other direction.
Note that the filler gadget in Construction 10 has \(6d\) non-terminal vertices and \(3d(d-1)+4=O(d^{2})\) edges. Hence, \(G^{*}\) has only \((2+6d)n=O(n)\) vertices and \(2m+O(d^{2})n=O(m+n)\) edges. Thus, Construction 10 requires only time polynomial in the input size.
Figure 37: Example of producing \(f^{\prime}\) from \(f\) in Construction 9. (a) a graph \(G\) with a \(4\)-rs colouring \(f\), and (b) graph \(G^{\prime}\) with the corresponding \(4\)-rs colouring \(f^{\prime}\).
Figure 38: Filler gadget for \(v\in V(G)\) in Construction 10.
By Theorem 10, for all \(k\geq 4\), \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(k-1\). For \(k\geq 4\) and \(d\leq k-1\), Construction 10 establishes a reduction from \(k\)-RS Colourability(\(\Delta=d\)) to \(k\)-RS Colourability(\(d\)-regular). Hence, for \(k\geq 4\) and \(d\leq k-1\), if \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(d\), then \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs. Clearly, if \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs, then \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(d\). Thus, we have the following theorem.
**Theorem 12**.: _For all \(k\geq 4\) and \(d\leq k-1\), \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(d\) if and only if \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs. In particular, for all \(k\geq 4\), \(k\)-RS Colourability is NP-complete for \((k-1)\)-regular graphs. _
On the other hand, for all \(k\geq 4\) and \(d\geq k\), \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(d\) whereas it is trivially in P for \(d\)-regular graphs (because the answer is always no [7]).
### Results on \(L_{rs}^{(k)}\) and RS Colouring of Regular Graphs
Recall that for \(k\geq 3\), \(L_{rs}^{(k)}\) is the least integer \(d\) such that \(k\)-RS Colourability in graphs of maximum degree \(d\) is NP-complete. Bear in mind that we assume P \(\neq\) NP throughout this paper; thus, NP is partitioned into three classes: P, NPC and NPI [43]. If a problem in NP is not NP-complete (i.e., not in NPC), then it is either in P or in NPI. By the definition of \(L_{rs}^{(k)}\), \(k\)-RS Colourability(\(\Delta=d\)) is not NP-complete for \(d<L_{rs}^{(k)}\), which means that the problem is either in P or in NPI (we do not know which is the case).
Let \(G\) be a graph of maximum degree \(d\). If \(d\leq 2\), then \(G\) is a disjoint union of paths and cycles, and thus the rs chromatic number of \(G\) can be computed in polynomial time. Since 3-RS Colourability is NP-complete for graphs of maximum degree 3 [18, Theorem 1], we have \(L_{rs}^{(3)}=3\). Theorem 11 proved that for \(k\geq 4\), \(k\)-RS Colourability is NP-complete for graphs of maximum degree \(k-1\), and thus \(L_{rs}^{(k)}\leq k-1\).
Next, we show that \(L_{rs}^{(k)}>\sqrt{k}\). Let \(G\) be a graph of maximum degree \(d\). Each distance-two colouring of \(G\) is an rs colouring of \(G\)[7]. Moreover, \(G\) admits a distance-two colouring (i.e., a colouring of the square graph \(G^{2}\)) with \(\Delta(G^{2})+1=d^{2}+1\) colours. That is, \(\chi(G^{2})\leq d^{2}+1\). Furthermore, \(\chi(G^{2})\leq d^{2}\) unless \(G^{2}\cong K_{d^{2}+1}\), which is true only if \(G\) is a Moore graph of diameter 2 [54]. Using properties of Moore graphs, one can easily show that \(G\) is \(d^{2}\)-rs colourable (that is, \(\chi_{rs}(G)\leq d^{2}\)).
**Observation 4**.: \(\chi_{rs}(G)\leq d^{2}\) _for every graph \(G\) of maximum degree \(d\)._
See the supplement for a proof of Observation 4.
Consider the problem \(k\)-RS Colourability in graphs of maximum degree \(d\). When \(k\geq d^{2}\), we have \(\chi_{rs}(G)\leq d^{2}\leq k\) by Observation 4; that is, \(G\) is \(k\)-rs colourable. In other words, for \(k\in\mathbb{N}\) and \(d\leq\sqrt{k}\), every graph of maximum degree \(d\) is \(k\)-rs colourable, and thus \(k\)-RS Colourability(\(\Delta=d\)) is polynomial-time solvable. Therefore, \(L_{rs}^{(k)}>\sqrt{k}\).
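The bound of Observation 4 is easy to sanity-check on tiny graphs by brute force. The sketch below (exponential in the number of vertices, so an illustration only) inlines the same rs condition used earlier and verifies the \(d^{2}\) bound on the 5-cycle; the function names are our own.

```python
from itertools import combinations, product

def is_rs(adj, f):
    """Proper, and no vertex has two equally coloured neighbours
    carrying a smaller colour than its own (the rs condition)."""
    return all(f[x] != f[y] for y in adj for x in adj[y]) and \
           all(not (f[x] == f[z] and f[x] < f[y])
               for y in adj for x, z in combinations(adj[y], 2))

def rs_chromatic_number(adj):
    """Brute force over all colourings with k colours, k = 1, 2, ..."""
    V, k = sorted(adj), 1
    while True:
        if any(is_rs(adj, dict(zip(V, c)))
               for c in product(range(k), repeat=len(V))):
            return k
        k += 1

# Sanity check on the 5-cycle (d = 2, so Observation 4 gives chi_rs <= 4):
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(rs_chromatic_number(C5))   # prints 4, consistent with the d^2 bound
```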
**Observation 5**.: _For \(k\geq 4\), \(\sqrt{k}<L_{rs}^{(k)}\leq k-1\)._
Next, let us consider regular graphs. It is known that \(\chi_{rs}(G)\geq d+1\) for every \(d\)-regular graph \(G\)[7]. Hence, for a fixed \(k\geq 3\), \(k\)-RS Colourability in \(d\)-regular graphs is polynomial-time solvable for each \(d\geq k\) (because the answer is always 'no'). In particular, 3-RS Colourability in \(d\)-regular graphs is polynomial-time solvable for all \(d\in\mathbb{N}\).
Theorem 12 proved that for \(k\geq 4\) and \(d\leq k-1\), \(k\)-RS Colourability in graphs of maximum degree \(d\) is NP-complete if and only if \(k\)-RS Colourability in \(d\)-regular graphs is NP-complete. For \(k\geq 4\), by the definition of \(L_{rs}^{(k)}\), \(k\)-RS Colourability in graphs of maximum degree \(d\) is NP-complete for \(d=L_{rs}^{(k)}\), and not NP-complete for \(d<L_{rs}^{(k)}\). Hence, for \(d<L_{rs}^{(k)}\), we have \(d<L_{rs}^{(k)}\leq k-1\) by Observation 5, and thus \(k\)-RS Colourability in \(d\)-regular graphs is not NP-complete by Theorem 12. We know that \(k\)-RS Colourability in graphs of maximum degree \(d\) is NP-complete for \(d\geq L_{rs}^{(k)}\). As a result, for \(d\) in the range \(L_{rs}^{(k)}\leq d\leq k-1\), \(k\)-RS Colourability in \(d\)-regular graphs is also NP-complete by Theorem 12. Moreover, for \(d\geq k\), \(k\)-RS Colourability in \(d\)-regular graphs is polynomial-time solvable (see the previous paragraph). Thus, we have the following theorem.
**Theorem 13**.: _For \(k\geq 4\), \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs if and only if \(L^{(k)}_{rs}\leq d\leq k-1\). _
## 4 Conclusion and Open Problems
We conclude the paper with a summary of our results and a list of open problems (see Sections 1.2, 2.3 and 3.4 for details). For \(k\geq 3\), there exists an integer \(d\) such that \(k\)-Colourability is NP-complete for graphs of maximum degree \(d\). In fact, for \(k\geq 3\), there exists a unique integer \(L^{(k)}\) such that \(k\)-Colourability is NP-complete for graphs of maximum degree \(d\) if and only if \(d\geq L^{(k)}\). Similarly, for \(k\geq 3\), there exists a unique integer \(L^{(k)}_{s}\) (resp. \(L^{(k)}_{rs}\)) such that \(k\)-Star Colourability (resp. \(k\)-RS Colourability) is NP-complete for graphs of maximum degree \(d\) if and only if \(d\geq L^{(k)}_{s}\) (resp. \(d\geq L^{(k)}_{rs}\)).
**Problem 2**.: _For \(k\geq 3\), determine \(L^{(k)}\), \(L^{(k)}_{s}\) and \(L^{(k)}_{rs}\)._
For each \(k\geq 3\), we have \(k+1\leq L^{(k)}\leq k-1+\left\lceil\sqrt{k}\right\rceil\)[28] and for sufficiently large \(k\), we have \(L^{(k)}=k-1+\left\lceil\sqrt{k}\right\rceil\)[29]. In particular, \(L^{(3)}=4\), \(L^{(4)}=5\) and \(6\leq L^{(5)}\leq 7\). Yet, the following is open.
**Problem 3** (Paulusma [30]).: _Is 5-Colourability NP-complete for graphs of maximum degree 6? In other words, is \(L^{(5)}=6\)?_
Regarding star colouring and rs colouring, we have (i) \(L^{(3)}_{s}=L^{(3)}_{rs}=3\), (ii) for \(k\geq 4\), we have \(0.33\,k^{\,2/3}<L^{(k)}_{s}\leq k\) and \(\sqrt{k}<L^{(k)}_{rs}\leq k-1\), and (iii) \(L^{(k)}_{s}\leq k-1\) for \(k=5\) and \(k\geq 7\).
**Problem 4**.: _Is \(L^{(k)}_{s}\leq k-1\) for \(k\in\{4,6\}\)?_
Next, let us consider the class of regular graphs. For \(k\geq 4\), \(k\)-RS Colourability is NP-complete for \(d\)-regular graphs if and only if \(L^{(k)}_{rs}\leq d\leq k-1\). It is unknown whether this result has a star colouring analogue. Hence, for values of \(k\) such that \(k\)-Star Colourability is NP-complete for \(d^{*}\)-regular graphs for some \(d^{*}\in\mathbb{N}\), we define \(\widetilde{L}^{(k)}_{s}\) (resp. \(\widetilde{H}^{(k)}_{s}\)) as the least (resp. highest) integer \(d\) such that \(k\)-Star Colourability is NP-complete for \(d\)-regular graphs. For other values of \(k\), let us say that \(\widetilde{L}^{(k)}_{s}\) and \(\widetilde{H}^{(k)}_{s}\) are undefined (e.g., \(\widetilde{L}^{(3)}_{s}\) and \(\widetilde{H}^{(3)}_{s}\) are undefined).
For \(k\in\{4,5,7,8,\ldots\}\), \(\widetilde{L}^{(k)}_{s}\) and \(\widetilde{H}^{(k)}_{s}\) are defined, and \(\widetilde{H}^{(k)}_{s}\leq 2k-4\). Moreover, \(L^{(k)}_{s}=\widetilde{L}^{(k)}_{s}\leq k-1\leq\widetilde{H}^{(k)}_{s}\leq 2k-4\) for \(k=5\) and \(k\geq 7\). If the answer to Problem 4 is 'yes' for some \(k\in\{4,6\}\), then \(L^{(k)}_{s}=\widetilde{L}^{(k)}_{s}\) (by Theorem 7). Since 4-Star Colourability is NP-complete for 4-regular graphs [45], we have \(L^{(4)}_{s}\leq\widetilde{L}^{(4)}_{s}\leq 4=\widetilde{H}^{(4)}_{s}\).
**Problem 5** ([19]).: _Is 4-Star Colourability NP-complete for 3-regular graphs?_
Depending on the answer to this problem, we have either (i) \(L^{(4)}_{s}=\widetilde{L}^{(4)}_{s}=3\) or (ii) 4-Star Colourability(\(d\)-regular)\(\,\)\(\in\,\)NPC if and only if \(d=4\).
Consider the complexity of \(k\)-Star Colourability in 3-regular graphs. Since \(4\leq\chi_{s}(G)\leq 6\) for every 3-regular graph \(G\)[36, 37], \(k\)-Star Colourability in 3-regular graphs is polynomial-time solvable for all \(k\) except possibly \(k\in\{4,5\}\). According to Conjecture 12 of Almeter et al. [7], 5-Star Colourability in 3-regular graphs is polynomial-time solvable. If this conjecture is true, then either (i) 4-Star Colourability(3-regular)\(\,\)\(\in\,\)NPC (i.e., \(L^{(4)}_{s}=\widetilde{L}^{(4)}_{s}=3\)), or (ii) \(k\)-Star Colourability(3-regular)\(\,\)\(\notin\,\)NPC for every \(k\).
Whenever \(\widetilde{H}^{(k)}_{s}\) is defined, we have \(\widetilde{H}^{(k)}_{s}\leq 2k-4\), and equality holds for \(k=4\).
**Conjecture 1**.: _For \(k\geq 4\), \(\widetilde{H}^{(k)}_{s}\) is defined, and \(\widetilde{H}^{(k)}_{s}=2k-4\); that is, \(k\)-Star Colourability is NP-complete for \((2k-4)\)-regular graphs._
## Acknowledgement
We thank Sounaka Mishra for suggesting 4-RS Colourability of cubic graphs as a problem to study. |
2309.05532 | Probing Spin Wave Diffraction Patterns of Curved Antennas | We report on the dependence of curvilinear shaped coplanar waveguides on the
near-field diffraction patterns of spin waves propagating in perpendicularly
magnetized thin films. Implementing the propagating spin waves spectroscopy
techniques on either concentrically or eccentrically shaped antennas, we show
how the link budget is directly affected by the spin wave interference, in good
agreement with near-field diffraction simulations. This work demonstrates the
feasibility to inductively probe a magnon interference pattern with a
resolution down to 1$\mu$m$^2$, and provides a methodology for shaping spin
wave beams from an antenna design. This methodology is successfully implemented
in the case study of a spin wave Young's interference experiment. | Loic Temdie, Vincent Castel, Vincent Vlaminck, Matthias Benjamin Jungfleisch, Romain Bernard, Hicham Majjad, Daniel Stoeffler, Yves Henry, Matthieu Bailleul | 2023-09-11T15:15:22Z | http://arxiv.org/abs/2309.05532v1 | # Probing Spin Wave Diffraction Patterns of Curved Antennas
###### Abstract
We report on the dependence of curvilinear shaped coplanar waveguides on the near-field diffraction patterns of spin waves propagating in perpendicularly magnetized thin films. Implementing the propagating spin waves spectroscopy techniques on either concentrically or eccentrically shaped antennas, we show how the link budget is directly affected by the spin wave interference, in good agreement with near-field diffraction simulations. This work demonstrates the feasibility to inductively probe a magnon interference pattern with a resolution down to \(1\mu\)m\({}^{2}\), and provides a methodology for shaping spin wave beams from an antenna design. This methodology is successfully implemented in the case study of a spin wave Young's interference experiment.
pacs: 74.20.-a, 74.20.-b
## I Introduction
The collective excitations of a spin ensemble, known as spin waves (or magnons for their quanta) [1], draw substantial interest as potential information carriers for unconventional electronic applications [2; 3; 4; 5]. The versatility of the magnon dispersion in the broad microwave range offers a vast field of exploration for the development of wave-based computing technologies[6; 7; 8], in which information could be encoded in both the phase and the amplitude of the spin wave. The manifold of nonlinear mechanisms along with the nanoscale integrability [9] makes it a system of choice for the development of novel architectures such as neuromorphic computing [10; 11], reservoir computing [12; 13], holographic memory [14; 15], or spectral analysis [16], which are all interference-based techniques. Furthermore, the wide variety of non-reciprocal effects inherent to spin dynamics [17; 18; 19; 20; 21; 22] generates considerable interest for reducing the dimensions of analog signal processing components such as microwave isolators, circulators, filters, directional couplers, and phase shifters.
Recently, basic concepts of optics applied to spin waves revealed the possibility of shaping and steering spin-wave beams in the sub-micron scale [23; 24; 25; 26], opening up new perspectives for the development of interferometric magnonic devices. Along these efforts, we developed a robust model to map the near-field diffraction pattern of arbitrary shaped antennas [27], which allows to comprehend the magnon beamforming in extended thin films as a result of the excitation geometry.
In this article, we experimentally probe via out-of-plane spin wave spectroscopy the diffraction patterns of curvilinear antennas. The manuscript is organized as follows: In section II, we present a comparative study of spin wave transduction between straight and concentric pairs of coplanar waveguides. In section III, we study a geometry of antenna that is akin to a Young's interference experiment for spin waves. The design of these experiments relies on the near-field diffraction (NFD) simulation [27], which was proven to benchmark spin wave diffraction in thin films for arbitrary excitation geometries.
## II Concentric vs straight antennas
### Sample fabrication and measurement protocol
We first compare the transduction of spin waves between pairs of identical straight antennas with that of quarter-circular concentric antennas, for which we kept the same separation distance \(D\) and the same length of excitation
antenna, namely \(L_{ant}=\frac{\pi}{2}\,R\approx 15.7\,\mu\)m (\(R\)=10 \(\mu\)m). Fig. 1-(a),(b) show SEM images of two such antenna devices with a separation distance of \(D\)=8 \(\mu\)m, which consist of Au coplanar waveguides (CPW) with the following dimensions: a central line of \(S\)=400 nm width and ground lines of \(G\)=200 nm width, spaced by 200 nm. These CPW dimensions produce a wave packet centered around \(k_{1}\approx\)6 rad.\(\mu\)m\({}^{-1}\)[27]. The antennas were fabricated on top of an extended 30 nm-thin sputtered Yttrium Iron Garnet (YIG) film [28] via ebeam lithography, followed by lift-off of 5 nm Ti/60 nm Au. A 40 nm \(SiO_{2}\) spacer was deposited on top of the YIG film prior to the process. For this study, we also fabricated similar comparative devices with a separation distance \(D\)=5 \(\mu\)m.
The sample is placed directly onto the pole of a vertical electromagnet that can reach up to 1.3 T at 5 A, and contacted via a 150 \(\mu\)m-pitch picoprobe to an Agilent E8342B 50 GHz vector network analyzer. We proceed to spin wave spectroscopy measurements at constant applied field, sweeping the frequency in the [1-12] GHz range. In order to resolve a zero baseline, we always subtract a reference spectrum acquired at a different applied field value (\(H_{ref}\)), for which no resonant feature occurs within the frequency sweep. Besides, we convert the \(S_{ij}\) matrix to the impedance matrix \(Z_{ij}\), which we divide by \(i\omega\) to represent our spectra in units of inductance, in accordance with the inductive nature of the coupling between a spin wave and a coplanar waveguide [29; 30]:
\[\Delta\,L_{ab}(f,H)=\frac{1}{i\omega}(Z_{ab}(f,H)-Z_{ab}(f,H_{ref})) \tag{1}\]
where the subscripts \((a,b)\) denote either a transmission measurement from ports b to port a, or a reflection measurement done on the same port if a=b.
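In practice, Eq. (1) amounts to a few lines of post-processing on the measured two-port S-parameters. A minimal sketch using the open-source scikit-rf package is given below; the touchstone file names are hypothetical placeholders for one spectrum recorded at the field of interest and one at the reference field.

```python
import numpy as np
import skrf as rf

meas = rf.Network('spectrum_H.s2p')     # S-parameters at applied field H
ref  = rf.Network('spectrum_Href.s2p')  # S-parameters at H_ref (no resonance)

omega = 2 * np.pi * meas.f              # angular frequency (rad/s)
# Eq. (1): background-subtracted spectra expressed in units of inductance
dL = (meas.z - ref.z) / (1j * omega)[:, None, None]

dL11 = dL[:, 0, 0]                      # reflection on port 1
dL21 = dL[:, 1, 0]                      # transmission from port 1 to port 2
```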
### Spin wave spectroscopy
Fig. 1-(c),(d) shows reflection (blue) and transmission (red) spectra obtained at an input power of -15 dBm, at 279 mT for a pair of straight antennas (upper panel) and at 465 mT for a pair of concentric antennas (lower panel), both with a separation distance of 8 \(\mu\)m. We identify from the reflection spectra two main peaks. The first peak has a larger amplitude and appears at lower frequency. It corresponds to the FMR peak (\(k\approx 0\)), namely, the region of the CPW extending from the 150 \(\mu\)m-pitch picoprobe contacts to the slightly reduced section of the CPW, yet wider than 10 \(\mu\)m. The second peak corresponds to the \(k_{1}\) sub-micron termination of the CPW shown in Fig. 1-(a),(b), where microwave power is transmitted from port 1 to port 2 via spin-waves. One notices in particular the apparent absence of a reflection peak \(\Delta L_{22}\) for the 2 \(\mu\)m-radius circular probe antenna (lower panel of Fig. 1-(d)), in accordance with the proportionality of the signal amplitude to the length of the antenna.
Figure 1: (**a**) SEM image of a pair of identical straight antennas. (**b**) SEM image of a concentric antennas device. Propagating spin wave spectra measured at 279 mT and 465 mT respectively for the straight antennas (**c**), and concentric antenna (**d**). Magnetic characterization done on the straight antenna device. Field dependence of: (**e**) the resonance frequency of both \(k\approx 0\) and \(k_{1}\) modes, (**f**) the measured group velocity, (**g**) the spectra amplitude.
The transmission spectra reveal the typical features of propagating spin wave spectra [25; 30], namely oscillations convoluted with an envelope, i.e. the black and red lines in Fig. 1-(c),(d), respectively. One notices that the envelope appears less symmetrical with respect to frequency for the concentric geometry than for the straight one, which is likely due to interferences caused by the near-field diffraction pattern of the concentric antenna.
More importantly, we observe a clear diminution of amplitude for the concentric geometry compared with the straight one, with a rather constant ratio \(\frac{\Delta L_{21}^{ concentric}}{\Delta L_{21}^{straight}}\approx 0.43\) over the whole frequency range. This observation may seem surprising at first, considering the confined nature of the radiation pattern with respect to the probe antenna definition (cf Fig. 2-(b)), and knowing that we kept the same length of antenna for the excitation, and the same separation distance for both geometries. Spin wave dispersion in out-of-plane magnetized films are known to be isotropic, and considering that equal amount of power radiates inwards or outwards from the circular antenna, one might expect a comparable amplitude between straight and concentric geometries.
However, one can grasp this difference of amplitude by making an analogy with the Friis transmission formula used in telecommunications engineering [31], which relates received and emitted powers between two radio antennas to the product of their effective aperture area, accordingly with the concept of directivity for an antenna having uniform and equiphase aperture. In our case of spin-wave propagating in 2D, this analogy would give an amplitude ratio proportional to the square-root of the ratio of the arc lengths: \(\frac{\Delta L_{21}^{ concentric}}{\Delta L_{21}^{straight}}\propto\sqrt{\frac{R_{2}}{R _{1}}}\approx 0.45\). Still, this agreement should be viewed cautiously as the Friis formula is normally applicable in the far-field region to ensure a plane wave front at the receiving antenna, which corresponds here to a propagation distance \(D\geq\frac{(\pi R_{1})^{2}}{\lambda}\approx\)1 mm. For this reason, we ought to resort to near-field diffraction simulations in order to assess the conformity of our measurements.
We now present in Fig. 1-(e)-(g) the methodology used to evaluate the magnetic properties used in the near-field diffraction simulations, from the spin wave spectroscopy performed over the whole field range on a single pair of straight antennas, for which we can ensure a plane wave profile. Firstly, we track the field dependence of the \(k=0\) reflection peak and the transmission peak as shown in Fig. 1-(e), and fit it to the MSFVW dispersion relation [32], which gives a gyromagnetic ratio \(\frac{\gamma}{2\pi}\)=28.2\(\pm\)0.1 GHz.T\({}^{-1}\), and an effective magnetization \(\mu_{0}M_{s}\)=185\(\pm\)5 mT more or less equal to the saturation magnetization, suggesting no uniaxial anisotropy for our YIG film. We then estimate the group velocities \(v_{g}\) from the period of oscillation of the transmission spectra [25; 30], and fit their field dependence to the dispersion relation as shown in Fig. 1-(f), leaving only the exchange constant as a free parameter, which gives \(A_{exch}\)=3.5\(\pm\)0.2 pJ.m\({}^{-1}\). Finally, we fit the field dependence of the transmission amplitude to an exponential decay \(\Delta L_{21}\propto exp(-D/L_{att})\) (Fig. 1-(g)), for which we adopted the low wavevector approximation of the attenuation length \(L_{att}=\frac{v_{g}}{2\pi\alpha f_{res}}\), where \(f_{res}\) is expressed from the Kalinikos-Slavin expression [32]. We obtain a Gilbert damping of \(\alpha=(9.1\pm 0.5)\times 10^{-4}\), which appears slightly larger than previously reported values on similar sputtered thin YIG films [25; 33]. We note that the same methodology applied to the \(D\)=5 \(\mu\)m straight antennas device gives very close results within the estimated error bars.
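As an illustration of this fitting procedure, the sketch below encodes the lowest-mode forward-volume (Kalinikos-Slavin) dispersion in the form used above; `B_data` and `f_data` stand for hypothetical arrays of peak positions read off the spectra, and the starting values are the ones quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

mu0 = 4e-7 * np.pi
t   = 30e-9                 # film thickness (m)
k1  = 6e6                   # excitation wavevector (rad/m)

def f_fvw(B_ext, gamma, Ms, Aex, k=k1):
    """Lowest FVW mode (sketch): omega^2 = w_h*(w_h + w_m*F), with
    F = 1 - (1 - exp(-k t))/(k t) and the exchange field 2*Aex*k^2/Ms
    (in tesla) folded into w_h; gamma in rad/s/T, Ms in A/m, Aex in J/m."""
    w_h = gamma * (B_ext - mu0 * Ms + 2 * Aex * k**2 / Ms)
    w_m = gamma * mu0 * Ms
    F = 1 - (1 - np.exp(-k * t)) / (k * t)
    return np.sqrt(w_h * (w_h + w_m * F)) / (2 * np.pi)

# Hypothetical data arrays B_data (T), f_data (Hz):
# popt, _ = curve_fit(f_fvw, B_data, f_data,
#                     p0=(2 * np.pi * 28e9, 0.185 / mu0, 3.5e-12))

# Group velocity from the oscillation period df of the transmission
# spectra: successive extrema correspond to dk = 2*pi/D, hence
# v_g = 2*pi*df/dk = df * D.
```

As a consistency check, plugging \(\mu_{0}H_{ext}\)=419.2 mT and the fitted parameters into this expression returns roughly 6.9 GHz at \(k_{1}\), in line with the excitation frequency used in the simulations below.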
Figure 2: NFD simulations at \(\mu_{0}H_{ext}\)=419.2 mT, \(f_{exc}\)=6.892 GHz for (**a**) a 15.7\(\mu\)m-long straight antenna, and (**b**) 10\(\mu\)m-radius quarter circular antenna. (**c**) Comparison of the spin wave amplitude straight vs concentric with the emulated inductive signal from the NFD simulations.
### Comparison with NFD simulations
In order to assess the conformity of our measurements, we performed near-field diffraction (NFD) simulations [27] for each field value, respectively for the straight and the circular excitation antenna. Fig. 2-(a),(b) show the simulated magnetization amplitude expressed in units of mT for a YIG film magnetized out of the plane with an external field \(\mu_{0}H_{ext}\)=419.2 mT, and for an excitation frequency of \(f_{exc}\)=6.892 GHz, respectively for a \(\frac{\pi}{2}\,R\)-long straight CPW, and a quarter-circular CPW with radius of curvature \(R\)=10 \(\mu\)m. Both CPWs have the same lateral dimensions, namely a central line \(w_{s}\)=400 nm and a ground line \(w_{g}\)=200 nm, correspondingly with the measured devices. We defined the microwave magnetic field components \((h_{x},h_{y})\) from the Oersted field of a straight conductor with rectangular section carrying uniform current density, whose value was adjusted according to typically used input power and impedance of the antennas. We note that the field distribution obtained with this somewhat crude approximation compares very well with finite element simulations of curved coplanar waveguides [27].
In order to compare the simulations with propagating spin wave spectra obtained from several pairs of antennas, we perform a sum over an effective area where the detection antenna is located, as represented in white on the simulations of Fig. 2-(a),(b), and multiplied by the pixel area \(dxdy\). Indeed, the coupling of a spin-wave with a CPW, which is inductive in nature, can be estimated by the magnetic flux sensed by the antenna. Although it does not strictly correspond to the dynamic field radiated from the spin wave that the probe antenna senses, this rather simple averaging method provides a comprehensive estimate of the antenna's shape-dependent transduction, which we express in units of magnetic flux, e.g. in femto Weber (fWb). Fig. 2-(c) summarizes the field dependence of all measured transmission amplitudes (\(\Delta L_{21}\), left y-axis) compared with the emulated inductive signal from the NFD simulations (right y-axis) for both the straight and the concentric pairs of antennas. We find an excellent match between the amplitude of the measured transmission spectra and the simulated inductive signal over the whole field range for both the \(D\)=5 \(\mu\)m and the \(D\)=8 \(\mu\)m series. The agreement between measurements and simulations is all the better here that the antenna design matches with the confined diffraction pattern, e.g. no spin-wave dynamics is to be found in the transition to the probe antenna's termination. This explanation of the differences in link budget between concentric and straight
Figure 3: (**a**)-(**c**) SEM images of three Young’s interference devices with different location of probe antenna. (**d**) Transmission (left y-axis), and reflection spectra (right y-axis) for the corresponding devices. (**e**) NFD simulation of a Young’s interference antenna performed at \(\mu_{0}\)H\({}_{ext}\)=1.061 T and f=4.07 GHz. (**f**) Comparison of the evolution of the spin wave amplitude at \(x\)=4 \(\mu\)m with the emulated inductive signal from the NFD simulations (color palette in unit of mT).
pairs of antennas validates our understanding of spin wave transduction in curvilinear geometries in terms of near-field diffraction.
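The emulated inductive signal described above reduces to a masked sum over the simulated magnetization map. A minimal sketch follows (our own notation; `mask` is a hypothetical boolean array marking the pixels covered by the probe antenna):

```python
import numpy as np

def emulated_flux(m_out, dx, dy, mask):
    """Sum the simulated dynamic magnetization (in tesla, as in Fig. 2)
    over the probe-antenna pixels and multiply by the pixel area,
    giving a magnetic flux in webers (T * m^2 = Wb)."""
    return np.sum(m_out[mask]) * dx * dy

# e.g. a 1 um^2 square probe on a hypothetical 10 nm simulation grid:
# mask = np.zeros_like(m_out, dtype=bool)
# mask[y0:y0 + 100, x0:x0 + 100] = True
# print(emulated_flux(m_out, 10e-9, 10e-9, mask) * 1e15, "fWb")
```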
Furthermore, the ratios of amplitude (\(\frac{\Delta L_{21}^{concentric}}{\Delta L_{21}^{straight}}\)) remain fairly constant and close to the square root of the radius ratio \(\sqrt{\frac{R_{2}}{R_{1}}}\), namely (\(\frac{\Delta L_{21}^{concentric}}{\Delta L_{21}^{straight}}\))\({}_{5\,\mu m}\approx 0.66\pm 0.01\) and (\(\frac{\Delta L_{21}^{concentric}}{\Delta L_{21}^{straight}}\))\({}_{8\,\mu m}\approx 0.43\pm 0.01\). The comparison is better for the longer separation distance, as suggested by the analogy with the Friis formula, which only applies in the far-field region.
## III Young's interference experiment
We explore here the idea of magnon beamforming from the shape of an excitation antenna, and propose to reiterate a Young's interference experiment with two seemingly circular apertures. Fig. 3-(a)-(c) show scanning electron microscopy (SEM) images of the Young's interference devices, consisting of two adjacent semi-circular \(1\,\mu\)m-wavelength CPWs, each having a \(1\,\mu\)m curvature radius for the central line, and whose centers are \(2\,\mu\)m apart. We fabricated a series of 6 such devices on top of a \(20\,\)nm-thin Ni\({}_{80}\)Fe\({}_{20}\) film, changing the location of the probe antenna, namely, keeping the same \(x=4\,\mu\)m and varying \(y\)=[0.0, 0.47, 0.9, 1.375, 2.0, 3.0] \(\mu\)m. In this manner, we can perform a discrete mapping of this Young's interference pattern with sub-micron resolution, using a \(1\,\mu\)m\({}^{2}\) square CPW as probe antenna. The tightness of the targeted curvature did not allow us to fabricate this sub-micron size geometry on a YIG film, due to the limitations posed by the conductive resist [34].
Fig. 3-(d) shows the transmission spectra \(\Delta L_{21}\) (colored lines, left y-axis) and reflection spectra \(\Delta L_{11}\) (black line, right y-axis) for the three devices with a probe antenna position at \(y=0\) for the device shown in Fig. 3-(a), \(y=0.9\,\mu\)m for the one of Fig. 3-(b), and \(y=2\,\mu\)m for the one of Fig. 3-(c). All devices were measured at \(-15\,\)dBm input power, and for 3 different applied fields: 1.061 T, 1.134 T, and 1.212 T. The transmission spectra display a first peak at lower frequency, which should not be mistaken for a propagating spin wave signal, as it is aligned with the k=0 peak of the reflection spectra. Therefore, we focus on the remaining part of the spectra featuring the typical oscillations of the \(k_{1}\) spin-wave mode, in order to track the change of amplitude with the probe antenna position. For the three field values, the oscillation amplitude appears maximum for the \(y=0\) device, it is significantly reduced for the \(y=0.9\,\mu\)m device, while it increases again for the \(y=2\,\mu\)m device.
We show in Fig. 3-(e) a NFD simulation of this Young's interference device done at \(\mu_{0}\)H\({}_{ext}\)=1.061 T and f=4.07 GHz, using the following set of parameters, accordingly with prior characterization of this permalloy film [35]: a saturation magnetization of \(\mu_{0}M_{s}\)=0.95 T, a gyromagnetic ratio \(\gamma\)=29.8 GHz.T\({}^{-1}\), a Gilbert damping constant \(\alpha=7.5\times 10^{-3}\), and an exchange constant \(A_{exch}\)=7.5 pJ.m\({}^{-1}\). The diffraction pattern clearly shows the formation of spin wave beams separated by dark zones, corresponding respectively to the constructive and destructive interference of spin waves, in a similar fashion to a double-slit experiment in optics.
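The qualitative structure of this interference pattern can also be reproduced with a toy model that is much cruder than the NFD simulation: two isotropic, exponentially damped cylindrical waves emitted from the two aperture centres. This is a sketch under stated assumptions, not the simulation used in the paper; the attenuation length below is an illustrative guess, while the wavelength and geometry follow the device dimensions.

```python
import numpy as np

lam, L_att = 1.0, 3.0            # wavelength (um) and assumed decay length (um)
k = 2 * np.pi / lam

def two_source_amplitude(x, y, d=2.0):
    """Toy double-slit model: superpose two damped cylindrical waves
    emitted from the aperture centres at (0, +d/2) and (0, -d/2)."""
    amp = 0
    for yc in (+d / 2, -d / 2):
        r = np.hypot(x, y - yc) + 1e-9
        amp = amp + np.exp(1j * k * r - r / L_att) / np.sqrt(r)
    return np.abs(amp)

# Discrete mapping along the probe line x = 4 um, as in the experiment;
# the path difference gives a maximum at y = 0, a near-cancellation
# around y ~ 0.9 um, and a recovery around y ~ 2 um.
for y in [0.0, 0.47, 0.9, 1.375, 2.0, 3.0]:
    print(y, round(two_source_amplitude(4.0, y), 3))
```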
We finally compare in Fig. 3-(f) the transmission spectra amplitude obtained on the 6 devices with the emulated inductive signal from the corresponding NFD simulations as described in Sec. II. We obtain a satisfactory agreement, reproducing on one hand the spatial dependence of the spin wave diffraction pattern over two constructive interference beams, and on the other hand the comparative amplitude between the 3 field values. The small discrepancy between simulations and measurements could be due to the part of the CPW that transitions to the \(1\,\mu\)m\({}^{2}\) termination, which can slightly pick up some flux within the remaining diffraction pattern. In essence, this study demonstrates the possibility to shape spin wave beams from the shape of an antenna, and to resolve a diffraction pattern with sub-micron feature size using a \(1\,\mu\)m\({}^{2}\) inductive probe.
## IV Conclusion
We presented a study of the spin wave transduction from curved excitation antennas, comparing transmission spectra with simulated mappings of the spin wave amplitude for various geometries of excitation. We first showed that the difference in transmission amplitude between pairs of straight antennas versus concentric antennas was very well reproduced over a broad frequency range by an emulated inductive signal built from the NFD mapping combined with the probe antenna definition. This validates our understanding of spin wave transduction in curvilinear geometries in terms of near-field diffraction. Secondly, we reiterated a Young double-slit experiment with an antenna made of two adjacent semi-circular CPWs, acting like two seemingly circular apertures. We reproduced the simulated spin wave diffraction pattern with satisfactory agreement using a series of devices varying the position of the probe antenna. We demonstrated in particular the possibility to inductively sense the spin wave amplitude with a \(1\,\mu\)m\({}^{2}\) spatial resolution. These results provide a methodology to explore magnon beamforming through the shape of an excitation antenna, and pave the way for future development of interferometric magnonic sensors.
###### Acknowledgements.
The authors would also like to acknowledge the financial support from the French National research agency (ANR) under the project _MagFunc_, the Departement du Finistere through the project _SOSMAG_, and also the Transatlantic Research Partnership, a program of FACE Foundation and the French Embassy under the project _Magnon Interferometry_.
|
2309.09308 | GAMMA: Revisiting Template-based Automated Program Repair via Mask
Prediction | Automated program repair (APR) aims to fix software bugs without human
intervention and template-based APR has been widely investigated with promising
results. However, it is challenging for template-based APR to select the
appropriate donor code, which is an important repair ingredient for generating
candidate patches. Inappropriate donor code may cause plausible but incorrect
patch generation even with correct fix patterns, limiting the repair
performance.
In this paper, we aim to revisit template-based APR, and propose GAMMA, to
directly leverage large pre-trained language models for donor code generation.
Our main insight is that instead of retrieving donor code in the local buggy
file, we can directly predict the correct code tokens based on the context code
snippets and repair patterns by a cloze task. Specifically, (1) GAMMA revises a
variety of fix templates from state-of-the-art template-based APR techniques
(i.e., TBar) and transforms them into mask patterns. (2) GAMMA adopts a
pre-trained language model to predict the correct code for masked code as a
fill-in-the-blank task. The experimental results demonstrate that GAMMA
correctly repairs 82 bugs on Defects4J-v1.2, which achieves 20.59\% (14 bugs)
and 26.15\% (17 bugs) improvement over the previous state-of-the-art
template-based approach TBar and learning-based one Recoder. Furthermore, GAMMA
repairs 45 bugs and 22 bugs from the additional Defects4J-v2.0 and QuixBugs,
indicating the generalizability of GAMMA in addressing the dataset overfitting
issue. We also prove that adopting other pre-trained language models can
provide substantial advancement, e.g., CodeBERT-based and ChatGPT-based GAMMA
is able to fix 80 and 67 bugs on Defects4J-v1.2, indicating the scalability of
GAMMA. Overall, our study highlights the promising future of adopting
pre-trained models to generate correct patches on top of fix patterns. | Quanjun Zhang, Chunrong Fang, Tongke Zhang, Bowen Yu, Weisong Sun, Zhenyu Chen | 2023-09-17T15:49:40Z | http://arxiv.org/abs/2309.09308v1 | # Gamma: Revisiting Template-based Automated Program Repair via Mask Prediction
###### Abstract
Automated program repair (APR) aims to fix software bugs without human intervention and plays a crucial role in software development and maintenance. Template-based APR has been widely investigated and shown promising results. However, it is challenging for template-based APR to select the appropriate donor code, which is an important repair ingredient for generating candidate patches. Inappropriate donor code may cause plausible but incorrect patch generation even with correct fix patterns, limiting the repair performance.
In this paper, we aim to revisit template-based APR, and propose Gamma, to directly leverage large pre-trained language models for donor code generation. Our main insight is that instead of retrieving donor code in the local buggy file, we can directly predict the correct code tokens based on the context code snippets and repair patterns by a cloze task. Specifically, (1) Gamma revises a variety of fix templates from state-of-the-art template-based APR techniques (i.e., TBar) and transforms them into mask patterns. (2) Gamma adopts a pre-trained language model to predict the correct code for masked code as a fill-in-the-blank task. Although our idea is general and can be built on various existing pre-trained language models, we have implemented Gamma as a practical APR tool based on the recent UniXcoder model. The experimental results demonstrate that Gamma correctly repairs 82 bugs on Defects4J-v1.2, which achieves 20.59% (14 bugs) and 26.15% (17 bugs) improvement over the previous state-of-the-art template-based approach TBar and the learning-based one Recoder. Furthermore, Gamma repairs 45 bugs and 22 bugs from the additional Defects4J-v2.0 and QuixBugs, indicating the generalizability of Gamma in addressing the dataset overfitting issue. We also prove that adopting other pre-trained language models can provide substantial advancement, e.g., CodeBERT-based and ChatGPT-based Gamma is able to fix 80 and 67 bugs on Defects4J-v1.2, indicating the scalability of Gamma. Overall, our study highlights the promising future of adopting pre-trained models to generate correct patches on top of fix patterns in practice.
Automated Program Repair, Fix Pattern, Pre-trained Model, LLM4SE
## I Introduction
The complexity and size of modern software systems are continuously growing, leading to a soaring number of software bugs [1, 2]. Software bugs have detrimental effects on software development, as they degrade the user experience and can sometimes cause huge financial losses to developers [3]. A considerable amount of time and budget is spent on identifying and fixing such software bugs manually [4]. To facilitate the process of manual debugging, automated program repair (APR), which aims to generate correct patches for identified buggy code snippets automatically, is receiving growing attention from both academia and industry [5, 6], such as Meta [7], Google [8], and Microsoft [9, 10].
In the literature, a variety of APR techniques have been proposed to generate patches, such as heuristic-based [11, 12], constraint-based [13, 14], template-based [15, 16]. Among these traditional APR techniques, template-based APR, which employs repair patterns hand-crafted by human experts to transform buggy code snippets into correct ones, has been widely investigated and recognized as state-of-the-art [17, 18, 19]. Candidate patches are usually generated by leveraging two kinds of repair ingredients (i.e., fix patterns and donor code) that are found in existing code bases. The repair pattern represents common code change actions (e.g., insertion of an If statement) and donor code represents code fragments (e.g., identifier tokens such as method names) to concretize patches guided by abstract patterns. A mass of studies has been dedicated to template extraction schemes, such as manually extracted templates and automatically mining templates [20, 21, 22, 23]. For example, state-of-the-art template-based APR tool TBar [17] focuses on the local buggy file and leverages the context of buggy code to prune away irrelevant donor code. Previous works [24, 25] have shown a considerable number of bugs cannot be fixed because the relevant donor code is not available in the local file. Therefore, TBar may fail to generate
correct patches with inappropriate donor code even though the fix pattern matches the correct code change actions.
In this paper, we propose a novel template-based APR tool called Gamma by combining the advances of fix patterns and pre-trained language models. The key insight is that considering pre-trained models can acquire adequate general knowledge about programming language from all possible open-source projects in the wild, we can directly employ such models to retrieve relevant donor code from the fix pattern and surrounding code context. In particular, we first collect and summarize a super-set of fix patterns drawn from previous template-based work (e.g., TBar). We then transform these fix templates into hole-filling-based patterns, which replace donor code with several masked tokens to be filled. Finally, we perform a mask prediction task on the hole-filling-based fix patterns with the help of pre-trained models in a fill-in-the-blank manner, i.e., predicting the correct donor code for the masked tokens. Although Gamma is conceptually generalizable to various pre-trained models, we have implemented Gamma on top of one recent pre-trained language model, UniXcoder [26]. Unlike current template-based APR tools which usually retrieve fix ingredients in the local buggy file, Gamma directly utilizes generic knowledge pre-trained with millions of code snippets from open-source projects, allowing it to provide a variety of donor code to fix different bugs.
We conduct extensive experiments to compare Gamma with state-of-the-art APR approaches (including both traditional and learning-based ones) on the widely-adopted Defects4J-v1.2 benchmark. The experimental results demonstrate that Gamma is able to outperform all existing APR approaches, improving the number of correctly-fixed bugs to 82 with a precision of 81.19%, and 14 unique bugs that no prior work can fix, which is a new frontier in the APR field. Besides, Gamma fixes 45 and 22 bugs on the additional Defects4J-v2.0 and QuixBugs, 27 and 5 more than state-of-the-art learning-based technique Recoder, demonstrating that Gamma can address the important dataset overfitting issue well. Moreover, we implement Gamma with CodeBERT [27] and ChatGPT [28], and find 80 and 67 bugs are fixed correctly on Defects4J-v1.2. The results demonstrate that Gamma with other pre-trained models can further provide substantial advancement, highlighting the generalizability of Gamma.
To sum up, the contributions of this paper are as follows:
* **New Dimension.** We bridge the gap between the advances in recent pre-trained models and template-based APR. Different from existing template-based APR retrieving donor code from local buggy files and existing learning-based APR generating a patched code snippet from scratch, our work demonstrates that we can leverage pre-trained models to generate correct code tokens in a given fix pattern. More importantly, our work reveals the potential for leveraging pre-trained models to resolve the important fix ingredient problem in template-based APR.
* **Novel APR tool.** We propose Gamma, which leverages the large pre-trained language model to generate correct code with the help of fix patterns without any additional historical bug-fixing pairs for training. We define a set of fix patterns in a fill-in-the-blank format and leverage the original pre-training objective of pre-trained models to predict actual masked-out tokens. Considering the fill-in-the-blank task can leverage various pre-trained language models, Gamma is general in concept and can be implemented with different pre-trained models in practice.
* **Extensive study.** We conduct an empirical study to investigate the effectiveness of Gamma compared to state-of-the-art traditional and learning-based APR techniques. The results on the widely-adopted Defects4J-v1.2 show that Gamma is able to fix 82 bugs and 14 of them cannot be fixed by existing APR tools, creating a new higher baseline of repair performance. More importantly, Gamma fixes 45 and 22 bugs on the newly-developed Defects4J-v2.0 and QuixBugs, demonstrating that Gamma can avoid the important dataset overfitting issue of existing APR techniques. Moreover, we adopt different pre-trained models (e.g., ChatGPT) to further investigate the generalization ability of Gamma.
* **Available artifacts.** To support the open science community, we release the relevant materials (including source code, experimental results, and correct patches) in our experiment for replication and future research [29].
## II Background and Motivation
### _Automated Program Repair_
As a promising technique to shift the heavy manual debugging to efficient automated patch generation, APR has developed rapidly and received much attention from a broad of research communities, such as software engineering, software security, and artificial intelligence [6, 30]. The workflow of APR usually involves three phases: (1) _fault localization_, i.e., the off-the-shelf fault localization techniques are utilized to identify a ranked list of suspicious code elements, with whose help APR can focus on a small code region, thus reducing the workload [31]; (2) _patch generation_, i.e., candidate patches are generated by applying a set of transformation rules to the suspicious code snippets [32]; and (3) _patch validation_, i.e., the available test suites are utilized as the program specifications to check the correctness of candidate patches [33]. The candidate patches that pass all available test suites are considered plausible ones. The plausible patches that are semantically equivalent to the developer patch by manual inspection are considered correct ones; otherwise overfitting ones [34, 35].
In the literature, as the core component of APR research, a mass of research effort is devoted to generating patches from different aspects, including traditional and learning-based ones. In particular, traditional APR techniques can be classified as heuristic-based [11, 12], constraint-based [13, 14], and template-based [15, 16]. Among them, template-based APR has proven to achieve the best performance, and it relies on two fix ingredients, i.e., fix patterns and donor code. Fix patterns are hand-crafted by human experts to denote the common code changes, and donor code is retrieved in buggy files to denote the actual correct code tokens. Gamma aims to address the donor code issue by employing pre-trained language models in a fill-in-the-blank manner.
Compared with traditional APR techniques, learning-based techniques handle the program repair problem as a neural machine translation (NMT) task, which translates a code sequence from a source language (i.e., buggy code snippets) into a target language (i.e., correct code snippets). Existing NMT repair models are typically built on the top of the _encoder-decoder_ architecture [36]. The _encoder_ extracts the hidden status of buggy code snippets with the necessary context, and the _decoder_ takes the encoder's hidden status and generates the correct code snippets [22, 37, 38]. Thanks to the powerful ability of DL to learn hidden and intricate relationships from massive code corpora, learning-based APR techniques have achieved remarkable performance in the last couple of years. Although learning-based APR techniques have demonstrated their promising future, they are still limited by the quality and quantity of historical bug-fixing pairs for training [18]. We view Gamma as a novel learning-based APR technique that attempts to boost traditional APR techniques by utilizing deep learning technology. However, different from most existing learning-based APR that treats patch generation as an end-to-end NMT task with a limited number of bug-fixing pairs as training data, Gamma integrates pre-trained language models into template-based APR and only predicts masked code tokens with a zero-shot learning scenario.
### _Pre-trained Model_
Recently, pre-trained language models (e.g., UniXcoder [26] and ChatGPT [28]) have significantly boosted performance across a wide range of code-related tasks [39, 40]. These models are pre-trained by self-supervised training on large-scale unlabeled corpora and then fine-tuned by supervised training on limited corpora to enhance performance on multiple downstream tasks. During the pre-training process, a masked language modeling objective is usually employed to derive generic language representations from the massive unlabeled training data [41], i.e., a small percentage of tokens are replaced by masked tokens, and the training objective is to predict the original values of the masked tokens.
Existing pre-trained models usually adopt the encoder-decoder architectures, where the former encodes an input sequence as a fixed-length vector representation, and the latter generates an output sequence based on the input representation. These models can be generally categorized into three architectures: encoder-only, decoder-only, and encoder-decoder ones [42]. Encoder-only models (e.g., CodeBERT [41]) usually pre-train a bidirectional transformer in which each token can attend to each other. Decoder-only models (e.g., GPT [43]) are pre-trained using unidirectional language modeling that only allows tokens to attend to the previous tokens and themselves to predict the next token. Encoder-decoder models (e.g., UniXcoder [26]) often make use of denoising pre-training objectives that corrupt the source input and require the decoder to recover them.
In this work, we select UniXcoder to retrieve the donor code via a mask prediction task. UniXcoder is pre-trained using the MLM objective, which can be used to directly generate masked code tokens from the appropriate fix pattern and surrounding code context. Besides, CodeBERT and ChatGPT are employed to investigate the generalization ability of Gamma.
### _Fix Template_
Fix templates are widely employed in the APR community [17]. A fix template is a pre-defined code transformation rule that represents a common code change in the bug-fixing process. The insight behind fix templates is that many software bugs are similar in nature [22]. Therefore, with fix templates summarized from previous bugs, it is possible to automatically fix some other flawed code [24].
In the literature, there are several strategies to access fix templates: (1) _manual template mining_[44, 45], i.e., through performing analysis on existing bugs as well as their relevant patches, experienced developers can identify similar code changes, and turn them into fix templates. (2) _machine learning_[15, 20], i.e., learning approaches are used so that fix templates can be generated automatically. (3) _static analysis_[16, 46], i.e., fix templates are generated from assorted types of warnings raised by static analysis tools.
The process of applying a fix template to patch generation typically involves two steps. First, the APR tool selects an appropriate fix template based on the abstract syntax tree (AST) representation of the buggy code. Fix templates are chosen according to the types of nodes in AST, and bugs are fixed by mutating the target nodes. Second, the APR tool generates a repaired version of the buggy code by searching and applying the relevant donor code to the fix pattern.
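For illustration, the node-matching step of template selection can be sketched as follows. This is a minimal Python sketch under stated assumptions: the actual implementation relies on Eclipse JDT (Section IV), so the pure-Python javalang parser is only a stand-in here, and the buggy line is wrapped in a dummy class so that it parses as a compilation unit.

```python
import javalang

def candidate_templates(buggy_line: str):
    """Match a buggy line against the AST preconditions of fix templates
    (illustrative stand-in for the JDT-based traversal)."""
    tree = javalang.parse.parse("class W { void m() { %s } }" % buggy_line)
    has = lambda node_type: any(True for _ in tree.filter(node_type))

    templates = []
    if has(javalang.tree.MethodInvocation):
        templates.append("T5: Mutate Method Invocations")
    if has(javalang.tree.Cast):
        templates.append("T1/T3: cast-related templates")
    if has(javalang.tree.ReturnStatement):
        templates.append("T9: Mutate Return Statements")
    # Templates such as T10 (insert) and T11 (remove) require no
    # particular node type and are tried on every buggy line.
    templates += ["T10: Insert Statements", "T11: Remove Statements"]
    return templates

print(candidate_templates("return allResultsMatch(n, MAY_BE_STRING_PREDICATE);"))
```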
### _Motivation Example_
To better illustrate the limitation of existing template-based APR, we further present a motivating example in this section. As shown in Listing 1, we use a real-world bug Closure-92 from the widely-used benchmark Defects4J-v1.2 as an example. Closure-92 denotes the 92nd buggy version of the Google Closure Compiler project in Defects4J-v1.2. This bug is fixed by Gamma successfully, while TBar fails to generate a correct patch. To fix this bug, the method name "indexOf" is replaced by "lastIndexOf". The fix template used here is to mutate method names. TBar applies the selected fix patterns to the source code in a naive way. In this case, TBar searches all the methods that appear in the local file where the bug
is localized, and replaces the buggy method with all the other methods with the same return type one by one. As a result, TBar is not able to generate method names that do not exist in the original file, like the "lastIndexOf" in this example, limiting its repair performance. Different from TBar, we replace the method name "indexOf" with a mask token (i.e. \(<\)mask\(>\)) instead, and query the pre-trained model UniXcoder to fill the mask with the fix pattern and corresponding context.
Based on the example, we can observe that, although the correct fix pattern is selected, as a state-of-the-art traditional APR, TBar still fails to generate a correct patch with inappropriate donor code (i.e., "lastIndexOf"). The effectiveness of template-based APR is largely dependent on donor code, which refers to the code tokens (e.g., variable name) that can be combined with the fix template in order to produce a complete patch. Donor code can be accessed in different scopes of the buggy program (e.g. a method, a file, or a package), but for some bugs, the correct donor code cannot be found even if the whole program is searched. For example, previous work demonstrates that half of the bugs from the Defects4J benchmark cannot be fixed because the relevant donor code is not available in the search space [24]. Thus, these bugs cannot be correctly fixed by template-based APR tools like TBar [17], which only identifies the donor code within the local file. With a larger search space (e.g., searching donor code from other projects), there might be more chances to fix these bugs. However, such a strategy leads to the search space explosion problem and an unaffordable search time, reducing repair efficiency. In this paper, we utilize pre-trained language models to retrieve relevant donor code. These models have learned programming language knowledge from a great number of programs in the wild, making it possible to repair bugs that require donor code from outside the buggy program.
## III Approach
In order to assess the effectiveness of fix ingredients, we build Gamma, a template-based APR tool that combines the recurrently-used fix patterns and pre-trained language models. Fig. 1 presents the workflow of Gamma. Given a buggy program and a set of test suites that make the program fail, a list of suspicious code elements is returned by fault localization approaches (i.e., _fault localization phase_). On top of the existing fix pattern corpus, Gamma then selects appropriate fix patterns to the suspicious elements (i.e., _pattern selection phase_) and queries pre-trained models to retrieve donor code via a mask prediction mask (i.e., _patch generation phase_). Gamma finally employs the available test suites as the oracle to check the generated patches and returns plausible patches for manual inspection (i.e., _patch validation phase_). Considering that fault localization is usually developed as an independent field and existing APR techniques employ off-the-shelf fault localization tools in the repair pipeline, we do not discuss the fault localization below. We describe the role and operation of other phases as well as all necessary implementation details.
### _Mask Template Definition_
In the literature, a variety of fix patterns are designed based on manual summarization or automatic mining. On top of the state-of-the-art template-based approach TBar [17], we manually inspect all fix patterns and transform them into hole-filling-based patterns. We show the related templates as well as how they are applied to buggy code, or how statements with mask tokens are generated based on these templates.
_T1: Check Cast Expressions._ Adding an instanceof check around a statement when it contains an unchecked cast expression.
```
+ if (exp instanceof T) {
    var = (T) exp ...
+ }
```
_T2: Mutate Conditional Expressions:_ Mutating an expression that returns a boolean value by removing part of the expression, replacing it with masks or adding new masks.
```
Removing expression:
- condExp1 Op condExp2
+ condExp1

Updating expression:
- condExp1 Op condExp2
+ condExp1 Op <mask>

Adding expression:
- condExp1
+ condExp1 <mask>
```
where _condExp_ denotes a conditional expression and _Op_ denotes a logical operator (i.e., `||` or `&&`).
Fig. 1: The overall workflow of Gamma
_T3: Mutate Data Types:_ Using one or more masks to replace data types in the variable declaration or cast expression nodes.
```
- T var = ...;
+ <mask> var = ...;

- ... (T) exp ...
+ ... (<mask>) exp ...
```
where \(T\) denotes a data type and _exp_ denotes the expression being cast (e.g., a variable).
_T4: Mutate Literal Expressions:_ Replacing literal expressions, including number literals, string literals, boolean literals, etc. with masks.
```
- ... literal ...
+ ... <mask> ...
```
_T5: Mutate Method Invocations._ Mutating method invocation expressions by changing either method names or arguments.
```
Method name replacement:
- method(...)
+ <mask>(...)

Argument insertion:
- method(arg)
+ method(arg, <mask>)

Argument removal:
- method(arg1, arg2)
+ method(arg1)

Argument replacement:
- method(arg)
+ method(<mask>)
```
_T6: Check Null Pointer:_ Adding a null check to a statement that contains an expression that is probably null.
```
Null pointer skip:
+ if (exp != null) {
    ... exp ...
+ }

Return insertion:
+ if (exp == null) {
+   return <mask>;
+ }
  ... exp ...

Continue:
+ if (exp == null) {
+   continue;
+ }
  ... exp ...

Exception throw:
+ if (exp == null) {
+   throw new IllegalArgumentException();
+ }
  ... exp ...

Re-assignment:
+ if (exp == null) {
+   exp = <mask>;
+ }
  ... exp ...
```
_T7: Mutate Operators:_ Replacing an operator in a statement with masks or changing the priority of operations.
```
Changing the priority:
- (exp1 op1 exp2) op2 exp3
+ exp1 op1 (exp2 op2 exp3)

Replacing operator:
- exp1 op exp2
+ exp1 <mask> exp2
```
_T8: Check Array Range:_ Checking the range of index before accessing an element in an array.
```
+ if (index < array.length) {
    ... array[index] ...
+ }
```
_T9: Mutate Return Statements:_ Replacing the expression (e.g., literals, variables, and conditional expressions) that is returned in a method with masks.
```
- return exp;
+ return <mask>;
```
_T10: Insert Statements:_ Inserting return statements, try catch statements, if statements, method invocations, or simply some masks to the existing statement.
```
Return statement:
+ return <mask>;
  statement;

Try-catch statement:
+ try {
    statement;
+ } catch (Exception e) { }

If statement:
+ if (<mask>) {
    statement;
+ }

Simple statement:
+ <mask>;
  statement;
```
_T11: Remove Statements:_ Directly deleting one or more buggy statements from the original code.
_- statement;_
_T12: Replace Variables:_ Replacing a variable in a buggy statement with masks.
_- ... var ..._
_+ ... <mask> ..._
_T13: Move Statements:_ Moving a statement from its original position to a new one.

_- statement;_
_..._
_+ statement;_

[MISSING_PAGE_POST]
the bug. We give an example of the _Closure-10_ bug from Defects4J-v1.2 in Listing 2 and show how we generate its patch in Fig. 1. In the AST of the input line, we find a _Method Invocation_ node, so we choose the template _T5: Mutate Method Invocation_. For the sub-template _Method name replacement_ that alters the name of a method, after locating the method name _allResultsMatch_, we replace it with a mask token, which is going to be predicted in the next phase. It is worth noting that Gamma is built on top of UniXcoder, which is able to predict a sequence of code tokens based on a masked token. Thus, we do not need to consider how many tokens should be masked during patch generation and only use one mask token in the selected repair template, which is different from other pre-trained models, such as CodeBERT used in AlphaRepair [18] (discussed in Section V-C). There might be multiple repair templates that are suitable for a piece of buggy code at the same time. In this case, we stop the selection of repair templates as soon as the first correct patch is generated.
```
@@ static boolean mayBeString(Node n) {
 static boolean mayBeString(Node n, boolean recurse) {
   if (recurse) {
-    return allResultsMatch(n, MAY_BE_STRING_PREDICATE);
+    return anyResultsMatch(n, MAY_BE_STRING_PREDICATE);
   } else {
     return mayBeStringHelper(n);
   }
```
Listing 2: The Defects4J bug Closure-10
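To make the masking step of this example concrete, the following minimal sketch applies the _Method name replacement_ sub-template of T5 with a regular expression; this is an illustrative stand-in only, since the tool performs the rewrite on the JDT AST rather than on raw text.

```python
import re

def mask_method_name(buggy_line: str) -> str:
    """Replace the name of the first method invocation with <mask>
    (regex stand-in for the AST-based rewrite of template T5)."""
    return re.sub(r"\b[A-Za-z_]\w*\s*\(", "<mask>(", buggy_line, count=1)

print(mask_method_name("return allResultsMatch(n, MAY_BE_STRING_PREDICATE);"))
# -> return <mask>(n, MAY_BE_STRING_PREDICATE);
```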
### _Patch Generation with Mask Prediction_
After selecting an appropriate fix template for a buggy code snippet, we use UniXcoder [26] to generate the correct code tokens in a fill-in-the-blank format. To this end, we leverage the original pre-training objective of masked language modeling in UniXcoder. UniXcoder is an advanced pre-trained model for programming languages that supports code understanding and generation tasks. It contains a pre-training objective of Masked Language Modeling (MLM), which is designed to predict tokens that have been masked out. We leverage this pre-training task to complete the clozes generated in the previous step without any fine-tuning, so that candidate patches for the buggy programs can be produced.
The prediction for the mask largely depends on the tokens surrounding the mask. If the correct token appears in the input for the model, it is more likely that the token will be chosen as one of the possible results. The precision of mask prediction is quite limited when only a single masked buggy line is given without any surrounding code context, which may contain information useful for bug fixing. To get more context for the buggy line in each bug, we extract the method that contains the line and use the whole method with the masked buggy line as the input for UniXcoder. Considering that some tokens in the buggy line have been replaced with masks but these tokens may also contain essential information for mask prediction, in the first line of our input, we add the original buggy line in the form of a comment (i.e., we add a "//" in front of the line). The commented buggy line followed by the method the bug is in together forms the final input. For every input, N candidate patches are generated by the UniXcoder model. N is the beam size and is an adjustable parameter of Gamma. A relatively large beam size increases the possibility of generating correct patches.
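The input construction and the prediction query can be sketched as follows. Because UniXcoder's encoder-decoder generation interface ships with its own repository rather than as a standard transformers head, the sketch illustrates the same cloze-style query with CodeBERT's masked-language-modeling checkpoint; the model name and the use of `top_k` as a stand-in for the beam size are assumptions of this sketch.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

buggy_line = "return allResultsMatch(n, MAY_BE_STRING_PREDICATE);"
masked_method = """static boolean mayBeString(Node n, boolean recurse) {
  if (recurse) {
    return <mask>(n, MAY_BE_STRING_PREDICATE);
  } else {
    return mayBeStringHelper(n);
  }
}"""

# Input = original buggy line as a leading comment + method with the mask.
model_input = "// " + buggy_line + "\n" + masked_method

# Each prediction yields one candidate donor token; top_k plays the role
# of the beam size N described above.
for pred in fill_mask(model_input, top_k=5):
    print(pred["token_str"], pred["score"])
```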
### _Patch Validation_
After a candidate patch for a given bug is generated by Gamma, we apply the corresponding changes to the buggy program. Following the practice in the APR community [18, 47], we first recompile the patched program and filter out any patches that fail to compile. We then execute the patched program against the available test suite to identify the plausible patches that successfully pass all the test suites. For those plausible patches, we examine them manually to ensure the programs are fixed correctly, i.e., whether the patches are semantically equivalent to developer patches.
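The validation loop can be sketched with the Defects4J command-line interface; the paths, timeouts, and exact output parsing below are placeholders of this sketch.

```python
import subprocess

def validate_patch(project_dir: str) -> str:
    """Classify a candidate patch already applied under project_dir:
    filter uncompilable patches, then run the available test suite.
    Correctness of plausible patches still requires manual inspection."""
    compiled = subprocess.run(["defects4j", "compile"], cwd=project_dir,
                              capture_output=True, timeout=600)
    if compiled.returncode != 0:
        return "uncompilable"
    tested = subprocess.run(["defects4j", "test"], cwd=project_dir,
                            capture_output=True, timeout=1800, text=True)
    # `defects4j test` reports the number of failing tests on stdout.
    if "Failing tests: 0" in tested.stdout:
        return "plausible"
    return "implausible"
```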
## IV Experimental Setup
### _Research Questions_
In this paper, we study the following research questions:
**RQ1**: What is the performance of Gamma compared to state-of-the-art APR approaches?
**RQ2**: What is the generalizability of Gamma in repairing additional real-world bugs?
**RQ3**: What is the scalability of Gamma when employing other advanced pre-trained models?
### _Benchmarks_
To evaluate the repair performance, we use the standard benchmark of Defects4J-v1.2 [48] in the APR community. Defects4J-v1.2 is a collection of real-world bugs from open-source projects and is widely adopted by existing traditional [17, 49] and learning-based APR approaches [38, 39, 47, 50]. In particular, Defects4J-v1.2 contains 395 known and reproducible bugs, each of which contains a buggy version and a fixed version, as well as a corresponding test suite that triggers that bug for patch validation. Evaluation on Defects4J-v1.2 can reflect the performance of Gamma in a real-world debugging scenario and provides sufficient comparison results against most of the existing APR techniques.
Besides, we choose Defects4J-v2.0 and QuixBugs as other bug benchmarks for evaluation, so as to investigate the generalizability of Gamma. Defects4J-v2.0 provides 420 additional real-world bugs from 17 Java projects, which is adopted by some recent APR studies [18, 50]. QuixBugs [51] is a multi-lingual parallel bug-fixing dataset in Python and Java used in [18, 39]. QuixBugs contains 40 small classic algorithms with a bug on a single line, along with the bug-triggering test suite.
### _Baselines_
To enable sufficient evaluations, we compare Gamma against both traditional and learning-based APR approaches. We choose seven recent learning-based APR tools, i.e., AlphaRepair [18], Recoder [50], CURE [47], CoCoNuT [52], CIRCLE [39], DLFix [38], and SequenceR [53]. We also choose two state-of-the-art template-based APR tools, TBar [17] and PraPR [54], as representatives of traditional APR. In total, we evaluate Gamma against nine advanced APR tools from different categories. Although the fault localization configuration is a significant part of APR, we do not take it into consideration in our experiment because of the potential deviations that fault localization may bring about. Following recent APR studies [38, 39, 47, 50], we apply perfect fault localization by inputting the exact buggy lines into the different APR techniques, so as to standardize the impact of fault localization on repair performance (discussed in Section VI).
### _Evaluation Metrics_
We use two common metrics to evaluate the performance of all involved APR approaches [5, 6], i.e., plausible patch and correct patch. The first one fixes the buggy functionality without harming other correct functionality (i.e., passing all available test suites), and the second one is semantically or syntactically equivalent to the developer patch (i.e., generalizing the potential test suite). We manually inspect each plausible patch to identify whether it is a correct patch by following the standard practice in APR research.
### _Implementation Details_
At the stage of fix template selection, we apply Eclipse JDT to parse the input line into an AST, and then the AST is traversed to examine whether it contains any node that is required by a fix template. There are several templates that can fit all the input buggy lines. For example, the template _T10: Insert Statements_ only requires adding statements around the buggy line and does not mutate any nodes in the AST. Such templates are directly applied to all the inputs without checking the AST.
In the mask prediction phase, we choose the UniXcoder model "unixcoder-base". This is a model pre-trained on natural language-programming language (NL-PL) pairs and is reported in the original UniXcoder paper [26]. We use the encoder-decoder mode of the model to give predictions for each mask and generate candidate patches. We set the beam size as 250 due to the limitation of our device, which is smaller than the 1000 used in CURE [47] and CoCoNuT [52]. Following previous learning-based APR approaches [18, 50], we set a 5-hour running-time limit for fixing one bug to perform a fair comparison.
All experiments are conducted on one Ubuntu 18.04.3 server with two Tesla V100-SXM2 GPUs.
```
@@ -409,7 +409,7 @@ public static double factorialLog(final int n) {
 public static int gcd(int u, int v) {
-  if (u * v == 0) {
+  if ((u == 0) || (v == 0)) {
     return (Math.abs(u) + Math.abs(v));
   }
```
Listing 3: The Defects4J bug Math-94
## V Evaluation and Results
### _Comparison with State-of-the-arts_
**Experimental Design.** In this section, we aim to evaluate the performance of Gamma. We employ the 395 real-world bugs presented in the Defects4J-v1.2 dataset and compare Gamma with state-of-the-art APR techniques, including traditional and learning-based ones. We report the performance of all compared techniques under perfect fault localization (i.e., the ground-truth buggy statement is known to the techniques).
**Results.** Table I presents the number of bugs that different APR techniques successfully fix on the Defects4J-v1.2 dataset. Overall, we find that Gamma substantially outperforms the compared APR techniques, including both traditional and learning-based ones. Gamma is able to generate correct patches for 82 real-world bugs, 20.59% (14 bugs), 26.15% (17 bugs) and 10.81% (8 bugs) more than TBar, Recoder and AlphaRepair, respectively. In particular, Gamma fixes 11, 24, 16, 25, 3, and 3 bugs for the Chart, Closure, Lang, Math, Mockito, and Time projects, respectively, four of which are best-performing results (bold in Table I). More importantly, we find that Gamma achieves a correct rate of 81.19% (82/101) for plausible patches, 9.61% (68/95), 23.15% (65/112) and 13.30% (74/109) higher than TBar, Recoder and AlphaRepair, indicating that Gamma is able to alleviate the long-standing patch overfitting problem in the APR community.
**Overlap Analysis.** To investigate to what extent Gamma complements existing APR techniques, we further calculate the number of overlapping bugs fixed by different techniques. One best-performing traditional technique (i.e., TBar) and three best-performing learning-based techniques (i.e., AlphaRepair, CURE, and Recoder) are selected. As shown in Fig. 2, Gamma fixes 14 unique bugs that the other APR approaches fail to fix, which is 11, 3, 8, and 10 more than TBar, AlphaRepair, CURE, and Recoder, respectively. More importantly, as a template-based APR technique, there are 22 correctly-fixed bugs unique to Gamma compared with TBar, highlighting the benefits of the mask prediction performed by UniXcoder. Overall, the results demonstrate that Gamma is complementary to these best-performing APR techniques and increases the number of correctly-fixed bugs in the Defects4J-v1.2 benchmark.
**Case Study.** We have demonstrated the superior performance of Gamma over a state-of-the-art template-based tool TBar, which is most related to our work. To further investigate the effectiveness of Gamma, we provide some examples of
bugs that Gamma is able to fix but TBar fails to. Listing 3 presents the bug Math-94 from Defects4J-v1.2. Math-94 can be fixed with the fix template of mutating conditional expressions. The conditional expression "u * v == 0" within an if statement is replaced by "(u == 0) || (v == 0)". TBar deals with this template by replacing the suspicious expression with other compatible ones collected from the same local file, while Gamma directly replaces the expression with a mask token so that it can later be predicted by the mask prediction task of the pre-trained model, making it possible to generate new expressions that can correctly fix the bugs.
Listing 4 presents a similar example, the bug Closure-52 from Defects4J-v1.2. Closure-52 denotes the 52nd buggy version of the Google Closure Compiler project in Defects4J. To fix this bug, we need to insert a new sub-conditional expression into the original expression. TBar fails to generate a correct patch because it selects improper operators and variables, while Gamma is able to directly predict the masked expression from the corresponding code context.
### _Generalizability of Gamma_
**Experimental Design.** We have demonstrated that Gamma achieves impressive performance in repairing real-world bugs from the widely-adopted Defects4J-v1.2 benchmark on top of fix patterns. Durieux et al. [55] demonstrate that there exists a common benchmark overfitting phenomenon in APR evaluation, i.e., APR tools usually perform significantly better on Defects4J-v1.2 than on other benchmarks. In this section, following prior work [18, 50], we conduct extended experiments on additional projects to evaluate the generalizability of Gamma.
**Results.** Table II presents the comparison results of Gamma against baselines on Defects4J-v2.0 and QuixBugs. In Defects4J-v2.0, following some recent work [18, 56], we only focus on those bugs whose patches are confined to a single location. Overall, Gamma generates 45 correct patches for the given 257 buggy programs, outperforming both traditional and learning-based approaches. We find that the performance achieved on the Defects4J-v2.0 dataset is commonly lower than that achieved on the Defects4J-v1.2 dataset. For example, AlphaRepair fixes 18.73% (74/395) of bugs from Defects4J-v1.2 while only fixing 14.01% (36/257) of bugs from Defects4J-v2.0. Based on our analysis of the two datasets, the possible reason is that Defects4J-v2.0 contains a harder set of projects for APR with a different variety of fixes compared to Defects4J-v1.2. Despite that, Gamma is able to generate 9, 34, and 37 more correct patches than AlphaRepair, Recoder, and TBar, which is the highest number among all approaches. We also find that, as a template-based approach, TBar is able to generate a high number of correct patches (68) for Defects4J-v1.2, while it only generates a limited number of correct patches (8) for Defects4J-v2.0. The possible reason may be that most fix patterns are designed to target Defects4J-v1.2, which may not generalize to other unseen projects, such as Defects4J-v2.0. Besides, learning-based approaches also suffer from moving to a harder evaluation dataset since the code transformation patterns are learned from training datasets
\begin{table}
\begin{tabular}{c|c c c c c c c c c c} \hline \hline Project & SequenceR & CoCoNuT & CURE & DLFix & Recoder & AlphaRepair & CIRCLE & PraPR\(\uparrow\) & TBar & Gamma \\ \hline Chart & 3 & 7 & 10 & 5 & 10 & 9 & 7 & 7 & 11 & **11** \\ Closure & 3 & 9 & 14 & 11 & 21 & 23 & 17 & 12 & 16 & **24** \\ Lang & 2 & 7 & 9 & 8 & 11 & 13 & 10 & 6 & 13 & **16** \\ Math & 6 & 16 & 19 & 13 & 18 & 21 & **27** & 10 & 22 & 25 \\ Mockito & 0 & 4 & 4 & 1 & 2 & **5** & 1 & 3 & 3 & 3 \\ Time & 0 & 1 & 1 & 2 & 3 & 3 & 2 & 3 & 3 & **3** \\ \hline Total & 14/19 & 44/85 & 57/104 & 40/68 & 65/112 & 74/109 & 64/182 & 41/146 & 68/95 & **82/101** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison with state-of-the-art APR techniques. Following the common practice in the APR community [39, 47, 52], we reuse the released results from the most recent work [18] instead of directly running the APR tools. Due to the APR community’s subsequent validation of publicly released correct patches, the results of some APR tools may be different from the reported results in their published papers. (\(\uparrow\)) PraPR is evaluated with the results of Ochiai fault localization [31].
Fig. 2: The overlaps of the bugs fixed by different approaches
which might not be present in Defects4J-v2.0. In contrast, Gamma is able to address the generalizability issue without training on specific bug datasets, which makes it less prone to suffer from generalizability issues of traditional template-based or learning-based tools.
Apart from Defects4J-v2.0, we also try to validate our approach on QuixBugs, which extracts bugs from Quixey Challenge and translates them into both Java and Python languages. Since our fix templates are designed for Java, we only focus on Java programs in QuixBugs following previous work [17]. Table II shows that among 40 bugs in QuixBugs, 22 are correctly fixed by Gamma, highlighting the competitive performance of Gamma against state-of-the-art approaches. It is worth noting that most of the templates are summarized from Defects4J-v1.2, which may mean that some templates cannot be applied to any bugs except those from Defects4J-v1.2. Thus, Gamma may be limited by the lack of more efficient fix templates when coming to other new bug datasets. For example, although various types of templates along with sub-templates are defined, some of the templates cannot be used to fix at least one bug from QuixBugs. As a result, we expect to explore more general fix templates in the future to further improve the performance of template-based APR.
### _Scalability of Gamma_
**Experimental Design.** To further investigate whether the performance of Gamma is affected by different pre-trained models, we apply two other advanced models to perform the mask prediction task: CodeBERT and ChatGPT. CodeBERT [27] is a pre-trained model for programming and natural languages, and mask prediction is one of its pre-training tasks. ChatGPT is a state-of-the-art language model that has shown impressive ability in conversations with human beings. We also use Defects4J-v1.2 as a benchmark but replace UniXcoder with these two models in the process of filling masks to find out to what extent pre-trained models influence the effectiveness of template-based program repair.
Similar to UniXcoder, CodeBERT can also generate predictions for a mask token "\(<\)mask\(>\)" in a given code snippet. The difference between them is that UniXcoder can predict several continuous tokens for a single mask while CodeBERT can only give one token per mask. However, there is usually more than one token under a mask, so when using CodeBERT, we have to use different numbers of successive masks to mask the initial code and then predict them sequentially. We do not know the exact number of masks we should use (i.e., the number of tokens in the fixed code), as there could be a great many possibilities in the patch. So while masking, we naively try all the mask numbers from 1 to 20, which is a range suitable for most cases. In every iteration, a mask is predicted and a joint score for each prediction is calculated. The predictions with the highest scores are chosen to replace the mask, and the next mask is predicted according to the previous predictions. We set the beam size as 250, the same as that used with UniXcoder, so in each iteration, CodeBERT gives the 250 most likely predictions for the mask. Taking the process of fixing the Defects4J Closure-10 bug as an example (shown in Listing 2), the patch of the bug involves a change in the method name. To fix this bug, the method name _allResultsMatch_ is replaced with masks, and then CodeBERT is asked to give 250 predictions for the first mask. Among the predictions, the token "any" has a relatively high score, so it replaces the first mask, and CodeBERT continues to predict the next mask until all masks are filled.
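A greedy, single-candidate variant of this left-to-right decoding can be sketched as follows (the actual implementation keeps a 250-wide beam with joint scores; the CodeBERT MLM checkpoint name is as released on the HuggingFace hub).

```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("microsoft/codebert-base-mlm")
mlm = RobertaForMaskedLM.from_pretrained("microsoft/codebert-base-mlm")

def fill_n_masks(masked_code: str, n_masks: int) -> str:
    """Expand one <mask> into n_masks masks, then predict them left to
    right, conditioning each prediction on the previously filled ones
    (greedy variant of the beam procedure described above)."""
    text = masked_code.replace("<mask>", "<mask>" * n_masks, 1)
    for _ in range(n_masks):
        enc = tok(text, return_tensors="pt")
        pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
        with torch.no_grad():
            logits = mlm(**enc).logits
        token = tok.decode(logits[0, pos].argmax().item()).strip()
        text = text.replace("<mask>", token, 1)  # fill the leftmost mask
    return text

# The number of patch tokens is unknown, so every count from 1 to 20 is
# tried and all resulting candidates are sent to patch validation.
candidates = [fill_n_masks("if (u <mask> 0) {", n) for n in range(1, 21)]
```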
Different from UniXcoder, ChatGPT is fine-tuned from GPT-3.5 and closed-source. We access ChatGPT through its API using gpt-3.5-turbo-0301, which is the latest version available. We interact with ChatGPT through natural language conversations, i.e., sending requests to ChatGPT and receiving responses from it. To fill the masks with ChatGPT, we first give it some prompts, instructing it to return predictions for the mask. Following the prompts, we then add the masked buggy line along with its context to form the complete query for ChatGPT. In our experiment, the input for ChatGPT starts with the prompt _"Next token prediction task, the first line is a comment to help prediction, just return 250 possible predictions for \(<\)mask\(>\) with highest probability: "_, and then the bug context we give is the same as the input for UniXcoder, which consists of a commented buggy line and the whole method where the buggy line belongs. Besides, due to the benefits of the designed prompt, we do not need to set the number of masked tokens in the buggy code, which is the same as with UniXcoder.
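The query itself can be sketched with the legacy openai-python (<1.0) interface that matched this API version; the API key is assumed to be configured in the environment, and parsing the ranked predictions out of the free-form reply is omitted.

```python
import openai  # legacy openai-python (<1.0) interface

PROMPT = ("Next token prediction task, the first line is a comment to help "
          "prediction, just return 250 possible predictions for <mask> with "
          "highest probability: ")

def query_chatgpt(masked_context: str) -> str:
    """Send the fixed prompt plus the masked buggy context (commented
    buggy line + enclosing method) to gpt-3.5-turbo-0301."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "user", "content": PROMPT + masked_context}],
    )
    return response["choices"][0]["message"]["content"]
```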
**Results.** Fig. 3 presents the repair results of different pre-trained language models. Overall, the combination of the three models is able to fix 93 bugs from Defects4J-v1.2, demonstrating that these models can be used together by Gamma to further increase the number of correct patches that can be generated. In particular, we find that when using CodeBERT to perform the mask prediction task, 80 bugs in total are fixed by Gamma correctly, only two fewer than Gamma with UniXcoder fixes. However, it takes much more time for CodeBERT to generate correct patches, as the number of masks that should be used in fixing a bug is unpredictable, and we have to run the mask prediction program on the same bug and the same fix template many times, each time with a different mask number. In contrast, UniXcoder iteratively predicts the next token for a mask until an EOF token is generated, so only one mask is required to fix the bug. We
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline Project & AlphaRepair & Recoder & CURE & CoCoNuT & CIRCLE & TBar & Gamma \\ \hline Defects4J 2.0 & 36 & 11 & - & - & - & 8 & 45 \\ QuixBugs & 28 & 17 & 26 & 13 & 19 & - & 22 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison on additional datasets
also find that Gamma with ChatGPT only fixes 67 bugs correctly, not performing as well as Gamma with UniXcoder and CodeBERT in mask prediction. The possible reason lies in that UniXcoder is pre-trained with a masked language modeling objective, where some training text is artificially masked and the training objective is to predict the real text. However, ChatGPT is designed for natural language conversations, and it is unclear how ChatGPT is pre-trained since it is closed-source. Thus, it is natural to employ UniXcoder to recover masked code tokens for buggy code snippets in our approach. Future researchers should further explore how to better utilize ChatGPT (e.g., designing other prompts) for mask prediction.
## VI Threats to Validity
The first threat to validity comes from the manual inspection of correct patches. To alleviate the influence of potential bias, following previous works [39, 47], three authors manually verify all plausible patches (i.e. patches that successfully pass the test) based on ground truth patches (i.e., developer patches). A plausible patch is considered to be correct if all three authors identify it as equivalent to a ground truth patch semantically. To facilitate the replication and verification of our experiments, we also release the relevant materials (including all source code and correct patches) publicly [29].
The second threat to validity is the fault localization setting. We evaluate Gamma and the baselines under perfect fault localization (i.e., the ground-truth buggy location is known) for two reasons. First, off-the-shelf fault localization approaches affect the performance of APR techniques significantly, introducing a bias in comparison results [57]. Perfect fault localization mitigates the influence of differences between fault localization approaches on the repair results and enables a fair assessment of repair performance independently of the fault localization approach used. Second, the most recent APR techniques [39, 47, 52, 58, 59] are only evaluated with perfect fault localization, which leads our work to also use perfect localization to ensure a direct comparison. However, this comparison setting may bring bias in repair performance since perfect fault localization results are usually unavailable in practice. Despite that, we believe that perfect fault localization has little impact on our results, as perfect localization shows the pure performance of different APR approaches. In the future, we plan to report the repair performance of Gamma and the baselines with both automated fault localization (e.g., Ochiai [31]) and perfect fault localization.
The third threat to validity comes from the potential data leakage of pre-trained models. In our experiment, we implement Gamma on top of UniXcoder and evaluate Gamma on the widely-adopted benchmark Defects4J-v1.2. Considering that UniXcoder is pre-trained with millions of code snippets, there may exist some bugs in the evaluation benchmark of Defects4J-v1.2 that appear in the pre-training dataset of UniXcoder. We perform a manual inspection to check whether the bugs fixed by Gamma are leaked into the pre-training dataset. In particular, we query the pre-training datasets, including 2.3M functions paired with comments and 4.1M unimodal code snippets from CodeSearchNet. The manual inspection is performed by two authors independently and confirmed by a third author. We find there are three bugs leaked into the pre-training set, i.e., Closure-73, Closure-126, and Time-19. For the three bugs, we manually perturb the buggy code (e.g., changing variable names, adding dead code) and find Gamma is still able to generate the correct patches for all three bugs. We also find that if we exclude the three overlapping bugs, Gamma still outperforms state-of-the-art APR techniques (79 vs. 68 for TBar, 79 vs. 74 for AlphaRepair). Thus, we are confident that the data leakage is not a key point of our conclusion.
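The perturbations used in this check can be sketched as follows; the identifier renames were chosen by hand per bug, so the helper below is illustrative only.

```python
import re

def perturb(java_src: str, renames: dict) -> str:
    """Semantics-preserving perturbation for the leakage check:
    rename identifiers and append a dead statement."""
    for old, new in renames.items():
        java_src = re.sub(r"\b%s\b" % re.escape(old), new, java_src)
    return java_src + "\nint __unused = 0; // dead code\n"

print(perturb("if (u * v == 0) { return Math.abs(u) + Math.abs(v); }",
              {"u": "p", "v": "q"}))
```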
## VII Related Work
### _Automated Program Repair_
Existing APR techniques can be divided into four categories, i.e., heuristic-based [11, 12], constraint-based [13, 14, 60], template-based [15, 16, 17] and learning-based repair techniques [38, 50, 52]. Our work is related to template-based and learning-based APR, discussed as follows.
Template-based APR, which generates patches with the help of fix patterns, represents state-of-the-art among traditional APR techniques. TBar [17] systematically collects and summarizes fix patterns from previous literature and investigates the effectiveness of patch generation based on these templates. Some other techniques explore fix patterns in various ways. For example, PAR [44] manually extracts fix patterns from 60,000 human-written patches. FixMiner [15] mines fix patterns with an iterative clustering strategy. AVATAR [16] leverages fix patterns from static bug detection tools to generate patches. Different from most existing template-based APR techniques that focus on mining fix patterns, Gamma is the first work that aims to address the donor code issue by integrating pre-trained models with a fill-in-the-blank task.
With large available open-source code corpora, learning-based APR, which applies machine learning to the bug-fixing objective, is getting growing attention. For example, DLFix [38] uses a tree-based recurrent neural network (RNN) model to learn from bug fixes and surrounding contexts in the
Fig. 3: The overlaps of the bugs fixed by different pre-trained language models
form of an abstract syntax tree. CoCoNuT [52] introduces a novel context-aware neural machine translation (NMT) architecture that separately represents the buggy code and context. CURE [47] attempts to break the limit of existing NMT-based techniques by pre-training a programming language model on a large codebase, introducing a new code-aware search strategy, and using subword tokenization to narrow the search space. Recoder [50] is a syntax-guided edit decoder with placeholder generation, which provides a novel provider/decider architecture to guarantee that patches with correct syntax are generated. Different from existing learning-based APR techniques generating patches from scratch with bug-fixing data training, Gamma aims to directly predict correct code tokens with the help of fix patterns in a zero-shot scenario.
Recently, there exists an increasing number of APR techniques on top of pre-trained models. For example, Yuan et al. [39] propose CIRCLE, a T5-based program repair framework equipped with continual learning ability across multiple languages. Xia et al. [18] propose AlphaRepair, a cloze-style APR approach based on CodeBERT without fine-tuning on historical bug-fixing data. In our work, we include CIRCLE and AlphaRepair as baselines in the experiment. Sobania et al. [61] investigate the performance of ChatGPT on the QuixBugs benchmark. Mashhadi et al. [62] investigate the performance of fine-tuning CodeBERT to fix software bugs from ManySStuBs4J. Jiang et al. [58] explore the performance of pre-trained models with and without fine-tuning for the program repair domain. Xia et al. [59] further present an extensive evaluation of recent pre-trained models for fixing real-world projects and find state-of-the-art pre-trained models are able to fix a considerable number of bugs. For example, CodeX, the most effective one, fixes 99 bugs in Defects4J-v1.2 with a total combination of three repair settings. We exclude CodeX as a baseline in our experiment due to the uncertainty of training data in such black-box large pre-trained models.
### _Pre-trained Language Models and Applications_
In this section, we introduce some typical pre-trained language models and then discuss their applications to code-related tasks, e.g., code search.
#### VII-B1 Pre-trained Language Models
Pre-trained language models have shown promising results on NLP tasks. BERT [63] is a model that conditions on left and right contexts in all layers so as to pre-train deep bidirectional representations from unlabeled text. GPT-3 [43] is an autoregressive language model with 175 billion parameters, far more than previous language models. ChatGPT [28] is currently the most popular language model fine-tuned from GPT-3 and is receiving attention from both scientific and industrial fields. The most remarkable feature of ChatGPT is that it can generate human-like responses and communicate much as a real human would.
Inspired by the success of pre-trained models in NLP, many researchers apply pre-trained models to code-related tasks. Feng et al. [27] propose a bimodal pre-trained model (CodeBERT) for both programming language and natural language. CodeBERT is built on a Transformer-based neural architecture and pre-trained with masked language modeling (predicting masked tokens) and replaced token detection. Guo et al. [26] present UniXcoder, a unified cross-modal pre-trained model for programming languages. UniXcoder utilizes mask attention matrices with prefix adapters to control the behavior of the model and leverages cross-modal contents such as ASTs and code comments to enhance code representation. Different from these studies, which design novel pre-trained models from scratch, we attempt to boost template-based APR on top of these pre-trained models.
#### VII-B2 Applications of Pre-trained Models
In addition to the above-mentioned typical pre-trained models, researchers have also applied such pre-trained models to code-related domains (e.g., code completion and program repair). Mastropaolo et al. [64] present an empirical study to investigate the usage of pre-trained models for four code-related tasks, including program repair, mutant injection, assertion generation, and code summarization. A similar strategy of combining mutation patterns and pre-trained models is adopted in mutation testing. For example, Degiovanni et al. [65] introduce \(\mu\)BERT, a CodeBERT-based mutation testing tool that masks a token from an expression and replaces the masked token with the one predicted by CodeBERT. Richter et al. [66] propose a contextual mutation operator by employing CodeBERT to produce a context-dependent distribution over feasible token replacements. Recently, Zhang et al. [67] conduct an extensive empirical study to investigate the performance of pre-trained models in repairing security vulnerabilities and propose an enhanced approach with bug-fixing transfer learning. Although some SE tasks (e.g., mutation testing and program repair) already benefit from pre-trained models, ours is the first work to employ pre-trained models to directly predict the correct code with the help of fix patterns.
## VIII Conclusion
In this work, we present Gamma, an innovative template-based APR tool that combines the advances of fix templates and pre-trained models. Gamma first defines a set of mask fix templates by masking buggy code tokens within their corresponding code context. Gamma then uses off-the-shelf pre-trained models to directly recover the correct code with a mask prediction task. More importantly, Gamma can be built on various pre-trained models under a zero-shot learning setting, and we implement it as a practical APR tool using the recent UniXcoder model. The experimental results on the popular Defects4J-v1.2 dataset show promising performance, e.g., 82 bugs are fixed by Gamma, outperforming all state-of-the-art APR techniques. We also demonstrate that Gamma addresses dataset overfitting well, e.g., 45 and 22 bugs are fixed in Defects4J-v2.0 and QuixBugs, respectively. We further demonstrate that Gamma is generalizable to different pre-trained language models, such as CodeBERT and ChatGPT.
## Acknowledgment
The authors would like to thank the anonymous reviewers for their insightful comments. This work is supported partially by the National Natural Science Foundation of China (61932012, 62141215).
|
2309.10881 | Nanorobotics in Medicine: A Systematic Review of Advances, Challenges,
and Future Prospects | Nanorobotics offers an emerging frontier in biomedicine, holding the
potential to revolutionize diagnostic and therapeutic applications through its
unique capabilities in manipulating biological systems at the nanoscale.
Following PRISMA guidelines, a comprehensive literature search was conducted
using IEEE Xplore and PubMed databases, resulting in the identification and
analysis of a total of 414 papers. The studies were filtered to include only
those that addressed both nanorobotics and direct medical applications. Our
analysis traces the technology's evolution, highlighting its growing prominence
in medicine as evidenced by the increasing number of publications over time.
Applications ranged from targeted drug delivery and single-cell manipulation to
minimally invasive surgery and biosensing. Despite the promise, limitations
such as biocompatibility, precise control, and ethical concerns were also
identified. This review aims to offer a thorough overview of the state of
nanorobotics in medicine, drawing attention to current challenges and
opportunities, and providing directions for future research in this rapidly
advancing field. | Shishir Rajendran, Prathic Sundararajan, Ashi Awasthi, Suraj Rajendran | 2023-09-19T19:11:29Z | http://arxiv.org/abs/2309.10881v1 | Nanorobotics in Medicine: A Systematic Review of Advances, Challenges, and Future Prospects
###### Abstract
Nanorobotics offers an emerging frontier in biomedicine, holding the potential to revolutionize diagnostic and therapeutic applications through its unique capabilities in manipulating biological systems at the nanoscale. Following PRISMA guidelines, a comprehensive literature search was conducted using IEEE Xplore and PubMed databases, resulting in the identification and analysis of a total of 414 papers. The studies were filtered to include only those that addressed both nanorobotics and direct medical applications. Our analysis traces the technology's evolution, highlighting its growing prominence in medicine as evidenced by the increasing number of publications over time. Applications ranged from targeted drug delivery and single-cell manipulation to minimally invasive surgery and biosensing. Despite the promise, limitations such as biocompatibility, precise control, and ethical concerns were also identified. This review aims to offer a thorough overview of the state of nanorobotics in medicine, drawing attention to current challenges and opportunities, and providing directions for future research in this rapidly advancing field.
## Introduction
Nanorobotics, a field merging nanotechnology with teleoperated and autonomous robotics, presents groundbreaking solutions that are unattainable with conventional robotics. A nanorobot, also known as a nanomachine, is a miniature mechanical or electromechanical device designed to perform specific tasks at the nanoscale level [1]. In contrast to nanorobots, nanoparticles are tiny particles with unique properties, used for applications like drug delivery. Nanorobotics involves designing molecular-scale robots for tasks such as targeted medical procedures. The former concerns passive materials, while the latter introduces active, controllable machines at the nanoscale. These miniature robots, due to their size, offer unique opportunities for operations at molecular and cellular levels.
The trend toward miniaturization in medical robotics has been gathering considerable momentum, and the potential impacts of this trend on the field of biomedicine are profound. Beyond the realm of macroscale medical robotics, the exploration of small-scale medical robotics, ranging from several millimeters down to a few nanometers in all dimensions, has intensified. These micro and nanoscale robots have been investigated for diverse biomedical and healthcare applications, including single-cell manipulation and biosensing, targeted drug delivery, minimally invasive surgery, medical diagnosis, tumor therapy, detoxification, and more [2]. |
2309.07325 | Bicrystallography-informed Frenkel-Kontorova model for interlayer
dislocations in strained 2D heterostructures | In recent years, van der Waals (vdW) heterostructures and homostructures,
which consist of stacks of two-dimensional (2D) materials, have risen to
prominence due to their association with exotic quantum phenomena. Atomistic
scale relaxation effects play an extremely important role in the electronic
scale quantum physics of these systems. We investigate such structural
relaxation effects in this work using atomistic and mesoscale models, within
the context of twisted bilayer graphene -- a well-known heterostructure system
that features moire patterns arising from the lattices of the two graphene
layers. For small twist angles, atomic relaxation effects in this system are
associated with the natural emergence of interface dislocations or strain
solitons, which result from the cyclic nature of the generalized stacking fault
energy (GSFE), that measures the interface energy based on the relative
movement of the two layers. In this work, we first demonstrate using atomistic
simulations that atomic reconstruction in bilayer graphene under a large twist
also results from interface dislocations, although the Burgers vectors of such
dislocations are considerably smaller than those observed in small-twist
systems. To reveal the translational invariance of the heterointerface
responsible for the formation of such dislocations, we derive the translational
symmetry of the GSFE of a 2D heterostructure using the notions of coincident
site lattices (CSLs) and displacement shift complete lattices (DSCLs). The
workhorse for this exercise is a recently developed Smith normal form
bicrystallography framework. Next, we construct a bicrystallography-informed
and frame-invariant Frenkel-Kontorova model, which can predict the formation of
strain solitons in arbitrary 2D heterostructures, and apply it to study a
heterostrained, large-twist bilayer graphene system. | Md Tusher Ahmed, Chenhaoyue Wang, Amartya S. Banerjee, Nikhil Chandra Admal | 2023-09-12T15:30:28Z | http://arxiv.org/abs/2309.07325v1 | Bicrystallography-informed Frenkel-Kontorova model for interlayer dislocations in strained 2D heterostructures
###### Abstract
In recent years, van der Waals (vdW) heterostructures and homostructures, which consist of stacks of two-dimensional (2D) materials, have risen to prominence due to their association with exotic quantum phenomena originating from correlated electronic states harbored by them. Atomistic scale relaxation effects play an extremely important role in the electronic scale quantum physics of these systems, providing means of manipulation of these materials and allowing them to be tailored for emergent technologies. We investigate such structural relaxation effects in this work using atomistic and mesoscale models, within the context of twisted bilayer graphene -- a well-known heterostructure system that features moire patterns arising from the lattices of the two graphene layers. For small twist angles, atomic relaxation effects in this system are associated with the natural emergence of interface dislocations or strain solitons, which result from the cyclic nature of the generalized stacking fault energy (GSFE), that measures the interface energy based on the relative movement of the two layers. In this work, we first demonstrate using atomistic simulations that atomic reconstruction in bilayer graphene under a large twist also results from interface dislocations, although the Burgers vectors of such dislocations are considerably smaller than those observed in small-twist systems. To reveal the translational invariance of the heterointerface responsible for the formation of such dislocations, we derive the translational symmetry of the GSFE of a 2D heterostructure using the notions of coincident site lattices (CSLs) and displacement shift complete lattices (DSCLs). The workhorse for this exercise is a recently developed Smith normal form bicrystallography framework. Next, we construct a bicrystallography-informed and frame-invariant Frenkel--Kontorova model, which can predict the formation of strain solitons in arbitrary 2D heterostructures, and apply it to study a heterostrained, large-twist bilayer graphene system. Our mesoscale model is found to produce results consistent with atomistic simulations. We anticipate that the model will be invaluable in predicting structural relaxation and for providing insights into various heterostructure systems, especially in cases where the fundamental unit cell is large and therefore, atomistic simulations are computationally expensive.
## 1 Introduction
Quantum materials, i.e., materials that manifest exotic physical properties due to the presence of strong electronic correlations, have risen to prominence in recent years due to their applications in emergent technologies connected to nanoelectronics and quantum information science (Basov et al., 2017; Keimer and Moore, 2017; Tokura et al., 2017). The grand challenge of designing and manufacturing such materials stems from the high sensitivity of their properties to local structure and symmetry (Kim et al., 2022). In recent years, van der Waals (vdW) homostructures and heterostructures, which consist of stacks of two-dimensional (2D) materials, have emerged as an important class of quantum materials (Shimazaki et al., 2020; Jin et al., 2019; Cao et al., 2021; Regan et al., 2020). The weak vdW interactions between the 2D lattices in such materials offer high fidelity in tuning the local atomic environments, thus allowing exquisite control over the quantum properties of such systems. Small-twist bilayer graphene (BG) is the most prominent example, wherein dispersionless electronic states (or _flat bands_) emerge (Bistritzer and MacDonald, 2011; Tarnopolsky et al., 2019; Cao et al., 2018; Tao et al., 2022; Zhao et al., 2020, 2021) at a specific _magic_ twist angle,
\(\theta\sim 1.1^{\circ}\). Magic-angle twisted BG exhibits unconventional superconductivity, correlated insulator phases, magnetism, and non-trivial topological phases (Papageorgiou et al., 2017; Lee et al., 2019; Cao et al., 2020; Rakib et al., 2022; Uri et al., 2020; Wong et al., 2020) -- properties associated with the _moire superlattice_ formed by the constituent 2D lattices. However, such exotic properties are susceptible to perturbations in the twist angle. Since the relative twist between adjoining lattices constitutes only a one-dimensional subspace of the four-dimensional space of relative deformations,1 we recognize that the larger collection of relative deformation-induced moire is an exciting test bed to explore new moire physics. A vdW heterostructure is said to be _heterostrained_ if its two lattices are under different strain states (Pochet et al., 2017). In this paper, we refer to the mutually exclusive twisted and heterostrained states using an umbrella term, _heterodeformation_. The use of heterostrains to tune the electronic properties of materials and explore new quantum states is the goal of _straintronics_ (Miao et al., 2021), an emerging research area.
Footnote 1: If the two lattices of a heterostructure are subjected to uniform deformation gradients \(\mathbf{F}_{1}\) and \(\mathbf{F}_{2}\), then \(\mathbf{F}_{1}^{-1}\mathbf{F}_{2}\) is the relative deformation, and its representation as a \(2\times 2\) matrix accounts for the four dimensions.
Homostructures such as BGs, under a small twist (\(1^{\circ}-3^{\circ}\)) relative to the energetically favorable AB stacking, undergo atomic reconstruction due to spontaneous nucleation of interface dislocations, also referred to as _strain solitons_. Recognizing the sensitivity of the electronic properties to atomic rearrangements, strain engineering offers an exciting route to modulate the electronic properties of BGs by controlling the dislocation network using strains (Annevelink et al., 2020; Cazeaux et al., 2023; Cao et al., 2021; Kim et al., 2022). The overarching goal of this paper -- formulated to fully realize the potential of strain engineering for 2D heterostructures -- _is to investigate and model atomic reconstruction in heterodeformed moire superlattices_. Moreover, due to the large size of moire superlattices, a high-throughput investigation of heterodeformations is computationally challenging, which motivates us to seek a continuum model for atomic reconstruction. In what follows, we will identify the key features of atomic reconstruction observed in small-twist BG before formulating the objectives of this paper.
The atomic reconstruction (Annevelink et al., 2020; Zhang and Tadmor, 2017, 2018; Lopes dos Santos et al., 2012; Carr et al., 2018; Cazeaux et al., 2020, 2023; Gargiulo and Yazyev, 2017; Zhou et al., 2015) in a small-twist BG is a consequence of the interplay between interfacial energy and the elastic energies of the two lattices. The former is often described using the generalized stacking fault energy (GSFE) density, a periodic function of relative translations between the two AB-stacked lattices of the BG. The periodicity of the GSFE is derived from the bicrystallography of the interacting lattices. Under small twists relative to the AB stacking, the interfacial energy increases as the induced relative translations between the two lattices lead to regions of low-commensurability (high interfacial energy) interspersed with the highly commensurable AB stacking. The twisted BG responds to lower the interfacial energy by an atomic rearrangement that tends to increase (decrease) areas of high (low) commensurability. Due to the periodicity of the GSFE, the structural relaxation results in lines of displacement "jumps" that manifest as interface dislocation lines with the displacement "jump" as the Burgers vector (Alden et al., 2013; Kumar et al., 2016). Moreover, the Burgers vectors are parallel to the dislocation lines and their magnitude is equal to that of the smallest lattice vector of graphene. Therefore, the structural relaxation in a small-twist BG can be interpreted as elastic distortions associated with the formation of an array of screw lattice dislocations. Since the elastic energy diverges for discontinuous displacements, the "jumps" occur as localized displacement gradients, which implies the dislocation lines are diffused. The balance between the interfacial and elastic energies, which ultimately determines the network and the thickness of the dislocation lines, is at the core of the Frenkel-Kontorova continuum model for small-twist BGs. In this paper, the terms 'atomic relaxation' and'structural relaxation' are used synonymously.
Our study of atomic reconstruction in heterodeformed moire is guided by the energetics of structural relaxation in small-twist BGs. We begin by hypothesizing that the structural relaxation of a heterodeformed moire is also a consequence of interface dislocations and investigating the hypothesis using atomistic simulations. Instead of a bonafide 2D heterostructure, we use large-twist BG in our atomistic study due to the greater reliability of its interatomic potential, confirmed using Density Functional Theory (DFT). Moreover, it is reasonable to interpret a large-twist BG as a heterostructure since its lattices differ considerably. We show that the \(21.786789^{\circ}\) large-twist BG, when subjected to heterostrains, results in strain localization in a network of lines, suggesting the formation of interface dislocations. Interestingly, the Burgers vector of the
dislocations is smaller than that of the small-twist case.
In the presence of two distinct lattices, the notion of an interface dislocation has to be made precise as it is not clear to which lattice its Burgers vector belongs. The interpretation of a large-twist moire as a network of lattice screw dislocations breaks down as the dislocation cores overlap. To resolve this ambiguity, we turn our attention to grain/phase boundaries. Similar to a small-twist BG, small-tilt angle grain boundaries can be interpreted as an array of lattice dislocations. For large tilt angles, however, a grain boundary dislocation is defined as a defect in the translation invariance of the boundary (Grimmer et al., 1974). The translational invariance is derived by introducing two additional lattices -- coincident site lattice (CSL) and the displacement shift complete lattice (DSCL) -- originating from bicrystallography (Balluffi et al., 1982). The CSL is the intersection of the two lattices, and the DSCL is the smallest lattice that contains the two lattices. In 2D heterostructures, it is straightforward to see that the CSL _is_ the moire superlattice. On the other hand, the DSCL conveys the translational invariance of the interface -- displacing one lattice relative to the other by a DSCL vector preserves the structure of the interface. In other words, if a heterointerface hosts a dislocation, its Burgers vector must be a DSCL vector. While Koda et al. (2016) have used the CSL to identify heterodeformed moires, the use of DSCL to study interface dislocations remains largely unexplored.2 One of the key highlights of this paper is the application of Smith Normal Form (SNF) bicrystallography to characterize interface dislocations. SNF bicrystallography is an algebraic framework developed by the last authors' group to explore bicrystallography properties such as the translational invariance (Admal et al., 2022). In particular, it informs us that the Burgers vector (smallest DSCL lattice vector) is inversely proportional and a rational multiple of a CSL vector.
Footnote 2: A notable exception is the work of Ishikawa et al. (2016) where the DSCL and the moire superlattice are used to infer the atomic structure of twisted few-layer graphene, which is in the spirit of _moire metrology_(Annevelink et al., 2021).
Based on the atomistic simulations of heterodeformed BG and the SNF bicrystallography framework, we build a generalized Frenkel-Kontorova (GFK) model. The generalization relative to the classical Frenkel-Kontorova model stems from key features of the GFK model -- frame-invariance and defect-free _natural configurations_, which may include stackings that are not necessarily of the lowest energy.3 The GFK model generalizes the previous model of Nam and Koshino (2017) to large heterodeformations, including large twists. Unlike the model of Nam and Koshino (2017), which was developed exclusively for infinite systems, the model describes finite systems as well, wherein configurational forces due to surface tension play an important role.
Footnote 3: For example, in addition to the AB-stacked BG, a large-twist BG with a twist angle of \(21.786789^{\circ}\) is also a natural configuration.
This paper is organized as follows. In Section 2, we explore structural relaxation in a BG subjected to large heterodeformations using DFT-informed atomistic simulations and demonstrate the nucleation of interface dislocations. In Section 3, we review SNF bicrystallography and apply it to characterize the interface dislocations in heterodeformed BGs. In Section 4, we build the GFK model and implement and validate it in Section 5. We summarize and conclude in Section 6.
_Notation_: We use lowercase bold letters to denote vectors, and uppercase bold letters to denote second-order tensors, unless stated otherwise. The gradient, divergence, and curl operators are denoted by the symbols \(\nabla\), Div, and curl respectively. We use the symbol \(\otimes\) to denote the tensor product of two vectors, and \(\cdot\) to denote the inner product of two vectors or tensors.
## 2 Atomic scale investigation of structural relaxation under large heterodeformations
This section investigates the structural relaxation of 2D heterostructures using atomistic simulations of heterostrained BG, with the understanding that under large twists, a BG serves as a surrogate for a 2D heterostructure. The relaxation is restricted to being in-plane. Simulations are performed using Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (Plimpton, 1995). Beginning with a small-twist BG, we systematically explore various small and large heterodeformations that result in atomic reconstruction. We will demonstrate that atomic reconstruction due to large heterodeformations results from interface dislocations, whose Burgers vector and the network are markedly different from those observed under small twists. We will revisit the examples of this section in Section 4 using a continuum model.
The simulated BGs are oriented such that the normal to the lattices is along the \(X_{3}\) direction. Since we allow only in-plane relaxation, the distance between the two graphene lattices is held fixed during the
simulation. The intralayer bonding in each graphene sheet is modeled using the reactive empirical bond order (REBO) potential (Brenner et al., 2002). The interlayer vdW interaction is described using the registry-dependent Kolmogorov-Crespi (KC) potential (Kolmogorov and Crespi, 2005). We investigate structural relaxation for two parametrizations of the KC potential, denoted as KC-1 (Kolmogorov and Crespi, 2005) and KC-2 (Ouyang et al., 2018) with parameters listed in Table 1. Since the KC-2 model was developed for BG systems under an out-of-plane compression, it may be viewed as an improvement of the KC-1 model. We will, however, explore both KC-1 _and_ KC-2 models while investigating large heterodeformations as the qualitative differences in the respective GSFEs lead to markedly different structural relaxations.
Periodic boundary conditions (PBCs) are imposed along two in-plane directions to avoid the influence of free boundary lines. Since PBCs necessitate the existence of a periodic supercell, we are restricted to bilayer configurations wherein the intersection of the projections (on the \(X_{1}-X_{2}\) plane) of the two lattices is a 2D superlattice. In other words, PBCs can be enforced if and only if a CSL exists. The process of identifying heterodeformations that admit PBCs can be formalized as follows. Two 2D (multi) lattices \(\mathscr{A}\) and \(\mathscr{B}\), with structure matrices4\(\mathbf{A}\) and \(\mathbf{B}\), respectively, are coincident on a 2D CSL if and only if \(\mathbf{T}:=\mathbf{A}^{-1}\mathbf{B}\) is a rational matrix. We refer to \(\mathbf{T}\) as the _transition_ matrix. In a homostructure, if lattice \(\mathscr{B}\) is obtained by deforming lattice \(\mathscr{A}\) using a deformation gradient \(\mathbf{F}\), then \(\mathbf{B}=\mathbf{F}\mathbf{A}\), and all deformations that result in a rational \(\mathbf{A}^{-1}\mathbf{F}\mathbf{A}\) yield a CSL, and are therefore amenable to PBCs. In Section 3, we will show that the bicrystallographic properties of a heterodeformed moire can be deduced from the algebraic properties of the transition matrix. For example, the ratios of the areas of the primitive unit cells --
Footnote 4: The two basis vectors of a lattice are stored as columns of its structure matrix.
\[\Sigma_{\mathscr{A}}=\frac{\text{Area}(\text{CSL})}{\text{Area}(\mathscr{A})}, \quad\Sigma_{\mathscr{B}}=\frac{\text{Area}(\text{CSL})}{\text{Area}(\mathscr{ B})}, \tag{1}\]
are always integers, and if \(\mathscr{A}\) and \(\mathscr{B}\) have the same density, \(\Sigma_{\mathscr{A}}=\Sigma_{\mathscr{B}}=:\Sigma\). The two basis vectors of the CSL of a heterodeformed moire are chosen as the in-plane simulation box vectors. Therefore, the number of simulated atoms is equal to \(n_{\mathscr{A}}\Sigma_{\mathscr{A}}+n_{\mathscr{B}}\Sigma_{\mathscr{B}}\), where the factors \(n_{\mathscr{A}}\) and \(n_{\mathscr{B}}\) represent the number of basis atoms in the primitive unit cells of the respective 2D multilattices. In all our simulations, \(\mathscr{A}\) is a graphene lattice formed by the structure matrix
\[\mathbf{A}=\frac{a}{2}\begin{bmatrix}0&-\sqrt{3}\\ 2&-1\end{bmatrix},\]
and \(\mathscr{B}\) is a deformation or rotation of \(\mathscr{A}\), and placed at a prescribed interplanar distance in the \(X_{3}\) direction from \(\mathscr{A}\). The heterodeformed configurations studied in this paper are calculated using an algorithm (see Algorithm 2 in Admal et al. (2022)) derived from Theorem 2 in A, which generates heterostrained moires of various sizes and strains within prescribed upper bounds.
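As a quick numerical illustration of this rationality test (a sketch of ours, not Algorithm 2 of Admal et al. (2022)), the snippet below forms \(\mathbf{T}=\mathbf{A}^{-1}\mathbf{F}\mathbf{A}\) for the \(21.786789^{\circ}\) twist studied in Section 2.2 and recovers its exact rational entries. For this coincidence rotation, the common denominator of the entries coincides with \(\Sigma=7\); in general, \(\Sigma\) follows from the SNF framework of Section 3.

```python
# Numerical sketch (ours) of the rationality test for the transition matrix.
# For the 21.786789-degree twist of graphene, cos(theta) = 13/14, and
# T = A^{-1} R A turns out to be rational with common denominator 7.
import numpy as np
from fractions import Fraction

a = 2.46                                      # graphene lattice constant [A]
A = (a / 2) * np.array([[0.0, -np.sqrt(3.0)],
                        [2.0, -1.0]])         # structure matrix of lattice A

theta = np.deg2rad(21.786789)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

T = np.linalg.inv(A) @ R @ A                  # transition matrix
T_rat = [[Fraction(float(x)).limit_denominator(100) for x in row] for row in T]
print(T_rat)                                  # [[8/7, -3/7], [3/7, 5/7]]

# for this coincidence rotation, the common denominator equals Sigma
print(np.lcm.reduce([f.denominator for row in T_rat for f in row]))   # 7
```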
Atomic reconstruction is simulated by minimizing the total energy with respect to in-plane displacements of atoms using the fast inertial relaxation engine (FIRE) algorithm (Bitzek et al., 2006) with an energy tolerance and force tolerance of \(1\times 10^{-20}\,\text{eV}\) and \(1\times 10^{-20}\,\text{eV}\,\text{\AA}^{-1}\), respectively. The resulting displacements of atoms are analyzed to interpret them in terms of interface dislocations.
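For readers unfamiliar with FIRE, the following is a minimal, self-contained sketch of the algorithm of Bitzek et al. (2006) applied to a toy potential. The parameter values are commonly used defaults and are not meant to reproduce the internal LAMMPS settings used in our simulations.

```python
# Minimal FIRE minimizer (after Bitzek et al., 2006) on a toy potential.
# Parameter values are common defaults, not the internal LAMMPS settings.
import numpy as np

def fire_minimize(x, grad, dt=0.1, dt_max=1.0, n_min=5, f_inc=1.1,
                  f_dec=0.5, alpha0=0.1, f_alpha=0.99,
                  f_tol=1e-8, max_steps=20000):
    v = np.zeros_like(x)
    alpha, n_pos = alpha0, 0
    for _ in range(max_steps):
        f = -grad(x)                          # force = -dE/dx
        if np.linalg.norm(f) < f_tol:
            break
        if np.dot(f, v) > 0:                  # moving downhill: mix velocity
            v = (1 - alpha) * v + alpha * np.linalg.norm(v) * f / np.linalg.norm(f)
            n_pos += 1
            if n_pos > n_min:                 # accelerate after a steady run
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                                 # uphill: freeze and restart
            v[:] = 0.0
            dt *= f_dec
            alpha, n_pos = alpha0, 0
        v = v + dt * f                        # semi-implicit Euler MD step
        x = x + dt * v
    return x

# toy example: minimize E(x) = 0.5 * |x|^2, whose gradient is x
print(fire_minimize(np.array([3.0, -4.0]), grad=lambda x: x))   # ~[0, 0]
```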
### Atomic reconstruction in a BG under a small twist and a small strain
In this section, we present simulations of atomic reconstruction in a BG under two small heterodeformations -- a) a \(0.2992634^{\circ}\) twist and b) a pure stretch of
\[\mathbf{U}=\begin{bmatrix}1.004219&0\\ 0&0.995781\end{bmatrix} \tag{2}\]
\begin{table}
\begin{tabular}{l|c c c c c c c c}
 & \(C[\text{meV}]\) & \(C_{0}[\text{meV}]\) & \(C_{2}[\text{meV}]\) & \(C_{4}[\text{meV}]\) & \(A[\text{meV}]\) & \(\delta[\text{\AA}]\) & \(\lambda[\text{\AA}^{-1}]\) & \(z_{0}[\text{\AA}]\) \\ \hline
KC-1 & 3.030 & 15.71 & 12.29 & 4.933 & 10.238 & 0.578 & 3.629 & 3.34 \\
KC-2 & 6.678 908 \(\times 10^{-4}\) & 21.847 167 & 12.060 173 & 4.711 099 & 12.660 270 & 0.771 810 1 & 3.143 921 & 3.328 819 \\
\end{tabular}
\end{table}
Table 1: Two parameterizations of the KC potential.
relative to the AB-stacked \(\Sigma 1\) configuration. Since the AB-stacked configuration corresponds to \(\mathbf{F}=\mathbf{R}(60^{\circ})\), lattice \(\mathscr{B}\) of the \(0.2992634^{\circ}\)-twisted BG is constructed using \(\mathbf{F}=\mathbf{R}(60.2992634^{\circ})\), and for the heterostrained case, \(\mathbf{F}=\mathbf{R}(60^{\circ})\mathbf{U}\). The interplanar distance is fixed at \(3.34\,\mathrm{\SIUnitSymbolAngstrom}\). The basis vectors of the corresponding CSLs,
\[\text{twist: }\mathbf{b}_{1}=470.824979\,\mathbf{e}_{1}, \mathbf{b}_{2}=235.412488\,\mathbf{e}_{1}+407.746391\,\mathbf{e}_{2},\text{ and} \tag{3a}\] \[\text{heterostrain: }\mathbf{b}_{1}=-581.794\,\mathbf{e}_{1}, \mathbf{b}_{2}=-287.199\,\mathbf{e}_{1}+505.965\,\mathbf{e}_{2}, \tag{3b}\]
define the respective periodic boxes of the simulations. From (1) and (3), it follows that \(\Sigma=36\,631\) for the \(0.2992634^{\circ}\)-twisted BG, whereas \(\Sigma_{\mathscr{A}}=56\,168\) and \(\Sigma_{\mathscr{B}}=56\,169\) for the heterostrained BG.
The color density plots of atomic energy density shown in Figs. 1(a) and 1(c) highlight the triangular dislocation network in the twisted and strained BGs, respectively. The high-energy nodal regions correspond to the AA stacking, and the interiors of the triangular domains are in AB stacking. Figs. 1(b) and 1(d) show line plots of displacements of atoms along the dashed lines in the respective energy density plots. The displacements are measured relative to the AB-stacking. Since Fig. 1(b) shows negligible displacement perpendicular to the dislocation line, interface dislocations in the twisted BG have a screw character. On the other hand, Fig. 1(d) suggests the interface dislocations in the heterostrained case have a mixed character. In both cases, the Burgers vector magnitude (size of the displacement jump) is \(<2.46\,\mathrm{\SIUnitSymbolAngstrom}\), the lattice constant of graphene, which implies the dislocations are not full dislocations. Annevelink et al. (2020), Pochet et al. (2017) demonstrated that the partial dislocations have a pure edge character under a small biaxial heterostrain relative to the AB stacking.
Figure 1: Atomic reconstruction in a BG under a small twist (top row) and a small heterostrain (bottom row). (a), (c) Plots of atomic energy density [\(\mathrm{meV}\,\mathrm{\SIUnitSymbolAngstrom}^{-2}\)] show a triangular network of interface dislocations. The dislocation lines separate triangular domains of low-energy AB-stacking. (b), (d) Line plots of the displacement components \(u_{1}\) and \(u_{2}\), measured along the dashed lines in (a) and (c). The displacements are measured relative to the untwisted AB-stacked configuration.
The origin of interface dislocations and their network pattern can be traced to the properties of AB-stacking's GSFE, shown in Fig. 2. The GSFE of a BG configuration is a function of the relative displacement between the two layers. The GSFE of AB stacking is periodic with respect to the lattice vectors of graphene. Under small uniform heterodeformations, the relative displacement between the two layers is spatially varying, and therefore, the interfacial energy is sampled from various regions of the GSFE, including the maxima and the minima. The BG responds by an atomic rearrangement to increase (decrease) regions of AB (AA) stacking, which corresponds to minima (maxima) in the GSFE plot, resulting in a juxtaposition of AB-stacked regions separated by dislocation lines. The Burgers vector of a dislocation line separating two AB-stacked regions is the relative vector, with magnitude \(1.42\,\mathrm{\SIUnitSymbolAngstrom}\), that connects the corresponding minima in the GSFE.5 Moreover, the triangular network of dislocation lines with every AB-stacked region surrounded by three similar regions originates from the observation that each minimum in the GSFE is surrounded by three nearest minima.
Footnote 5: Fig. 1b does not quite recover the entire Burgers vector magnitude of \(1.42\,\mathrm{\SIUnitSymbolAngstrom}\) as it includes displacements associated with elastic relaxation. In Section 4, where we present our continuum model, we will discuss the procedure to accurately measure the Burgers vector from the displacement field.
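As a rough illustration of how such a displacement-jump measurement can be automated (our sketch, not the procedure described in Section 4), one can fit a Peierls-Nabarro-type arctangent profile to a displacement trace across a dislocation line: the plateau-to-plateau jump of the fit estimates the Burgers vector magnitude, and the fit width estimates the core size. Synthetic data below stand in for an atomistic line plot.

```python
# Rough illustration (ours, not the paper's procedure): estimate the Burgers
# vector magnitude b and core half-width w by fitting a Peierls-Nabarro-type
# profile u(x) = (b/pi) * arctan(x/w) + u0 to a displacement trace taken
# across a dislocation line. Synthetic data stand in for the atomistic plot.
import numpy as np
from scipy.optimize import curve_fit

def pn_profile(x, b, w, u0):
    return (b / np.pi) * np.arctan(x / w) + u0

# synthetic "measured" trace for a partial with b = 1.42 A and w = 30 A
x = np.linspace(-200.0, 200.0, 81)
u = pn_profile(x, 1.42, 30.0, 0.0)
u += np.random.default_rng(0).normal(scale=0.01, size=x.size)

(b_fit, w_fit, _), _ = curve_fit(pn_profile, x, u, p0=(1.0, 10.0, 0.0))
print(f"b = {b_fit:.3f} A, core width w = {w_fit:.1f} A")
```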
The arguments that helped us deduce the properties of dislocations from the GSFE are applicable only under small heterodeformations relative to the AB stacking. Under large heterodeformations, it is not clear if a heterostructure undergoes atomic reconstruction. If reconstruction occurs, its interpretation in terms of full/partial lattice dislocations breaks down as the dislocation cores overlap.6
Footnote 6: As the heterodeformation is measured relative to the AB stacking, the density of interface dislocations increases resulting in dislocation core overlap.
Figure 2: GSFE of AB stacking in meV \(\mathrm{\SIUnitSymbolAngstrom}^{-2}\). The minima and maxima correspond to the AB and AA stackings, respectively. Parametrizations KC-1 and KC-2 yield nearly identical GSFEs.
### Atomic reconstruction in a BG under large heterodeformations
It is well known that as the twist angle of a BG increases beyond a few degrees, the vdW interaction between the two lattices weakens, resulting in negligible structural relaxation (Annevelink et al., 2020; Morovati et al., 2022). Signatures of the presence or absence of interface dislocations can also be found in the plot of interface energy versus the twist angle, shown in Fig. 3(a). For small twist angles relative to the AB stacking, the interface energy variation (grey plot) is non-convex -- a signature for potential defect nucleation. In contrast, for large twist angles, the interfacial energy is insensitive to the twist angle, and this justifies the absence of atomic reconstruction. While Fig. 3(a) explores only twists -- as opposed to heterodeformations -- it is plausible that atomic reconstruction does not occur in heterodeformed configurations far from the AB and AA stackings. However, by decreasing the interlayer distance using external pressure, the interlayer electronic coupling can be made to persist for certain large twist angles. Indeed, this strategy follows the historical trend of using external pressure to probe correlated electron physics. For example, the role of interlayer compression on the atomic reconstruction of 2D heterostructures has been shown to have a substantial influence on the band structure (Carr et al., 2018; Chittari et al., 2018; Das et al., 2016). Moreover, experiments and first-principles calculations (Hamer et al., 2022; Cheng et al., 2023) have shown that electronic scale effects are modified in large-twist bilayer graphene under out-of-plane compression. The blue plot in Fig. 3(a) shows that under a 10% out-of-plane strain, the interfacial energy from the KC-1 model is sensitive to twist angles in the neighborhood of \(21.786789^{\circ}\). We will refer to the \(21.786789^{\circ}\)-twist BG as the \(\Sigma 7\) configuration. Fig. 3(b) shows the atomic energy densities of the 28 atoms in the unit cell of the \(\Sigma 7\) configuration.
The non-convexity of the interfacial energy in the neighborhoods of \(\Sigma 1\) and \(\Sigma 7\) configurations motivates us to hypothesize that atomic reconstruction occurs for small heterodeformations relative to the \(\Sigma 7\) configuration in the presence of an out-of-plane compression. To investigate our hypothesis, we simulate the following two heterodeformations relative to the \(\Sigma 7\) configuration -- a) a \(0.170076^{\circ}\) twist and b) a pure stretch of
\[\mathbf{U}=\begin{bmatrix}1.010589&0\\ 0&0.997163\end{bmatrix}. \tag{4}\]
In other words, the heterodeformations are given by \(\mathbf{F}=\mathbf{R}(81.956865^{\circ})\) for the former, and \(\mathbf{F}=\mathbf{R}(81.786789^{\circ})\mathbf{U}\) for the latter case. The box vectors in the two simulations are
\[\text{twist:}\,\,\mathbf{b}_{1}=313.233\,\mathbf{e}_{1}, \mathbf{b}_{2}=156.616\,\mathbf{e}_{1}+271.267\,\mathbf{e}_{2},\text{ and} \tag{5a}\] \[\text{heterostrain:}\,\,\mathbf{b}_{1}=320.679\,\mathbf{e}_{1}, \mathbf{b}_{2}=1.04735\,\mathbf{e}_{1}+548.127\,\mathbf{e}_{2}, \tag{5b}\]
which imply \(\Sigma=16213\), and \(\Sigma_{\mathscr{A}}=33539\) and \(\Sigma_{\mathscr{B}}=33282\), respectively.
Figure 3: (a) A comparison of interfacial energy density versus the twist angle of a BG under zero strain with that of a BG under a 10% out-of-plane compression. (b) Atomic energy density [meV Å\({}^{-2}\)] within a fundamental unit cell of the \(\Sigma 7\) configuration, computed using the KC-1 potential.
Fig. 4 shows results of the large-twist simulations using the KC-1 (Figs. 4(a) and 4(b)) and KC-2 (Figs. 4(c) and 4(d)) models. Comparing the color density plots of atomic energy density in Figs. 4(a) and 4(c), we note that the KC-1 model yields a honeycomb interface dislocation network, while KC-2 results in a triangular network, similar to the small twist case (Fig. 1(a)). The atomic arrangement in the interiors of the triangular and hexagonal domains is that of the \(\Sigma 7\) stacking. The plots in Figs. 4(a) and 4(c) are of the atomic energy density relative to the total energy density of the \(\Sigma 7\) stacking. Therefore, we expect the energy density in the interior of the triangular/hexagonal domains to be zero. Figs. 4(a) and 4(c), however, do not reflect this due to the variation in the energy densities of the 28 atoms in the primitive unit cell of the \(\Sigma 7\).7
Footnote 7: In other words, if the energy densities in Figs. 4(a) and 4(c) were spatially averaged using a weighting function with a \(\Sigma 7\) unit cell-shaped averaging domain, the resulting fields would be zero in the domain interiors.
To identify the nature of the interface dislocations, we plotted (Figs. 4(b) and 4(d)) the displacements of atoms along the dashed lines in Figs. 4(a) and 4(c). The displacements are measured relative to the untwisted \(\Sigma 7\) stacking. The displacement line plots show negligible displacement perpendicular to the vertical dislocation line, suggesting a screw character. Moreover, the displacement "jumps" suggest the Burgers vector magnitude is much smaller than that of the partial dislocations noted in Section 2.1. Interestingly, the Burgers vector of dislocations from the KC-1 model is larger than that from the KC-2 model. For larger out-of-plane compression, the non-convexity of the interfacial energy increases in the neighborhood of \(\Sigma 7\). Therefore, we expect sharper displacement "jumps", which is confirmed in Fig. 5.
Figure 4: Atomic reconstruction in a large-twist (\(21.786789^{\circ}+0.170076^{\circ}\)) BG using the KC-1 (top row) and the KC-2 (bottom row) models. (a), (c) Plots of atomic energy density [\(\mathrm{meV\,\AA}^{-2}\)] show a honeycomb and a triangular network of interface dislocations. (b), (d) Line plots of the displacement components \(u_{1}\) and \(u_{2}\), measured along the dashed lines in (a) and (c). The displacements are measured relative to the untwisted \(\Sigma 7\) configuration.
Fig. 6 shows simulation results of a BG under large heterodeformation. Similar to the large twist case, we observe interface dislocations that form a distorted triangular network surrounding regions of \(\Sigma 7\) stackings. In addition, the displacement line plots along the dashed line in Fig. 6(a) suggest the dislocations have both screw and edge components, similar to the heterostrained BG in Fig. 1(c).
Summarizing, this section conclusively demonstrates that atomic reconstruction occurs when a \(\Sigma 7\) BG is subjected to small heterodeformations.8 Analogous to the AB-stacking, the \(\Sigma 7\) configuration is energetically favorable and defect-free. When the \(\Sigma 7\) configuration is subjected to a small heterodeformation, atomic reconstruction ensues through strain localization along a network of lines, which we interpret as dislocations. However, we are yet to identify the crystallographic origin of the observed dislocations and their relatively short Burgers vector. In the next section, we will present SNF bicrystallography, an algebraic framework to study the geometric properties of moire superlattices. In particular, we will apply SNF to arrive at a rigorous definition for interface dislocations that is applicable across all heterodeformations.
Figure 5: Burgers vector for the \(21.786789^{\circ}+0.170076^{\circ}\) twisted BG at different magnitudes of out-of-plane compression, calculated using the KC-1 parametrization.
Figure 6: Atomic reconstruction under a large heterodeformation. (a) Color density plot of atomic energy density [meV Å\({}^{-2}\)] highlighting the network of interface dislocations in a BG heterodeformed relative to the \(\Sigma 7\) configuration using KC-2 parametrization. (b) A magnified view of the dislocation network in (a). (c) shows variations of the displacement components along the dashed line 1, shown in (a). The displacements are measured relative to the \(\Sigma 7\) configuration.
## 3 Bicrystallography and interface dislocations
The goal of this section is to define interface dislocations in 2D heterostructures, including homostructures, under large heterodeformations. An interface dislocation is a line defect that breaks the _translational invariance_ of a _defect-free interface_. Low-energy configurations, such as \(\Sigma 1\) and \(\Sigma 7\) interfaces in a twisted BG, are considered defect-free. In what follows, we will describe a framework to characterize the translational invariance of defect-free interfaces, which ultimately yields the set of interface dislocations a heterointerface can host. Recall from Section 2.1 that the Burgers vector of interface dislocations, formed due to a small heterodeformation of the AB stacking, originates from the GSFE of the AB-stacking. In particular, the periodicity of the GSFE conveys the translational invariance of the interface. This motivates us to investigate the GSFE of the \(\Sigma 7\) interface. We use SNF bicrystallography to identify the periodicity of the GSFE of a defect-free heterointerface and use it to identify interface dislocations in the \(\Sigma 7\) configuration. In what follows, we describe the main results of SNF bicrystallography. For further details, we refer the reader to A.
Figure 8: Translational invariance of a 21.786789\({}^{\circ}\) twisted BG. Translating the shaded region of the red lattice by a DSCL vector leaves the interface structure invariant and results in a shift in the CSL.
Figure 7: (a) AB-stacked (0\({}^{\circ}\) twist) bilayer graphene forming a \(\Sigma=1\) moiré. (b) 21.786789\({}^{\circ}\) twisted BG resulting in a \(\Sigma=7\) moiré. Open circles represent the second basis atom of graphene. The two graphene lattices, \(\mathscr{A}\) and \(\mathscr{B}\), are shown in blue and red, while the moiré superlattice (\(\mathscr{C}\)) and the DSCL (\(\mathscr{D}\)) are shown in purple and gray, respectively. The highlighted region represents a unit cell of \(\mathscr{C}\). The ratio of the unit cell size of the moiré superlattice to that of the graphene lattice and the corresponding ratio between the graphene lattice and \(\mathscr{D}\) are equal to \(\Sigma\).
Using the above definition, we can now revisit Fig. 7 to explain the qualitative differences noted in Section 2 between dislocations in the heterodeformed AB-stacked BG and the \(\Sigma\)7 BG configurations. Since the DSCL of an AB-stacked BG is identical to the graphene lattice, its GSFE has the periodicity of graphene, and the Burgers vector of a full dislocation in an AB-stacking is a lattice vector of graphene. However, full dislocations are not observed in an AB-stacking due to degenerate minima in the GSFE, i.e., the GSFE plotted on a primitive unit cell of the DSCL has more than one minimum. Instead, as noted in Section 2.1, partial dislocations form whose Burgers vector is the relative vector connecting two minima of the GSFE. Moving on to the \(\Sigma\)7 BG, the magnitude of the smallest non-zero DSCL vector is equal to \(2.46/\sqrt{7}=0.929\,792\,153\,\mathrm{\SIUnitSymbolAngstrom}\) (follows from (1) and (6)), which is indeed the displacement jump (see Fig. 4(b)) observed in the atomistic simulation with the KC-1 parametrization. This implies the line defects in \(\Sigma\)7 BG, modeled using KC-1, are full interface dislocations. Fig. 9(a) shows the GSFE of the KC-1 modeled \(\Sigma\)7 BG. The absence of degenerate minima in Fig. 9(a) justifies the absence of partials in Fig. 4(b). However, the GSFE of the KC-2 modeled \(\Sigma\)7 BG, shown in Fig. 9(b), features two degenerate minima, which suggests the interface dislocations in Fig. 4(d) are partials. The magnitude of the Burgers vector of the partials can be inferred from Fig. 9(b) as \(2.46/(\sqrt{3}\sqrt{7})=0.536\,816\,\mathrm{\SIUnitSymbolAngstrom}\). In addition to the Burgers vectors, the GSFE also determines the arrangement of the dislocation network. Recall from Section 2.2 that the KC-1 parametrization resulted in a honeycomb dislocation network that separates defect-free \(\Sigma\)7-stacked hexagonal regions. We assert that _the number of sides of the low-energy region is determined by the number of nearest neighbor GSFE minimizers of a minimizer._ This assertion is corroborated by the GSFE plots in Fig. 9 -- a GSFE minimizer in the KC-2 parameterization has three nearest neighbor minimizers, and therefore, the resulting dislocation network is formed by triangular defect-free \(\Sigma\)7-stackings.
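These numbers can be checked directly. The sketch below (ours, for illustration) computes the Smith normal form of the integer matrix \(7\mathbf{T}\) for the \(\Sigma 7\) twist using sympy, and evaluates the full and partial Burgers vector magnitudes quoted above.

```python
# Illustrative check (ours) of the Sigma-7 numbers. The transition matrix of
# the 21.786789-degree twist is T = N/7 with N integer (see the sketch in
# Section 2); the Smith normal form of N is diag(1, 49), with 49 = Sigma^2.
import math
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

N = Matrix([[8, -3], [3, 5]])                 # N = 7 T, det N = 49
print(smith_normal_form(N, domain=ZZ))        # Matrix([[1, 0], [0, 49]])

a = 2.46                                      # graphene lattice constant [A]
print(a / math.sqrt(7))                       # 0.9298 A: full DSCL Burgers vector
print(a / (math.sqrt(3) * math.sqrt(7)))      # 0.5368 A: partial (KC-2 GSFE)
```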
Recall from Section 2.2 that the step character of the displacement "jump" is accentuated as the out-of-plane compression increases. This is because as the two layers are compressed the difference between the maximum and the minimum values of the GSFE, which we will refer to as the _GSFE range width_, increases. We noticed that compared to the KC-1 modeled BG, the KC-2 modeled \(\Sigma\)7 BG has a smaller GSFE range width. This was the primary reason we chose a larger out-of-plane compression (\(39\%\)) in the simulation using the KC-2 model so that the GSFE range widths of the two parametrizations are equal to \(0.8\,\mathrm{meV}\,\mathrm{\SIUnitSymbolAngstrom}^{-2}\), as shown in Fig. 9. Although the GSFE range widths match, the interface dislocations of the KC-2 model are more diffused as the Burgers vector of the partial is smaller by a factor of \(\sqrt{3}\). Finally, we verify using density functional theory (DFT) calculations that KC-2 is superior to KC-1. The DFT-generated GSFE (see Appendix C), plotted in Fig. 10, compares qualitatively well with the GSFE in Fig. 9(b). However, from a modeling perspective, the two models are equally valuable as they demonstrate the links between bicrystallography, GSFE, and the properties of interface dislocations.
## 4 A generalized Frenkel-Kontorova (GFK) model for 2D heterointerfaces
The goal of this section is to build a continuum model to predict structural relaxation in heterostructures subjected to arbitrary heterodeformations by generalizing the Frenkel-Kontorova model of Nam and Koshino [2017].
Figure 9: GSFE [\(\mathrm{meV}\,\mathrm{\SIUnitSymbolAngstrom}^{-2}\)] plots of \(\Sigma\)7 21.786 789\({}^{\circ}\) twisted BG, computed in LAMMPS using (a) the KC-1 parametrization at a \(26\%\) out-of-plane compression, and (b) the KC-2 parametrization at \(39\%\) out-of-plane compression.
In Section 5, we apply the GFK model to predict structural relaxation in heterodeformed BGs. The kinematics of the GFK model is inspired by the framework of large deformation crystal plasticity [Clayton, 2010, Admal et al., 2018, He and Admal, 2021, Joshi et al., 2022], wherein dislocations are defined with respect to a defect-free natural configuration.
### Kinematics
Consider an interface formed by two 2D (multi) lattices. For simplicity, we will ignore out-of-plane displacement and assume the lattices occupy regions \(\Omega_{\mathrm{t}}^{\mathrm{ref}}\) and \(\Omega_{\mathrm{b}}^{\mathrm{ref}}\) in the 2D Euclidean point space \(\mathbb{R}^{2}\).10 The variable \(\alpha\) is used to index the top (t) and bottom (b) layers, i.e., \(\alpha=\mathrm{t}\) or b. The role of a reference configuration is to measure displacements relative to a fixed configuration, and its choice should not affect the predictions of the model. Since our goal is to isolate and predict displacements associated with atomic reconstruction -- as opposed to large-scale deformation -- the reference configurations are chosen such that the lattices in \(\Omega_{\alpha}^{\mathrm{ref}}\) are twisted relative to each other or are independently strained uniformly such that they are marginally misaligned relative to a low energy moire configuration.11 From Section 2, we know that such reference configurations are not stable and undergo atomic reconstruction by nucleating interface dislocations. The goal of this section is to construct _frame-invariant_ kinematic measures that quantify the elastic and vdW energies responsible for the atomic reconstruction.
Footnote 10: The out-of-plane displacement during atomic reconstruction is an important feature recently studied by Dai et al. [2016]. While incorporating the out-of-plane displacement into our continuum model is conceptually straightforward, we chose to ignore it to better convey the GFK model.
Footnote 11: This choice is also motivated by the ‘tear and stack’ technique [Kim et al., 2016] to control the twist in a BG.
Let \(\mathbf{\phi}_{\alpha}:\Omega_{\alpha}^{\mathrm{ref}}\times[0,\infty]\to \mathbb{R}^{2}\) (\(\alpha=\mathrm{t},\mathrm{b}\)) denote time-dependent deformation maps associated with the atomic reconstructions of the respective lattices, measured relative to their reference configurations. The deformed configuration to which \(\mathbf{\phi}_{\alpha}\) maps will be denoted by \(\Omega_{\alpha}\). Adopting the convention of continuum mechanics, we use \(\mathbf{X}_{\alpha}\) to denote an arbitrary material point in \(\Omega_{\alpha}^{\mathrm{ref}}\). The gradients of the deformation maps are given by \(\mathbf{F}_{\alpha}:=\nabla\mathbf{\phi}_{\alpha}\). At this stage, it is useful to connect to the heterostrained moire example of Section 2, wherein a \(\Sigma 7\) moire twisted BG (\(21.786789^{\circ}\) twist relative to the AB stacking), when subjected to principal stretches of \(1.05\%\) and \(-0.2\%\), was observed to undergo atomic reconstruction under PBCs. For a continuum analog of this system, the reference configuration will reflect the atomistic system prior to the energy minimization, and \(\mathbf{\phi}_{\alpha}-\mathbf{X}_{\alpha}\) corresponds to the displacements due to energy minimization. If the two lattices in the reference configurations are allowed to relax in the absence of external loads, they will a) return to their respective planar strain-free configurations, and b) twist by an angle that minimizes the vdW energy (as a function of the misorientation angle). In plasticity, this relaxed configuration is commonly referred to as a _natural configuration_.
The idea of a natural configuration plays a central role in our framework as we will show that deformation measures defined with respect to the natural configuration are frame-invariant and independent of the choice of the reference configuration. Employing the language of crystal plasticity theories, we let \(\mathbf{K}_{\alpha}\) represent the map from the tangent space of \(\Omega_{\alpha}^{\mathrm{ref}}\) to that of the natural configuration. In this work, \(\mathbf{K}_{\alpha}\) is a constant tensor, and \(\mathbf{K}_{\alpha}^{-1}\) can be interpreted as the average deformation gradient (relative to the natural configuration) an experimentalist imposes.
Figure 10: Density functional theory-generated GSFE [meV Å\({}^{-2}\)] of a \(\Sigma 7\)\(21.786\,789^{\circ}\) twisted BG at a \(26\%\) out-of-plane compression.
The mapping from the reference configuration to the natural configuration is given by \(\mathbf{\kappa}_{\alpha}(\mathbf{X}_{\alpha}):=\mathbf{K}_{\alpha}\mathbf{X}_{\alpha}\). \(\Omega_{\alpha}^{\mathrm{n}}\) is used to denote a lattice in the natural configuration and points in \(\Omega_{\alpha}^{\mathrm{n}}\) will be denoted by \(\mathbf{Y}_{\alpha}\). Furthermore, we use \(\mathbf{\eta}_{\alpha}\) to denote the mapping from the natural configuration to the deformed configuration. By construction, we have
\[\mathbf{\phi}_{\alpha}=\mathbf{\eta}_{\alpha}\circ\mathbf{\kappa}_{\alpha}, \tag{8}\]
where \(\circ\) denotes function composition. From (8), the following relationship between the gradients of the deformation maps follows:
\[\mathbf{F}_{\alpha}=\mathbf{H}_{\alpha}\mathbf{K}_{\alpha},\text{ where }\mathbf{H}_{\alpha}:= \nabla\mathbf{\eta}_{\alpha}. \tag{9}\]
Note that the gradient in (9) is with respect to \(\mathbf{Y}_{\alpha}\). Unlike \(\mathbf{K}_{\alpha}\), \(\mathbf{H}_{\alpha}\) is a time-dependent field, and its inverse describes the relaxation of a local neighborhood of a point \(\mathbf{x}\in\Omega_{\alpha}\) in the absence of external loads. In the context of our heterostrained moire example, the natural configuration is the \(\Sigma 7\) moire since the two lattices are strain-free and the interfacial energy is minimum in neighborhoods of small hetero-strains and twists. Moreover, \(\mathbf{K}_{\mathrm{b}}\equiv\mathbf{I}\) and \(\mathbf{K}_{\mathrm{t}}^{-1}\) is equal to the biaxial strain, given in (4).
We will now construct frame-invariant kinematic measures to quantify the elastic and vdW energies. Since the elastic energy due to atomic reconstruction originates from the strains in the lattices measured with respect to a strain-free natural configuration, the relevant frame-invariant kinematic measure is the Cauchy-Green deformation tensor \(\mathbf{C}_{\alpha}:=\mathbf{H}_{\alpha}^{\mathrm{T}}\mathbf{H}_{\alpha}\). From (9), \(\mathbf{C}_{\alpha}\) can be written as
\[\mathbf{C}_{\alpha}=\mathbf{K}_{\alpha}^{-\mathrm{T}}\mathbf{F}^{\mathrm{T}}\mathbf{F}\mathbf{K}_ {\alpha}^{-1}. \tag{10}\]
On the other hand, the vdW energy originates from the interaction between lattices in the region \(\Omega_{\mathrm{t}}\cap\Omega_{\mathrm{b}}\). The vdW energy is described by the relative translation between the two lattices when allowed to relax to the natural configuration. Therefore, the vdW energy density at a point \(\mathbf{x}\in\Omega_{\mathrm{t}}\cap\Omega_{\mathrm{b}}\) will be expressed as a function of the relative vector
\[\mathbf{r}(\mathbf{x},t)=\mathbf{K}_{\mathrm{t}}\mathbf{X}_{\mathrm{t}}-\mathbf{K}_{\mathrm{b}} \mathbf{X}_{\mathrm{b}},\text{ where }\mathbf{X}_{\alpha}:=\mathbf{\phi}_{\alpha}^{-1}(\mathbf{x},t). \tag{11}\]
Summarizing, we have two frame-invariant kinematic measures, \(\mathbf{C}_{\alpha}\) and \(\mathbf{r}\), expressed in terms of the deformation maps \(\mathbf{\phi}_{\alpha}\) for given \(\mathbf{K}_{\alpha}\), that characterize the elastic and vdW energies, respectively. In the next section, we will describe the constitutive laws for the elastic and vdW energies in terms of \(\mathbf{C}_{\alpha}\) and \(\mathbf{r}\).
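As a quick sanity check, the frame-invariance of \(\mathbf{C}_{\alpha}\) in (10) can be verified numerically: superposing a rigid rotation \(\mathbf{R}\) on the deformation (\(\mathbf{F}\to\mathbf{R}\mathbf{F}\)) leaves \(\mathbf{C}_{\alpha}\) unchanged. The following minimal sketch uses placeholder values for \(\mathbf{F}\) and \(\mathbf{K}_{\alpha}\), not values taken from the simulations in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.eye(2) + 0.05 * rng.standard_normal((2, 2))  # placeholder deformation gradient
K = 1.02 * np.eye(2)                                # placeholder biaxial heterostrain K_alpha
th = 0.3                                            # arbitrary superposed rotation angle
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])

Kinv = np.linalg.inv(K)
C = lambda F_: Kinv.T @ F_.T @ F_ @ Kinv            # Eq. (10)

# C_alpha is invariant under F -> R F (frame-invariance).
assert np.allclose(C(F), C(R @ F))
```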
Figure 11: A schematic of the reference (left), natural (middle), and deformed (right) configurations of the GFK model.
### Constitutive law
In this section, we construct a frame-invariant energy functional for the GFK model. For prescribed average heterodeformations \((\mathbf{K}_{\alpha}^{-1})\), the total energy functional \(\mathscr{E}\) is additively decomposed as
\[\mathscr{E}[\mathbf{\phi}_{\mathrm{t}},\mathbf{\phi}_{\mathrm{b}}]=\mathscr{E}_{\mathrm{ el}}+\mathscr{E}_{\mathrm{vdW}}, \tag{12}\]
into elastic and interfacial energies. Since the elastic energy corresponds to elastic distortions relative to the natural configurations, we assume \(\mathscr{E}_{\mathrm{el}}\) to be an integral of an elastic energy density (per unit area in the natural configuration) \(e_{\mathrm{el}}\) over the natural configuration:
\[\mathscr{E}_{\mathrm{el}}[\mathbf{\phi}_{\mathrm{t}},\mathbf{\phi}_{\mathrm{b}}]=\sum _{\alpha=\mathrm{t,b}}\int_{\Omega_{\alpha}^{\mathrm{a}}}e_{\mathrm{el}}(\mathbf{E }_{\alpha};\alpha)\,d\mathbf{Y}_{\alpha},\qquad\text{where }\mathbf{E}_{\alpha}=(\mathbf{C}_{ \alpha}-\mathbf{I})/2 \tag{13}\]
is the frame-invariant Lagrangian strain tensor, and \(e_{\mathrm{el}}(\bullet;\alpha)\) is the elastic energy density of the \(\alpha\)-th layer. For example, a Saint Venant-Kirchhoff elastic energy density is of the form \(e_{\mathrm{el}}=\mathbb{C}_{\alpha}\mathbf{E}_{\alpha}\cdot\mathbf{E}_{\alpha}/2\), where \(\mathbb{C}_{\alpha}\) is the fourth-order elasticity tensor of the \(\alpha\)-lattice.
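To make the constitutive choice concrete, the following sketch evaluates the Saint Venant-Kirchhoff density for an isotropic \(\mathbb{C}_{\alpha}\) with Lamé constants \(\lambda\) and \(\mu\); the numerical values below are placeholders, not the material constants used in our simulations:

```python
import numpy as np

def svk_energy_density(H, lam, mu):
    """Saint Venant-Kirchhoff density e_el = C E . E / 2 for an isotropic
    elasticity tensor, with E = (H^T H - I)/2 as in Eq. (13)."""
    E = 0.5 * (H.T @ H - np.eye(2))                  # Lagrangian strain
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.sum(E * E)

# Example: a small simple shear relative to the natural configuration.
H = np.array([[1.0, 0.01],
              [0.0, 1.00]])
print(svk_energy_density(H, lam=3.5, mu=9.0))        # placeholder Lame constants
```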
The interaction energy term \(\mathscr{E}_{\mathrm{vdW}}\) measures the changes in the vdW energy due to relative translations between the lattices in the natural configuration. Since the lattices interact in the overlapping region \(\Omega_{\mathrm{t}}\cap\Omega_{\mathrm{b}}\) of the deformed configuration, we express \(\mathscr{E}_{\mathrm{vdW}}\) as an integral over \(\Omega_{\mathrm{t}}\cap\Omega_{\mathrm{b}}\) of a vdW energy density \(e_{\mathrm{vdW}}\) -- measured per unit area in the natural configuration. From an atomistic viewpoint, \(e_{\mathrm{vdW}}\) is the GSFE density introduced in Section 2, and is expressed as a function of the frame-invariant relative vector \(\mathbf{r}\) introduced in (11). Therefore,
\[\mathscr{E}_{\mathrm{vdW}}[\mathbf{\phi}_{\mathrm{t}},\mathbf{\phi}_{\mathrm{b}}]= \frac{1}{2}\sum_{\alpha=\mathrm{t,b}}\int_{\Omega_{\mathrm{t}}\cap\Omega_{ \mathrm{b}}}(\det\mathbf{H}_{\alpha})^{-1}e_{\mathrm{vdW}}(\mathbf{r}(\mathbf{x}_{\alpha}) )\,d\mathbf{x}_{\alpha}. \tag{14}\]
Note that the factor \((\det\mathbf{H}_{\alpha})^{-1}\) is necessary because the integration is over the deformed configuration as opposed to the natural configuration. Moreover, since the sum over \(\alpha\) counts the same interfacial energy twice, it is split evenly between the two lattices, leading to the factor of \(1/2\). From Section 2, we know that \(e_{\mathrm{vdW}}\) has the periodicity of the DSCL corresponding to the natural configuration. For example, vdW energy densities for the AB stacked-\(\Sigma 1\) and the \(\Sigma 7\) natural configurations have the form
\[e_{\mathrm{vdW}}(\mathbf{r})=\pm 2v_{0}\sum_{p=1}^{3}\cos\left(2\pi\mathbf{d}^{p}\cdot(\mathbf{r}+\mathbf{t})\right)+c, \tag{15}\]

where \(\mathbf{d}^{1}\) and \(\mathbf{d}^{2}\) are basis vectors of the reciprocal lattice, \(\mathbf{d}^{3}=-\left(\mathbf{d}^{1}+\mathbf{d}^{2}\right)\), \(\mathbf{t}\) is a translation vector, and \(c\) and \(v_{0}\) are constants. By comparing (15) to the atomistic/first-principles GSFE densities plotted in Figs. 2 and 10, we obtain \(v_{0}\) and \(\mathbf{t}\) corresponding to the \(\Sigma 1\) and \(\Sigma 7\) configurations. Plots of the resulting \(e_{\mathrm{vdW}}\), shown in Fig. 12, compare well with those in Fig. 10.
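As an illustration, (15) can be evaluated directly. In the sketch below, the reciprocal basis vectors \(\mathbf{d}^{1},\mathbf{d}^{2}\) and the parameters \(v_{0}\), \(\mathbf{t}\), and \(c\) are placeholders, to be replaced by the values fitted to the GSFE data of Figs. 2 and 10:

```python
import numpy as np

def e_vdw(r, d1, d2, v0, t, c, sign=1.0):
    """vdW energy density of Eq. (15); d1, d2 are reciprocal basis
    vectors and d3 = -(d1 + d2)."""
    d3 = -(d1 + d2)
    s = sum(np.cos(2.0 * np.pi * np.dot(d, r + t)) for d in (d1, d2, d3))
    return sign * 2.0 * v0 * s + c

# Placeholder inputs for illustration only.
d1, d2 = np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3.0) / 2.0])
print(e_vdw(np.array([0.1, 0.2]), d1, d2, v0=1.0, t=np.zeros(2), c=0.0))
```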
Next, we propose an evolution law for the unknown fields, \(\mathbf{\phi}_{\mathrm{t}}\) and \(\mathbf{\phi}_{\mathrm{b}}\), as a gradient flow of \(\mathscr{E}\):
\[m\dot{\mathbf{\phi}}_{\alpha}=-\mathbf{\delta}_{\mathbf{\phi}_{\alpha}}\mathscr{E},\quad \alpha=\mathrm{t,b} \tag{16}\]
where \(m\) is a prescribed mobility, and \(\mathbf{\delta}_{\mathbf{\phi}_{\alpha}}\) denotes variation with respect to \(\mathbf{\phi}_{\alpha}\).
Figure 12: Plots of \(e_{\mathrm{vdW}}\) for (a) AB-stacked \(\Sigma 1\) and (b) \(\Sigma 7\) configurations, computed using (15) and plotted such that the minimum is zero.
### Derivation of the governing equations of the GFK model
In this section, we derive the governing equations of the GFK model by calculating the variational derivative in (16). The derivation is applicable to finite systems, a notable departure from earlier works, which focused on infinite systems modeled using PBCs. In addition to the critical role vdW energy plays in the structural relaxation of 2D heterostructures, we will show that it manifests as surface tension, which contributes towards configurational forces on the two lattices.
To compute the variation with respect to \(\mathbf{\phi}_{\alpha}\), we transform the integrals in (13) and (14) to the reference configurations. We begin by rewriting the elastic energy in (13) by noting that \(d\mathbf{Y}_{\alpha}=(\det\mathbf{K}_{\alpha})d\mathbf{X}_{\alpha}\):
\[\mathscr{E}_{\rm el}[\mathbf{\phi}_{\rm t},\mathbf{\phi}_{\rm b}]=\sum_{\alpha={\rm t,b}}\int_{\Omega_{\rm r}^{\rm ref}}e_{\rm el}(\mathbf{E}_{\alpha};\alpha)J_{\alpha }\,d\mathbf{X}_{\alpha}, \tag{17}\]
where \(J_{\alpha}:=\det\mathbf{K}_{\alpha}\). Taking the variation of \(\mathscr{E}_{\rm el}\) in (17) with respect to \(\mathbf{\phi}_{\alpha}\), we obtain
\[-\delta_{\mathbf{\phi}_{\alpha}}\mathscr{E}_{\rm el}=J_{\alpha}\,{\rm Div}(\mathbf{P}_{\alpha})\text{ in }\Omega_{\alpha}^{\rm ref},\text{ where }\mathbf{P}_{\alpha}:=\mathbf{H}_{\alpha}\nabla e_{\rm el}\,\mathbf{K}_{\alpha}^{-\rm T}. \tag{18}\]
The tensor \(\mathbf{P}_{\alpha}\) is the 2D analog of the elastic Piola-Kirchhoff stress, which measures force in \(\Omega_{\alpha}\) per unit length in \(\Omega_{\alpha}^{\rm ref}\). In addition, the variational derivative also yields the usual expression for the traction on the boundary \(\Gamma_{\alpha}\) of \(\Omega_{\alpha}\) as \(\mathbf{P}_{\alpha}\mathbf{n}_{\alpha}\), where \(\mathbf{n}_{\alpha}\) is an outward unit vector normal to \(\Gamma_{\alpha}^{\rm ref}\).
Compared to (18), calculating the variation of \(\mathscr{E}_{\rm vdW}\) in (14) is a delicate exercise because a) it involves the inverse function \(\mathbf{\phi}_{\alpha}^{-1}\), and b) the domain of integration in (14) is a part of the deformed configuration and is therefore sensitive to \(\mathbf{\phi}_{\alpha}\). We begin with \(\delta_{\mathbf{\phi}_{\rm t}}\mathscr{E}_{\rm vdW}\). To eliminate the dependence on \(\mathbf{\phi}_{\rm t}^{-1}\), we transform the two domains of integration in (14) to \(\Delta_{\rm t}^{\rm ref}:=\mathbf{\phi}_{\rm t}^{-1}(\Omega_{\rm t}\cap\Omega_{\rm b})\) in the reference configuration by noting that \(d\mathbf{x}_{\rm t}=d\mathbf{x}_{\rm b}=(\det\mathbf{F}_{\rm t})d\mathbf{X}_{\rm t}\), resulting in
\[\mathscr{E}_{\rm vdW}[\mathbf{\phi}_{\rm t},\mathbf{\phi}_{\rm b}]=\frac{J_{\rm t}}{2}\int_{\Delta_{\rm t}^{\rm ref}}\left(1+\underbrace{\det(\mathbf{H}_{\rm t}\mathbf{H}_{\rm b}^{-1})}_{\gamma_{\rm t}}\right)e_{\rm vdW}(\mathbf{r}_{\rm t}(\mathbf{X}_{\rm t}))\,d\mathbf{X}_{\rm t}, \tag{19}\]
where \(\mathbf{r}_{\alpha}(\mathbf{X}_{\alpha}):=\mathbf{r}(\mathbf{x})|_{\mathbf{x}=\mathbf{\phi}_{\alpha}( \mathbf{X}_{\alpha})}\), i.e.
\[\mathbf{r}_{\rm t}(\mathbf{X}_{\rm t})=\mathbf{K}_{\rm t}\mathbf{X}_{\rm t}-\mathbf{K}_{\rm b}(\bm {\phi}_{\rm b}^{-1}\circ\mathbf{\phi}_{\rm t}(\mathbf{X}_{\rm t}))\text{ and }\mathbf{r}_{\rm b}(\mathbf{X}_{\rm b})=\mathbf{K}_{\rm t}(\mathbf{\phi}_{\rm t}^{-1}\circ\mathbf{ \phi}_{\rm b}(\mathbf{X}_{\rm b}))-\mathbf{K}_{\rm b}\mathbf{X}_{\rm b}. \tag{20}\]
From (20), we note that the integrand in (19) depends on \(\mathbf{\phi}_{\rm t}\) and its gradient but not its inverse, as desired. The variational derivative of (19) with respect to \(\mathbf{\phi}_{\rm t}\) now follows:
\[-\delta_{\mathbf{\phi}_{\rm t}}\mathscr{E}_{\rm vdW}=J_{\rm t}\left[\frac{1+\gamma _{\rm t}}{2}\mathbf{H}_{\rm b}^{-\rm T}\nabla e_{\rm vdW}+{\rm Div}\left(\frac{ \gamma_{\rm t}}{2}e_{\rm vdW}\mathbf{F}_{\rm t}^{-\rm T}\right)\right]\text{ in }\Delta_{\rm t}^{\rm ref}. \tag{21}\]
Figure 13: A schematic showing two types of tractions (purple and green arrows) on the boundary of the overlapping region, \(\Delta_{\rm t}\subset\Omega_{\rm t}\), due to surface tension associated with vdW interactions in a finite 2D heterostructure. The traction forces balance the thermodynamic driving forces that act to increase the region of overlap. The purple arrows are compressive forces associated with the overlapping region’s tendency to dilate to increase its area. The green arrows are configurational forces on \(S_{\rm t}\) that balance the thermodynamic forces conjugate to areal changes in \(\Delta_{\rm t}\) due to the translation of \(\Omega_{\rm t}\) into \(\Omega_{\rm b}\).
along with two kinds of traction forces
\[\frac{\gamma_{\mathrm{t}}}{2}e_{\mathrm{vdW}}\mathbf{F}_{\mathrm{t}}^{- \mathrm{T}}\mathbf{n}_{\mathrm{t}}\text{ on }\partial\Delta_{\mathrm{t}}^{\mathrm{ref}}, \tag{22a}\] \[-\frac{1+\gamma_{\mathrm{t}}}{2}e_{\mathrm{vdW}}\mathbf{F}_{\mathrm{t}} ^{-\mathrm{T}}\mathbf{n}_{\mathrm{t}}\text{ on }S_{\mathrm{t}}:=\partial\Delta_{\mathrm{t}}^{\mathrm{ref}}- \Gamma_{\mathrm{t}}^{\mathrm{ref}}. \tag{22b}\]
The expression \(\nabla e_{\mathrm{vdW}}\) in (21) seeks to increase areas of high commensurability, and is responsible for the formation of interface dislocations. The term \(e_{\mathrm{vdW}}\gamma_{\mathrm{t}}/2\) in (21) is the surface tension/pressure that is conjugate to areal changes in \(\Omega_{\mathrm{t}}\), and (22a) is the corresponding traction. The traction in (22a) acts to counter the thermodynamic driving force that tends to dilate \(\Omega_{\mathrm{t}}\) -- since \(e_{\mathrm{vdW}}\) is negative -- to maximize the area of overlap with \(\Omega_{\mathrm{b}}\). Therefore, the traction in (22a) is compressive, as the purple arrows in Fig. 13 depict. On the other hand, the traction in (22b), shown in green in Fig. 13, is a configurational force that acts on the part of \(\partial\Delta_{\mathrm{t}}^{\mathrm{ref}}\) that belongs to the interior of \(\Omega_{\mathrm{t}}\). It works to oppose the thermodynamic driving force that translates \(\Omega_{\mathrm{t}}\) into \(\Omega_{\mathrm{b}}\), thereby increasing the area of overlap.12 Notice that the boundary \(S_{\mathrm{t}}\) experiences both tractions mentioned in (22), as shown in Fig. 13. Therefore, the total traction on \(\partial\Delta_{\mathrm{t}}\) due to vdW interactions is given by
Footnote 12: Mathematically, the configurational force on \(S_{\mathrm{t}}\) arises due to the dependence of \(\Delta_{\mathrm{t}}^{\mathrm{ref}}\) (the domain of integration in (19)) on \(\mathbf{\phi}_{\mathrm{t}}\).
\[-e_{\mathrm{vdW}}\mathbf{F}_{\mathrm{t}}^{-\mathrm{T}}\mathbf{n}_{ \mathrm{t}}\text{ on }S_{\mathrm{t}},\text{ and } \tag{23a}\] \[\frac{\gamma_{\mathrm{t}}}{2}e_{\mathrm{vdW}}\mathbf{F}_{\mathrm{t}} ^{-\mathrm{T}}\mathbf{n}_{\mathrm{t}}\text{ on }\partial\Delta_{\mathrm{t}}^{\mathrm{ref}}-S_{\mathrm{t}}. \tag{23b}\]
Substituting (18) and (21) into (16) results in a governing equation for \(\Omega_{\mathrm{t}}\):
\[m_{\mathrm{t}}\dot{\mathbf{\phi}}_{\mathrm{t}}=\begin{cases}\mathrm{ Div}\left(\mathbf{P}_{\mathrm{t}}+\frac{\gamma_{\mathrm{t}}}{2}e_{\mathrm{vdW}}\mathbf{F}_{ \mathrm{t}}^{-\mathrm{T}}\right)+\frac{1+\gamma_{\mathrm{t}}}{2}\mathbf{H}_{ \mathrm{b}}^{-\mathrm{T}}\nabla e_{\mathrm{vdW}}(\mathbf{r}_{\mathrm{t}})\text{ in } \Delta_{\mathrm{t}},\\ \mathrm{Div}(\mathbf{P}_{\mathrm{t}})\text{ in }\Omega_{\mathrm{t}}-\Delta_{ \mathrm{t}},\end{cases} \tag{24}\]
where \(m_{\alpha}:=J_{\alpha}^{-1}m\), along with traction boundary conditions in (23). Similarly, the governing equation for \(\Omega_{\mathrm{b}}\) is given by
\[m_{\mathrm{b}}\dot{\mathbf{\phi}}_{\mathrm{b}}=\begin{cases}\mathrm{ Div}\left(\mathbf{P}_{\mathrm{b}}+\frac{\gamma_{\mathrm{b}}}{2}e_{\mathrm{vdW}}\mathbf{F}_{ \mathrm{b}}^{-\mathrm{T}}\right)-\frac{1+\gamma_{\mathrm{b}}}{2}\mathbf{H}_{ \mathrm{t}}^{-\mathrm{T}}\nabla e_{\mathrm{vdW}}(\mathbf{r}_{\mathrm{b}})\text{ in } \Delta_{\mathrm{b}},\\ \mathrm{Div}(\mathbf{P}_{\mathrm{b}})\text{ in }\Omega_{\mathrm{b}}-\Delta_{ \mathrm{b}},\end{cases} \tag{25}\]
where \(\gamma_{\mathrm{b}}:=\det(\mathbf{H}_{\mathrm{b}}\mathbf{H}_{\mathrm{t}}^{-1})\), along with corresponding traction boundary conditions. A notable feature of the governing equations is that the total stress now includes a contribution from surface tension, which was absent in previous Frenkel-Kontorova models that were developed for infinite systems. While the role of surface tension may be ignored for infinite systems, we expect it to play an important role in finite systems, wherein sliding between the constituent 2D lattices is enhanced. It is worth pointing out that the two key features of our model -- surface tension and frame-invariance13 -- are a consequence of the model's geometrically nonlinear kinematic framework.
Footnote 13: Indeed, under an arbitrary superposed rigid body displacement given by a constant rotation tensor \(\mathbf{R}\) and a constant vector \(\mathbf{c}\), the fields transform to
\[\bar{\mathbf{\phi}}_{\alpha}=\mathbf{R}\mathbf{\phi}_{\alpha}+\mathbf{c};\quad\bar{\mathbf{F}}_{\alpha}=\mathbf{R}\mathbf{F}_{\alpha}(\implies\bar{\mathbf{H}}_{\alpha}=\mathbf{R}\mathbf{H}_{\alpha});\quad\bar{\mathbf{\sigma}}_{\alpha}=\mathbf{\sigma}_{\alpha};\quad\bar{\mathbf{K}}_{\alpha}=\mathbf{K}_{\alpha};\quad\bar{\mathbf{r}}_{\alpha}=\mathbf{r}_{\alpha}, \tag{26}\]
and continue to satisfy (27).
Next, we will focus our attention on using the GFK model to simulate atomic reconstruction in infinite 2D heterostructures and compare its predictions with atomistic simulation results of Section 2. To this end, we simplify our model for numerical implementation by ignoring the surface tension terms and assuming \(J_{\alpha}\approx 1\), resulting in
\[m\dot{\mathbf{\phi}}_{\mathrm{t}} =\mathrm{Div}(\mathbf{P}_{\mathrm{t}})+\mathbf{H}_{\mathrm{b}}^{-\mathrm{ T}}\nabla e_{\mathrm{vdW}}(\mathbf{r}_{\mathrm{t}}), \tag{27a}\] \[m\dot{\mathbf{\phi}}_{\mathrm{b}} =\mathrm{Div}(\mathbf{P}_{\mathrm{b}})-\mathbf{H}_{\mathrm{t}}^{-\mathrm{ T}}\nabla e_{\mathrm{vdW}}(\mathbf{r}_{\mathrm{b}}). \tag{27b}\]
Using (20) and recalling from (9) that \(\mathbf{H}_{\alpha}=\mathbf{F}_{\alpha}\mathbf{K}_{\alpha}^{-1}\), the right-hand-side of (27) can be expressed entirely in terms of the unknown \(\mathbf{\phi}_{\alpha}\), its gradient, and its inverse. However, the dependence on the inverse is impractical for numerical computation. Therefore, we resort to approximating14
Footnote 14: \(\mathbf{r}_{\text{t}}\) and \(\mathbf{r}_{\text{b}}\) can be expressed as

\[\mathbf{r}_{\text{t}}=\mathbf{K}_{\text{t}}\mathbf{X}_{\text{t}}-\mathbf{K}_{\text{b}}\mathbf{X}_{\text{b}}=(\mathbf{K}_{\text{t}}-\mathbf{K}_{\text{b}})\mathbf{X}_{\text{t}}+\mathbf{K}_{\text{b}}(\mathbf{X}_{\text{t}}-\mathbf{X}_{\text{b}}),\] (28a)
\[\mathbf{r}_{\text{b}}=\mathbf{K}_{\text{t}}\mathbf{X}_{\text{t}}-\mathbf{K}_{\text{b}}\mathbf{X}_{\text{b}}=\mathbf{K}_{\text{t}}(\mathbf{X}_{\text{t}}-\mathbf{X}_{\text{b}})+(\mathbf{K}_{\text{t}}-\mathbf{K}_{\text{b}})\mathbf{X}_{\text{b}}.\] (28b)

Since \(\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{t}})=\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{b}})\), it follows that

\[\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{t}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{t}})=\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{b}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{t}})\approx\mathbf{F}_{\text{b}}(\mathbf{X}_{\text{b}}-\mathbf{X}_{\text{t}}),\]
\[\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{b}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{b}})=\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{b}})-\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{t}})\approx\mathbf{F}_{\text{t}}(\mathbf{X}_{\text{b}}-\mathbf{X}_{\text{t}}),\]

which imply

\[\mathbf{X}_{\text{t}}-\mathbf{X}_{\text{b}}\approx-\mathbf{F}_{\text{b}}^{-1}(\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{t}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{t}})),\] (29a)
\[\mathbf{X}_{\text{t}}-\mathbf{X}_{\text{b}}\approx-\mathbf{F}_{\text{t}}^{-1}(\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{b}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{b}})).\] (29b)

Equation (28a) and the approximation (29a) yield (30a). Similarly, (28b) and (29b) result in (30b).

\[\mathbf{r}_{\text{t}}\approx(\mathbf{K}_{\text{t}}-\mathbf{K}_{\text{b}})\mathbf{X}_{\text{t}}-\mathbf{K}_{\text{b}}\mathbf{F}_{\text{b}}^{-1}(\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{t}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{t}})), \tag{30a}\]
\[\mathbf{r}_{\text{b}}\approx(\mathbf{K}_{\text{t}}-\mathbf{K}_{\text{b}})\mathbf{X}_{\text{b}}-\mathbf{K}_{\text{t}}\mathbf{F}_{\text{t}}^{-1}(\mathbf{\phi}_{\text{t}}(\mathbf{X}_{\text{b}})-\mathbf{\phi}_{\text{b}}(\mathbf{X}_{\text{b}})). \tag{30b}\]

The spatial gradients in (27) are computed using FFT, and the solution is marched forward in time using explicit Runge-Kutta (RK) time integration with a fixed time step \(\delta t\), resulting in the following discretized equations:
\[\boldsymbol{\phi}_{\alpha}^{n+1}=\boldsymbol{\phi}_{\alpha}^{n}+\frac{1}{6m}\left(\boldsymbol{k}_{\alpha 1}+2\boldsymbol{k}_{\alpha 2}+2\boldsymbol{k}_{\alpha 3}+\boldsymbol{k}_{\alpha 4}\right),\text{ where}\] \[\boldsymbol{k}_{\alpha 1}=\delta t\boldsymbol{f}_{\alpha}|_{\boldsymbol{\phi}=\boldsymbol{\phi}^{n}},\quad\boldsymbol{k}_{\alpha 2}=\delta t\boldsymbol{f}_{\alpha}|_{\boldsymbol{\phi}=\boldsymbol{\phi}^{n}+\frac{\boldsymbol{k}_{\alpha 1}}{2}},\quad\boldsymbol{k}_{\alpha 3}=\delta t\boldsymbol{f}_{\alpha}|_{\boldsymbol{\phi}=\boldsymbol{\phi}^{n}+\frac{\boldsymbol{k}_{\alpha 2}}{2}},\quad\boldsymbol{k}_{\alpha 4}=\delta t\boldsymbol{f}_{\alpha}|_{\boldsymbol{\phi}=\boldsymbol{\phi}^{n}+\boldsymbol{k}_{\alpha 3}}.\]
Here, \(\phi_{\alpha}^{n}:=\phi_{\alpha}(\cdot,t_{n})\), \(\boldsymbol{f}_{\alpha}\) represents the right-hand-side of (27), and \(\boldsymbol{f}_{\alpha}|_{\boldsymbol{\phi}=\boldsymbol{\phi}^{n}}\) denotes the evaluation of \(\boldsymbol{f}_{\alpha}\) using \(\boldsymbol{\phi}_{\alpha}^{n}\). The spatial derivatives in \(\boldsymbol{f}_{\alpha}\) are computed using FFT. The simulation domain was discretized using a \(128\times 128\) grid, and \(\delta t=0.1\,\mathrm{sec}\) was the time step size. The spatial discretization was chosen such that the width of the interface dislocations is reasonably resolved, and the temporal discretization is fixed to ensure the numerical scheme remains stable.
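A minimal sketch of this update is given below. Here \(\texttt{f}\) stands for the right-hand side of (27) and must be supplied by the caller; the stage updates mirror the discretized equations above, in which the mobility \(m\) enters only in the final combination, as printed:

```python
import numpy as np

def spectral_grad(field, L):
    """Gradient of a periodic scalar field on an N x N grid of size L x L,
    computed with FFT (used for the spatial derivatives in f)."""
    N = field.shape[0]
    k = 2j * np.pi * np.fft.fftfreq(N, d=L / N)
    fhat = np.fft.fft2(field)
    gx = np.real(np.fft.ifft2(fhat * k[:, None]))   # derivative along axis 0
    gy = np.real(np.fft.ifft2(fhat * k[None, :]))   # derivative along axis 1
    return gx, gy

def rk4_step(phi, f, dt, m):
    """One explicit RK4 update of m * dphi/dt = f(phi), as discretized above."""
    k1 = dt * f(phi)
    k2 = dt * f(phi + k1 / 2)
    k3 = dt * f(phi + k2 / 2)
    k4 = dt * f(phi + k3)
    return phi + (k1 + 2 * k2 + 2 * k3 + k4) / (6 * m)
```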
All simulations are run long enough to ensure the elastic and the vdW energies converge. Since \(\boldsymbol{\phi}_{\alpha}^{0}\) is the identity map (zero initial displacement), the elastic energy at \(t=0\) is zero and the vdW energy is the only contributor to the total energy. As the simulation progresses, the elastic energy increases and the vdW energy decreases, such that the total energy monotonically decreases.
### Comparison with atomistics
We will now present continuum simulations and compare them to atomistic simulations of heterodeformed BGs, discussed in Section 2. The first column of Fig. 14 shows plots of the displacement magnitude and the total energy density in a \(0.2992634^{\circ}\) small-twist BG. They compare well with the corresponding plots from the atomistic simulation, shown in the second column. The area enclosed by the red dashed line is the simulation domain. The plots are presented on an extended domain to highlight the triangular network of dislocations.
Figure 14: A comparison of structural relaxation predicted by continuum and atomistics simulations of a \(0.299\,263\,4^{\circ}\) small-twist BG. The units of energy density and displacement are \(\mathrm{meV\,\AA}^{-2}\) and \(\mathrm{\AA}\), respectively. The area enclosed by the red dashed line is the simulation domain.
Fig. 15 shows continuum simulation results of a \(21.956\,865^{\circ}\) large-twist BG using vdW energy densities corresponding to the KC-1 (first column) and the KC-2 models (second column). The color density plots of the total energy density in Figs. 15(a) and 15(b) highlight the honeycomb and triangular dislocation networks and match well with those from atomistic simulations (see Fig. 4). As expected, the energy density in the domain interiors is zero as they correspond to the low-energy \(AB\) and \(\Sigma 7\) stackings. Figs. 15(c) and 15(d) compare the displacement jumps in the atomistic and continuum simulations.15
Footnote 15: Recall from footnote 5 that the calculation of the Burgers vectors from the diffused displacement jump of atomistic simulations is approximate. The Burgers vector can be computed exactly from the simulations of the GFK model by considering a Burgers circuit formed by directed lines \(\mathbf{l}_{\mathrm{t}}=\mathbf{l}\) and \(\mathbf{l}_{\mathrm{b}}=-\mathbf{l}\) in the deformed configurations of the top and bottom lattices, respectively. The directed line \(\mathbf{l}\) crosses a dislocation line and connects centers of two adjacent \(\Sigma 7/\Sigma 1\) stackings and back. By construction, the Burgers circuit is arbitrarily thin, and its normal lies in the interface. Under this setting, the Burgers vector can be measured as
\[\int_{\mathbf{l}}(\mathbf{H}_{\mathrm{t}}^{-1}-\mathbf{H}_{\mathrm{b}}^{-1})\,d\mathbf{l}.\]
We have confirmed using the above equation that all simulations presented in this section recover the bicrystallography-predicted Burgers vector.
Figure 15: Structural relaxation in a \(21.956\,865^{\circ}\) large-twist BG using the KC-1 and KC-2 parametrizations. The energy density plots in (a) and (b) highlight the honeycomb and the triangular network of interface dislocations, corresponding to the KC-1 and KC-2 models, respectively. (c) and (d) compare the displacement across the dashed lines predicted by the atomistic and the FK models.
Fig. 16 shows continuum simulation results of a heterostrained \(21.786789^{\circ}\) (\(\Sigma 7\)) twisted BG using the vdW energy density of the KC-2 model. The displacement magnitude plotted in Fig. 16(b) compares well with the plot of atomistic displacement magnitude in Fig. 16(a). We note that the maximum displacement in the continuum simulation is smaller compared to that in the atomistic simulation. The dislocation network is not as conspicuous in the energy density plot in Fig. 16(c) as it was in the earlier simulations. We attribute this feature to the diffused dislocation network noted in Fig. 16(d). We note that the magnitude of the energy density shown in Fig. 16(c) does not match that in Figs. 16(a) and 16(b) due to the atomic energy density variation of the \(\Sigma 7\) configuration, as noted in Section 2.2.
## 6 Summary and conclusions
Tuning quantum mechanical properties with atomic-scale precision is at the core of scientific efforts geared towards ushering in the second quantum revolution (Dowling and Milburn, 2003). New vdW materials and heterostructures are among the key novel materials being explored in this regard, and they provide tremendous opportunities for the field of straintronics. The design and development of vdW heterostructures with tailored properties hinge on the ability to efficiently parse heterostrains and predict the properties of the resulting moire superlattices. Within this context, this paper focuses on predicting atomic reconstruction in vdW heterostructures efficiently by developing a generalized Frenkel-Kontorova (GFK) model.
Motivated by dislocations-mediated structural relaxation in a twisted BG, the development of the model was spurred by the following questions -- a) under what heterodeformations does a vdW heterostructure undergo structural relaxation?; b) is the relaxation dislocations-mediated?; and c) how are dislocations defined in heterostructures? In our study, large-twist BG serves as a surrogate for heterostructures. Noting that the cusp-like local minimum at an angle of \(21.786789^{\circ}\) in the plot of interface energy versus the twist angle of a BG is a signature of defect nucleation, we hypothesized that a heterostrained \(21.786789^{\circ}\) large-twist BG will undergo dislocations-mediated atomic reconstruction. Using atomistic simulations of \(21.786789^{\circ}\) large-twist BG subjected to small heterotwists and heterostrains, we confirmed our hypothesis and probed the above questions. The following key observations were made in our atomistic simulations:
1. Structural relaxation occurs via strain localization along a network of lines, which suggests it is dislocations-mediated. More interestingly, unlike the small twist case, the measured displacement jump/Burgers vectors were smaller than the smallest lattice vector of graphene.
2. Similar to the small-twist case, structural relaxation is characterized by regions of low-energy stacking interspersed by line defects. The defect-free \(21.786789^{\circ}\) stacking is the analog of the low energy AB-stacking observed in a small-twist BG.
Figure 16: Simulation results of a heterostrained \(21.786789^{\circ}\) large-twist BG.
To reveal the crystallographic origins of the observed dislocations, we explored the definition of an interface dislocation. Using SNF bicrystallography, which employs the Smith normal form for integer matrices, we showed that a heterointerface is translationally invariant with respect to translations in the DSCL. In other words, the GSFE of the defect-free \(21.786789^{\circ}\) twisted BG is periodic with respect to the DSCL, which implies the Burgers vector of interface dislocations belongs to the DSCL. The GSFE from atomistics, and its periodicity inferred from SNF bicrystallography, are used to construct the interfacial energy of the GFK model.
Inspired by crystal plasticity models, the GFK model includes three configurations -- reference, natural, and deformed. The defect-free \(21.786789^{\circ}\) BG serves as the stress-free natural configuration, and the reference configuration is the natural configuration subjected to a uniform heterodeformation. By prescribing the constitutive law (interfacial energy and elastic energy) with respect to the natural configuration, the GFK model is rendered frame-invariant. The GFK model was used to simulate various heterodeformed BGs, and it was validated by comparing its predictions to those from atomistics.
We conclude by emphasizing the immense potential of the GFK model to probe the enormous heterostructure-heterostrain space for correlated electron physics. Although the model is classical and focuses on structural prediction, it can serve as a) a workhorse for predicting structural relaxation in inhomogeneously strained heterointerfaces, and b) a predictor for structural relaxation under uniform deformation, which can be further corrected using machine learning-based first-principles calculations (Pathrudkar et al., 2023).
## 7 Acknowledgements
NCA and TA would like to acknowledge support from the National Science Foundation Grant NSF-MOMS-2239734 with S. Qidwai as the program manager. ASB and CW would like to acknowledge support through grant DE-SC0023432 funded by the U.S. Department of Energy, Office of Science. ASB and CW also acknowledge computational resource support from UCLA's Institute for Digital Research and Education (IDRE), and the National Energy Research Scientific Computing Center (NERSC awards BES-ERCAP0025205 and BES-ERCAP0025168), a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
**CRediT author statement**
**Md Tusher Ahmed:** Software, Validation, Formal analysis, Investigation, Data Curation, Writing-Original Draft, Visualization. **Chenhaoyue Wang:** Software, Investigation. **Amartya S. Banerjee:** Software, Investigation, Resources, Writing-Review & Editing, Supervision, Funding acquisition. **Nikhil Chandra Admal:** Conceptualization, Methodology, Software, Formal analysis, Investigation, Resources, Writing-Review & Editing, Supervision, Project Management, Funding acquisition.
## Appendix A Coincidence relation between two lattices
In this section, we will develop an algorithm to enumerate heterodeformations that result in moire superlattices. Let \(\mathscr{A}\) and \(\mathscr{B}\) denote two 2D lattices with structure matrices \(\mathbf{A}\) and \(\mathbf{B}\), respectively. It is easy to see that the lattices coincide on a moire superlattice if and only if the transition matrix, \(\mathbf{T}:=\mathbf{A}^{-1}\mathbf{B}\), is rational. However, this condition is invariably not satisfied and a heterodeformation is required to form a moire supercell. Therefore, we are interested in all distortions \(\mathbf{F}\) of lattice \(\mathscr{A}\), such that the deformed lattice shares a moire supercell with \(\mathscr{B}\). In other words, we would like to compute all \(\mathbf{F}\) such that the transition matrix
\[\mathbf{T}=\mathbf{B}^{-1}\mathbf{F}\mathbf{A}\text{ is rational.} \tag{A.1}\]
Moreover, since large elastic strains are energetically unfavorable, we are only interested in heterodeformations that lead to small stretches. Heterotwists are included in this search as they cost no elastic energy.
A general solution to (A.1) is given by the following theorem (see Admal et al. (2022)):
**Theorem 2**.: _Let \(\mathscr{A}\) and \(\mathscr{B}\) be two 2D lattices forming a heterostructure, and \(\mathbf{F}\) an in-plane linear transformation. Then, \(\mathscr{A}\cap\mathbf{F}\mathscr{B}\) is a 2D moire superlattice if and only if there exist lattice vectors \(\mathbf{q}_{1}\) and \(\mathbf{r}_{1}\) in \(\mathscr{B}\), and \(\mathbf{q}_{2}\) and \(\mathbf{r}_{2}\) in \(\mathscr{A}\) such that_
\[\mathbf{q}_{2}=\alpha\mathbf{F}\mathbf{q}_{1},\quad\mathbf{r}_{2}=\beta\mathbf{F}\mathbf{r}_{1}\] (A.2)
_for some rationals \(\alpha\) and \(\beta\). Moreover, \(\mathbf{F}\) is given by_
\[\mathbf{F}=\frac{1}{\alpha}\mathbf{q}_{2}\otimes\mathbf{\mathscr{G}}_{1}+\frac{1}{\beta} \mathbf{r}_{2}\otimes\mathbf{\tau}_{1},\] (A.3)
_where \(\mathbf{\mathscr{G}}_{1}\), \(\mathbf{\tau}_{1}\in\mathscr{B}^{*}\) (dual/reciprocal lattice of \(\mathscr{B}\)) such that \(\mathbf{\mathscr{G}}_{1}\cdot\mathbf{q}_{1}=\mathbf{\tau}_{1}\cdot\mathbf{r}_{1}=1\)._
An algorithm to construct \(\mathbf{F}\) given in (A.3) can be found in Admal et al. (2022), and it is implemented in oILAB.
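A sketch of this construction is given below. It takes the dual pair \(\{\mathbf{\mathscr{G}}_{1},\mathbf{\tau}_{1}\}\) to be the basis dual to \(\{\mathbf{q}_{1},\mathbf{r}_{1}\}\), an assumption consistent with the theorem's requirement when \(\{\mathbf{q}_{1},\mathbf{r}_{1}\}\) is a basis of \(\mathscr{B}\); for the complete enumeration of heterodeformations, see Admal et al. (2022) and oILAB:

```python
import numpy as np

def hetero_F(q1, r1, q2, r2, alpha, beta):
    """Construct F per Eq. (A.3) from lattice vectors q1, r1 of lattice B and
    q2, r2 of lattice A that satisfy (A.2) with rationals alpha and beta."""
    Q = np.column_stack([q1, r1])
    D = np.linalg.inv(Q).T          # columns are the dual basis g1, t1
    g1, t1 = D[:, 0], D[:, 1]
    return np.outer(q2, g1) / alpha + np.outer(r2, t1) / beta

# Sanity check with placeholder vectors: F maps q1 to q2/alpha and r1 to r2/beta.
q1, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
q2, r2 = np.array([2.0, 1.0]), np.array([-1.0, 2.0])
F = hetero_F(q1, r1, q2, r2, alpha=2.0, beta=1.0)
assert np.allclose(F @ q1, q2 / 2.0) and np.allclose(F @ r1, r2)
```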
## Appendix B Smith normal form bicrystallography
In this section, we introduce SNF bicrystallography, a framework to analyze the crystallography of heterostructures. In this paper, SNF bicrystallography is used to calculate the DSCL of a heterostructure, and plays a central role in our definition of interface dislocation, introduced in Section 3. We refer the reader to Admal et al. (2022) for a more detailed presentation.
Let \(\mathscr{A}\) and \(\mathscr{B}\) denote 2D lattices that form a moire superlattice. From Appendix A, we know that the transition matrix \(\mathbf{T}=\mathbf{A}^{-1}\mathbf{B}\) is rational. Therefore, \(\mathbf{T}\) can be expressed as
\[\mathbf{T}=\frac{\mathbf{P}}{\mu},\] (B.1)
where \(\mu\) is an integer, and \(\mathbf{P}\) is an integer matrix such that \(\mu\) and the entries of \(\mathbf{P}\) are co-prime. Using the Smith normal form for integer matrices, \(\mathbf{P}\) can be multiplicatively decomposed as
\[\mathbf{P}=\mathbf{U}\mathbf{\Delta}\mathbf{V}^{-1},\] (B.2)
where \(\mathbf{U}\) and \(\mathbf{V}\) are unimodular matrices, and \(\mathbf{\Delta}=\mathrm{diag}(\delta_{1},\delta_{2})\) is a diagonal matrix and \(\delta_{1}=\mathrm{gcd}(\mathbf{P})\). Substituting (B.2) into (B.1) and rearranging, we have
\[\mu\mathbf{B}^{\parallel}=\mathbf{A}^{\parallel}\mathbf{\Delta},\ \mathrm{ where}\] (B.3) \[\mathbf{A}^{\parallel}=\mathbf{A}\mathbf{U},\ \mathrm{and}\ \mathbf{B}^{\parallel}=\mathbf{B}\mathbf{V}.\] (B.4)
Since \(\mathbf{U}\) and \(\mathbf{V}\) are unimodular, the matrices \(\mathbf{A}^{\parallel}\) and \(\mathbf{B}^{\parallel}\) qualify as new structure matrices of lattices \(\mathscr{A}\) and \(\mathscr{B}\), respectively. In other words, the columns \(\{\mathbf{a}^{\parallel}_{i}\}\) and \(\{\mathbf{b}^{\parallel}_{i}\}\) of \(\mathbf{A}^{\parallel}\) and \(\mathbf{B}^{\parallel}\) are the new bases of the two lattices. Since \(\mathbf{\Delta}\) is a diagonal matrix, Eq. (B.3) reads as,
\[\mu\mathbf{b}^{\parallel}_{i}=\delta_{i}\mathbf{a}^{\parallel}_{i}\] (no summation over i). (B.5)
(B.5) implies the new bases are parallel and coincide on the CSL \(\mathscr{C}\) with basis vectors
\[\mathbf{c}^{\parallel}_{i}=\frac{\mu}{\mathrm{gcd}(\mu,\delta_{i})}\mathbf{b}^{ \parallel}_{i}=\frac{\delta_{i}}{\mathrm{gcd}(\mu,\delta_{i})}\mathbf{a}^{ \parallel}_{i}\,.\] (B.6)
The basis vectors \(\{\mathbf{c}^{\parallel}_{1},\mathbf{c}^{\parallel}_{2}\}\) of \(\mathscr{C}\) can be collected in a structure matrix as
\[\mathbf{C}^{\parallel}=\mathbf{B}^{\parallel}\mathbf{N}=\mathbf{A}^{\parallel}\mathbf{M}\,.\] (B.7)
where
\[\mathbf{M}=\mathrm{diag}\left(\frac{\delta_{i}}{\mathrm{gcd}(\mu, \delta_{i})}\right)\] and \[\mathbf{N}=\mathrm{diag}\left(\frac{\mu}{\mathrm{gcd}(\mu,\delta_{i})}\right)\,,\] (B.8)
are auxiliary diagonal matrices satisfying the relation \(\mu\mathbf{M}=\mathbf{\Delta}\mathbf{N}\).
The DSCL, denoted as \(\mathscr{D}\), is the smallest lattice that contains lattices \(\mathscr{A}\) and \(\mathscr{B}\). The basis vectors \(\{\mathbf{d}_{1}^{\parallel},\mathbf{d}_{2}^{\parallel}\}\) of \(\mathscr{D}\) are given by
\[\mathbf{d}_{i}^{\parallel}=\frac{\gcd(\mu,\delta_{i})}{\delta_{i}}\mathbf{b}_{i}^{ \parallel}=\frac{\gcd(\mu,\delta_{i})}{\mu}\mathbf{a}_{i}^{\parallel}\,.\] (B.9)
(B.8) and (B.9) imply the structure matrix of \(\mathscr{D}\) satisfies
\[\mathbf{D}^{\parallel}=\mathbf{B}^{\parallel}\mathbf{M}^{-1}=\mathbf{A}^{\parallel}\mathbf{N}^{-1 }\,.\] (B.10)
(B.7) and (B.10) imply the ratios of areas of primitive unit cells
\[\Sigma_{\mathscr{A}}:=\frac{\det\mathbf{C}^{\parallel}}{\det\mathbf{A}^{\parallel}}= \det\mathbf{M},\quad\Sigma_{\mathscr{B}}:=\frac{\det\mathbf{C}^{\parallel}}{\det\mathbf{B} ^{\parallel}}=\det\mathbf{N}\] (B.11)
are integers, and \(\det(\mathbf{C})\,\det(\mathbf{D})=\det(\mathbf{A})\,\det(\mathbf{B})\).
The parallel bases for \(\mathscr{A}\), \(\mathscr{B}\), \(\mathscr{C}\), and \(\mathscr{D}\) highlight an interesting analogy with the notions of least common multiple (lcm) and greatest common divisor (gcd) of integers. The CSL and the DSCL may be interpreted as the lcm and the gcd of lattices \(\mathscr{A}\) and \(\mathscr{B}\), respectively.
We will now demonstrate the application of SNF bicrystallography to the \(21.786789^{\circ}\) twisted BG. Let \(\mathscr{A}\) represent the hexagonal lattice of graphene with structure matrix
\[\mathbf{A}=\frac{a}{2}\begin{bmatrix}0&-\sqrt{3}\\ 2&-1\end{bmatrix},\]
where the lattice constant of graphene is assumed to be \(a=2.46\mathring{A}\). Let \(\mathscr{B}\) represent another hexagonal lattice twisted anti-clockwise relative to \(\mathscr{A}\), i.e.
\[\mathbf{B}=\mathbf{R}_{\theta}\mathbf{A}\,,\]

where \(\theta=21.786789^{\circ}\) guarantees a coincidence between the lattices with a rational transition matrix

\[\mathbf{T}=\mathbf{A}^{-1}\mathbf{B}=\frac{1}{7}\begin{bmatrix}5&-8\\ 8&-3\end{bmatrix}.\] (B.12)
The SNF of the integer matrix in (B.12) yields
\[\mathbf{\Delta}=\text{diag}(1,49),\quad\mathbf{U}=\begin{bmatrix}19&1\\ 1&0\end{bmatrix},\quad\mathbf{V}=\begin{bmatrix}-1&-3\\ -3&-8\end{bmatrix},\quad\mathbf{M}=\text{diag}(1,7),\quad\mathbf{N}=\text{diag}(7,1).\]
The basis vectors of the CSL and the DSCL can be obtained using (B.7) and (B.10). The basis vectors of \(\mathscr{C}\) are used to define the periodic box of our atomistic and continuum simulations, while those of \(\mathscr{D}\) are used to identify the Burgers vectors of interface dislocations. While the parallel bases are valuable in proving statements such as Theorem 1, the corresponding structure matrices are typically ill-conditioned. Therefore, we resort to lattice reduction algorithms to obtain reduced bases from the parallel bases.
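The worked example above can be verified numerically; the sketch below checks (B.2)-(B.4) and (B.7)/(B.10) for the \(\Sigma 7\) data, with all numbers taken from the text:

```python
import numpy as np

mu = 7
P     = np.array([[5, -8], [8, -3]])
U     = np.array([[19, 1], [1, 0]])
V     = np.array([[-1, -3], [-3, -8]])
Delta = np.diag([1, 49])
M, N  = np.diag([1, 7]), np.diag([7, 1])

# U, V are unimodular and P = U Delta V^{-1} (Eq. B.2).
assert round(abs(np.linalg.det(U))) == 1 and round(abs(np.linalg.det(V))) == 1
assert np.allclose(U @ Delta @ np.linalg.inv(V), P)

a = 2.46  # graphene lattice constant [Angstrom]
A = (a / 2) * np.array([[0.0, -np.sqrt(3.0)], [2.0, -1.0]])
B = A @ P / mu                      # since T = A^{-1} B = P / mu
A_par, B_par = A @ U, B @ V         # parallel bases (Eq. B.4)
assert np.allclose(mu * B_par, A_par @ Delta)   # Eq. (B.3)

C_par = B_par @ N                   # CSL structure matrix (Eq. B.7)
D_par = B_par @ np.linalg.inv(M)    # DSCL structure matrix (Eq. B.10)
print(round(np.linalg.det(C_par) / np.linalg.det(A_par)))  # Sigma_A = det M = 7
```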
Appendix C Computational details of density functional theory calculations of the GSFE of \(\mathbf{21.786789^{\circ}}\) twisted bilayer graphene
First-principles calculations using Density Functional Theory (DFT) were performed to calculate the GSFE of \(21.786789^{\circ}\) twisted bilayer graphene (TBG), via the Quantum Espresso package (Giannozzi et al., 2009). Projector augmented wave (PAW) type pseudopotentials (Blochl et al., 2002), along with the generalized gradient approximation (Perdew-Burke-Ernzerhof functional (Perdew et al., 1998)) and Grimme's density functional dispersion correction (DFT-D2) (Grimme et al., 2011) were employed to calculate the GSFE landscape of BG systems. These choices allowed us to balance computational accuracy and efficiency, and are also consistent with earlier literature (Zhou et al., 2015). The interlayer spacing was set to \(2.46\mathring{\text{A}}\)
inducing 26% out-of-plane compression in the BG system. In-plane periodic boundary conditions and out-of-plane isolated system conditions were adopted to simulate the two-dimensional nature of the system. The values of the planewave cutoff energy (\(E_{\rm cut}=70\) Rydbergs) and the k-point mesh parameters were optimized to ensure the total energy of the systems converged to \(0.001\) eV in all calculations. The GSFE calculation was conducted by taking displacements on a \(15\times 15\times 1\) mesh spanned by the two lattice vectors \(\mathbf{b}_{1}=-0.929792342\,\mathbf{e}_{1}\) Å and \(\mathbf{b}_{2}=-0.464896163\,\mathbf{e}_{1}+0.805223601\,\mathbf{e}_{2}\) Å.
## Appendix D Comparison with the small-twist BG model of Nam and Koshino (2017)
In this section, we will specialize the GFK model to small-twist homostructures by invoking isotropic linear elasticity to recover the model of Nam and Koshino (2017), which was developed to model atomic reconstruction in small-twist BG.
The Frenkel-Kontorova model of Nam and Koshino (2017) for small-twist BG was formulated in terms of a single unknown field \(\mathbf{u}_{-}\), which denotes the difference in the displacement fields of the top and bottom lattices. We will now show that under small deformations, the Nam and Koshino (2017) model can be recovered from the GFK model of Section 4.
Introducing the displacement variables \(\mathbf{u}_{\alpha}:=\mathbf{\phi}_{\alpha}(\mathbf{X},t)-\mathbf{X}\), and \(\mathbf{u}_{\pm}=\mathbf{u}_{\rm t}\pm\mathbf{u}_{\rm b}\), we first express \(\mathscr{E}\) as a functional of \(\mathbf{u}_{+}\) and \(\mathbf{u}_{-}\) under the assumption of small deformation. Notice that we are using the variable \(\mathbf{X}\), as opposed to \(\mathbf{X}_{\alpha}\), since the two reference configurations coincide under PBCs. Beginning with \(\mathscr{E}_{\rm el}\), we invoke linear elasticity by writing the elastic energy density as
\[e_{\rm el}(\mathbf{\epsilon}_{\alpha})=\frac{1}{2}\mathbb{C}\mathbf{\epsilon}_{\alpha}\cdot\mathbf{\epsilon}_{\alpha}=\frac{\lambda}{2}(\operatorname{tr}\mathbf{\epsilon}_{\alpha})^{2}+\mu\,\mathbf{\epsilon}_{\alpha}\cdot\mathbf{\epsilon}_{\alpha},\] (D.1)
where the Lagrangian strain in (13) has been replaced by the infinitesimal strain \(\mathbf{\epsilon}_{\alpha}=(\nabla\mathbf{u}_{\alpha}+\nabla\mathbf{u}_{\alpha}^{\rm T})/2\), and \(\mathbb{C}\) is the fourth-order isotropic elasticity tensor with lame constants \(\lambda\) and \(\mu\). Under this setting, \(\mathscr{E}_{\rm el}\) can be recast as a functional of \(\mathbf{u}_{+}\) and \(\mathbf{u}_{-}\) as follows:
\[\mathscr{E}_{\rm el}[\mathbf{u}_{+},\mathbf{u}_{-}]=\frac{1}{4}\int_{\Omega_{\rm ref}} (\mathbb{C}\mathbf{\epsilon}_{+}\cdot\mathbf{\epsilon}_{+}+\mathbb{C}\mathbf{\epsilon}_{- }\cdot\mathbf{\epsilon}_{-})\,d\mathbf{X}.\] (D.2)
where \(\mathbf{\epsilon}_{\pm}:=(\nabla\mathbf{u}_{\pm}+\nabla\mathbf{u}_{\pm}^{\rm T})/2\); the cross terms \(\mathbb{C}\mathbf{\epsilon}_{+}\cdot\mathbf{\epsilon}_{-}\) cancel when \(e_{\rm el}(\mathbf{\epsilon}_{\rm t})+e_{\rm el}(\mathbf{\epsilon}_{\rm b})\) is expressed using \(\mathbf{\epsilon}_{\rm t,b}=(\mathbf{\epsilon}_{+}\pm\mathbf{\epsilon}_{-})/2\). Next, assuming a) \(\det\mathbf{K}_{\alpha}\approx\det\mathbf{F}_{\alpha}\approx 1\) and b) \(\mathbf{H}_{\alpha}\approx\mathbf{I}\), the vdW energy can be expressed as a functional exclusively of \(\mathbf{u}_{-}\):
\[\mathscr{E}_{\rm vdW}[\mathbf{u}_{-}]=\int_{\Omega^{\rm ref}}e_{\rm vdW}(\mathbf{r}( \mathbf{X}))\,d\mathbf{X},\] (D.3)
where \(\mathbf{r}(\mathbf{X})\), given by (11), is written as \(\mathbf{r}(\mathbf{X})=(\mathbf{K}_{\rm t}-\mathbf{K}_{\rm b})\mathbf{X}-\mathbf{u}_{-}(\mathbf{X})\) since \(\mathbf{\phi}_{\rm t}-\mathbf{\phi}_{\rm b}=\mathbf{u}_{\rm t}-\mathbf{u}_{\rm b}=\mathbf{u}_{-}\).
Since \(\mathscr{E}_{\rm vdW}\) in (D.3) is independent of \(\mathbf{u}_{+}\), it is easy to see that minimizing the total energy functional \(\mathscr{E}[\mathbf{u}_{+},\mathbf{u}_{-}]\) results in \(\mathbf{u}_{+}\equiv 0\), which allowed Nam and Koshino (2017) to cast their total energy functional in the single variable \(\mathbf{u}_{-}\):
\[\mathscr{E}[\mathbf{u}_{-}]=\int_{\Omega_{\rm ref}}\left(\frac{1}{4}\mathbb{C}\bm {\epsilon}_{-}\cdot\mathbf{\epsilon}_{-}+e_{\rm vdW}(\mathbf{r})\right)\,d\mathbf{X}.\] (D.4)
|
2309.15709 | Distributed Pilot Assignment for Distributed Massive-MIMO Networks | Pilot contamination is a critical issue in distributed massive MIMO networks,
where the reuse of pilot sequences due to limited availability of orthogonal
pilots for channel estimation leads to performance degradation. In this work,
we propose a novel distributed pilot assignment scheme to effectively mitigate
the impact of pilot contamination. Our proposed scheme not only reduces
signaling overhead, but it also enhances fault-tolerance. Extensive numerical
simulations are conducted to evaluate the performance of the proposed scheme.
Our results establish that the proposed scheme outperforms existing centralized
and distributed schemes in terms of mitigating pilot contamination and
significantly enhancing network throughput. | Mohd Saif Ali Khan, Samar Agnihotri, Karthik R. M | 2023-09-27T14:49:29Z | http://arxiv.org/abs/2309.15709v3 | # Distributed Pilot Assignment for Distributed Massive-MIMO Networks
###### Abstract
Pilot contamination is a critical issue in distributed massive MIMO networks, where the reuse of pilot sequences due to limited availability of orthogonal pilots for channel estimation leads to performance degradation. In this work, we propose a novel distributed pilot assignment scheme to effectively mitigate the impact of pilot contamination. Our proposed scheme not only reduces latency, but it also enhances fault-tolerance. Extensive numerical simulations are conducted to evaluate the performance of the proposed scheme. Our results establish that the proposed scheme outperforms existing centralized and distributed schemes in terms of mitigating pilot contamination and significantly enhancing network throughput.
Distributed Massive MIMO, Distributed Pilot Assignment, Pilot Contamination
## I Introduction
The rapid growth and adoption of wireless communication-based data services have prompted the development of technologies that can improve the capacity, reliability, and efficiency of cellular networks. By reaping the benefits of massive MIMO (mMIMO) and Ultra Dense Networks (UDN), Distributed mMIMO is emerging as a promising technology [1, 2]. It abandons the concept of distinct cells and instead deploys a large number of distributed access points (APs) throughout the coverage area [1]. As there are no traditional cells, the inter-cell interference that plagues cellular systems is eliminated. Further, as the user equipments (UEs) are very close to the APs, this may provide high coverage probability in the network, thus improving quality-of-service to the users. However, as in traditional mMIMO systems, channel estimation is a critical challenge in distributed mMIMO
networks. One commonly employed approach is blind estimation techniques, such as matrix decomposition-based signal detection. However, these methods often have high computational complexity. To mitigate this complexity, researchers have explored leveraging the channel hardening property of mMIMO systems [3]. However, in distributed mMIMO, channel hardening is not always guaranteed [4], so obtaining the estimates using matrix decomposition-based channel estimation is not a viable option.
Pilot-based channel estimation presents a simple and low-complexity approach that can be employed within distributed mMIMO systems to obtain channel estimates. However, before deploying such solutions in the networks, an associated issue of pilot contamination needs to be addressed. Pilot contamination arises when the same pilot sequences are used for estimating the channels of more than one user, leading to estimation errors and subsequent performance degradation. In a practical system, as it is almost impossible to assign orthogonal pilots to all users, the users may reuse the pilots. To minimize the impact of pilot contamination, pilot assignment (PA) should be carried out so that it minimizes the contamination. A lot of work on pilot assignment in distributed mMIMO networks exists, but most of it is centralized in nature [5, 6, 7, 8, 9, 10, 11]. In [1], the authors have proposed two pilot assignment schemes, namely random pilot assignment and greedy pilot assignment based on throughput improvement. In [5], the authors have proposed a scalable scheme which performs joint pilot assignment and AP-UE association. In [6], the authors have proposed a graph coloring-based pilot assignment, where the AP-UE association takes place initially and then pilots are assigned using graph coloring. If the pilot assignment fails, the AP-UE associations are updated, and so is the pilot assignment. In [7], the authors have proposed a pilot assignment scheme based on the Hungarian matching algorithm. In [8], the authors have proposed a clustering-based scalable pilot scheme where UEs in the same cluster are allocated the same pilots. In [9], the authors have again exploited graph theory, but instead of graph coloring, they have used a weighted graph approach. In [10], the authors have tried to exploit the interference-aware reward, calculated by treating noise as interference, for the pilot assignment and AP-UE association jointly. Along with random pilot assignment [1], which can also be implemented in a distributed manner, there are a few distributed schemes [12, 13]. In [12], the authors have proposed a survey propagation-based distributed pilot assignment scheme that incurs high signaling overhead due to message passing among the feasible groups. Also, for the proposed scheme, the computational cost increases drastically as the users-to-pilots ratio increases. In [13], the authors have proposed a distributed
multi-agent reinforcement learning-based pilot assignment. This, however, demands centralized training, resulting in increased signaling overhead, sensitivity to the training data, and lack of explainability of the model.
To address these challenges, a distributed pilot assignment scheme for distributed mMIMO systems is proposed in this paper. The novelty of the proposed work is rooted in a distributed scheme that jointly performs the pilot assignment and AP-UE association. In this scheme, orthogonal pilot signals are allocated to the topmost UEs of each AP, as determined by the largest large-scale fading coefficients. Subsequently, each AP independently associates itself with those topmost UEs that carry distinct pilot signals. The proposed scheme not only reduces latency in distributed mMIMO networks, but also improves the overall spectral efficiency. We compare the performance of the proposed scheme with existing schemes through extensive numerical simulations and establish its superiority in terms of scalability and overall system throughput.
_Organization:_ The paper is structured as follows: Section II presents the system model. Section III introduces the proposed distributed resource allocation scheme. Section IV evaluates the performance of the proposed scheme and demonstrates its effectiveness. Finally, Section V concludes the paper by summarizing our work and discussing its future directions.
## II System Model
We consider a distributed mMIMO network configuration comprising \(T\) UEs and \(M\) APs. The set of UEs is denoted as \(\mathcal{T}\), while the set of APs is denoted as \(\mathcal{M}\). Each AP is equipped with \(A\) antennas, while each UE is equipped with a single antenna. Both the APs and UEs are uniformly distributed over the coverage area. To facilitate the coordination and processing of UE signals, the APs are connected to a Central Processing Unit (CPU) through a front-haul connection. To minimize computation cost and front-haul overload, a scalable architecture is employed [5]. This architecture leverages the observation that more than 95% of the received signal strength is concentrated in a few nearby APs [14]. We consider the TDD mode for our operations and assume channel reciprocity. To model the wireless channel, we consider a standard block fading model with a resource block of length \(L_{c}\). The channel coefficient \(\textbf{h}_{t,m}\in\mathbb{C}^{A}\) represents the spatially correlated Rayleigh fading channel between the \(t^{th}\) UE and the \(m^{th}\) AP. It follows a complex Gaussian distribution such that \(\textbf{h}_{t,m}{\sim}\mathcal{N}_{\mathbb{C}}(0,\textbf{R}_{t,m})\), where \(\textbf{R}_{t,m}\in\mathbb{C}^{A\times A}\) represents the spatial correlation matrix that incorporates the large-scale fading coefficient (LSFC), denoted by \(\beta_{t,m}\), which accounts for various factors such as path-loss and shadow-fading and can
be calculated by utilizing periodically broadcasted synchronization signals from the UE \(t\) to the AP \(m\) [5]. We assume that the channel vectors of a particular UE for different APs are independent; this assumption is reasonable as the APs are distributed over a large area.
Let \(L_{p}\) denote the number of mutually orthogonal pilots, such that \(L_{p}\) symbols of each resource block are used for pilot training and the remaining for information transmission. As the number of orthogonal pilots is smaller than the number of UEs, pilot reuse comes into play. Let \(\mathcal{T}_{p}\) be the subset of UEs sharing the pilot \(p\). Following [5], the received signal \(\textbf{y}_{p,m}\in\mathbb{C}^{A}\) at the \(m^{th}\) AP when the pilot \(p\) is transmitted by the UEs belonging to \(\mathcal{T}_{p}\) is given by:
\[\textbf{y}_{p,m}=\sum_{t\in\mathcal{T}_{p}}\sqrt{L_{p}\tau_{t}}\textbf{h}_{t,m }+\textbf{n}_{m}, \tag{1}\]
where \(\tau_{t}\) is the uplink transmit power of the UE \(t\in\mathcal{T}_{p}\) and \(\textbf{n}_{m}{\sim}\mathcal{N}_{\mathbb{C}}(0,\sigma^{2}\textbf{I}_{A})\) denotes the noise at the \(m^{th}\) AP. The MMSE estimate of channel \(\textbf{h}_{t,m}\) is given by
\[\hat{\textbf{h}}_{t,m}=\sqrt{L_{p}\tau_{t}}\textbf{R}_{t,m}\boldsymbol{\Psi}_ {t,m}^{-1}\textbf{y}_{p,m}, \tag{2}\]
where \(\boldsymbol{\Psi}_{t,m}=\sum_{t^{\prime}\in\mathcal{T}_{p}}L_{p}\tau_{t^{\prime}}\textbf{R}_{t^{\prime},m}+\sigma^{2}\textbf{I}_{A}\) is the correlation matrix of \(\textbf{y}_{p,m}\). The pilot sharing not only leads to poorer channel estimation, but also affects \(\hat{\textbf{h}}_{t,m}\), which becomes more correlated and thus increases the interference among the UEs.
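A minimal sketch of the estimator in (2) is given below; the function name, array layout, and shapes are assumptions made for illustration:

```python
import numpy as np

def mmse_estimate(y_pm, R_list, tau_list, Lp, sigma2, i):
    """MMSE estimate of the channel of the i-th UE in T_p at AP m, per Eq. (2).
    R_list, tau_list: spatial correlation matrices and uplink powers of the
    UEs sharing pilot p; y_pm: received pilot signal of Eq. (1)."""
    A = y_pm.shape[0]
    Psi = sigma2 * np.eye(A, dtype=complex)          # correlation matrix of y_pm
    for R, tau in zip(R_list, tau_list):
        Psi = Psi + Lp * tau * R
    return np.sqrt(Lp * tau_list[i]) * R_list[i] @ np.linalg.solve(Psi, y_pm)
```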
The association of the UE \(t\) and the AP \(m\) is denoted by a indicator variable \(d_{t,m}\), which equals '1' when the UE \(t\) is served by the AP \(m\), and '0' when the UE \(t\) is not so served. The received uplink signal at the AP \(m\) is given by
\[\textbf{y}_{m}^{u}=\sum_{t=1}^{T}\sqrt{\tau_{t}}\,\textbf{h}_{t,m}x_{t}+\textbf{n}_{m}. \tag{3}\]
The payload signal \(x_{t}\in\mathbb{C}\) is the unit-power complex signal sent by the UE \(t\), and the power associated with it is denoted by \(\tau_{t}\). The combining vector \(\textbf{v}_{t,m}\in\mathbb{C}^{A}\) is assigned to the UE \(t\) by the AP \(m\). The estimate of \(x_{t}\) is \(\hat{x}_{t}=\sum_{m=1}^{M}d_{t,m}\textbf{v}_{t,m}^{H}\textbf{y}_{m}^{u}\).
The uplink spectral efficiency (SE) is [5]
\[\mathsf{SE}_{t}^{u}=(1-\frac{L_{p}}{L_{c}})\mathbb{E}\{\log_{2}(1+\mathsf{ SINR}_{t}^{u})\}, \tag{4}\]
where \(\mathsf{SINR}_{t}^{u}\) represents the uplink instantaneous signal-to-interference-plus-noise ratio of the
\(t^{th}\) UE and is given by [5]
\[\mathsf{SINR}_{t}^{u}=\frac{\tau_{t}|\sum\limits_{m=1}^{M}d_{t,m}\textbf{v}_{t,m}^{H}\hat{\textbf{h}}_{t,m}|^{2}}{\sum\limits_{j=1,j\neq t}^{T}\tau_{j}|\sum\limits_{m=1}^{M}d_{t,m}\textbf{v}_{t,m}^{H}\hat{\textbf{h}}_{j,m}|^{2}+\textbf{z}_{t}}, \tag{5}\]
where \(\textbf{z}_{t}=\sum_{m=1}^{M}d_{t,m}\textbf{v}_{t,m}^{H}(\sum_{j=1}^{T}\tau_{j}\textbf{C}_{j,m}+\sigma^{2}\textbf{I}_{A})\textbf{v}_{t,m}\), and \(\textbf{C}_{j,m}\) is the correlation matrix of the estimation error of the channel of the UE \(j\) at the AP \(m\).
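The following sketch computes (5) directly from given combining vectors and channel estimates; the array names and shapes are assumptions made for illustration:

```python
import numpy as np

def uplink_sinr(v, h_hat, d, tau, C, sigma2, t):
    """Instantaneous uplink SINR of UE t, following Eq. (5).
    Assumed shapes: v, h_hat -> (T, M, A); d -> (T, M); C -> (T, M, A, A)."""
    T, M, A = h_hat.shape
    g = lambda j: sum(d[t, m] * np.vdot(v[t, m], h_hat[j, m]) for m in range(M))
    signal = tau[t] * abs(g(t)) ** 2
    interference = sum(tau[j] * abs(g(j)) ** 2 for j in range(T) if j != t)
    z = 0.0
    for m in range(M):
        if d[t, m]:
            S = sigma2 * np.eye(A, dtype=complex)
            for j in range(T):
                S = S + tau[j] * C[j, m]
            z += np.real(np.vdot(v[t, m], S @ v[t, m]))
    return signal / (interference + z)
```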
The precoding vector \(\textbf{s}_{t,m}\in\mathbb{C}^{A}\) assigned to the \(t^{th}\) UE by the \(m^{th}\) AP and the received downlink signal at the \(t^{th}\) UE are, respectively:
\[\textbf{s}_{t,m}=\sqrt{\tau_{t,m}}\frac{\hat{\textbf{h}}_{t,m}}{ \sqrt{\mathbb{E}||\hat{\textbf{h}}_{t,m}||^{2}}}, \tag{6}\] \[y_{t}^{d}=\sum\limits_{m=1}^{M}\textbf{h}_{t,m}^{H}\sum\limits_{ j=1}^{T}d_{j,m}\textbf{s}_{j,m}x_{j}+n_{t}, \tag{7}\]
where \(n_{t}\)\(\sim\)\(\mathcal{N}_{\mathbb{C}}(0,\sigma^{2})\) is the thermal noise, \(x_{j}\in\mathbb{C}\) is the unit-power complex signal sent to the \(j^{th}\) UE, and \(\tau_{t,m}\) is the downlink power assigned to the \(t^{th}\) UE by the \(m^{th}\) AP, allocated using fractional power allocation [5].
By utilizing the use-and-then-forget bound [15], the downlink spectral efficiency (SE) is
\[\mathsf{SE}_{t}^{d}=(1-\frac{L_{p}}{L_{c}})\log_{2}(1+\mathsf{ SINR}_{t}^{d}), \tag{8}\]
where \(\mathsf{SINR}_{t}^{d}\) represents the instantaneous downlink signal-to-interference-plus-noise ratio of the \(t^{th}\) UE and is given by
\[\mathsf{SINR}_{t}^{d}=\frac{|\mathbb{E}\{\sum\limits_{m=1}^{M}d_{t,m}\textbf{ h}_{t,m}^{H}\textbf{s}_{t,m}\}|^{2}}{\textbf{z}-|\mathbb{E}\{\sum\limits_{m=1}^{M}d_{t,m} \textbf{h}_{t,m}^{H}\textbf{s}_{t,m}\}|^{2}+\sigma^{2}}, \tag{9}\]
where \(\textbf{z}=\sum_{j=1}^{T}\mathbb{E}\{|\sum_{m=1}^{M}d_{j,m}\textbf{h}_{t,m}^{H}\textbf{s}_{j,m}|^{2}\}\).
## III Distributed Pilot Allocation
We propose a distributed pilot assignment scheme to address the limited number of mutually orthogonal pilots (\(L_{p}\)\(<\)\(T\)) and mitigate pilot contamination. A key prerequisite for distributed mMIMO is that each UE be served by at least one AP. Additionally, each AP must assign a given pilot to at most one UE to minimize pilot contamination. To guarantee that each UE is served by at least one AP and to manage pilot assignment, a specific AP is designated as the controller-AP of each UE. In the proposed scheme, a controller-AP is therefore first assigned to each UE in a distributed manner. Then, pilots are assigned distributively to each UE by its controller-AP. Furthermore, to fully leverage the benefits of mMIMO and keep the system scalable, each UE needs to be connected to an adequate number of APs. To achieve this, a distributed AP-UE clustering scheme is employed in the final step, subject to the constraint that each AP serves one UE per pilot1, thus minimizing pilot contamination while providing significant power to the UEs.
Footnote 1: The number of users can grow only up to the point where \(T<MA\) for the system to behave as mMIMO [1]. Hence, the above assumption does not hinder performance even when the number of users is very large.
For each AP \(m\), we define a subset \(\mathcal{U}_{m}\) of UEs that contains at most the leading \(L_{p}\) UEs of the \(m^{th}\) AP, determined by the decreasing order of \(\beta_{t,m}\). Let \(\mathcal{T}_{m}^{c}\) be the subset of UEs with the AP \(m\) as their controller-AP. We define \(\mathcal{L}_{t}^{p}\) as the set of pilots available to UE \(t\), which initially contains all \(L_{p}\) pilots.
### _Controller-AP assignment_
For the controller-AP assignment, we propose a distributed algorithm. Each UE is responsible for selecting its controller-AP from the set of available APs using the respective LSFC values, by following the standard random access procedure [16] with the broadcast signal [5]. A challenge arises when more than \(L_{p}\) UEs request a particular AP as their controller-AP, since this violates our design requirement that each AP serve at most one UE per pilot. A further issue is that a UE requesting an AP with a low LSFC value as its controller-AP could degrade the system performance. To address both issues, we propose a controller-AP selection algorithm in which each UE is assigned one AP from a set \(\mathcal{M}_{t}^{c}\) of candidate APs as its controller-AP, and an AP can serve as the controller-AP of at most \(L_{p}\) UEs. The construction of the subset \(\mathcal{M}_{t}^{c}\) is given in Algorithm 1. To identify the APs that serve more than \(L_{p}\) UEs, and hence require some of their associated UEs to request a new controller-AP, as well as the APs that already serve exactly \(L_{p}\) UEs and so cannot accept further controller-AP requests, the APs are categorized into two sets: the former, called oversaturated-APs, are denoted by \(\mathcal{M}_{D}\), and the latter, called inert-APs, are denoted by \(\mathcal{M}_{I}\). The detailed procedure for selecting the controller-AP distributively from the set of potential controller-APs is given in Algorithm 1, which is carried out at the APs in cooperation with the UEs.
```
1:Input:\(\mathcal{M}_{t}^{c}=\emptyset\), \(\beta_{t,m}\), \(\mathcal{T}_{m}^{c}=\emptyset\), \(\mathcal{M}_{D}=\emptyset\) and \(\mathcal{M}_{I}=\emptyset\)\(\forall\)\(t\)\(\in\)\(\mathcal{T}\), \(m\)\(\in\)\(\mathcal{M}\)
2:Returns: Controller-AP assignment for each UE.
3:for each UE \(t\in\mathcal{T}\)do
4: Calculate \(\beta_{t}^{d}\) as the difference between the largest and second largest LSFC values of \(\beta_{t,m}\).
5:endfor
6: Calculate \(\gamma^{th}=((\beta_{t}^{d})_{max}+(\beta_{t}^{d})_{min})/2\).
7:for each UE \(t\in\mathcal{T}\)do
8:\(\mathcal{M}_{t}^{c}=\mathcal{M}_{t}^{c}\cup\{m_{max}\},\) where \(m_{max}=\underset{m}{\operatorname{argmax}}\;\;\beta_{t,m}\).
9:for each AP \(m\in\mathcal{M}\)do
10:if\(\beta_{t,m_{max}}-\beta_{t,m}\leq\gamma^{th}\)then
11:\(\mathcal{M}_{t}^{c}=\mathcal{M}_{t}^{c}\cup m\).
12:endif
13:endfor
14: Choose a controller-AP randomly from \(\mathcal{M}_{t}^{c}\).
15:endfor
16:for all AP \(m\in\mathcal{M}\)do
17: Update \(\mathcal{T}_{m}^{c}=\mathcal{T}_{m}^{c}\cup\{t\}\), where AP \(m\) is the controller-AP of UE \(t\).
18:if\(|\mathcal{T}_{m}^{c}|>L_{p}\)then
19: Update \(\mathcal{M}_{D}=\mathcal{M}_{D}\cup m\).
20:elseif\(|\mathcal{T}_{m}^{c}|=L_{p}\)then
21: Update \(\mathcal{M}_{I}=\mathcal{M}_{I}\cup m\).
22:endif
23:endfor
24:while\(\mathcal{M}_{D}\neq\emptyset\)do
25:for each AP \(m\)\(\in\)\(\mathcal{M}_{D}\)do
26: Arrange \(t\in\mathcal{T}_{m}^{c}\) in increasing order of \(|\mathcal{M}_{t}^{c}|\), where priority is given to a UE with a larger LSFC w.r.t AP \(m\) to form a new set \(\mathcal{T}_{new}\).
27: Set \(\mathcal{T}_{m}^{c}=\emptyset\) and include the first \(L_{p}\) UEs from \(\mathcal{T}_{new}\) in \(\mathcal{T}_{m}^{c}\).
28: AP \(m\) unassigns itself as the controller-AP of all UEs in \((\mathcal{T}_{new}-\mathcal{T}_{m}^{c})\).
29:for each UE \(t\)\(\in\)\((\mathcal{T}_{new}-\mathcal{T}_{m}^{c})\)do
30: Update \(\mathcal{M}_{t}^{c}\leftarrow\mathcal{M}_{t}^{c}-\mathcal{M}_{I}-m\).
31: Choose a controller-AP randomly from \(\mathcal{M}_{t}^{c}\).
32:endfor
33:endfor
34: Set \(\mathcal{M}_{D}=\emptyset\), \(\mathcal{M}_{I}=\emptyset\), \(\mathcal{T}_{m}^{c}=\emptyset\) and repeat steps in lines 16-23.
35:endwhile
```
**Algorithm 1** Controller-AP selection
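To make the control flow concrete, the following Python sketch mirrors Algorithm 1 in a simplified, centralized form (candidate sets, the threshold \(\gamma^{th}\), and the oversaturation resolution), omitting the UE-AP signaling; the toy dimensions and the assumption that every reassigned UE retains at least one candidate are illustrative.

```python
import numpy as np

def select_controller_aps(beta, Lp, rng=np.random.default_rng(0)):
    # beta is the T x M LSFC matrix; returns ctrl[t], the controller-AP of UE t.
    T, M = beta.shape
    top2 = np.sort(beta, axis=1)[:, -2:]              # two largest LSFCs per UE
    gap = top2[:, 1] - top2[:, 0]
    gamma = (gap.max() + gap.min()) / 2               # threshold of line 6
    # Candidate controller-APs per UE (lines 7-15).
    cand = [set(np.flatnonzero(beta[t].max() - beta[t] <= gamma)) for t in range(T)]
    ctrl = np.array([rng.choice(sorted(cand[t])) for t in range(T)])
    while True:                                       # oversaturation loop (24-35)
        inert = {m for m in range(M) if np.sum(ctrl == m) == Lp}
        over = [m for m in range(M) if np.sum(ctrl == m) > Lp]
        if not over:
            return ctrl
        for m in over:
            ues = np.flatnonzero(ctrl == m)
            # Keep UEs with few alternatives, ties broken by larger LSFC.
            order = sorted(ues, key=lambda t: (len(cand[t]), -beta[t, m]))
            for t in order[Lp:]:
                cand[t] -= inert | {m}
                ctrl[t] = rng.choice(sorted(cand[t]))  # assumes a candidate remains

beta = np.random.default_rng(1).uniform(size=(30, 12))  # toy: 30 UEs, 12 APs
ctrl = select_controller_aps(beta, Lp=4)
```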
### _Distributed Pilot assignment_
Now we propose a distributed pilot assignment scheme. To minimize pilot contamination, we introduce a contamination matrix \(\mathbf{Ad}\), a binary indicator matrix specifying whether any two UEs may share a pilot. Every AP assigns distinct pilots to its served UEs and informs2 its neighboring APs in \(\mathcal{M}_{m}^{nb}\) about its pilot assignment. The construction of \(\mathcal{M}_{m}^{nb}\) is explained in Algorithm 2. If two UEs hold the same pilot and \(\mathbf{Ad}\) indicates that they cannot share it, then a UE is allowed to retain the pilot if the cardinality of its set \(\mathcal{L}_{t}^{p}\) is smaller or, in case of a tie, if its index is smaller than that of the other UE. The detailed procedure for assigning pilots to UEs is outlined in Algorithm 2, which is carried out at the APs.
Footnote 2: The exchange of information about pilots among APs is carried out via the CPU.
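The sketch below is a heavily simplified, centralized stand-in for Algorithm 2: it keeps the contamination-matrix idea but replaces the iterative AP-level conflict resolution with a single greedy sweep over UE indices; the fallback reuse rule and the construction of the leading sets \(\mathcal{U}_{m}\) (from the toy `beta` of the previous sketch) are assumptions.

```python
import numpy as np

def assign_pilots(U_sets, Lp, T):
    # Contamination matrix: UEs appearing together in some U_m must not share a pilot.
    Ad = np.zeros((T, T), dtype=bool)
    for U in U_sets:
        for t in U:
            for tp in U:
                if t != tp:
                    Ad[t, tp] = True
    pilot = -np.ones(T, dtype=int)
    for t in range(T):                        # single greedy sweep over UE indices
        used = {pilot[tp] for tp in range(T) if Ad[t, tp] and pilot[tp] >= 0}
        free = [p for p in range(Lp) if p not in used]
        pilot[t] = free[0] if free else t % Lp   # fallback: reuse despite conflict
    return pilot, Ad

U_sets = [set(np.argsort(beta[:, m])[-4:]) for m in range(beta.shape[1])]  # Lp leaders
pilot, Ad = assign_pilots(U_sets, Lp=4, T=beta.shape[0])
```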
### _UE-AP Clustering_
The distributed UE-AP clustering is performed to improve the macro diversity, as in [5]. Each AP serves at most one UE per pilot in order to minimize pilot contamination. Initially, each AP \(m\) chooses to serve every UE \(t\in\mathcal{T}_{m}^{c}\). Then, for each remaining pilot (i.e., every pilot not used by a UE in \(\mathcal{T}_{m}^{c}\)), the AP \(m\) chooses to serve the UE with the largest LSFC.
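A minimal sketch of this clustering step follows, reusing `beta`, `pilot`, and `ctrl` from the previous snippets; it enforces the one-UE-per-pilot constraint at every AP.

```python
import numpy as np

def cluster_ue_ap(beta, pilot, ctrl, Lp):
    # d[t, m] = 1 iff AP m serves UE t; at most one served UE per pilot per AP.
    T, M = beta.shape
    d = np.zeros((T, M), dtype=int)
    for m in range(M):
        taken = set()
        for t in np.flatnonzero(ctrl == m):      # an AP always serves its own UEs
            d[t, m] = 1
            taken.add(int(pilot[t]))
        for p in set(range(Lp)) - taken:         # one extra UE per remaining pilot
            on_p = np.flatnonzero(pilot == p)
            if on_p.size:
                d[on_p[np.argmax(beta[on_p, m])], m] = 1
    return d

d = cluster_ue_ap(beta, pilot, ctrl, Lp=4)
```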
### _Complexity analysis_
The complexity of graph-coloring based pilot assignment algorithm in [6] is \(\mathcal{O}(\textit{TM}log_{2}\textit{M}+2\textit{TM}+\textit{T}^{2})\). The complexity of scalable PA in [5] is \(\mathcal{O}(\textit{TM}+\textit{TL}_{p}+\textit{ML}_{p})\). The complexity of IAR-based PA in [10] is \(\mathcal{O}(\textit{T}^{2}\textit{M}+\textit{ML}_{p}^{3}+\textit{TM}^{2})\).
The complexity of our proposed Algorithm 1 mainly depends on the execution of lines 24-35; the **while** loop iterates at most \(\textit{M}\) times, given that any AP can be an oversaturated AP at most once, so its worst-case complexity is \(\mathcal{O}(\textit{TM}^{2})\). The complexity of Algorithm 2 depends on finding the set \(\mathcal{M}_{m}^{nb}\) for each AP, which has complexity \(\mathcal{O}(\textit{ML}_{p})\), and on the execution of the **while** loop between lines 12-38. Assuming the **while** loop runs \(\textit{I}\) times, its execution has complexity \(\mathcal{O}(\textit{IML}_{p}^{2})\). Therefore, the overall complexity of Algorithm 2 is \(\mathcal{O}(\textit{ML}_{p}+\textit{IML}_{p}^{2})\). Putting everything together with the AP-UE association, the overall complexity of our proposed scheme is \(\mathcal{O}(\textit{TM}^{2}+\textit{IML}_{p}^{2}+\textit{ML}_{p})\), i.e., \(\mathcal{O}(\textit{TM}^{2}+\textit{IML}_{p}^{2})\). The actual execution time of the aforementioned
```
1:Input:\(\mathcal{M}_{m}^{nb}=\emptyset\), the set of neighboring APs, \(\mathcal{T}_{m}^{c}\), \(\mathcal{L}_{t}^{p}\), \(\mathcal{U}_{m}\), \(count=0\), \(max\_iter\) and \(a_{i,j}=0,\forall i,j,t\)\(\in\)\(\mathcal{T}\) and \(m\in\mathcal{M}\).
2:Returns: Pilot assignment for each UE.
3:for each AP \(m\in\mathcal{M}\)do
4:for\(t\in\mathcal{U}_{m}\)do
5:if\(t\in\mathcal{U}_{m}-\mathcal{T}_{m}^{c}\)then
6: Update \(\mathcal{M}_{m}^{nb}=\mathcal{M}_{m}^{nb}\cup\{m_{t}^{c}\}\), where \(m_{t}^{c}\) represents the controller-AP of UE \(t\).
7: Update \(\mathcal{M}_{m_{t}^{c}}^{nb}=\mathcal{M}_{m_{t}^{c}}^{nb}\cup\{m\}\).
8:endif
9:endfor
10: For all UEs \(t,t^{\prime}\in\mathcal{U}_{m}\) with \(t\neq t^{\prime}\), set \(\textbf{Ad}\{t,t^{\prime}\}=1\).
11:endfor
12:while any UE \(t\in\mathcal{T}\) is not assigned a pilot and \(\mathcal{L}_{t}^{p}\neq\emptyset\)do
13:for each AP \(m\)do
14:for each unassigned UE \(t\in\mathcal{T}_{m}^{c}\)do
15: AP \(m\) assigns a distinct, unused pilot from \(\mathcal{L}_{t}^{p}\) to UE \(t\).
16:endfor
17:endfor
18: Each AP \(m\in\mathcal{M}\) informs the other APs in \(\mathcal{M}_{m}^{nb}\) about the pilots of UEs in \(\mathcal{T}_{m}^{c}\).
19:for each AP \(m\in\mathcal{M}\)do
20:for\(t\in\mathcal{U}_{m}\)do
21:for\(t^{\prime}\in\mathcal{U}_{m}\)do
22:if\(t\) and \(t^{\prime}\) have the same pilot then
23:if\(|\mathcal{L}_{t}^{p}|>|\mathcal{L}_{t^{\prime}}^{p}|\)then
24: The UE \(t^{\prime}\) is allowed to keep its pilot.
25:elseif\(|\mathcal{L}_{t}^{p}|=|\mathcal{L}_{t^{\prime}}^{p}|\)then
26: The UE with lower index between \(t\) and \(t^{\prime}\) is allowed to keep its pilot.
27:endif
28:endif
29:endfor
30:endfor
31:endfor
32: Each AP \(m\in\mathcal{M}\) again informs the other APs in \(\mathcal{M}_{m}^{nb}\) about the pilots of UEs in \(\mathcal{T}_{m}^{c}\).
33:for each AP \(m\)do
34:for each UE \(t\in\mathcal{T}_{m}^{c}\)do
35: Update \(\mathcal{L}_{t}^{p}\) by removing the pilot of every UE \(t^{\prime}\in\mathcal{T}\) for which \(\textbf{Ad}\{t^{\prime},t\}=1\).
36:endfor
37:endfor
38:endwhile
39:if any UE \(t\) is left unassigned then
40: The AP \(m_{t}^{c}\) assigns an unused pilot to the UE \(t\).
41:endif
```
**Algorithm 2** Distributed pilot assignment
processes may be substantially lower than these worst-case complexities suggest, primarily due to the opportunities for extensive parallel execution within the procedure. In Algorithm 1, the sets \(\mathcal{M}_{D}\) and \(\mathcal{M}_{I}\) may be formed at the APs independently. Similarly, each AP in \(\mathcal{M}_{D}\) may run the **while** loop of Algorithm 1 in parallel, which may further reduce the execution time significantly. In Algorithm 2, the pilot assignment is carried out in parallel at the APs, again reducing the execution time.
## IV Performance Evaluation
For the numerical simulations we consider a \(2000\times 2000\) m\({}^{2}\) area in which \(T\) UEs and 100 APs are uniformly distributed. Each AP is equipped with 4 antennas and is placed 10 meters above the UEs. We set \(L_{c}=200\), \(L_{p}=10\), \(\tau_{t}=100\ mW\), and a downlink power of \(1000\ mW\) per AP. The bandwidth is 20 MHz. For the large-scale propagation model we consider the 3GPP Urban Microcell scenario, and for spatial correlation the same local scattering model as in [17]. Results are averaged over 50 network instances with 500 channel realizations each. For all simulations, the P-MMSE precoder [5] is used. For the spectral efficiency comparison we consider centralized schemes, namely scalable PA [5], IAR-sum PA [10], and graph-coloring-based PA [6], as well as a distributed scheme, the survey-propagation-based (survey) PA [12]. For a fair comparison, in the scalable and survey-propagation schemes an AP can serve at most \(L_{p}\) UEs.
### _Downlink Operations_
#### IV-A1 Comparison with centralized schemes
Fig. 1 shows the 90%-likely downlink SE for different numbers of UEs. In the scenario with 100 UEs, the proposed distributed scheme outperforms the scalable PA scheme by 2%, the graph-coloring PA scheme by 2%, and the IAR-sum PA scheme by 6%. Similarly, the improvements with 150 UEs are 3%, 18%, and 9%, respectively, and with 200 UEs they are 7%, 36%, and 7%, respectively. It can also be observed that as the number of UEs increases, the 90%-likely downlink SE of all schemes decreases, but our proposed scheme degrades less than all other schemes. These results demonstrate the superiority of our proposed scheme in terms of 90%-likely downlink SE over all other schemes for all user densities.
Similarly, Fig. 2 plots the average downlink SE against the number of UEs. For \(T=100\), our proposed scheme outperforms the graph-coloring PA and the IAR-sum PA by 22% and 1%, respectively. However, it is slightly outperformed (by 1%) by the scalable PA. For \(T=150\), the proposed scheme outperforms both the graph-coloring-based PA and the IAR-sum PA, while achieving performance comparable to the scalable PA. Finally, for \(T=200\), the proposed scheme surpasses all other schemes.
#### IV-A2 Comparison with a distributed scheme
Fig. 3 shows the CDF of the downlink SE for the distributed PA schemes. To decrease the computational cost of the distributed survey PA scheme, we set the ratio of the number of UEs to the number of pilots to \(3\). The plot shows that the proposed scheme outperforms the survey-propagation-based scheme, demonstrating its superiority.
Fig. 1: 90%-likely downlink SE vs \(T\).
Fig. 2: Average downlink SE vs \(T\).
### _Uplink Operations_
#### IV-B1 Comparison with centralized schemes
Fig. 4 shows the 90%-likely uplink spectral efficiency for different numbers of UEs. In a scenario involving 100 UEs, the proposed approach outperforms the scalable PA technique by 1%, the graph-coloring-based PA scheme by 36%, and the IAR-sum PA scheme by 4%. Similarly, the improvements with 150 UEs are 3%, 72%, and 6%, respectively, and with 200 UEs they are 11%, 105%, and 9%, respectively. It can also be seen that as the number of UEs increases, the 90%-likely uplink SE of all schemes decreases; however, our proposed scheme degrades less than all other schemes.
Fig. 5 plots the average uplink SE against the number of UEs. For \(T=100\), our proposed scheme outperforms the graph-coloring PA by approximately 17% and the IAR-sum PA by nearly 2%. It is, however, outperformed (by \(\leq\) 1%)
Fig. 4: 90%-likely uplink SE vs \(T\).
Fig. 3: CDF of downlink SE, when \(\frac{T}{L_{p}}=3\).
by the scalable PA. The proposed scheme surpasses both the graph-coloring-based PA and the IAR-sum PA by a significant margin for \(T=150\), while matching the scalable PA performance. Finally, for \(T=200\), our proposed scheme surpasses all other schemes.
#### IV-B2 Comparison with a distributed scheme
Fig. 6 shows the uplink SE plot for the distributed PA schemes. Again, to decrease the computational cost of the distributed survey PA scheme, we set the ratio of the number of UEs to the number of pilots to \(3\). In terms of 90%-likely SE, the proposed scheme outperforms the survey-propagation-based scheme by 2% for \(T=45\) and by nearly 4% for \(T=60\), indicating its superiority.
### _Discussion_
The superior performance of our proposed distributed scheme in terms of both 90%-likely downlink and uplink SE demonstrates its ability to outperform not only the distributed survey PA scheme,
Fig. 5: Average uplink SE vs \(T\).
Fig. 6: CDF of uplink SE, when \(\frac{T}{L_{p}}=3\).
but also all of the aforementioned centralized schemes, with the performance gap widening as the number of UEs grows. This is primarily attributable to our approach's focus on reducing pilot contamination for each UE by assigning distinct pilots to UEs that are more susceptible to contamination, thus improving fairness among UEs in terms of contamination. Although our proposed scheme does not target the average SE, it still achieves higher average downlink and uplink SE than the IAR-sum and graph-coloring PA schemes in all scenarios. Our approach has a modest disadvantage in terms of both average downlink and uplink SE relative to the scalable PA scheme in low user-density scenarios; this minor difference demonstrates that, despite being distributed, our approach is competitive with centralized approaches.
## V Conclusion
Our work shows that the proposed distributed pilot assignment technique for distributed mMIMO systems can substantially enhance spectral efficiency compared to existing centralized and distributed schemes. Being distributed, the strategy not only boosts spectral efficiency but also reduces latency and increases the network's fault tolerance. These findings highlight the importance of distributed resource allocation algorithms for achieving optimal performance in distributed mMIMO systems, and they illustrate the potential benefits of adopting distributed approaches in future distributed mMIMO network planning and deployment.
|
2309.16200 | Max-Sliced Mutual Information | Quantifying the dependence between high-dimensional random variables is
central to statistical learning and inference. Two classical methods are
canonical correlation analysis (CCA), which identifies maximally correlated
projected versions of the original variables, and Shannon's mutual information,
which is a universal dependence measure that also captures high-order
dependencies. However, CCA only accounts for linear dependence, which may be
insufficient for certain applications, while mutual information is often
infeasible to compute/estimate in high dimensions. This work proposes a middle
ground in the form of a scalable information-theoretic generalization of CCA,
termed max-sliced mutual information (mSMI). mSMI equals the maximal mutual
information between low-dimensional projections of the high-dimensional
variables, which reduces back to CCA in the Gaussian case. It enjoys the best
of both worlds: capturing intricate dependencies in the data while being
amenable to fast computation and scalable estimation from samples. We show that
mSMI retains favorable structural properties of Shannon's mutual information,
like variational forms and identification of independence. We then study
statistical estimation of mSMI, propose an efficiently computable neural
estimator, and couple it with formal non-asymptotic error bounds. We present
experiments that demonstrate the utility of mSMI for several tasks,
encompassing independence testing, multi-view representation learning,
algorithmic fairness, and generative modeling. We observe that mSMI
consistently outperforms competing methods with little-to-no computational
overhead. | Dor Tsur, Ziv Goldfeld, Kristjan Greenewald | 2023-09-28T06:49:25Z | http://arxiv.org/abs/2309.16200v1 | # Max-Sliced Mutual Information
###### Abstract
Quantifying the dependence between high-dimensional random variables is central to statistical learning and inference. Two classical methods are canonical correlation analysis (CCA), which identifies maximally correlated projected versions of the original variables, and Shannon's mutual information, which is a universal dependence measure that also captures high-order dependencies. However, CCA only accounts for linear dependence, which may be insufficient for certain applications, while mutual information is often infeasible to compute/estimate in high dimensions. This work proposes a middle ground in the form of a scalable information-theoretic generalization of CCA, termed max-sliced mutual information (mSMI). mSMI equals the maximal mutual information between low-dimensional projections of the high-dimensional variables, which reduces back to CCA in the Gaussian case. It enjoys the best of both worlds: capturing intricate dependencies in the data while being amenable to fast computation and scalable estimation from samples. We show that mSMI retains favorable structural properties of Shannon's mutual information, like variational forms and identification of independence. We then study statistical estimation of mSMI, propose an efficiently computable neural estimator, and couple it with formal non-asymptotic error bounds. We present experiments that demonstrate the utility of mSMI for several tasks, encompassing independence testing, multi-view representation learning, algorithmic fairness, and generative modeling. We observe that mSMI consistently outperforms competing methods with little-to-no computational overhead.
## 1 Introduction
Dependence measures between random variables are fundamental in statistics and machine learning for tasks spanning independence testing [1, 2, 3], clustering [4, 5], representation learning [6, 7], and self-supervised learning [8, 9, 10]. There is a wide array of measures quantifying different notions of dependence, with varying statistical and computational complexities. The simplest is the Pearson correlation coefficient [11], which only captures linear dependencies. At the other extreme is Shannon's mutual information [12, 13], which is a universal dependence measure that is able to identify arbitrarily intricate dependence structures. Despite its universality and favorable properties, accurately estimating mutual information from data is infeasible in high-dimensional settings. First, mutual information estimation suffers from the curse of dimensionality, whereby convergence rates deteriorate exponentially with
dimension [14]. Additionally, computing mutual information requires integrating log-likelihood ratios over a high-dimensional ambient space, which is generally intractable.
Between these two extremes is the popular canonical correlation analysis (CCA) [15], which identifies maximally correlated linear projections of variables. However, classical CCA still only captures linear dependence, which has inspired nonlinear extensions such as Hirschfeld-Gebelein-Renyi (HGR) maximum correlation [16, 17, 18], kernel CCA [19, 20], deep CCA [21, 7], and various other generalizations [22, 23, 24, 25]. However, HGR is computationally infeasible, while kernel and deep CCA can be burdensome in high dimensions as they require optimization over reproducing kernel Hilbert spaces or deep neural networks (NNs), respectively. To overcome these shortcomings, this work proposes max-sliced mutual information (mSMI)--a scalable information-theoretic extension of CCA that captures the full dependence structure while only requiring optimization over linear projections.
### Contributions
The mSMI is defined as the maximal mutual information between linear projections of the variables. Namely, the \(k\)-dimensional mSMI between \(X\) and \(Y\) with values in \(\mathbb{R}^{d_{x}}\) and \(\mathbb{R}^{d_{y}}\), respectively, is1
Footnote 1: The parameter \(k\) is fixed and small compared to the ambient dimensions \(d_{x},d_{y}\), often simply set as \(k=1\).
\[\overline{\mathsf{S}\mathsf{I}}_{k}(X;Y):=\sup_{(\mathrm{A},\mathrm{B})\in \mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})}\mathsf{I}(\mathrm{A}^{\intercal }X;\mathrm{B}^{\intercal}Y), \tag{1}\]
where \(\mathrm{St}(k,d)\) is the Stiefel manifold of \(d\times k\) matrices with orthonormal columns. Unlike the nonlinear CCA variants that use nonlinear feature extractors in the high-dimensional ambient space, mSMI retains the linear projections of CCA and captures nonlinear structures in the _low-dimensional_ feature space. This is done by using the mutual information between the projected variables, rather than the correlation, as the optimization objective. Beyond being considerably simpler from a computational standpoint, this crucial difference allows mSMI to identify the full dependence structure, akin to classical mutual information. mSMI can also be viewed as the maximized version of the average-sliced mutual information (aSMI) [26, 27], which averages \(\mathsf{I}(\mathrm{A}^{\intercal}X;\mathrm{B}^{\intercal}Y)\) with respect to (w.r.t.) the Haar measure over \(\mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})\). However, we demonstrate that compared to aSMI, mSMI benefits from improved neural estimation error bounds and a clearer interpretation.
We show that mSMI inherits important properties of mutual information, including identification of independence, tensorization, and variational forms. For jointly Gaussian \((X,Y)\), the optimal mSMI projections coincide with those of \(k\)-dimensional CCA [28], posing mSMI as a natural information-theoretic generalization. Beyond the Gaussian case, the solutions differ and mSMI may yield more effective representations for downstream tasks due to the intricate dependencies captured by mutual information. We demonstrate this superiority empirically for multi-view representation learning.
For efficient computation, we propose an mSMI neural estimator based on the Donsker-Varadhan (DV) variational form [29]. Neural estimators have seen a surge in interest due to their scalability and compatibility with gradient-based optimization [30, 31, 32, 33, 34, 35, 36, 37]. Our estimator employs a single model that composes the projections with the NN proxy of the DV critic and jointly optimizes them. This results in both the estimated mSMI value and the optimal projection matrices. Building on recent analysis of neural estimation of \(f\)-divergences
[38, 39], we establish non-asymptotic error bounds that scale as \(O\big{(}k^{1/2}(\ell^{-1/2}+kn^{-1/2})\big{)}\), where \(\ell\) and \(n\) are the numbers of neurons and \((X,Y)\) samples, respectively. Equating \(\ell\) and \(n\) results in the (minimax optimal) parametric estimation rate, which highlights the scalability of mSMI and its compatibility to modern learning settings.
In our empirical investigation, we first demonstrate that our mSMI neural estimator converges orders of magnitude faster than that of aSMI [27]. This is because the latter requires (parallel) training of many neural estimators corresponding to different projection directions, while the mSMI estimator optimizes a single combined model. Notwithstanding the reduction in computational overhead, we show that mSMI outperforms average-slicing for independence testing. Next, we compare mSMI with deep CCA [21, 7] by examining downstream classification accuracy based on representations obtained from both methods in a multi-view learning setting. Remarkably, we observe that even the linear mSMI projections outperform nonlinear representations obtained from deep CCA. We also consider an application to algorithmic fairness under the infomin framework [40]. Replacing their generalized Pearson correlation objective with mSMI, we again observe superior performance in the form of more fair representations whose utility remains on par with the fairness-agnostic model. Lastly, we devise a max-sliced version of the InfoGAN by replacing the classic mutual information regularizer with its max-sliced analog. We show that despite the low-dimensional projections, the max-sliced InfoGAN successfully learns to disentangle the latent space and generates quality samples.
## 2 Background and Preliminaries
**Notation.** For \(a,b\in\mathbb{R}\), we use the notation \(a\wedge b=\min\{a,b\}\) and \(a\lor b=\max\{a,b\}\). For \(d\geq 1\), \(\|\cdot\|\) is the Euclidean norm in \(\mathbb{R}^{d}\). The Stiefel manifold of \(d\times k\) matrices with orthonormal columns is denoted by \(\mathrm{St}(k,d)\). For a \(d\times k\) matrix \(\mathrm{A}\), we use \(\mathfrak{p}^{\mathrm{A}}:\mathbb{R}^{d}\to\mathbb{R}^{k}\) for the orthogonal projection onto the row space of \(\mathrm{A}\). For \(\mathrm{A}\in\mathbb{R}^{d\times k}\) with \(\mathrm{rank(A)}=r\leq k\wedge d\), we write \(\sigma_{1}(\mathrm{A}),\ldots,\sigma_{r}(\mathrm{A})\) for its non-zero singular values, and assume without loss of generality (w.l.o.g.) that they are arranged in descending order. Similarly, the eigenvalues of a square matrix \(\Sigma\in\mathbb{R}^{d\times d}\) are denoted by \(\lambda_{1}(\Sigma),\ldots,\lambda_{d}(\Sigma)\). Let \(\mathcal{P}(\mathbb{R}^{d})\) denote the space of Borel probability measures on \(\mathbb{R}^{d}\). For \(\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})\), we use \(\mu\otimes\nu\) to denote a product measure, while \(\mathrm{spt}(\mu)\) designates the support of \(\mu\). All random variables throughout are assumed to be continuous w.r.t. the Lebesgue measure. For a measurable map \(f\), the pushforward of \(\mu\) under \(f\) is denoted by \(f_{\sharp}\mu=\mu\circ f^{-1}\), i.e., if \(X\sim\mu\) then \(f(X)\sim f_{\sharp}\mu\). For a jointly distributed pair \((X,Y)\sim\mu_{XY}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}})\), we write \(\Sigma_{X}\) and \(\Sigma_{XY}\) for covariance matrix of \(X\) and cross-covariance matrix of \((X,Y)\), respectively.
Canonical correlation analysis.CCA is a classical method for devising maximally correlated linear projections of a pair of random variables \((X,Y)\sim\mu_{XY}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}})\) via [15]
\[(\theta_{\mathsf{CCA}},\phi_{\mathsf{CCA}})=\operatorname*{argmax}_{(\theta,\phi)\in\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}}\frac{\theta^{\mathsf{T}}\Sigma_{XY}\phi}{\sqrt{(\theta^{\mathsf{T}}\Sigma_{X}\theta)(\phi^{\mathsf{T}}\Sigma_{Y}\phi)}}=\operatorname*{argmax}_{\begin{subarray}{c}(\theta,\phi)\in\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}:\\ \theta^{\mathsf{T}}\Sigma_{X}\theta=\phi^{\mathsf{T}}\Sigma_{Y}\phi=1\end{subarray}}\theta^{\mathsf{T}}\Sigma_{XY}\phi, \tag{2}\]
where the former objective is the correlation coefficient \(\rho(\theta^{\intercal}X,\phi^{\intercal}Y)\) between the projected variables and the equality follows from invariance of \(\rho\) to scaling. The global optimum has an analytic form as \((\theta_{\mathsf{CCA}},\phi_{\mathsf{CCA}})=(\Sigma_{X}^{-1/2}\theta_{1},\Sigma_ {Y}^{-1/2}\phi_{1})\), where \((\theta_{1},\phi_{1})\) is the (unit-length) top left and right singular vector pair associated the largest singular value of \(\mathrm{T}_{XY}\coloneqq\Sigma_{X}^{-1/2}\Sigma_{XY}\Sigma_{Y}^{-1/2}\in \mathbb{R}^{d_{x}\times d_{y}}\). This solution is efficiently computable with \(O((d_{x}\lor d_{y})^{3})\) complexity, given that the population correlation matrices are available. CCA extends to \(k\)-dimensional projections via the optimization [28]
\[\max_{\begin{subarray}{c}(\mathrm{A},\mathrm{B})\in\mathbb{R}^{d_{x}\times k}\times\mathbb{R}^{d_{y}\times k}\\ \mathrm{A}^{\intercal}\Sigma_{X}\mathrm{A}=\mathrm{B}^{\intercal}\Sigma_{Y}\mathrm{B}=\mathrm{I}_{k}\end{subarray}}\mathrm{tr}(\mathrm{A}^{\intercal}\Sigma_{XY}\mathrm{B}), \tag{3}\]
with the optimal CCA matrices being \((\mathrm{A}_{\mathsf{CCA}},\mathrm{B}_{\mathsf{CCA}})=(\Sigma_{X}^{-1/2} \mathrm{U}_{k},\Sigma_{Y}^{-1/2}\mathrm{V}_{k})\), where \(\mathrm{U}_{k}\) and \(\mathrm{V}_{k}\) are the matrices of the first \(k\) left- and right-singular vectors of \(\mathrm{T}_{XY}\). Then the optimal objective value is the sum of the top \(k\) singular values of \(\mathrm{T}_{XY}\) (the Ky Fan \(k\)-norm of \(\mathrm{T}_{XY}\)).
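For concreteness, the following numpy sketch computes the \(k\)-dimensional CCA solution of (3) from population covariances; the eigendecomposition-based inverse square root assumes nonsingular covariance matrices.

```python
import numpy as np

def cca(Sx, Sy, Sxy, k):
    # k-dimensional CCA from population covariances, following (3).
    def inv_sqrt(S):
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(w ** -0.5) @ Q.T
    Sx_i, Sy_i = inv_sqrt(Sx), inv_sqrt(Sy)
    U, s, Vt = np.linalg.svd(Sx_i @ Sxy @ Sy_i)   # SVD of T_XY
    return Sx_i @ U[:, :k], Sy_i @ Vt[:k].T, s[:k]
```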
Divergences and information measures.Let \(\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})\) satisfy \(\mu\ll\nu\), i.e., \(\mu\) is absolutely continuous w.r.t. \(\nu\). The Kullback-Leibler (KL) divergence is defined as \(\mathsf{D}(\mu\|\nu)\coloneqq\int_{\mathbb{R}^{d}}\log(d\mu/d\nu)d\mu\). We have \(\mathsf{D}(\mu\|\nu)\geq 0\), with equality if and only if (iff) \(\mu=\nu\). Mutual information and differential entropy are defined from the KL divergence as follows. Let \((X,Y)\sim\mu_{XY}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}})\) and denote the corresponding marginal distributions by \(\mu_{X}\) and \(\mu_{Y}\). The mutual information between \(X\) and \(Y\) is given by \(\mathsf{l}(X;Y)\coloneqq\mathsf{D}(\mu_{XY}\|\mu_{X}\otimes\mu_{Y})\) and serves as a measure of dependence between those random variables. The differential entropy of \(X\) is defined as \(\mathsf{h}(X)=\mathsf{h}(\mu_{X})\coloneqq-\mathsf{D}(\mu_{X}\|\mathrm{Leb})\). Mutual information between (jointly) continuous variables and differential entropy are related via \(\mathsf{l}(X;Y)=\mathsf{h}(X)+\mathsf{h}(Y)-\mathsf{h}(X,Y)\); decompositions in terms of conditional entropies are also available [12].
## 3 Max-Sliced Mutual Information
We now define the \(k\)-dimensional mSMI, establish structural properties thereof, and explore the Gaussian setting and its connections to CCA. We focus here on the case of (linear) \(k\)-dimensional projections and discuss extensions to nonlinear slicing in Section 3.3.
**Definition 1** (Max-sliced mutual information).: _For \(1\leq k\leq d_{x}\wedge d_{y}\), the \(k\)-dimensional mSMI between \((X,Y)\sim\mu_{XY}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}})\) is \(\overline{\mathsf{Sl}}_{k}(X;Y)\coloneqq\sup_{(\mathrm{A},\mathrm{B})\in \mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})}\mathsf{l}(\mathrm{A}^{\intercal }X;\mathrm{B}^{\intercal}Y)\), where \(\mathrm{St}(k,d)\) is the Stiefel manifold of \(d\times k\) matrices with orthonormal columns._
The mSMI measures Shannon's mutual information between the most informative \(k\)-dimensional projections of \(X\) and \(Y\). It can be viewed as a maximized version of the aSMI \(\underline{\mathsf{Sl}}_{k}(X;Y)\) from [26, 27], defined as the integral of \(\mathsf{l}(\mathrm{A}^{\intercal}X;\mathrm{B}^{\intercal}Y)\) w.r.t. the Haar measure over \(\mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})\). For \(d=d_{x}=d_{y}\), we have \(\underline{\mathsf{Sl}}_{d}(X;Y)=\overline{\mathsf{Sl}}_{d}(X;Y)=\mathsf{l}(X;Y)\) due to invariance of mutual information to bijections. The supremum in mSMI is achieved since the Stiefel manifold is compact and the function \((\mathrm{A},\mathrm{B})\mapsto\mathsf{l}(\mathrm{A}^{\intercal}X;\mathrm{B}^{ \intercal}Y)\) is Lipschitz and thus continuous (Lemma 2 of [27]).
**Remark 1** (Multivariate and conditional mSMI).: _It is natural to extend the mSMI definition above to the multivariate and conditional cases. Let \((X,Y,Z)\sim\mu_{XYZ}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}} \times\mathbb{R}^{d_{z}})\). The \(k\)-dimensional multivariate and conditional mSMI functionals are, respectively,
\(\overline{\mathsf{Sl}}_{k}(X,Y;Z)\coloneqq\max_{\mathrm{A,B,C}}\mathsf{l}( \mathsf{A}^{\mathrm{T}}X,\mathsf{B}^{\mathrm{T}}Y;\mathrm{C}^{\mathrm{T}}Z)\) and \(\overline{\mathsf{Sl}}_{k}(X;Y|Z)\coloneqq\max_{\mathrm{A,B,C}}\mathsf{l}( \mathsf{A}^{\mathrm{T}}X;\mathsf{B}^{\mathrm{T}}Y|\mathrm{C}^{\mathrm{T}}Z)\). Connections between \(\overline{\mathsf{Sl}}_{k}(X;Y)\) and its multivariate and conditional versions are given in the proposition to follow. We also note that one may generalize the definition of \(\overline{\mathsf{Sl}}_{k}(X;Y)\) to allow for projections into feature spaces of different dimensions, i.e., \(\mathrm{A}\in\mathrm{St}(k_{x},d_{x})\) and \(\mathrm{B}\in\mathrm{St}(k_{y},d_{y})\), for \(k_{x}\neq k_{y}\). We expect our theory to extend to that case, but leave further exploration for future work._
**Remark 2** (Max-sliced entropy).: _In the spirit of mSMI, we define the \(k\)-dimensional max-sliced (differential) entropy of \(X\sim\mu_{X}\in\mathcal{P}(\mathbb{R}^{d})\) as \(\overline{\mathsf{sh}}_{k}(X)\coloneqq\overline{\mathsf{sh}}_{k}(\mu):=\sup_{\mathrm{A}\in\mathrm{St}(k,d)}\mathsf{h}(\mathrm{A}^{\mathrm{T}}X)\). An important property of classical differential entropy is the maximum entropy principle [12], which finds the highest-entropy distribution within a given class. In Appendix B, we study the max-sliced entropy maximizing distribution in several common scenarios. For instance, we show that \(\overline{\mathsf{sh}}_{k}\) is maximized by the Gaussian distribution under a fixed (mean and) covariance constraint. Namely, letting \(\mathcal{P}_{1}(m,\Sigma):=\big{\{}\mu\in\mathcal{P}(\mathbb{R}^{d}):\,\mathrm{spt}(\mu)=\mathbb{R}^{d}\,,\,\mathbb{E}_{\mu}[X]=m\,,\,\mathbb{E}_{\mu}\big{[}(X-m)(X-m)^{\mathsf{T}}\big{]}=\Sigma\big{\}}\), we have \(\operatorname*{argmax}_{\mu\in\mathcal{P}_{1}(m,\Sigma)}\overline{\mathsf{sh}}_{k}(\mu)=\mathcal{N}(m,\Sigma)\)._
**Remark 3** (Sliced divergences).: _The slicing technique has originated as a means to address scalability issues concerning statistical divergences. Significant attention was devoted to sliced Wasserstein distances as discrepancy measures between probability distributions [41, 42, 43, 44, 45, 46, 47]. As such, the sliced Wasserstein distance differs from mutual information and its sliced variants, which quantify dependence between random variables, rather than discrepancy per se. Additionally, as Wasserstein distances are rooted in optimal transport theory, they heavily depend on the geometry of the underlying data space. Mutual information, on the other hand, is induced by the KL divergence, which only depends on the log-likelihood of the considered distributions and overlooks the geometric aspects._
### Structural Properties
The following proposition lists useful properties of the mSMI, which are similar to those of the average-sliced variant (cf. [27, Proposition 1]) as well as Shannon's mutual information itself.
**Proposition 1** (Structural properties).: _The following properties hold:_
_1._ _Bounds:_ _For any integers_ \(k_{1}<k_{2}\)_:_ \(\underline{\mathsf{Sl}}_{k_{1}}(X;Y)\leq\overline{\mathsf{Sl}}_{k_{1}}(X;Y) \leq\overline{\mathsf{Sl}}_{k_{2}}(X;Y)\leq\mathsf{l}(X;Y)\)_._
_2._ _Identification of independence:___\(\overline{\mathsf{Sl}}_{k}(X;Y)\geq 0\) _with equality iff_ \((X,Y)\) _are independent._
_3._ _KL divergence representation:_ _We have_
\[\overline{\mathsf{Sl}}_{k}(X;Y)=\sup_{(\mathrm{A,B})\in\mathrm{St}(k,d_{x}) \times\mathrm{St}(k,d_{y})}\mathsf{D}\big{(}(\mathsf{p}^{\mathrm{A}},\mathsf{ p}^{\mathrm{B}})_{\#}\mu_{XY}\big{\|}(\mathsf{p}^{\mathrm{A}},\mathsf{p}^{ \mathrm{B}})_{\#}\mu_{X}\otimes\mu_{Y}\big{)},\]
_4._ _Sub-chain rule:_ _For any random variables_ \(X_{1},\ldots,X_{n},Y\)_, we have_
\[\overline{\mathsf{Sl}}_{k}(X_{1},\ldots,X_{n};Y)\leq\overline{\mathsf{Sl}}_{k }(X_{1};Y)+\sum_{i=2}^{n}\overline{\mathsf{Sl}}_{k}(X_{i};Y|X_{1},\ldots,X_{i- 1}).\]
_5._ _Tensorization:_ _For mutually independent_ \(\{(X_{i},Y_{i})\}_{i=1}^{n}\)_,_ \(\overline{\mathsf{Sl}}_{k}\big{(}\{X_{i}\}_{i=1}^{n};\{Y_{i}\}_{i=1}^{n}\big{)} \!=\!\sum\limits_{i=1}^{n}\overline{\mathsf{Sl}}_{k}(X_{i};\!Y_{i})\)_._
The proof follows by arguments similar to those in the average-sliced case, but is given for completeness in Supplement A.1. Of particular importance are Properties 2 and 3. The former renders mSMI sufficient for independence testing despite being significantly less complex than the classical mutual information between the high-dimensional variables. The latter, which represents mSMI as a supremized KL divergence, is the basis for the neural estimation techniques explored in Section 4.
**Remark 4** (Relation to average-SMI).: _Beyond the inequality relationship in Property 1 above, Proposition 4 in [26] (paraphrased) shows that for matrices \(\mathrm{W}_{x},\mathrm{W}_{y}\) and vectors \(b_{x},b_{y}\) of appropriate dimensions we have \(\sup_{\mathrm{W}_{x},\mathrm{W}_{y},b_{x},b_{y}}\underline{\mathsf{Sl}}_{1}(\mathrm{W}_{x}^{\mathsf{T}}X+b_{x};\mathrm{W}_{y}^{\mathsf{T}}Y+b_{y})=\overline{\mathsf{Sl}}_{1}(X;Y)\), and the relation readily extends to projection dimension \(k>1\). In words, optimizing the aSMI over linear transformations of the high-dimensional data vectors coincides with the max-sliced version. This further justifies the interpretation of \(\overline{\mathsf{Sl}}_{k}(X;Y)\) as the information between the two most informative representations of \(X,Y\) in a \(k\)-dimensional feature space. It also suggests that mSMI is well-suited for feature extraction tasks, as explored in Section 5.3 ahead._
### Gaussian Max-SMI versus CCA
The mSMI is an information-theoretic extension of the CCA coefficient \(\rho_{\mathsf{CCA}}(X,Y)\), which is able to capture higher order dependencies. Interestingly, when \((X,Y)\) are jointly Gaussian, the two notions coincide. We next state this relation and provide a closed-form expression for the Gaussian mSMI.
**Proposition 2** (Gaussian mSMI).: _Let \(X\sim\mathcal{N}(m_{X},\Sigma_{X})\) and \(Y\sim\mathcal{N}(m_{Y},\Sigma_{Y})\) be \(d_{x}\)- and \(d_{y}\)-dimensional jointly Gaussian vectors with nonsingular covariance matrices and cross-covariance \(\Sigma_{XY}\). For any \(k\leq d_{x}\wedge d_{y}\), we have_
\[\overline{\mathsf{Sl}}_{k}(X;Y)=\mathsf{I}(\mathrm{A}_{\mathsf{CCA}}^{ \mathsf{T}}X;\mathrm{B}_{\mathsf{CCA}}^{\mathsf{T}}Y)=-\frac{1}{2}\sum_{i=1}^ {k}\log\big{(}1-\sigma_{i}(\mathrm{T}_{XY})^{2}\big{)}, \tag{4}\]
_where \((\mathrm{A}_{\mathsf{CCA}},\mathrm{B}_{\mathsf{CCA}})\) are the CCA solutions from (3), \(\mathrm{T}_{XY}=\Sigma_{X}^{-1/2}\Sigma_{XY}\Sigma_{Y}^{-1/2}\in\mathbb{R}^{d_ {x}\times d_{y}}\), and \(\sigma_{k}(\mathrm{T}_{XY})\leq\ldots\leq\sigma_{1}(\mathrm{T}_{XY})\leq 1\) are the top \(k\) singular values of \(\mathrm{T}_{XY}\) (ordered)._
This proposition is proven in Supplement A.2. We first show that the optimization domain of \(\overline{\mathsf{Sl}}_{k}(X;Y)\) can be switched from the product of Stiefel manifolds to the space of all matrices subject to a unit variance constraint (akin to (3)), without changing the mSMI value. This implies that the CCA solutions \((\mathrm{A}_{\mathsf{CCA}},\mathrm{B}_{\mathsf{CCA}})\) from (3) are feasible for mSMI and we establish their optimality using a generalization of the Poincare separation theorem [48, Theorem 2.2]. Specializing Proposition 2 to one-dimensional projections, i.e., when \(k=1\), the mSMI is given in terms of the canonical correlation coefficient \(\rho_{\mathsf{CCA}}(X,Y)\coloneqq\sup_{(\phi,\theta)\in\mathbb{R}^{d_{x}} \times\mathbb{R}^{d_{y}}}\rho(\theta^{\mathsf{T}}X,\phi^{\mathsf{T}}Y)\). Namely,
\[\overline{\mathsf{Sl}}_{1}(X;Y)=\mathsf{I}(\theta_{\mathsf{CCA}}^{\mathsf{T} }X;\phi_{\mathsf{CCA}}^{\mathsf{T}}Y)=-0.5\log\big{(}1-\rho_{\mathsf{CCA}}(X,Y)^{2}\big{)},\]
where \((\theta_{\mathsf{CCA}},\phi_{\mathsf{CCA}})\) are the global optimizers of \(\rho_{\mathsf{CCA}}(X,Y)\).
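The closed form (4) is straightforward to evaluate numerically. The snippet below, reusing `cca()` from the sketch in Section 2, draws a random valid joint covariance and computes the Gaussian mSMI; the dimensions are arbitrary placeholders.

```python
# Gaussian mSMI via (4), reusing cca() from the earlier sketch.
dx, dy, k = 5, 4, 2
rng = np.random.default_rng(1)
G = rng.standard_normal((dx + dy, dx + dy))
S = G @ G.T + np.eye(dx + dy)                  # a valid joint covariance of (X, Y)
Sx, Sy, Sxy = S[:dx, :dx], S[dx:, dx:], S[:dx, dx:]
A_cca, B_cca, sig = cca(Sx, Sy, Sxy, k)
mSI_k = -0.5 * np.sum(np.log(1 - sig ** 2))    # = I(A_cca^T X; B_cca^T Y)
```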
**Remark 5** (Beyond Gaussian data).: _While the mSMI solution coincides with that of CCA in the Gaussian case, this is no longer expected to hold for non-Gaussian distributions. CCA is designed to maximize correlation, while mSMI has Shannon's mutual information between
the projected variables as the optimization objective. Unlike correlation, mutual information captures higher order dependencies between the variables, and hence the optimal mSMI matrices will not generally coincide with \((\mathrm{A}_{\mathsf{CCA}},\mathrm{B}_{\mathsf{CCA}})\). Furthermore, the intricate dependencies captured by mutual information suggest that the optimal mSMI projections may yield representations that are more effective for downstream tasks. We empirically verify this observation in Section 5 on several tasks, including classifications, multi-view representation learning, and algorithmic fairness._
**Remark 6** (Max-sliced entropy and PCA).: _Similarly to the above, the Gaussian max-sliced entropy is related to PCA [43]. In Supplement A.3, we show that for (d-dimensional) \(X\sim\mathcal{N}(m,\Sigma)\), we have \(\overline{\mathsf{sh}}_{k}(X)=\sup_{\mathrm{A}\in\mathrm{St}(k,d)}\mathsf{h}(\mathrm{A}^{\intercal}X)=\mathsf{h}(\mathrm{A}^{\intercal}_{\mathsf{PCA}}X)=0.5\sum_{i=1}^{k}\log\big{(}2\pi e\lambda_{i}(\Sigma)\big{)}\), where \(\mathrm{A}_{\mathsf{PCA}}\) is the optimal PCA matrix and \(\lambda_{1}(\Sigma),\ldots,\lambda_{k}(\Sigma)\) are the top \(k\) eigenvalues of \(\Sigma\) (which are non-negative since \(\Sigma\) is a covariance matrix) [49, 15]. Extrapolating beyond the Gaussian case, this poses max-sliced entropy as an information-theoretic generalization of PCA for unsupervised dimensionality reduction. An analogous extension using the Renyi entropy of order \(2\) was previously considered in [50] for the purpose of downstream classification with binary labels. In that regard, \(\overline{\mathsf{sh}}_{k}(X)\) can be viewed as the \(\alpha\)-Renyi variant when \(\alpha\to 1\)._
### Generalizations Beyond Linear Slicing
The notion of mSMI readily generalizes beyond linear slicing. Fix \(d_{x},d_{y}\geq 1\), \(k\leq d_{x}\wedge d_{y}\), and consider two (nonempty) function classes \(\mathcal{G}\subseteq\{g:\mathbb{R}^{d_{x}}\to\mathbb{R}^{k}\}\) and \(\mathcal{H}\subseteq\{h:\mathbb{R}^{d_{y}}\to\mathbb{R}^{k}\}\).
**Definition 2** (Generalized mSMI).: _The generalized mSMI between \((X,Y)\sim\mu_{XY}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}})\) w.r.t. the classes \(\mathcal{G}\) and \(\mathcal{H}\) is \(\overline{\mathsf{SI}}_{\mathcal{G},\mathcal{H}}(X;Y)\coloneqq\sup_{(g,h)\in \mathcal{G}\times\mathcal{H}}\mathsf{l}\big{(}g(X);h(Y)\big{)}\)._
The generalized variant reduces back to \(\overline{\mathsf{SI}}_{k}(X;Y)\) by taking \(\mathcal{G}=\mathcal{G}_{\mathsf{proj}}\coloneqq\{\mathsf{p}^{\mathrm{A}}: \,\mathrm{A}\in\mathrm{St}(k,d_{x})\}\) and \(\mathcal{H}=\mathcal{H}_{\mathsf{proj}}\coloneqq\{\mathsf{p}^{\mathrm{B}}:\, \mathrm{B}\in\mathrm{St}(k,d_{y})\}\), but otherwise allows more flexibility in the way \((X,Y)\) are mapped into \(\mathbb{R}^{k}\). We also have that if \(\mathcal{G}\subseteq\mathcal{G}^{\prime}\) and \(\mathcal{H}\subseteq\mathcal{H}^{\prime}\), then \(\overline{\mathsf{SI}}_{\mathcal{G},\mathcal{H}}(X;Y)\leq\overline{\mathsf{SI }}_{\mathcal{G}^{\prime},\mathcal{H}^{\prime}}(X;Y)\leq\mathsf{l}(X;Y)\), which corresponds to Property 1 from Proposition 1. Further observations are as follows.
**Proposition 3** (Properties).: _For any classes \(\mathcal{G},\mathcal{H}\), we have that \(\overline{\mathsf{SI}}_{\mathcal{G},\mathcal{H}}\) always satisfies Properties 3-5 from Proposition 1. If further \(\mathcal{G}_{\mathsf{proj}}\subseteq\mathcal{G}\) and \(\mathcal{H}_{\mathsf{proj}}\subseteq\mathcal{H}\), then \(\overline{\mathsf{SI}}_{\mathcal{G},\mathcal{H}}\) also satisfies Property 2._
We omit the proof as it follows by the same argument as Proposition 1, up to replacing the linear projections with the functions \((g,h)\in\mathcal{G}\times\mathcal{H}\). In practice, the classes \(\mathcal{G}\) and \(\mathcal{H}\) are chosen to be parametric, typically realized by artificial NNs. As discussed in Remark 7 ahead, this is well-suited to the neural estimation framework for mSMI (both standard and generalized). Lastly, note that \(\overline{\mathsf{SI}}_{\mathcal{G},\mathcal{H}}(X;Y)\) corresponds to the objective of multi-view representation learning [51], which considers the maximization of the mutual information between NN-based representation of the considered variables. We further investigate this relation in Section 5.3.
## 4 Neural Estimation of Max-SMI
We study estimation of mSMI from data, seeking an efficiently computable and scalable approach subject to formal performance guarantees. Towards that end, we observe that the
mSMI is compatible with neural estimation [30, 39] since it has a natural variational form. In what follows we derive the neural estimator, describe the algorithm to compute it, and provide non-asymptotic error bounds.
### Estimator and Algorithm
Fix \(d_{x},d_{y}\geq 1\), \(k\leq d_{x}\wedge d_{y}\), and \(\mu_{XY}\in\mathcal{P}(\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}})\); we suppress \(k,d_{x},d_{y}\) from our notation of the considered function classes. Neural estimation is based on the DV variational form:2
Footnote 2: One may instead use the form that stems from convex duality: \(\mathsf{l}(U;V)=\sup_{f}\mathbb{E}[f(U,V)]-\mathbb{E}\big{[}e^{f(\tilde{U},\tilde{V})}-1\big{]}\).
\[\mathsf{l}(X;Y)=\sup_{f\in\mathcal{F}}\mathcal{L}_{\mathsf{DV}}(f;\,\mu_{XY}),\quad\mathcal{L}_{\mathsf{DV}}(f;\,\mu_{XY})\coloneqq\mathbb{E}[f(X,Y)]-\log\big{(}\mathbb{E}\big{[}e^{f(\tilde{X},\tilde{Y})}\big{]}\big{)},\]
where \((X,Y)\sim\mu_{XY}\), \((\tilde{X},\tilde{Y})\sim\mu_{X}\otimes\mu_{Y}\), and \(\mathcal{F}\) is the class of all measurable functions \(f:\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}\to\mathbb{R}\) (often referred to as DV potentials) for which the expectations above are finite. As mSMI is the maximal mutual information between projections of \(X,Y\), we have
\[\overline{\mathsf{Sl}}_{k}(X;Y)=\sup_{(\mathrm{A},\mathrm{B})\in\mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})}\sup_{f\in\mathcal{F}}\mathcal{L}_{\mathsf{ DV}}\big{(}f;\,(\mathfrak{p}^{\mathrm{A}},\mathfrak{p}^{\mathrm{B}})_{\sharp} \mu_{XY}\big{)}=\sup_{f\in\mathcal{F}^{\mathsf{proj}}}\mathcal{L}_{\mathsf{DV} }(f;\mu_{XY}),\]
where \(\mathcal{F}^{\mathsf{proj}}\coloneqq\big{\{}f\circ(\mathfrak{p}^{\mathrm{A}}, \mathfrak{p}^{\mathrm{B}})\,:\,f\in\mathcal{F},\,(\mathrm{A},\mathrm{B})\in \mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})\big{\}}\). The RHS above is given by optimizing the DV objective \(\mathcal{L}_{\mathsf{DV}}\) over the _composed_ class \(\mathcal{F}^{\mathsf{proj}}\), which first projects \((X,Y)\mapsto(\mathrm{A}^{\intercal}X,\mathrm{B}^{\intercal}Y)\) and then applies a DV potential \(f:\mathbb{R}^{k}\times\mathbb{R}^{k}\to\mathbb{R}\) to the projected variables.
Neural estimator.Neural estimators parametrize the DV potential by NNs, approximate expectations by sample means, and optimize the resulting empirical objective over parameter space. Let \(\mathcal{F}_{\mathsf{nn}}\) be a class of feedforward NNs with input space \(\mathbb{R}^{k}\times\mathbb{R}^{k}\) and real-valued outputs.3 Given i.i.d. samples \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\) from \(\mu_{XY}\), we first generate negative samples (i.e., from \(\mu_{X}\otimes\mu_{Y}\)) by taking \((X_{1},Y_{\sigma(1)}),\ldots,(X_{n},Y_{\sigma(n)})\), where \(\sigma\in S_{n}\) is a permutation such that \(\sigma(i)\neq i\), for all \(i=1,\ldots,n\). The neural estimator of \(\overline{\mathsf{Sl}}_{k}(X;Y)\) is now given by
Footnote 3: For now, we leave the architecture (number of layers/neurons, parameter bounds, nonlinearity) implicit to allow flexibility of implementation; we will specialize to a concrete class when providing theoretical guarantees.
\[\widehat{\mathsf{Sl}}_{k}^{\mathcal{F}_{\mathsf{nn}}}(X^{n},Y^{n}):=\sup_{f \in\mathcal{F}_{\mathsf{nn}}^{\mathsf{proj}}}\frac{1}{n}\sum_{i=1}^{n}f(X_{i},Y_{i})-\log\left(\frac{1}{n}\sum_{i=1}^{n}e^{f(X_{i},Y_{\sigma(i)})}\right), \tag{5}\]
where \(\mathcal{F}_{\mathsf{nn}}^{\mathsf{proj}}\coloneqq\big{\{}f\circ(\mathfrak{p} ^{\mathrm{A}},\mathfrak{p}^{\mathrm{B}})\,:\,f\in\mathcal{F}_{\mathsf{nn}},\,( \mathrm{A},\mathrm{B})\in\mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})\big{\}}\) is the composition of the NN class with the projection maps. The projection maps can be absorbed into the NN architecture as a first linear layer that maps the \((d_{x}+d_{y})\)-dimensional input to a \(2k\)-dimensional feature vector, which is then further processed by the original NN \(f\in\mathcal{F}_{\mathsf{nn}}\). Note that projection onto the Stiefel manifold can be efficiently and differentiably computed via QR decomposition. Hence, the Stiefel manifold constraint can be easily enforced by setting \(A,B\) to be the projections of unconstrained \(d\times k\) matrices. Further details on the implementation are given in Supplement C.
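A minimal PyTorch sketch of the estimator (5) is given below. The one-hidden-layer critic, the hidden width, and the use of a uniformly random permutation (which, unlike the derangement \(\sigma\), may occasionally leave fixed points) are implementation assumptions; the QR step realizes the Stiefel constraint as described above.

```python
import torch
import torch.nn as nn

class MaxSlicedDV(nn.Module):
    # Estimator (5): unconstrained matrices are mapped to St(k, d) via QR and
    # composed with a small DV critic acting on R^k x R^k.
    def __init__(self, dx, dy, k, hidden=64):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dx, k))
        self.B = nn.Parameter(torch.randn(dy, k))
        self.critic = nn.Sequential(
            nn.Linear(2 * k, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y):
        Aq, _ = torch.linalg.qr(self.A)           # orthonormal columns
        Bq, _ = torch.linalg.qr(self.B)
        u, v = x @ Aq, y @ Bq
        f_joint = self.critic(torch.cat([u, v], dim=1)).mean()
        v_neg = v[torch.randperm(v.shape[0])]     # proxy for the derangement sigma
        f_marg = self.critic(torch.cat([u, v_neg], dim=1)).squeeze(1)
        n = torch.tensor(float(f_marg.shape[0]))
        return f_joint - (torch.logsumexp(f_marg, dim=0) - torch.log(n))

# Gradient-ascent training sketch:
# est = MaxSlicedDV(dx=10, dy=10, k=1)
# opt = torch.optim.Adam(est.parameters(), lr=1e-3)
# for x, y in loader:
#     opt.zero_grad(); (-est(x, y)).backward(); opt.step()
```

Since the projections and the critic form a single compound model, one optimizer updates both jointly, which is precisely what makes this estimator cheaper than its average-sliced counterpart.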
**Remark 7** (Nonlinear slicing).: _For learning tasks that may need more expressive representations of \((X,Y)\), one may employ the nonlinear mSMI variant from Section 3.3. In practice, the classes \(\mathcal{G}=\{g_{\theta}\}\) and \(\mathcal{H}=\{h_{\phi}\}\) are taken to be parametric, realized by NNs. These NNs naturally compose with the DV critic \(f_{\psi}\), resulting in a single compound model \(f_{\psi}\circ(g_{\theta},h_{\phi})\) that is optimized together._
### Performance Guarantees
Neural estimation nominally involves three sources of error: (i) function approximation of the DV potential; (ii) empirical estimation of the means; and (iii) optimization, which comes from employing suboptimal (gradient-based) routines. Our analysis provides sharp non-asymptotic bounds for errors of type (i) and (ii), leaving the account of the optimization error for future work. We focus on a class of \(\ell\)-neuron shallow ReLU networks, although the ideas extend to other nonlinearities and deep architectures. Define \(\mathcal{F}^{\ell}_{\mathsf{nn}}\) as the class of NNs \(f:\mathbb{R}^{k}\times\mathbb{R}^{k}\to\mathbb{R}\), \(f(z)=\sum_{i=1}^{\ell}\beta_{i}\phi\left(\langle w_{i},z\rangle+b_{i}\right) +\langle w_{0},z\rangle+b_{0}\), whose parameters satisfy \(\max_{1\leq i\leq\ell}\|w_{i}\|_{1}\vee|b_{i}|\leq 1\), \(\max_{1\leq i\leq\ell}|\beta_{i}|\leq\frac{a_{\ell}}{2\ell}\), and \(|b_{0}|,\|w_{0}\|_{1}\leq a_{\ell}\), where \(\phi(z)=z\lor 0\) is the ReLU activation and \(a_{\ell}=\log\log\ell\lor 1\).
Consider the neural mSMI estimator \(\widehat{\mathsf{Sl}}^{n,\ell}_{k}\coloneqq\widehat{\mathsf{Sl}}^{\mathcal{F}^{\ell}_{\mathsf{nn}}}_{k}(X^{n},Y^{n})\) (see (5)). We provide convergence rates for it over an appropriate distribution class, drawing upon the results of [38] for neural estimation of \(f\)-divergences. For compact \(\mathcal{X}\subset\mathbb{R}^{d_{x}}\) and \(\mathcal{Y}\subset\mathbb{R}^{d_{y}}\), let \(\mathcal{P}_{\mathsf{ac}}(\mathcal{X}\times\mathcal{Y})\) be the set of all Lebesgue absolutely continuous joint distributions \(\mu_{XY}\) with \(\operatorname{spt}(\mu_{XY})\subseteq\mathcal{X}\times\mathcal{Y}\). Denote the Lebesgue density of \(\mu_{XY}\) by \(f_{XY}\). The distribution class of interest is4
Footnote 4: Here, \(\mathcal{C}^{s}_{\mathsf{c}}(\mathcal{U})\coloneqq\{f\in\mathcal{C}^{s}( \mathcal{U})\,:\,\max_{\alpha:\|\alpha\|_{1}\leq s}\|D^{\alpha}f\|_{\infty, \mathcal{U}}\leq b\}\), where \(D^{\alpha}\), \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\in\mathbb{Z}^{d}_{\geq 0}\), is the partial derivative operator of order \(\sum_{i=1}^{d}\alpha_{i}\). The restriction of \(f:\mathbb{R}^{d}\to\mathbb{R}\) to \(\mathcal{X}\subseteq\mathbb{R}^{d}\) is \(f|_{\mathcal{X}}\).
\[\mathcal{P}_{k}(M,b)\coloneqq\left\{\mu_{XY}\in\mathcal{P}_{\mathsf{ac}}( \mathcal{X}\times\mathcal{Y}):\begin{array}{l}\exists\,r\in\mathcal{C}^{k+3} _{b}(\mathcal{U})\text{ for some open set }\mathcal{U}\supset\mathcal{X}\times \mathcal{Y}\\ \text{s.t. }\log f_{XY}=r|_{\mathcal{X}\times\mathcal{Y}},\ \mathsf{I}(X;Y)\leq M \end{array}\right\}, \tag{6}\]
which, in particular, contains distributions whose densities are bounded from above and below on \(\mathcal{X}\times\mathcal{Y}\) with a smooth extension to an open set covering \(\mathcal{X}\times\mathcal{Y}\). This includes uniform distributions, truncated Gaussians, truncated Cauchy distributions, etc. The following theorem provides the convergence rate for the mSMI neural estimator, uniformly over \(\mathcal{P}_{k}(M,b)\).
**Theorem 1** (Neural estimation error).: _For any \(M,b\geq 0\), we have_
\[\sup_{\mu_{XY}\in\mathcal{P}_{k}(M,b)}\mathbb{E}\left[\left|\overline{ \mathsf{SI}}_{k}(X;Y)-\widehat{\mathsf{S}}^{n,\ell}_{k}\right|\right]\leq Ck^{ \frac{1}{2}}\big{(}\ell^{-\frac{1}{2}}+kn^{-\frac{1}{2}}\big{)},\]
_where the constant \(C\) depends on \(M\), \(b\), \(k\), and the radius of the ambient space, which is given by \(\|\mathcal{X}\times\mathcal{Y}\|\coloneqq\sup_{(x,y)\in\mathcal{X}\times \mathcal{Y}}\|(x,y)\|\)._
The theorem is proven in Supplement A.4 by adapting the error bound from [39, Proposition 2] to hold for \(\mathsf{I}(\mathrm{A}^{\intercal}X;\mathrm{B}^{\intercal}Y)\), uniformly over \((\mathrm{A},\mathrm{B})\in\operatorname{St}(k,d_{x})\times\operatorname{St}(k, d_{y})\). To that end, we show that for any \(\mu_{XY}\in\mathcal{P}_{k}(M,b)\), the log-density of \((\mathrm{A}^{\intercal}X,\mathrm{B}^{\intercal}Y)\sim(\mathfrak{p}^{\mathrm{ A}},\mathfrak{p}^{\mathrm{B}})_{\sharp}\mu_{XY}\) admits an extension (to an open set containing the support) with \(k+3\) continuous and uniformly bounded derivatives.
**Remark 8** (Parametric rate and optimality).: _Taking \(\ell\asymp n\), the resulting rate in Theorem 1 is parametric, and hence minimax optimal. This result implicitly assumes that \(M\) is known when picking the neural net parameters. This assumption can be relaxed to mere existence of (an unknown) \(M\), resulting in an extra \(\operatorname{polylog}(\ell)\) factor multiplying the \(n^{-1/2}\) term._
**Remark 9** (Comparison to average-SMI).: _Neural estimation of classic mutual information under the framework of [39] requires the density to have Hölder smoothness \(s\geq\lfloor(d_{x}+d_{y})/2\rfloor+3\). For \(\overline{\mathsf{SI}}_{k}(X;Y)\), smoothness of \(k+3\) is sufficient (even though the ambient dimension is the same), which means it can be estimated over a larger class of distributions. Similar gains in terms of smoothness levels were observed for aSMI in [27]. Nevertheless, we note that mSMI is significantly more compatible with neural estimation than average-slicing [26, 27]. The mSMI neural estimator integrates the max-slicing into the NN architecture and optimizes a single objective. The aSMI neural estimator from [27] requires an additional Monte Carlo integration step to approximate the integral over the Stiefel manifolds. This results in an extra \(k^{1/2}m^{-1/2}\) term in the error bound, where \(m\) is the number of Monte Carlo samples, introducing significant computational overhead (see Section 5.1)._
**Remark 10** (Non-ReLU networks).: _Theorem 1 employs the neural estimation bound from [39], which, in particular, relies on [52] to control the approximation error. As noted in [39], their bound extends to any other sigmoidal bounded activation with \(\lim_{z\to-\infty}\sigma(z)=0\) and \(\lim_{z\to\infty}\sigma(z)=1\) by appealing to the approximation bound from [53] instead. Doing so would allow relaxing the smoothness requirement on the extension to \(r\in\mathcal{C}_{b}^{k+2}\) in (6), albeit at the expense of scaling the hidden layer parameters as \(\ell^{1/2}\log\ell\). This stands in contrast to the presented ReLU-based bound, where the parameters are bounded independently of \(\ell\)._
## 5 Experiments
### Neural Estimation
We compare the performance of neural estimation methods for mSMI and aSMI on a synthetic dataset of correlated Gaussians. Namely, let \(X,Z\sim\mathcal{N}(0,1)\) be i.i.d. and set \(Y=\rho X+\sqrt{1-\rho^{2}}Z\), for \(\rho\in(0,1)\). The goal is to estimate mSMI and aSMI with \(k\)-dimensional projections between \((X,Y)\). We train our mSMI neural estimator and the aSMI neural estimator from [27, Section 4.2] based on \(n\) i.i.d. samples, and compare their performance as a function of \(n\). Both average and max-sliced algorithms converge at similar rates; however, aSMI has significantly higher time complexity due to the need to train multiple neural estimators (one for each projection direction). This is shown in Figure 1, where we compare the average epoch time for each algorithm against the dataset size. Implementation details are given in Supplement C.
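For reference, the synthetic data and the closed-form target can be sketched as follows (a minimal NumPy sketch; for a bivariate Gaussian with correlation \(\rho\), the mutual information equals \(-\frac{1}{2}\log(1-\rho^{2})\) nats).

```python
import numpy as np

def correlated_gaussians(n: int, rho: float, seed: int = 0):
    """Sample (X, Y) with Y = rho*X + sqrt(1 - rho^2)*Z, X, Z ~ N(0, 1) i.i.d."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    z = rng.standard_normal(n)
    return x, rho * x + np.sqrt(1.0 - rho**2) * z

def gaussian_mi(rho: float) -> float:
    """Ground-truth mutual information (in nats) of the bivariate Gaussian."""
    return -0.5 * np.log(1.0 - rho**2)
```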
Figure 1: Neural estimation performance with \(\rho=0.5\). Convergence vs. \(n\) in upper figures and average epoch time vs. \(n\) in lower figure.
### Independence Testing
In this experiment, we compare mSMI and aSMI for independence testing. We follow the setting from [27, Section 5], generating \(d\)-dimensional samples correlated in a latent \(d^{\prime}\)-dimensional subspace and estimating the information measure to determine dependence. We estimate the aSMI with the method from [27], using \(m=1000\) Monte Carlo samples and the Kozachenko-Leonenko estimator for the mutual information between the projected variables [54]. We then compute AUC-ROC over 100 trials, considering various ambient and projected dimensions. For mSMI, as we cannot differentiate through the Kozachenko-Leonenko estimator, we resort to gradient-free methods. Specifically, we employ the LIPO algorithm from [55] with a stopping criterion of 1000 samples. This choice is motivated by the Lipschitzness of \((\mathrm{A},\mathrm{B})\mapsto\mathrm{I}(\mathrm{A}^{\intercal}X;\mathrm{B}^{ \intercal}Y)\) w.r.t. the Frobenius norm on \(\mathrm{St}(k,d_{x})\times\mathrm{St}(k,d_{y})\) (cf. [27, Lemma 2]). Figure 2 shows that when \(k>2\), mSMI better captures independence than aSMI, particularly in the lower sample regime. We hypothesize that this may be due to the fact that the shared signal lies in a low-dimensional subspace, which mSMI can isolate and perhaps better exploit than aSMI, which averages over all subspaces. When \(k\) is much smaller than the shared signal dimension \(d^{\prime}\), mSMI will fail to capture all the information and may underperform aSMI which takes all slices into account. Results are averaged over 10 seeds. Further implementation details are in Supplement C.
### Multi-View Representation Learning
We next explore mSMI as an information-theoretic generalization of CCA by examining its utility in multi-view representation learning--a popular CCA application. Without using class labels, we obtain mSMI-based \(k\)-dimensional representations of the top and bottom halves of MNIST images (considered as two separate views of the digit image). This is done by computing the \(k\)-dimensional mSMI between the views and using the maximizing projected variables as the representations. We compare to similarly obtained CCA-based representations, following the method of [21].
| \(k\) | Linear CCA | Linear mSMI | MLP DCCA | MLP mSMI |
|---|---|---|---|---|
| 1 | 0.261\(\pm\)0.03 | **0.274\(\pm\)0.02** | 0.284\(\pm\)0.03 | **0.291\(\pm\)0.02** |
| 2 | 0.32\(\pm\)0.02 | **0.346\(\pm\)0.02** | 0.314\(\pm\)0.03 | **0.417\(\pm\)0.02** |
| 4 | 0.42\(\pm\)0.01 | **0.478\(\pm\)0.02** | 0.441\(\pm\)0.04 | **0.546\(\pm\)0.01** |
| 8 | 0.553\(\pm\)0.03 | **0.666\(\pm\)0.01** | 0.645\(\pm\)0.02 | **0.665\(\pm\)0.01** |
| 12 | 0.614\(\pm\)0.02 | **0.751\(\pm\)0.01** | 0.697\(\pm\)0.01 | **0.753\(\pm\)0.01** |
| 16 | 0.673\(\pm\)0.02 | **0.775\(\pm\)0.01** | 0.730\(\pm\)0.02 | **0.779\(\pm\)0.01** |
| 20 | 0.704\(\pm\)0.007 | **0.79\(\pm\)0.006** | 0.774\(\pm\)0.01 | **0.798\(\pm\)0.01** |

Table 1: Downstream classification test accuracy based on MNIST representations learned by CCA and mSMI (best per pair in bold).
Figure 2: ROC-AUC comparison. Dashed and solid lines show results for aSMI [27] and mSMI (ours), respectively. Blue plots correspond to \((d,d^{\prime})=(10,4)\), while red correspond to \((d,d^{\prime})=(20,6)\).
Both linear and nonlinear (parameterized by an MLP neural network) slicing models are optimized with similar initialization and data pipelines but different loss functions. Performance is evaluated via downstream 10-class classification accuracy utilizing the learned top-half representations. Results are averaged over 10 seeds. As shown in Table 1, mSMI outperforms CCA for learning meaningful representations. Interestingly, linear representations learned by mSMI outperform nonlinear representations from the CCA methodology, demonstrating the potency of mSMI. Full implementation details and additional results are given in Supplements C and D, respectively.
We note that aSMI is not considered for this experiment since it does not provide a concrete latent space representation (due to it being an averaged quantity). Moreover, if one were to attempt to maximize aSMI as an objective to derive such a concrete representation, this would simply lead back to computing mSMI; cf. Remark 4.
### Learning Fair Representations
Another common application of dependence measures is learning fair representations of data. We seek a data transformation \(Z=f(X)\) that is useful for predicting some outcome or label \(Y\), while being statistically independent of some sensitive attribute \(A\) (e.g., gender, race, or religion of the subject). In other words, a fair representation is one that is not affected by the subjects' protected attributes, so that downstream predictions are not biased against protected groups, even if the training data may have been biased. Following the setup of [40], we measure utility and fairness using the HGR maximal correlation \(\rho_{\mathsf{HGR}}(\cdot,\cdot)=\sup_{h,g}\rho\big{(}h(\cdot),g(\cdot)\big{)}\), seeking large \(\rho_{\mathsf{HGR}}(Z,Y)\) and small \(\rho_{\mathsf{HGR}}(Z,A)\), where \(h\) and \(g\) are parameterized by NNs. As solving this minimax problem directly is difficult in practice, following [40] we learn \(Z\) by optimizing the bottleneck objective \(\rho_{\mathsf{HGR}}(Z,Y)-\beta\overline{\mathsf{SI}}_{k}(Z,A)\), where the mSMI term is computed with our neural estimator and \(\beta\), \(k\) are hyperparameters.
Table 2 shows results on the US Census Demographic dataset extracted from the 2015 American Community Survey, which has 37 features collected over 74,000 census tracts. Here \(Y\) is the fraction of children below the poverty line in a tract, and \(A\) is the fraction of women in the tract. Following the same experimental setup as [40], the learned \(Z\) is 80-dimensional. As [40] showed that their "Slice" approach significantly outperformed all other baselines on this experiment under a computational constraint5, we apply the same computational constraint to our approach and compare only to Slice and to the "N/A" fairness-agnostic model trained on the bottleneck objective with \(\beta=0\). Note that for \(k>1\), mSMI learns a fairer representation \(Z\) (lower \(\rho_{\mathsf{HGR}}(Z,A)\)) than Slice, while retaining a utility \(\rho_{\mathsf{HGR}}(Z,Y)\) on par with the fairness-agnostic N/A model. We emphasize that, due to the reasons outlined in Section 5.3, aSMI is not suitable for the considered task and is thus not included in the comparison.
| | **N/A** | **Slice** [40] | \(k=1\) | \(k=2\) | \(k=3\) | \(k=4\) | \(k=5\) | **\(k=6\)** | \(k=7\) |
|---|---|---|---|---|---|---|---|---|---|
| \(\rho_{\mathsf{HGR}}(Z,Y)\uparrow\) | 0.949 | 0.967 | 0.955 | 0.958 | 0.952 | 0.942 | 0.940 | 0.957 | 0.933 |
| \(\rho_{\mathsf{HGR}}(Z,A)\downarrow\) | 0.795 | 0.116 | 0.220 | 0.099 | 0.067 | 0.048 | 0.029 | **0.026** | 0.047 |

Table 2: Learning a fair representation of the US Census Demographic dataset, following the setup of [40]. Columns \(k=1,\ldots,7\) are mSMI (ours). Results are shown as the median over 10 runs with random data splits. The fairest result is \(k=6\).
Results on the Adult dataset are shown in Supplement E.
### Max-Sliced InfoGAN
We present an application of max-slicing to generative modeling under the InfoGAN framework [56]. The InfoGAN learns a disentangled latent space by maximizing the mutual information between a latent code variable and the generated data. We revisit this architecture but replace the classical mutual information regularizer in the InfoGAN objective with mSMI. Our max-sliced InfoGAN is tested on the MNIST and Fashion-MNIST datasets. Figure 3 presents the max-sliced InfoGAN performance for several projection dimensions. We consider 3 latent codes \((C_{1},C_{2},C_{3})\), which automatically learn to encode different features of the generated data. We vary the values of \(C_{1}\), which is a 10-state discrete variable, along the column (and consider random values of \((C_{2},C_{3})\) along the rows). Evidently, \(C_{1}\) has successfully disentangled the 10 class labels and the quality of generated samples is on par with past implementations [56, 27]. It is noteworthy that since mSMI relies on low-dimensional projections, the resulting InfoGAN uses a reduced number of parameters (at the negligible expense of optimizing over the linear projections). Additional implementation details are found in Supplement C.
## 6 Conclusion
This paper proposed mSMI, an information-theoretic generalization of CCA. mSMI captures the full dependence structure between two high-dimensional random variables, while only requiring an optimized linear projection of the data. We showed that mSMI inherits important properties of Shannon's mutual information and that, when the random variables are Gaussian, the mSMI optimal solutions coincide with classic \(k\)-dimensional CCA. Moving beyond Gaussian distributions, we presented a neural estimator of mSMI and established non-asymptotic error bounds.
Through several experiments we demonstrated the utility of mSMI for tasks such as independence testing, multi-view representation learning, algorithmic fairness and generative modeling, showing it outperforms popular methodologies. Possible future directions include an investigation of an operational meaning of mSMI, either in information-theoretic or physical terms, the extension of the proposed formal guarantees to the nonlinear setting, and the extension of the neural estimation convergence guarantees to deeper networks.
Figure 3: MNIST images generated via the max-sliced InfoGAN.
Additionally, mSMI can provide a mathematical foundation for mutual information-based representation learning, a popular area of self-supervised learning [10, 57].
In addition to the above, we plan to develop a rigorous theory for the choice of \(k\), which is currently chosen empirically and treated as a hyperparameter. When the support of the distributions lies in some \(d^{\prime}<d\) dimensional subspace, the choice \(k=d^{\prime}\) is sufficient to recover the classical mutual information, and therefore it characterizes the full dependence structure. Extrapolating from this point, we conjecture that the optimal value of \(k\) is related to the intrinsic dimension of the data distribution, even when it is not strictly supported on a low-dimensional subset.
2302.14765 | On Learning Intrinsic Rewards for Faster Multi-Agent Reinforcement
Learning based MAC Protocol Design in 6G Wireless Networks | In this paper, we propose a novel framework for designing a fast convergent
multi-agent reinforcement learning (MARL)-based medium access control (MAC)
protocol operating in a single cell scenario. The user equipments (UEs) are
cast as learning agents that need to learn a proper signaling policy to
coordinate the transmission of protocol data units (PDUs) to the base station
(BS) over shared radio resources. In many MARL tasks, the conventional
centralized training with decentralized execution (CTDE) is adopted, where each
agent receives the same global extrinsic reward from the environment. However,
this approach involves a long training time. To overcome this drawback, we
adopt the concept of learning a per-agent intrinsic reward, in which each agent
learns a different intrinsic reward signal based solely on its individual
behavior. Moreover, in order to provide an intrinsic reward function that takes
into account the long-term training history, we represent it as a long
short-term memory (LSTM) network. As a result, each agent updates its policy
network considering both the extrinsic reward, which characterizes the
cooperative task, and the intrinsic reward that reflects local dynamics. The
proposed learning framework yields a faster convergence and higher transmission
performance compared to the baselines. Simulation results show that the
proposed learning solution yields 75% improvement in convergence speed compared
to the most performing baseline. | Luciano Miuccio, Salvatore Riolo, Mehdi Bennis, Daniela Panno | 2023-02-28T17:07:51Z | http://arxiv.org/abs/2302.14765v1 | On Learning Intrinsic Rewards for Faster Multi-Agent Reinforcement Learning based MAC Protocol Design in 6G Wireless Networks
###### Abstract
In this paper, we propose a novel framework for designing a fast convergent multi-agent reinforcement learning (MARL)-based medium access control (MAC) protocol operating in a single cell scenario. The user equipments (UEs) are cast as learning agents that need to learn a proper signaling policy to coordinate the transmission of protocol data units (PDUs) to the base station (BS) over shared radio resources. In many MARL tasks, the conventional centralized training with decentralized execution (CTDE) is adopted, where each agent receives the same global extrinsic reward from the environment. However, this approach involves a long training time. To overcome this drawback, we adopt the concept of learning a per-agent intrinsic reward, in which each agent learns a different intrinsic reward signal based solely on its individual behavior. Moreover, in order to provide an intrinsic reward function that takes into account the long-term training history, we represent it as a long short-term memory (LSTM) network. As a result, each agent updates its policy network considering both the extrinsic reward, which characterizes the cooperative task, and the intrinsic reward that reflects local dynamics. The proposed learning framework yields a faster convergence and higher transmission performance compared to the baselines. Simulation results show that the proposed learning solution yields 75% improvement in convergence speed compared to the most performing baseline.
6G, Intrinsic reward learning, MARL, Protocol learning.
## I Introduction
The emergence of data-driven medium access control (MAC) protocols can provide a cost-effective, flexible approach to boost the performance of beyond 5G (B5G) and 6G networks. To address this problem, multi-agent reinforcement learning (MARL) methods enable agents to learn an optimal policy by interacting in the same environment [1]. Current works, such as [2, 3], have studied MAC protocol learning in a single-cell scenario, where user equipments (UEs) need to deliver MAC protocol data units (PDUs) to the base station (BS) sharing the same radio channel. UEs are cast as reinforcement learning (RL) agents that are trained to learn a new MAC protocol from their partial observations of the global state.
However, despite the good performance shown at the end of the training procedure, learning efficient and robust MAC protocols with multiple agents acting and learning in the same shared environment requires very long training times. This aspect prevents the applicability of this approach to dynamic wireless environments that require retraining to adapt the MAC protocol to changing conditions. The main causes of slow training stem from the partial observability and non-stationarity of the MARL problem (i.e., transitions from one state to another depend on the actions of all agents) [4]. In addition to this, [2, 3] are based on the conventional centralized training and decentralized execution (CTDE) paradigm and the parameter sharing technique [5], which further slows down the convergence time.
Specifically, on the one hand, CTDE allows agents to learn their local policies in a centralized way while retaining the ability of decentralized execution. During the training phase, the environment assigns the same global reward to all agents without distinguishing their own contributions. As a consequence, only a subset of agents contributes to the reward, and so, during training an agent may be punished even if it has acted optimally, or rewarded even if it has acted wrongly. Clearly, this approach induces slow and unstable policy learning.
On the other hand, the parameter sharing technique consists in simultaneously learning a single shared policy for multiple agents, which boosts scalability. However, UEs may compete at the same time to transmit their own packets in the same shared channel. In other words, even if two UEs have the same observation, the action taken by one UE should be different from the action taken by the other one to avoid interference. Updating the same policy for every agent can create adverse actions that slow down convergence during learning.
In light of the above considerations, the main contribution of this paper is to propose a **novel framework that provides faster convergence to the MAC protocol learning problem**. Specifically, we consider the same communication scenario studied in [2, 3] and introduce our innovative learning framework with the following features.
First, we adopt an enhanced version of the CTDE paradigm which, in addition to the global reward signal, leverages for each agent a different local intrinsic reward signal based on its individual behavior. This idea is inspired by intrinsic reward learning, introduced in [6, 7] for single-agent environments. Unlike the hand-designed global reward given by the environment (termed the "extrinsic reward"), the intrinsic reward is learned automatically by each agent.
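Schematically, the per-agent intrinsic reward model described above might look as follows; this is an illustrative PyTorch sketch, and the mixing weight `lam` and the module interfaces are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class IntrinsicReward(nn.Module):
    """Per-agent intrinsic reward over the local observation-action
    history, modeled with an LSTM (illustrative sketch)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, obs_act_seq: torch.Tensor) -> torch.Tensor:
        # obs_act_seq: (batch, time, obs_dim + act_dim)
        out, _ = self.lstm(obs_act_seq)
        return self.head(out[:, -1])  # intrinsic reward for the last step

# Each agent i then updates its policy from a mixed signal (lam assumed):
#   r_i = r_extrinsic_global + lam * intrinsic_i(history_i)
```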
2309.10117 | Deep smoothness WENO scheme for two-dimensional hyperbolic conservation
laws: A deep learning approach for learning smoothness indicators | In this paper, we introduce an improved version of the fifth-order weighted
essentially non-oscillatory (WENO) shock-capturing scheme by incorporating deep
learning techniques. The established WENO algorithm is improved by training a
compact neural network to adjust the smoothness indicators within the WENO
scheme. This modification enhances the accuracy of the numerical results,
particularly near abrupt shocks. Unlike previous deep learning-based methods,
no additional post-processing steps are necessary for maintaining consistency.
We demonstrate the superiority of our new approach using several examples from
the literature for the two-dimensional Euler equations of gas dynamics. Through
intensive study of these test problems, which involve various shocks and
rarefaction waves, the new technique is shown to outperform traditional
fifth-order WENO schemes, especially in cases where the numerical solutions
exhibit excessive diffusion or overshoot around shocks. | Tatiana Kossaczká, Ameya D. Jagtap, Matthias Ehrhardt | 2023-09-18T19:42:35Z | http://arxiv.org/abs/2309.10117v1 | Deep smoothness WENO scheme for two-dimensional hyperbolic conservation laws: A deep learning approach for learning smoothness indicators
###### Abstract
In this paper, we introduce an improved version of the fifth-order weighted essentially non-oscillatory (WENO) shock-capturing scheme by incorporating deep learning techniques. The established WENO algorithm is improved by training a compact neural network to adjust the smoothness indicators within the WENO scheme. This modification enhances the accuracy of the numerical results, particularly near abrupt shocks. Unlike previous deep learning-based methods, no additional post-processing steps are necessary for maintaining consistency. We demonstrate the superiority of our new approach using several examples from the literature for the two-dimensional Euler equations of gas dynamics. Through intensive study of these test problems, which involve various shocks and rarefaction waves, the new technique is shown to outperform traditional fifth-order WENO schemes, especially in cases where the numerical solutions exhibit excessive diffusion or overshoot around shocks.
keywords: Weighted essentially non-oscillatory method, Hyperbolic conservation laws, Smoothness indicators, Deep Learning, Neural Networks. MSC [2020]: 65M06, 68T05, 76M20
## 1 Introduction
It has long been a challenge to adequately simulate complex flow problems using numerical methods. Recently, such simulations have been further improved using machine learning techniques. As an example, in [1; 2; 3], the concept of physics-informed neural networks (PINNs) for the solution of complex fluid flow problems was proposed, which seamlessly combines the
data and the mathematical models; see [1; 4; 5; 6; 7; 8] for more details. Similarly, a new method using a U-Net-like convolutional neural network (CNN) along with established finite difference discretization techniques was proposed to learn approximate solutions of the Navier-Stokes equations (NSE) without the need for parameterization [9]. Also, recently, a framework called _local transfer function analysis_ (LTA) for optimizing numerical methods for convection problems using a graph neural network (GNN) was proposed [10].
The work [11] investigated the use of PINNs to approximate the hyperbolic Euler equations of gas dynamics. The Euler equations together with initial and boundary conditions are used to create a loss function, allowing the method to handle scenarios with smooth solutions as well as those with discontinuities. Next, in [4], a novel approach, called _conservative PINNs_, for solving nonlinear conservation laws, such as the compressible Euler equations, was presented. In the recent paper [12], another novel approach has been proposed where machine learning improves finite-difference-based approximations of PDEs while maintaining high-order convergence through node refinement.
This research area is also the context of our work. Recently, improvements to the standard finite difference methods (FDMs) have been developed [13]. By adding a small convolutional neural network, the solutions of the standard PDEs are improved, while the convergence and consistency properties of the original methods are preserved. We aim to further improve modern FDMs, such as WENO schemes, for nonlinear hyperbolic systems using machine learning. For this type of PDEs, it is known that discontinuities (shocks) can occur despite initial smoothness, which makes specialized numerical methods mandatory. Therefore, the focus of our attention is on the behavior of numerical solutions in the vicinity of shocks.
To better frame our current work, let us briefly sketch the historical development of WENO schemes. Crandall and Majda [14] introduced _monotone schemes_ in 1980 that maintain stability and satisfy entropy conditions, but are at most first-order accurate due to Godunov's theorem. Next, _shock-capturing schemes_ were developed to accurately handle shocks and gradients without excessive diffusion [15]. The _essentially non-oscillatory (ENO) schemes_ [16] stood out, achieving high accuracy in smooth regions and effective shock resolution, later refined by weighted ENO (WENO) schemes based on smoothness indicators, e.g. [17; 18]. Extensions such as the _Hermite WENO (HWENO)_ schemes [19; 20] and _hybrid methods_ [21; 22] were introduced for higher accuracy and efficiency. A gas-kinetic theory-based KWENO scheme was proposed in [23] for hyperbolic conservation laws. Moreover, further modifications of the WENO scheme have been developed, e.g. [24; 25; 26; 27; 28; 29].
Neural networks have been used both to approximate the solutions of PDEs and to improve numerical methods for PDEs. While the data-driven approach is promising for improving modern numerical methods, it is important to maintain a balance between new data-driven insights and established mathematical structures, i.e., the basic numerical scheme based on physical principles; for hyperbolic problems, e.g., the resulting hybrid scheme should in any case be conservative. We have maintained this balance, and next we briefly describe our approach. Recent approaches to solving PDEs numerically include neural network-based WENO methods
that modify coefficients and smoothness indicators of established state-of-the-art numerical methods to further improve these schemes, especially near shocks. However, some methods achieve only first-order accuracy [30].
In this paper, we present a new approach called "WENO-DS", a deep-learning-based extension of the family of WENO methods, and extend it to solving a general two-dimensional system of hyperbolic conservation laws
\[U_{t}+F(U)_{x}+G(U)_{y}=0. \tag{1}\]
To this end, we modify the smoothness indicators of the WENO schemes using a small neural network, maintaining high accuracy in smooth regions and reducing diffusion and overshoots (oscillatory behavior) near shocks. The resulting machine learning-enhanced WENO scheme combines accuracy and improved qualitative behavior for both smooth and discontinuous solutions.
The paper is organized as follows. In Section 2, we introduce two underlying WENO schemes and explain the basic ideas, such as the smoothness indicators, on a 1D conservation law. In Section 3, we present our method for improving these schemes using a deep learning approach to modify the smoothness indicators accordingly. This novel idea does not destroy the basic structure of the WENO schemes, such as the conservative property, and qualitatively improves the solution near shocks with only small additional computational costs. In this section, we also elaborate on implementation aspects, such as adaptive activation functions, the design of the small network, and the training procedure. In Section 4, we briefly describe our application example of the 2D Euler equations of gas dynamics. Subsequently, in Section 5 we present in detail the numerical results with a wide range of test configurations. Finally, in Section 6 we conclude our work and give a brief overview of future research directions.
## 2 The WENO scheme
We first introduce the standard fifth-order WENO scheme for solving one-dimensional hyperbolic conservation laws
\[u_{t}+f(u)_{x}=0, \tag{2}\]
as developed by Jiang and Shu [17; 18]. For this purpose, we consider the uniform grid defined by the points \(x_{i}=x_{0}+i\Delta x\) with cell boundaries \(x_{i+\frac{1}{2}}=x_{i}+\frac{\Delta x}{2}\), \(i=0,\ldots,I\). The semi-discrete formulation of (2) can be written as
\[\frac{\mathrm{d}u_{i}(t)}{\mathrm{d}t}=-\frac{1}{\Delta x}\big{(}\hat{f}_{i+ \frac{1}{2}}-\hat{f}_{i-\frac{1}{2}}\big{)}, \tag{3}\]
where \(u_{i}(t)\) approximates \(u(x_{i},t)\) pointwise and \(\hat{f}\) is a numerical approximation of the flux function \(f\), i.e. \(\hat{f}_{i+\frac{1}{2}}\) and \(\hat{f}_{i-\frac{1}{2}}\) are numerical flux approximations at the cell boundaries
\(x_{i+\frac{1}{2}}\) and \(x_{i-\frac{1}{2}}\), respectively. The numerical flux \(\hat{f}_{i+\frac{1}{2}}\) is chosen such that for all sufficiently smooth \(u\)
\[\frac{1}{\Delta x}\Big{(}\hat{f}_{i+\frac{1}{2}}-\hat{f}_{i-\frac{1}{2}}\Big{)} =\big{(}f(u)\big{)}_{x}\big{|}_{x=x_{i}}+O(\Delta x^{5}), \tag{4}\]
with fifth-order of accuracy. Defining a function \(h\) implicitly by
\[f\big{(}u(x)\big{)}=\frac{1}{\Delta x}\int_{x-\frac{\Delta x}{2}}^{x+\frac{ \Delta x}{2}}h(\xi)\,d\xi, \tag{5}\]
we obtain
\[f^{\prime}\big{(}u(x_{i})\big{)}=\frac{1}{\Delta x}\big{(}h_{i+\frac{1}{2}}-h_ {i-\frac{1}{2}}\big{)},\qquad h_{i\pm\frac{1}{2}}=h(x_{i\pm\frac{1}{2}}), \tag{6}\]
where the numerical flux \(\hat{f}_{i\pm\frac{1}{2}}\) approximates \(h_{i\pm\frac{1}{2}}\) with fifth-order accuracy in the sense that
\[\hat{f}_{i\pm\frac{1}{2}}=h_{i\pm\frac{1}{2}}+O(\Delta x^{5}). \tag{7}\]
This procedure results in a _conservative_ numerical scheme.
To ensure numerical stability, the _flux splitting method_ is applied. We therefore write the flux in the form
\[f(u)=f^{+}(u)+f^{-}(u),\quad\text{where}\quad\frac{\mathrm{d}f^{+}(u)}{\mathrm{ d}u}\geq 0\quad\text{and}\quad\frac{\mathrm{d}f^{-}(u)}{\mathrm{d}u}\leq 0. \tag{8}\]
The numerical flux \(\hat{f}_{i\pm\frac{1}{2}}\) is then given by \(\hat{f}_{i\pm\frac{1}{2}}=\hat{f}_{i\pm\frac{1}{2}}^{+}+\hat{f}_{i\pm\frac{1}{2}} ^{-}\) and we get the final approximation
\[\frac{\mathrm{d}u_{i}}{\mathrm{d}t}=-\frac{1}{\Delta x}\bigg{[}\Big{(}\hat{f} _{i+\frac{1}{2}}^{+}-\hat{f}_{i-\frac{1}{2}}^{+}\Big{)}+\Big{(}\hat{f}_{i+ \frac{1}{2}}^{-}-\hat{f}_{i-\frac{1}{2}}^{-}\Big{)}\bigg{]}. \tag{9}\]
**Remark 1**.: _In our implementation, we use the Lax-Friedrichs flux splitting_
\[f^{\pm}(u)=\frac{1}{2}\big{(}f(u)\pm\alpha u\big{)}, \tag{10}\]
_with \(\alpha=\max\limits_{u}|f^{\prime}(u)|\)._
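As an illustration, the splitting (10) takes only a few lines; the following NumPy sketch uses Burgers' flux \(f(u)=u^{2}/2\), for which \(f^{\prime}(u)=u\).

```python
import numpy as np

def burgers_lf_split(u: np.ndarray):
    """Lax-Friedrichs flux splitting (10) for Burgers' flux f(u) = u^2/2,
    with alpha = max |f'(u)| = max |u|."""
    alpha = np.max(np.abs(u))
    fu = 0.5 * u**2
    f_plus = 0.5 * (fu + alpha * u)    # d f^+ / du = (u + alpha)/2 >= 0
    f_minus = 0.5 * (fu - alpha * u)   # d f^- / du = (u - alpha)/2 <= 0
    return f_plus, f_minus
```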
### The fifth-order WENO scheme
First, we consider the construction of \(\hat{f}_{i+\frac{1}{2}}^{+}\) and drop the superscript \({}^{+}\) for simplicity. For this approximation a 5-point stencil
\[S(i)=\{x_{i-2},\ldots,x_{i+2}\} \tag{11}\]
is used. The main idea of the fifth-order WENO scheme is to divide this stencil (11) into three candidate substencils, which are given by
\[S^{m}(i)=\{x_{i+m-2},x_{i+m-1},x_{i+m}\},\quad m=0,1,2. \tag{12}\]
The numerical fluxes \(\hat{f}^{m}(x_{i+\frac{1}{2}})=\hat{f}^{m}_{i+\frac{1}{2}}=h_{i+\frac{1}{2}}+O( \Delta x^{3})\) are then calculated for each of the small substencils (12). Let \(\hat{f}^{m}(x)\) be the polynomial approximation of \(h(x)\) on each of the substencils (12). By evaluation of these polynomials at \(x=x_{i+\frac{1}{2}}\) the following explicit formulas can be obtained [18]
\[\hat{f}^{0}_{i+\frac{1}{2}} =\frac{2f(u_{i-2})-7f(u_{i-1})+11f(u_{i})}{6}, \tag{13}\] \[\hat{f}^{1}_{i+\frac{1}{2}} =\frac{-f(u_{i-1})+5f(u_{i})+2f(u_{i+1})}{6},\] \[\hat{f}^{2}_{i+\frac{1}{2}} =\frac{2f(u_{i})+5f(u_{i+1})-f(u_{i+2})}{6},\]
where the value of a function \(f\) at \(u(x_{i})\) is indicated by \(f(u_{i})=f(u(x_{i}))\). Then, we obtain a final approximation on a big stencil (11) as a linear combination of the fluxes (13)
\[\hat{f}_{i+\frac{1}{2}}=\sum_{m=0}^{2}d_{m}\hat{f}^{m}_{i+\frac{1}{2}}, \tag{14}\]
where the coefficients \(d_{m}\) are the linear weights, which would form the upstream fifth-order central scheme for the 5-point stencil and their values are
\[d_{0}=\frac{1}{10},\quad d_{1}=\frac{6}{10},\quad d_{2}=\frac{3}{10}. \tag{15}\]
As described in [17, 18], the linear weights can be replaced by _nonlinear weights_\(\omega_{m}^{JS},\,m=0,1,2\), such that
\[\hat{f}_{i+\frac{1}{2}}=\sum_{m=0}^{2}\omega_{m}^{JS}\hat{f}^{m}_{i+\frac{1}{2 }}, \tag{16}\]
with
\[\omega_{m}^{JS}=\frac{\alpha_{m}^{JS}}{\sum_{i=0}^{2}\alpha_{i}^{JS}},\quad \text{ where }\quad\alpha_{m}^{JS}=\frac{d_{m}}{(\epsilon+\beta_{m})^{2}}. \tag{17}\]
The parameter \(\beta_{m}\) is crucial for deciding which substencils to include in the final flux approximation. It is referred to as _smoothness indicator_ and its main role is to reduce or remove the contribution of the substencil \(S^{m}\), which contains the discontinuity. In this case, the corresponding nonlinear weight \(\omega_{m}^{JS}\) becomes smaller. For smooth parts of the solution, the indicators are designed to come closer to zero, so that the nonlinear weights \(\omega_{m}^{JS}\) come closer to the ideal weights \(d_{m}\). We will further analyze the smoothness indicators in the next section. The parameter \(\epsilon\) is used to prevent the denominator from becoming zero. In all our experiments, we set the value of \(\epsilon\) to \(10^{-6}\).
### Smoothness indicators
In [17], the smoothness indicators have been developed as:
\[\beta_{m}=\sum_{q=1}^{2}\Delta x^{2q-1}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2} }}\Bigl{(}\frac{\mathrm{d}^{q}\hat{f}^{m}(x)}{\mathrm{d}x^{q}}\Bigr{)}^{2}\,dx, \tag{18}\]
with \(\hat{f}^{m}(x)\) being the polynomial approximation in each of three substencils. Their explicit form corresponding to the flux approximation \(\hat{f}_{i+\frac{1}{2}}\) can be obtained as
\[\beta_{0} =\frac{13}{12}\bigl{(}f(u_{i-2})-2f(u_{i-1})+f(u_{i})\bigr{)}^{2}+ \frac{1}{4}\bigl{(}f(u_{i-2})-4f(u_{i-1})+3f(u_{i})\bigr{)}^{2}, \tag{19}\] \[\beta_{1} =\frac{13}{12}\bigl{(}f(u_{i-1})-2f(u_{i})+f(u_{i+1})\bigr{)}^{2}+ \frac{1}{4}\bigl{(}-f(u_{i-1})+f(u_{i+1})\bigr{)}^{2},\] \[\beta_{2} =\frac{13}{12}\bigl{(}f(u_{i})-2f(u_{i+1})+f(u_{i+2})\bigr{)}^{2}+ \frac{1}{4}\bigl{(}3f(u_{i})-4f(u_{i+1})+f(u_{i+2})\bigr{)}^{2}.\]
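For concreteness, the positive-part reconstruction given by (13), (16), (17) and (19) can be sketched as follows; a minimal NumPy sketch for interior interfaces only, with the boundary treatment omitted.

```python
import numpy as np

def weno_js_plus(fu: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Fifth-order WENO-JS reconstruction of f-hat^+_{i+1/2} from the
    point values fu[i] = f(u_i); returns the interior interface values."""
    fm2, fm1, f0 = fu[:-4], fu[1:-3], fu[2:-2]
    fp1, fp2 = fu[3:-1], fu[4:]
    # Candidate fluxes on the three substencils, cf. (13).
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    q1 = (-fm1 + 5*f0 + 2*fp1) / 6
    q2 = (2*f0 + 5*fp1 - fp2) / 6
    # Smoothness indicators, cf. (19).
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fp1 - fm1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
    # Nonlinear weights, cf. (17), with d = (1/10, 6/10, 3/10).
    a0 = 0.1 / (eps + b0)**2
    a1 = 0.6 / (eps + b1)**2
    a2 = 0.3 / (eps + b2)**2
    return (a0*q0 + a1*q1 + a2*q2) / (a0 + a1 + a2)
```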
**Remark 2**: _As mentioned before, we only considered the construction of the numerical flux \(\hat{f}^{+}_{i+\frac{1}{2}}\). For the numerical approximation of the flux \(\hat{f}^{+}_{i-\frac{1}{2}}\) we can use formulas (13)-(17) and (19) and shift each index by \(-1\)._
The negative part of the flux splitting can be obtained using symmetry (see, e.g., [31]), and we briefly summarize the formulas for \(\hat{f}^{-}_{i+\frac{1}{2}}\), omitting the superscript \({}^{-}\):
\[\hat{f}^{0}_{i+\frac{1}{2}} =\frac{11f(u_{i+1})-7f(u_{i+2})+2f(u_{i+3})}{6}, \tag{20}\] \[\hat{f}^{1}_{i+\frac{1}{2}} =\frac{2f(u_{i})+5f(u_{i+1})-f(u_{i+2})}{6},\] \[\hat{f}^{2}_{i+\frac{1}{2}} =\frac{-f(u_{i-1})+5f(u_{i})+2f(u_{i+1})}{6},\]
where the weights \(\omega^{JS}_{m}\) are computed as in (17) using the smoothness indicators given by
\[\beta_{0} =\frac{13}{12}\bigl{(}f(u_{i+1})-2f(u_{i+2})+f(u_{i+3})\bigr{)}^{ 2}+\frac{1}{4}\bigl{(}3f(u_{i+1})-4f(u_{i+2})+f(u_{i+3})\bigr{)}^{2}, \tag{21}\] \[\beta_{1} =\frac{13}{12}\bigl{(}f(u_{i})-2f(u_{i+1})+f(u_{i+2})\bigr{)}^{2} +\frac{1}{4}\bigl{(}f(u_{i})-f(u_{i+2})\bigr{)}^{2},\] \[\beta_{2} =\frac{13}{12}\bigl{(}f(u_{i-1})-2f(u_{i})+f(u_{i+1})\bigr{)}^{2} +\frac{1}{4}\bigl{(}f(u_{i-1})-4f(u_{i})+3f(u_{i+1})\bigr{)}^{2}.\]
This will be helpful in the next section, where the deep learning algorithm is introduced, to understand how the improved smoothness indicators are constructed.
### The WENO-Z scheme
Borges et al. (2008) pointed out that the classical WENO-JS scheme described in the previous sections loses the fifth-order accuracy at critical points where \(f^{\prime}(u)=0\), and proposed new nonlinear weights defined by
\[\omega_{m}^{Z}=\frac{\alpha_{m}^{Z}}{\sum\limits_{i=0}^{2}\alpha_{i}^{Z}},\quad \text{ where }\quad\alpha_{m}^{Z}=d_{m}\bigg{[}1+\Big{(}\frac{\tau_{5}}{\beta_{m}+ \epsilon}\Big{)}^{2}\bigg{]} \tag{22}\]
and
\[\tau_{5}=|\beta_{0}-\beta_{2}| \tag{23}\]
is a new global smoothness indicator.
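Relative to WENO-JS, only the weight computation changes; a minimal sketch, taking the smoothness indicators (e.g., the arrays `b0`, `b1`, `b2` from the previous listing) as input:

```python
import numpy as np

def weno_z_weights(b0, b1, b2, eps=1e-6):
    """WENO-Z nonlinear weights, cf. (22)-(23)."""
    tau5 = np.abs(b0 - b2)
    a0 = 0.1 * (1.0 + (tau5 / (b0 + eps))**2)
    a1 = 0.6 * (1.0 + (tau5 / (b1 + eps))**2)
    a2 = 0.3 * (1.0 + (tau5 / (b2 + eps))**2)
    s = a0 + a1 + a2
    return a0 / s, a1 / s, a2 / s
```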
## 3 Deep smoothness WENO scheme
In [32; 33; 34] the new WENO-DS scheme based on the improvement of the smoothness indicators was developed. The smoothness indicators \(\beta_{m}\), \(m=0,1,2\), are multiplied by the perturbations \(\delta_{m}\), which are the outputs of a neural network. The new smoothness indicators are denoted by \(\beta_{m}^{DS}\):
\[\beta_{m}^{DS}=\beta_{m}(\delta_{m}+C),\qquad m=0,1,2, \tag{24}\]
where \(C\) is a constant that ensures the consistency and accuracy of the new method. In all our experiments we set \(C=0.1\). For more details and corresponding theoretical proofs of accuracy and consistency, we refer to [32; 33].
Note that the formulation of the new smoothness indicators \(\beta_{m}^{DS}\) as a multiplication of the original ones with the perturbations \(\delta_{m}\) is very favorable. In the case where the original smoothness indicator converges to zero, the improved \(\beta_{m}^{DS}\) behaves in the same way. On the other hand, if a substencil \(S^{m}\) contains a discontinuity, the perturbation \(\delta_{m}\) can improve the original smoothness indicator so that the final scheme yields better numerical approximations. Moreover, the theoretical convergence properties are not lost, see [32; 33].
In [32] the algorithm was successfully applied to one-dimensional benchmark examples such as the Burgers' equation, the Buckley-Leverett equation, the one-dimensional Euler system, and the two-dimensional Burgers' equation. In [33], the algorithm was extended to nonlinear degenerate parabolic equations, and it was further applied to computational finance problems in [34]. The theoretical order of convergence was demonstrated on smooth solutions, and large numerical improvements were obtained when comparing the WENO-DS method with the original WENO methods.
### Preservation of the conservative property for the WENO-DS scheme
However, the multipliers introduced for the smoothness indicators in [32] were cell-based (not interface-based). This means that although the high numerical accuracy was theoretically demonstrated and numerically confirmed, the guarantee of the conservative property was lost. As stated in [33], the conservative property can be easily recovered by defining the multipliers such that
\[\begin{array}{l}\beta^{DS}_{m,i+\frac{1}{2}}=\beta_{m,i+\frac{1}{2}}(\delta_ {m,i+\frac{1}{2}}+C),\\ \beta^{DS}_{m,i-\frac{1}{2}}=\beta_{m,i-\frac{1}{2}}(\delta_{m,i-\frac{1}{2}}+ C),\end{array} \tag{25}\]
with
\[\delta_{0,i+\frac{3}{2}}=\delta_{1,i+\frac{1}{2}}=\delta_{2,i-\frac{1}{2}}, \quad i=0,\ldots,N. \tag{26}\]
This makes the multipliers depend on the location of the substencils corresponding to \(\beta_{m,i+\frac{1}{2}}\) and \(\beta_{m,i-\frac{1}{2}}\). This ensures that the values \(\hat{f}^{\pm}_{i-\frac{1}{2}}\) can be obtained from the values \(\hat{f}^{\pm}_{i+\frac{1}{2}}\) by simple index shifting and that the conservative property is preserved.
### Structure of the neural network
To ensure the consistency of the numerical method, a convolutional neural network (CNN) is used. Its translation invariance is crucial to guarantee the spatial invariance of the resulting numerical method: the multipliers \(\delta_{m}\) are independent of their position in the spatial grid and depend only on the solution itself.
Let us formulate the CNN as a function \(H(\cdot):\mathbb{R}^{2k+1}\to\mathbb{R}\), where \(2k+1\) is the size of the receptive field of the CNN:
\[H\big{(}\bar{f}(\bar{u}_{i})\big{)}=\text{CNN}\big{(}\bar{f}(\bar{u}_{i}) \big{)}. \tag{27}\]
As an input we define a vector
\[\begin{array}{l}\bar{f}(\bar{u}_{i})=\big{(}f(u(x_{i-k})),f(u(x_{i-k+1})), \ldots,f(u(x_{i+k}))\big{)},\\ \bar{u}_{i}=\bar{u}(\bar{x}_{i})=\big{(}u(x_{i-k}),u(x_{i-k+1}),\ldots,u(x_{i+ k})\big{)}.\end{array} \tag{28}\]
Figure 1 shows the values from which the multipliers \(\delta_{m}\), \(m=0,1,2\), are constructed, assuming \(2k+1=3\) for the receptive field. In this case, the values used to compute the original smoothness indicators are also used to compute the multipliers \(\delta_{m}\), \(m=0,1,2\) (see equations (19) and (21)). If we enlarge the receptive field of the CNN, we also enlarge the stencil for computing the multipliers \(\delta_{m}\), \(m=0,1,2\). In this way, the smoothness indicators are essentially computed from a wider stencil, which can lead to better numerical approximations. In this case, we only need to include additional neighboring values before feeding the input (28) to the CNN.
As we are improving an existing numerical scheme by adding a neural network component to it, it is important that the new numerical scheme remains computationally efficient. In principle, the added neural network component could be computationally expensive; we therefore propose to use only a small CNN, which keeps the additional costs low. The detailed structure of the CNN can be found in Section 4.1.
It was pointed out in [32] that better numerical results were obtained using two different neural networks for the positive and negative parts of the flux. We found experimentally that we can avoid using multiple neural networks and use only one CNN. On the other hand, we can achieve better results by using a superior training procedure and adaptive activation functions. More details will be discussed in the next subsections.
For convergence and consistency of the numerical scheme, all hidden layers of the CNN must be differentiable functions, and the activation function in the last CNN layer must be bounded from below [33]. Experimentally, we found that the use of a _softplus_ activation function in the last CNN layer is more effective and gives better numerical results compared to e.g. _sigmoid_ as used in [32].
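A minimal PyTorch sketch of such a small CNN is given below; the hidden width is an illustrative assumption, and the listing does not reproduce the exact architectures of Figure 2. The two convolutional layers give a receptive field of width 3, and the final softplus keeps the output bounded from below.

```python
import torch
import torch.nn as nn

class SmoothnessNet(nn.Module):
    """Small CNN predicting the multipliers delta from the flux values;
    four channels anticipate the Euler application of Section 4."""
    def __init__(self, channels: int = 4, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=3, padding=1),
            nn.ELU(),
            nn.Conv1d(hidden, channels, kernel_size=1),
            nn.Softplus(),  # last layer bounded from below
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (batch, channels, grid) -> delta of the same shape; shifted
        # copies of delta enter (25)-(26) as beta * (delta + C).
        return self.net(f)
```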
Figure 1: The substencils used for computation of multipliers \(\delta_{m}\), \(m=0,1,2\) corresponding to the flux approximations \(\hat{f}^{\pm}_{i\pm\frac{1}{2}}\), assuming that for the receptive field of the CNN holds \(2k+1=3\).
#### 3.2.1 Adaptive activation functions
We can make the training more effective and obtain better numerical results by using _adaptive activation functions_ [35; 36; 37]. The activation function is one of the most important hyperparameters in neural network architectures; its purpose is to introduce nonlinearity into the prediction. Many activation functions have been proposed in the literature; see the comprehensive survey [38] for more details. However, there is no general rule for the choice of the activation function. This motivates the use of an adaptive activation function that can adapt to the problem at hand. In this work, we used global adaptive activation functions [35], where an additional slope parameter is introduced in the activation function as follows.
For the ELU activation function, we train the additional parameter \(\alpha\):
\[\text{ELU}(x)=\begin{cases}x,&\text{if}\quad x>0,\\ \alpha\,(\exp(x)-1),&\text{if}\quad x\leq 0,\end{cases} \tag{29}\]
and we denote the adaptive ELU as _aELU_. For the softplus activation function, we train the additional parameter \(\beta\):
\[\text{Softplus}(x)=\frac{1}{\beta}\log(1+\exp(\beta x)) \tag{30}\]
and we denote the adaptive softplus as _aSoftplus_.
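Both adaptive variants amount to registering the slope as a trainable parameter; a minimal PyTorch sketch (initial values are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveELU(nn.Module):
    """ELU with a trainable parameter alpha, cf. (29)."""
    def __init__(self, alpha_init: float = 1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.where(x > 0, x, self.alpha * (torch.exp(x) - 1.0))

class AdaptiveSoftplus(nn.Module):
    """Softplus with a trainable slope beta (assumed positive), cf. (30)."""
    def __init__(self, beta_init: float = 1.0):
        super().__init__()
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.softplus(self.beta * x) / self.beta
```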
### Training procedure
In this section, we describe how the training procedure for WENO-DS is carried out. We experimented with different training procedures and found that following the procedure described in [33] gives the best numerical results. First, we have to create the data set. For this purpose, we compute the reference solutions using the WENO-Z method on a fine grid of \(I\times J=400\times 400\) space points up to the given final time \(T\), where \(t_{n}\), \(n=0,\ldots,N\), denotes the time points. More details on the construction of the reference solutions are given in Sections 5.1, 5.2, 5.3.
During training, we compute the numerical solutions on a grid of \(I\times J=100\times 100\) space points. At the beginning of training, we randomly select a problem from the data set and perform a single time step to get to the time \(t_{n+1}\), using the CNN to predict the multipliers \(\delta_{m}\). However, by performing a single time step on a coarse grid, we do not match the time step size of the fine precomputed solutions, as an adaptive time step size is used. So we simply take the closest reference solution from the data set, use it as an initial condition, and perform another small time step to obtain a reference solution at time \(t_{n+1}\). Then we compute the loss and its gradient with respect to the weights of the CNN.
We then decide whether to proceed to the next time step of the current problem or to choose another problem from our data set and run a time step of that problem. The probability of choosing a new problem has to be determined at the beginning of the training session, and
we set it to \(\varphi=0.5\) in our experiments. We set the maximum number of opened problems to \(150\). We remember all opened problems, and if no new problem is opened (with probability \(1-\varphi\)), or if the maximum number of opened problems is reached, we execute the next time step of a problem uniformly chosen from the set of already opened problems. Keeping the solution from the previous time step as initial data, we repeat the same procedure until we reach the maximum number of training steps. This training procedure allows us to mix solutions with different initial data and at different time points, which makes the training more effective.
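In pseudocode, the episodic procedure reads as follows; the problem interface (`step`, `reference`, `t`, `state`) is an assumption made for illustration.

```python
import random
import torch

def train_weno_ds(problems, optimizer, phi=0.5, max_open=150, n_steps=4000):
    """Episodic training loop for WENO-DS (schematic sketch)."""
    opened = [random.choice(problems)]
    for _ in range(n_steps):
        # With probability phi, open a new problem (up to max_open).
        if random.random() < phi and len(opened) < max_open:
            opened.append(random.choice(problems))
        prob = random.choice(opened)     # uniform over opened problems
        u_next = prob.step()             # one coarse step t_n -> t_{n+1}, CNN inside
        u_ref = prob.reference(prob.t)   # closest fine reference, re-stepped
        loss = torch.mean((u_next - u_ref) ** 2)  # MSE, cf. Section 3.3.2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        prob.state = u_next.detach()     # keep solution as next initial data
```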
#### 3.3.1 Optimizer and the optimal learning rate
To train the network, we used a gradient-based optimizer, namely a variant of stochastic gradient descent, the Adam optimizer [39].
The learning rate is another important hyperparameter. A larger learning rate may miss local minima, while a smaller one may require a large number of iterations to reach convergence. Therefore, it is important to find a near-optimal learning rate. In this work, we use a learning rate of \(0.001\) to update the weights of the CNN; this near-optimal value was found through experiments.
#### 3.3.2 Loss function
In this work, the loss function consists of the data mismatch term between the solution predicted by the network and the reference solution. We use the mean squared error loss as follows:
\[LOSS_{\text{MSE}}(u)=\frac{1}{I}\sum_{i=0}^{I}(u_{i}-u_{i}^{\text{ref}})^{2}, \tag{31}\]
where \(u_{i}\) is a numerical approximation of \(u(x_{i})\) and \(u_{i}^{\text{ref}}\) is the corresponding reference solution. The \(L_{2}\) norm-based loss function has the advantage of stronger gradients with respect to \(u_{i}\), resulting in faster training. However, in our examples, we use the \(L_{1}\) norm as the main error measure, which is more typical for measuring errors for hyperbolic conservation laws. Thus, for validation during training, we use the metric
\[L_{1}(u)=\frac{1}{I}\sum_{i=0}^{I}|u_{i}-u_{i}^{\text{ref}}|. \tag{32}\]
## 4 Application of our approach to the 2D Euler equations
We consider the two-dimensional Euler equations of gas dynamics in the form (1) with
\[U=\begin{pmatrix}\rho\\ \rho u\\ \rho v\\ E\end{pmatrix}\qquad F(U)=\begin{pmatrix}\rho u\\ \rho u^{2}+p\\ \rho uv\\ u(E+p)\end{pmatrix}\qquad G(U)=\begin{pmatrix}\rho v\\ \rho uv\\ \rho v^{2}+p\\ v(E+p)\end{pmatrix} \tag{33}\]
for polytropic gas. Here, the variable \(\rho\) is the density, \(u\) the \(x\)-velocity component, \(v\) the \(y\)-velocity component, \(E\) the total energy and \(p\) the pressure. Further, it holds
\[p=(\gamma-1)\Big{[}E-\frac{\rho}{2}(u^{2}+v^{2})\Big{]}. \tag{34}\]
\(\gamma\) denotes the ratio of the specific heats and we will use \(\gamma\in(1.1,1.67)\) in this paper.
We consider the spatial domain \([0,1]\times[0,1]\) and solve the Riemann problem with the following initial condition
\[(\rho,u,v,p)=\begin{cases}(\rho_{1},u_{1},v_{1},p_{1})&x>0.5\quad\text{and} \quad y>0.5,\\ (\rho_{2},u_{2},v_{2},p_{2})&x<0.5\quad\text{and}\quad y>0.5,\\ (\rho_{3},u_{3},v_{3},p_{3})&x<0.5\quad\text{and}\quad y<0.5,\\ (\rho_{4},u_{4},v_{4},p_{4})&x>0.5\quad\text{and}\quad y<0.5.\end{cases} \tag{35}\]
The combination of four elementary planar waves is used to define the classification of the Riemann problem. A detailed study of these configurations has been done in [40, 41, 42, 43, 44, 45] and there are 19 different possible configurations for polytropic gas. These are defined by three types of elementary waves, namely a backward rarefaction wave \(\overleftarrow{R}\), a backward shock wave \(\overleftarrow{S}\), a forward rarefaction wave \(\overrightarrow{R}\), a forward shock wave \(\overrightarrow{S}\) and a contact discontinuity \(J^{\pm}\), where the superscript \(\pm\) refers to negative and positive contacts.
To obtain the WENO approximations in the two-dimensional example, we apply the procedure described in Section 2 using the dimension-by-dimension principle. Thus we obtain the flux approximations for (1) as
\[\begin{split}\frac{1}{\Delta x}\big{(}\hat{f}_{i+\frac{1}{2}}- \hat{f}_{i-\frac{1}{2}}\big{)}&=\big{(}F(U)\big{)}_{x}\big{|}_{(x_ {i},y_{j})}+O\big{(}\Delta x^{5}\big{)},\\ \frac{1}{\Delta y}\big{(}\hat{g}_{j+\frac{1}{2}}-\hat{g}_{j- \frac{1}{2}}\big{)}&=\big{(}G(U)\big{)}_{y}\big{|}_{(x_{i},y_{j})}+ O\big{(}\Delta y^{5}\big{)},\end{split} \tag{36}\]
with the uniform grid defined by the nodes \((x_{i},y_{j})\), \(\Delta x=x_{i+1}-x_{i}\), \(\Delta y=y_{j+1}-y_{j}\), \(i=0,\ldots,I\), \(j=0,\ldots,J\).
In our examples, we proceed with the implementation of the Euler system using characteristic decomposition. This means that we first project the solution and the flux onto the characteristic fields using left eigenvectors. Then we apply the Lax-Friedrichs flux splitting (10) for each component of the characteristic variables. These values are fed into the CNN and the enhanced smoothness indicators are computed. After obtaining the final WENO approximation, the projection back to physical space is done using the right eigenvectors, see [46] for more details on this procedure.
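Schematically, the characteristic-wise flux at a single interface reads as follows; the eigenvector matrices `L`, `R` are assumed to be evaluated at a suitable (e.g., Roe-averaged) state, and `weno_plus`, `weno_minus` denote one-component WENO reconstructions.

```python
import numpy as np

def char_weno_flux(U_stencil, F_stencil, L, R, alpha, weno_plus, weno_minus):
    """Characteristic-wise interface flux (schematic sketch)."""
    W = U_stencil @ L.T            # characteristic variables, row-wise
    G = F_stencil @ L.T            # characteristic fluxes
    Gp = 0.5 * (G + alpha * W)     # Lax-Friedrichs splitting per field
    Gm = 0.5 * (G - alpha * W)
    ghat = np.array([weno_plus(Gp[:, m]) + weno_minus(Gm[:, m])
                     for m in range(W.shape[1])])
    return R @ ghat                # project back to physical variables
```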
### Size of the neural network
In our paper, we considered different structures of neural networks and carried out numerous experiments with them. First, we used a rather simple CNN with only two layers and a receptive field of width 3. The structure is shown in Figure 2(a). The advantage of this is its computational efficiency. Second, we used a CNN with the same number of layers, but we increased the number of channels and made the receptive field wider. The structure is shown in Figure 2(b). Finally, we used only a receptive field of width 3, but added one more layer and used a more complex neural network, as shown in Figure 2(c). Each of these neural networks gave interesting results and we summarize them in Section 5.
As can be seen, we have 4 input channels in the first hidden layer and 4 output channels in the last hidden layer of each CNN. These correspond to the dimension of the solution \(U\) from (33). In this way, the neural network also incorporates information from the other variables, which can be useful for improving the numerical solution. The input \(\bar{F}(\bar{U})\) (respectively \(\bar{G}(\bar{U})\)) represents the numerical approximation after the projection using left eigenvectors and after applying the flux splitting method.
We also have to adapt the loss function from (31) and use it for training
\[LOSS_{\text{MSE}}(\rho,u,v,p)=LOSS_{\text{MSE}}(\rho)+LOSS_{\text{MSE}}(u)+ LOSS_{\text{MSE}}(v)+LOSS_{\text{MSE}}(p) \tag{37}\]
and for the validation during training from (32)
\[L_{1}(\rho,u,v,p)=L_{1}(\rho)+L_{1}(u)+L_{1}(v)+L_{1}(p). \tag{38}\]
Figure 2: Different structures of the convolutional neural network.
When we plot the error on validation problems, we rescale the values for each validation problem to be in the interval \([0,1]\) using the relationship
\[L_{1}^{*}(\rho,u,v,p)=\frac{L_{1}^{l}(\rho,u,v,p)}{\max_{l}(L_{1}^{l}(\rho,u,v,p) )},\qquad l=0,\ldots,L, \tag{39}\]
where \(L\) denotes the total number of training steps.
### Construction of the data set for the CNN training procedure
For each of the 19 configurations of the Riemann problem, specific relations must be satisfied by the initial data, together with symmetry properties of the solution. We present the formulas given in [41] and create the data sets for the CNN training according to these formulas.
We define
\[\Phi_{lr}:=\frac{2\sqrt{\gamma}}{\gamma-1}\Big{(}\sqrt{\frac{p_{l}}{\rho_{l}}} -\sqrt{\frac{p_{r}}{\rho_{r}}}\Big{)},\quad\Psi_{lr}^{2}:=\frac{(p_{l}-p_{r}) (\rho_{l}-\rho_{r})}{\rho_{l}\rho_{r}},\quad(\Psi_{lr}>0) \tag{40}\]
and
\[\Pi_{lr}:=\Big{(}\frac{p_{l}}{p_{r}}+\frac{(\gamma-1)}{(\gamma+1)}\Big{)} \Big{/}\Big{(}1+\frac{(\gamma-1)}{(\gamma+1)}\frac{p_{l}}{p_{r}}\Big{)}. \tag{41}\]
In Sections 5.1, 5.2, 5.3 we list the specific relations for given examples that are sufficient to uniquely define the solution. Following these relations, we randomly generate the initial data and construct our data sets.
## 5 Numerical results
To demonstrate the efficiency of the proposed method, in this section we present the numerical results obtained with the WENO-DS method after the CNN training procedure. Note that the CNN training procedure only needs to be performed once as _offline_ training for each of the examples presented in Sections 5.1, 5.2, 5.3. No additional training was performed for the examples in Section 5.4, as we show the results using the same trained CNN from the previous examples. In Section 5.5 we perform two more trainings with larger CNNs and illustrate the results. Further details can be found in the respective sections.
For the following system of ordinary differential equations (ODEs)
\[\frac{\mathrm{d}U(t)}{\mathrm{d}t}=L(U), \tag{42}\]
we use a third-order _total variation diminishing_ (TVD) Runge-Kutta method [17] given by
\[U^{(1)} =U^{n}+\Delta t\,L(U^{n}),\] \[U^{(2)} =\frac{3}{4}U^{n}+\frac{1}{4}U^{(1)}+\frac{1}{4}\Delta t\,L(U^{( 1)}), \tag{43}\] \[U^{n+1} =\frac{1}{3}U^{n}+\frac{2}{3}U^{(2)}+\frac{2}{3}\Delta t\,L(U^{( 2)}),\]
where \(U^{n}\) is the numerical solution at the time step \(n\).
For the scheme (43) we use an adaptive step size
\[\Delta t=0.6\min\Big{(}\frac{\Delta x}{a},\frac{\Delta y}{a}\Big{)}, \tag{44}\]
with
\[a=\max_{\begin{subarray}{c}i=0,\ldots,I\\ j=0,\ldots,J\end{subarray}}\big{(}|\lambda_{i,j}^{+}|,|\lambda_{i,j}^{-}|\big{)},\quad\lambda^{\pm}=V\pm c,\quad V=\sqrt{u^{2}+v^{2}},\quad c^{2}=\gamma\,\frac{p}{\rho}, \tag{45}\]
where \(u\), \(v\) are the velocities and \(c\) is the local speed of sound.
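The step-size rule (44)-(45) in the same sketch form (NumPy arrays for the primitive variables):

```python
import numpy as np

def adaptive_dt(rho, u, v, p, dx, dy, gamma, cfl=0.6):
    """Adaptive step size (44)-(45) from the fastest characteristic speed."""
    c = np.sqrt(gamma * p / rho)        # local speed of sound
    V = np.sqrt(u**2 + v**2)            # local flow speed
    a = np.max(np.maximum(np.abs(V + c), np.abs(V - c)))
    return cfl * min(dx / a, dy / a)
```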
In the sequel we enumerate the different configurations of initial conditions according to [44].
### Configuration 2
This is the configuration with four rarefaction waves: \(\overrightarrow{R}_{21}\), \(\overleftarrow{R}_{32}\), \(\overleftarrow{R}_{34}\), \(\overrightarrow{R}_{41}\). A detailed analysis was given in [45; 41], and the following relations must be satisfied in this case:
\[\begin{split}& u_{2}-u_{1}=\Phi_{21},\quad u_{4}-u_{3}=\Phi_{34}, \quad u_{3}=u_{2},\quad u_{4}=u_{1},\\ & v_{4}-v_{1}=\Phi_{41},\quad v_{2}-v_{3}=\Phi_{32},\quad v_{2}=v _{1},\quad v_{3}=v_{4}\end{split} \tag{46}\]
with the compatibility conditions \(\Phi_{21}=-\Phi_{34}\) and \(\Phi_{41}=-\Phi_{32}\). Moreover, for a polytropic gas the equations
\[\rho_{l}/\rho_{r}=(p_{l}/p_{r})^{1/\gamma}\quad\text{for}\quad(l,r)\in\{(2,1), (3,4),(3,2),(4,1)\} \tag{47}\]
have to be included. Furthermore, we have \(\rho_{2}=\rho_{4}\), \(\rho_{1}=\rho_{3}\), \(p_{1}=p_{3}\), \(p_{2}=p_{4}\), \(u_{2}-u_{1}=v_{4}-v_{1}\) and \(u_{4}-u_{3}=v_{2}-v_{3}\).
To create the data set, we use the values
\[\begin{split}&\rho_{1}\in\mathcal{U}[0.7,2],\quad\rho_{2}\in \mathcal{U}[0.5,\rho_{1}],\quad p_{1}\in\mathcal{U}[0.2,1.5],\\ &\quad u_{1}\in\mathcal{U}[-1,1],\quad v_{1}=u_{1},\quad\gamma\in (1.1,1.67)\end{split} \tag{48}\]
and for the other values we use the relations (46), (47) with (40). We also compute the reference solutions using the WENO-Z method on a grid of \(I\times J=400\times 400\) space points up to the final time \(T\in\mathcal{U}[0.1,0.2]\) and create a data set consisting of 50 reference solutions.
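As an illustration of this construction, the sketch below samples one Configuration-2 state according to (48) and closes the remaining quadrant values with (46) and (47), using (40); it is our own transcription of these relations and has not been checked against the reference code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config2():
    """Sample one Configuration-2 initial state following (46)-(48)."""
    rho1 = rng.uniform(0.7, 2.0)
    rho2 = rng.uniform(0.5, rho1)
    p1 = rng.uniform(0.2, 1.5)
    u1 = rng.uniform(-1.0, 1.0)
    gamma = rng.uniform(1.1, 1.67)

    p2 = p1 * (rho2 / rho1) ** gamma   # isentropic relation (47)
    rho3, p3 = rho1, p1                # rho1 = rho3, p1 = p3
    rho4, p4 = rho2, p2                # rho2 = rho4, p2 = p4

    # Phi_21 from (40); since p4 = p2 and rho4 = rho2, Phi_41 = Phi_21 here.
    phi21 = 2.0 * np.sqrt(gamma) / (gamma - 1.0) * (
        np.sqrt(p2 / rho2) - np.sqrt(p1 / rho1))
    u2 = u1 + phi21                    # u2 - u1 = Phi_21 from (46)
    u3, u4 = u2, u1
    v1, v2 = u1, u1                    # v1 = u1 per (48), v2 = v1
    v4 = v1 + phi21                    # v4 - v1 = Phi_41
    v3 = v4
    return dict(rho=(rho1, rho2, rho3, rho4), u=(u1, u2, u3, u4),
                v=(v1, v2, v3, v4), p=(p1, p2, p3, p4), gamma=gamma)
```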
For training, we use the training procedure described in Section 3.3. First, we use the simplest neural network structure shown in Figure 2(a) and perform the training for a total number of 4000 training steps. We plot the evolution of the \(L_{1}^{*}\) error (39) for the validation problems in Figure 3. Note that these problems were not included in the training data; their initial conditions were generated analogously to the construction of the training data set. For these problems, we measured the error every 100 training steps and at a randomly chosen final time \(T\). We select the final model based on the evolution of the error on the validation set. We see that the error decreases up to a certain point for all problems and then starts to increase for some of them; longer training would lead to overfitting of the training data. Finally, we choose the final model from training step 2800 and present the results using this model.
As a test problem we use the problem from [44] with \(\gamma=1.4\), \(T=0.2\) and the initial condition
\[(\rho,u,v,p)=\begin{cases}(1,0,0,1)&x>0.5\quad\text{and}\quad y>0.5,\\ (0.5197,-0.7259,0,0.4)&x<0.5\quad\text{and}\quad y>0.5,\\ (1,-0.7259,-0.7259,1)&x<0.5\quad\text{and}\quad y<0.5,\\ (0.5197,0,-0.7259,0.4)&x>0.5\quad\text{and}\quad y<0.5.\end{cases} \tag{49}\]
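Quadrant-wise initial data such as (49) can be evaluated on a grid as follows; the interface of this small NumPy sketch is our own.

```python
import numpy as np

def riemann2d_ic(x, y, quadrants, x0=0.5, y0=0.5):
    """Evaluate a four-quadrant Riemann initial condition such as (49).
    `quadrants` maps quadrant index 1..4 to a state (rho, u, v, p), with
    quadrant 1 upper right, 2 upper left, 3 lower left, 4 lower right."""
    q = np.where(x > x0, np.where(y > y0, 1, 4), np.where(y > y0, 2, 3))
    states = np.array([quadrants[k] for k in (1, 2, 3, 4)])
    return states[q - 1]  # shape (..., 4): (rho, u, v, p) at each point

# The test problem (49):
quadrants_49 = {1: (1.0, 0.0, 0.0, 1.0),
                2: (0.5197, -0.7259, 0.0, 0.4),
                3: (1.0, -0.7259, -0.7259, 1.0),
                4: (0.5197, 0.0, -0.7259, 0.4)}
```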
The results are shown in Table 1. As can be seen, we achieve a significant error improvement for all four variables and for different discretizations. It should be noted that we trained only with the discretization \(100\times 100\) space points and did not retrain the neural network for different discretizations. We refer to the error of the WENO-Z method divided by the error of WENO-DS (rounded to 2 decimal points) as the 'ratio'. The density contour plots are shown in Figure 4 and the absolute pointwise errors for the density are shown in Figure 5.
Finally, we compare the computational cost of WENO-DS with that of the original WENO scheme for this problem in Figure 6. Using a logarithmic scale, we plot the computation time against the \(L_{1}\) error averaged over the four variables \(\rho\), \(u\), \(v\), \(p\).
It should be noted that if we were to test the method on another unseen test problem using the initial data from the previously described range, we would obtain very similar error improvements in those cases.
Figure 3: The values (39) for different validation problems evaluated each 100 training steps.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(I\times J\) & \multicolumn{3}{c|}{\(50\times 50\)} & \multicolumn{3}{c|}{\(100\times 100\)} & \multicolumn{3}{c|}{\(200\times 200\)} \\ \hline & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio \\ \hline \hline \(\rho\) & 0.012488 & 0.010722 & 1.16 & 0.005465 & 0.004648 & 1.18 & 0.001862 & 0.001547 & 1.20 \\ \hline \(u\) & 0.014363 & 0.011986 & 1.20 & 0.006153 & 0.005066 & 1.21 & 0.002053 & 0.001627 & 1.26 \\ \hline \(v\) & 0.014363 & 0.011986 & 1.20 & 0.006153 & 0.005066 & 1.21 & 0.002053 & 0.001627 & 1.26 \\ \hline \(p\) & 0.013113 & 0.011510 & 1.14 & 0.005619 & 0.004899 & 1.15 & 0.001879 & 0.001587 & 1.18 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of \(L_{1}\) error of WENO-Z and WENO-DS methods for the solution of the Euler system with the initial condition (49) for different spatial discretizations, \(T=0.2\).
Figure 4: Density contour plot for the solution of the Riemann problem with the initial condition (49), \(I\times J=100\times 100\), \(T=0.2\).
Figure 5: Absolute pointwise errors for the density solution of the Riemann problem with the initial condition (49), \(I\times J=100\times 100\), \(T=0.2\).
### Configuration 3
This is the configuration with four shock waves: \(\overleftarrow{S}_{21},\overleftarrow{S}_{32},\overleftarrow{S}_{34},\overleftarrow{S }_{41}\). According to [41], in this case we have the following equations that must be satisfied:
\[\begin{split}& u_{2}-u_{1}=\Psi_{21},\quad u_{3}-u_{4}=\Psi_{34}, \quad u_{3}=u_{2},\quad u_{4}=u_{1},\\ & v_{4}-v_{1}=\Psi_{41},\quad v_{3}-v_{2}=\Psi_{32},\quad v_{2}=v _{1},\quad v_{3}=v_{4}\end{split} \tag{50}\]
and for polytropic gas the equations
\[\rho_{l}/\rho_{r}=\Pi_{lr}\quad\text{for}\quad(l,r)\in\{(2,1),(3,4),(3,2),(4,1)\} \tag{51}\]
are added. This gives the compatibility conditions \(\Psi_{21}=\Psi_{34}\) and \(\Psi_{41}=\Psi_{32}\). Furthermore, we have \(\rho_{2}=\rho_{4}\), \(p_{2}=p_{4}\) and \(u_{2}-u_{1}=v_{4}-v_{1}\).
In this case, to create the data set we use the values
\[\begin{split}\rho_{1}\in\mathcal{U}[1,2],\quad\rho_{2}\in& \mathcal{U}[0.5,1],\quad p_{1}\in\mathcal{U}[1,2],\\ u_{1}\in&\mathcal{U}[-0.25,0.25],\quad v_{1}=u_{1}, \quad\gamma\in(1.1,1.67)\end{split} \tag{52}\]
and for the other values we use the relations (50), (51) with (40) and (41). Similarly to the previous example, we compute the reference solutions using the WENO-Z method on a grid of \(I\times J=400\times 400\) space points up to the final time \(T\in\mathcal{U}[0.1,0.3]\) and create a data set consisting of 50 reference solutions.
We proceed with training as described in the previous section, using the same neural network structure as shown in Figure 2(a). Again, we train only on the discretization \(I\times J=100\times 100\) space steps. We run the training for 4000 training steps and plot the evolution of
Figure 6: Comparison of computational cost against \(L_{1}\)-error of the solution of the Riemann problem with the initial condition (49).
the validation metrics (39) for the validation problems in Figure 7. We measured the error every 100 training steps and at a randomly chosen final time \(T\). Based on this, we choose the final model from training step 3200 and present the results for the test problem with \(\gamma=1.4\), \(T=0.3\), and initial condition [44]
\[(\rho,u,v,p)=\begin{cases}(1.5,0,0,1.5)&x>0.5\quad\text{and}\quad y>0.5,\\ (0.5323,1.206,0,0.3)&x<0.5\quad\text{and}\quad y>0.5,\\ (0.138,1.206,1.206,0.029)&x<0.5\quad\text{and}\quad y<0.5,\\ (0.5323,0,1.206,0.3)&x>0.5\quad\text{and}\quad y<0.5.\end{cases} \tag{53}\]
We compare the results in Table 2. As can be seen, we achieve a large error improvement for all discretizations listed. The density contour plots can be found in Figure 8 and the absolute pointwise errors for the density in Figure 9. Here it can be seen that the error of WENO-DS is significantly lower in the areas of the shock contacts.
We also compare the weights \(\omega_{m}^{Z}\), \(m=0,1,2\) (22) and the updated weights \(\omega_{m}^{DS}\), \(m=0,1,2\) with the improved smoothness indicators (25). We plot these weights, corresponding
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(I\times J\) & \multicolumn{3}{|c|}{\(50\times 50\)} & \multicolumn{3}{|c|}{\(100\times 100\)} & \multicolumn{3}{|c|}{\(200\times 200\)} \\ \hline & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio \\ \hline \hline \(\rho\) & 0.038682 & 0.027906 & 1.39 & 0.019232 & 0.012817 & 1.50 & 0.007454 & 0.004657 & 1.60 \\ \hline \(u\) & 0.034692 & 0.027638 & 1.26 & 0.019588 & 0.015043 & 1.30 & 0.008249 & 0.005810 & 1.42 \\ \hline \(v\) & 0.034692 & 0.027638 & 1.26 & 0.019588 & 0.015043 & 1.30 & 0.008249 & 0.005810 & 1.42 \\ \hline \(p\) & 0.038920 & 0.030888 & 1.26 & 0.018666 & 0.014041 & 1.33 & 0.007275 & 0.005001 & 1.45 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of \(L_{1}\) error of WENO-Z and WENO-DS methods for the solution of the Euler system with the initial condition (53) for different spatial discretizations, \(T=0.3\).
Figure 7: The values (39) for different validation problems evaluated each 100 training steps.
to the positive part of a flux \(\hat{f}^{+}\) from the flux splitting, using WENO-Z and WENO-DS for the previous test problem at the final time \(T=0.3\), in Figure 10. Since we apply the principle dimension-by-dimension, we present the weights only for the approximation of the flux \(F(U)\); for the approximation of the flux \(G(U)\), the corresponding weights follow in this example by symmetry. As can be seen, WENO-DS is much better at localizing the shock from the other direction as well, which has a significant impact on the error improvement.
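For reference, the weight update can be sketched as follows. The WENO-Z formulas below are the standard ones; since (22) and (25) are not reproduced in this section, the multiplicative perturbation of the smoothness indicators by the CNN output \(\delta_{m}\) is our reading of the WENO-DS construction, and the constant `c` is a placeholder.

```python
import numpy as np

EPS = 1e-40
D = np.array([0.1, 0.6, 0.3])  # ideal linear weights for the f_{i+1/2} reconstruction

def beta_jiang_shu(f):
    """Classic smoothness indicators on the 5-point stencil (f_{i-2},...,f_{i+2})."""
    b0 = 13/12*(f[0] - 2*f[1] + f[2])**2 + 1/4*(f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12*(f[1] - 2*f[2] + f[3])**2 + 1/4*(f[1] - f[3])**2
    b2 = 13/12*(f[2] - 2*f[3] + f[4])**2 + 1/4*(3*f[2] - 4*f[3] + f[4])**2
    return np.array([b0, b1, b2])

def weno_z_weights(beta):
    """Standard WENO-Z nonlinear weights omega_m^Z from the smoothness indicators."""
    tau5 = abs(beta[0] - beta[2])
    alpha = D * (1.0 + tau5 / (beta + EPS))
    return alpha / alpha.sum()

def weno_ds_weights(beta, delta, c=0.1):
    """WENO-DS weights omega_m^DS: the learned delta_m perturbs beta_m
    multiplicatively, standing in here for the improved indicators (25)."""
    return weno_z_weights(beta * (delta + c))
```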
Finally, let us compare the computational cost of WENO-DS for this problem in Figure 11. We see that WENO-DS is much more efficient compared to WENO-Z. Again, if we tested the method on unseen problems with the same initial configuration, we would get analogous significant error improvements.
Figure 8: Density contour plot for the solution of the Riemann problem with the initial condition (53), \(I\times J=100\times 100\), \(T=0.3\).
Figure 9: Absolute pointwise errors for the density solution of the Riemann problem with the initial condition (53), \(I\times J=100\times 100\), \(T=0.3\).
Figure 11: Comparison of computational cost against \(L_{1}\)-error of the solution of the Riemann problem with the initial condition (53).
### Configuration 16
This is a configuration combining a rarefaction wave, a shock wave and contact discontinuities: \(\overleftarrow{R}_{21}\), \(J_{32}^{-}\), \(J_{34}^{+}\), \(\overrightarrow{S}_{41}\). As shown in [41], the following relations must hold in this case
\[\begin{split} u_{1}-u_{2}=&\Phi_{21},\quad u_{3}=u_{4 }=u_{1},\\ v_{4}-v_{1}=\Psi_{41},& v_{3}=v_{2}=v_{1},\quad p_{1} <p_{2}=p_{3}=p_{4}\end{split} \tag{54}\]
and for polytropic gas we add the equation (47) for a rarefaction and (51) for a shock wave between the \(l\)th and \(r\)th quadrants.
For our data set we use the values
\[\begin{split}\rho_{4}\in\mathcal{U}[1,2],\quad\rho_{3}\in \mathcal{U}[0.5,\rho_{4}],\quad p_{1}\in\mathcal{U}[0.3,1],\quad p_{2}\in \mathcal{U}[1,1.5],\\ u_{1}\in\mathcal{U}[-0.25,0.25],\quad v_{1}=u_{1},\quad\gamma\in (1.1,1.67)\end{split} \tag{55}\]
To compute the data set consisting of 50 reference solutions, we use the WENO-Z method on a grid \(I\times J=400\times 400\) space points up to the final time \(T\in\mathcal{U}[0.1,0.2]\).
We train the CNN with the structure shown in Figure 2(a) as in the previous examples, on the discretization \(I\times J=100\times 100\) space steps, for a total number of 2000 training steps. We show the evolution of the validation metrics (39) in Figure 12 and choose the model from training step 1900.
We test the trained WENO-DS on a test problem [44] with \(\gamma=1.4\), \(T=0.2\) and the initial condition
\[(\rho,u,v,p)=\begin{cases}(0.5313,0.1,0.1,0.4)&x>0.5\quad\text{and}\quad y>0. 5,\\ (1.0222,-0.6179,0.1,1)&x<0.5\quad\text{and}\quad y>0.5,\\ (0.8,0.1,0.1,1)&x<0.5\quad\text{and}\quad y<0.5,\\ (1,0.1,0.8276,1)&x>0.5\quad\text{and}\quad y<0.5.\end{cases} \tag{56}\]
Figure 12: The values (39) for different validation problems evaluated each 100 training steps.
We compare the results in Table 3 and the density contour plots can be found in Figure 13. As can be seen, WENO-DS outperforms WENO-Z and has smaller \(L_{1}\) errors in all cases. In addition, we plot the absolute pointwise errors for the density solution and show them in Figure 14.
For another unseen test problem with the same initial configurations, we would again obtain analogous significant error improvements.
### Configuration 11 and Configuration 19
In the previous sections, we trained three WENO-DS methods for three different types of configurations. We denote by WENO-DS (C2), WENO-DS (C3), and WENO-DS (C16) the methods from Sections 5.1, 5.2, and 5.3, respectively. In this section, we test these methods on unseen problems containing combinations of rarefaction waves, shock waves, and contact discontinuities. First, we consider Configuration 11 (\(\overleftarrow{S}_{21}\), \(J_{32}^{+}\), \(J_{34}^{+}\), \(\overleftarrow{S}_{41}\)) with the test problem with \(\gamma=1.4\), \(T=0.3\), and the initial condition [44]
\[(\rho,u,v,p)=\begin{cases}(1,0.1,0,1)&x>0.5\quad\text{and}\quad y>0.5,\\ (0.5313,0.8276,0,0.4)&x<0.5\quad\text{and}\quad y>0.5,\\ (0.8,0.1,0,0.4)&x<0.5\quad\text{and}\quad y<0.5,\\ (0.5313,0.1,0.7276,0.4)&x>0.5\quad\text{and}\quad y<0.5.\end{cases} \tag{57}\]
Figure 13: Density contour plot for the solution of the Riemann problem with the initial condition (56), \(I\times J=100\times 100\), \(T=0.2\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(I\times J\) & \multicolumn{2}{|c|}{\(50\times 50\)} & \multicolumn{2}{|c|}{\(100\times 100\)} & \multicolumn{2}{|c|}{\(200\times 200\)} \\ \hline & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio \\ \hline \hline \(\rho\) & 0.010980 & 0.00987 & 1.11 & 0.004834 & 0.004327 & 1.12 & 0.001827 & 0.001624 & 1.12 \\ \hline \(u\) & 0.012464 & 0.011287 & 1.10 & 0.005989 & 0.005326 & 1.12 & 0.002223 & 0.001913 & 1.16 \\ \hline \(v\) & 0.015020 & 0.013932 & 1.08 & 0.006609 & 0.006172 & 1.07 & 0.002527 & 0.002298 & 1.10 \\ \hline \(p\) & 0.010594 & 0.009644 & 1.10 & 0.004236 & 0.003820 & 1.11 & 0.001576 & 0.001392 & 1.13 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of \(L_{1}\) error of WENO-Z and WENO-DS methods for the solution of the Euler system with the initial condition (56) for different spatial discretizations, \(T=0.2\).
Second, we test the models on the configuration \((J_{21}^{+},\overleftarrow{S}_{32},\,J_{34}^{-},\overrightarrow{R}_{41})\) with the test problem with \(\gamma=1.4\), \(T=0.3\) and the initial condition [44]
\[(\rho,u,v,p)=\begin{cases}(1,0,0.3,1)&x>0.5\quad\text{and}\quad y>0.5,\\ (2,0,-0.3,1)&x<0.5\quad\text{and}\quad y>0.5,\\ (1.0625,0,0.2145,0.4)&x<0.5\quad\text{and}\quad y<0.5,\\ (0.5197,0,-0.4259,0.4)&x>0.5\quad\text{and}\quad y<0.5.\end{cases} \tag{58}\]
We summarize the results in Tables 4 and 5. As can be seen, the method trained on problems containing only rarefaction waves has the worst ability to generalize to unseen problems. On the other hand, by using methods trained on problems containing shocks or a combination of contact discontinuities, rarefaction, and shock waves, we obtain the error improvements even on unseen problems with different initial configurations. We would like to emphasize that the test problems in this section are far from the problems included in the training and validation sets. This is not only due to the choice of initial data, but also to the combination of rarefaction, shock waves and their direction, and positive and negative contact discontinuities.
### Bigger CNN and ability to generalize on unseen configurations
As can be seen in the previous Section 5.4, the models trained using the data from Section 5.2 and Section 5.3 are able to generalize very well to unseen problems. The WENO-DS method is able to properly localize the shocks and discontinuities, leading to a better numerical solution. Let us now increase the size of the CNN and use the structures shown in Figure 2(b), increasing the size of the receptive field and the number of channels, and in Figure 2(c), increasing the number of channels and adding another CNN layer. Experimentally,
Figure 14: Absolute pointwise errors for density solution of the Riemann problem with the initial condition (56), \(I\times J=100\times 100\), \(T=0.2\).
we found that only increasing the size of the receptive field and the number of channels leads to results similar to those described in the previous sections. In addition, increasing the receptive field makes WENO-DS computationally more expensive. This is because we need to prepare wider inputs for the CNN, which also need to be projected onto the characteristic fields using left eigenvectors, and the matrix multiplications are more expensive here. On the other hand, if we use the CNN structure described in Figure 2(c), we obtain a trained WENO-DS method that provides a much better numerical solution even for unseen problems with significantly different initial configurations.
Let us now train the method on two data sets. First, we use the data set from Section 5.2, train the CNN, and denote the final method as WENO-DS (C3c). Second, we train the CNN on the data set from Section 5.3 and denote the final method as WENO-DS (C16c). We test the methods on a wider set of configurations and compare the results in Tables 6 and 7. We use boldface to indicate the configuration on which the method was actually trained.
With the number of configurations listed in the tables, we cover a wide range of possible combinations of contact discontinuities, rarefaction and shock waves. For all of them we use the test examples from the literature, see, e.g., [44]. We cover the case of four contact discontinuities with Configuration 6: \(J_{21}^{-}\), \(J_{32}^{-}\), \(J_{34}^{-}\), \(J_{41}^{-}\); two contact discontinuities and two rarefaction waves with Configuration 8: \(\overleftarrow{R}_{21}\), \(J_{32}^{-}\), \(J_{34}^{-}\), \(\overleftarrow{R}_{41}\); two shock waves and two contact discontinuities with Configuration 14: \(J_{21}^{+}\), \(\overleftarrow{S}_{32}\), \(J_{34}^{-}\), \(\overleftarrow{S}_{41}\) and Configuration 11 from Section 5.4; and, finally, the combination of contact discontinuities, rarefaction and shock waves with Configuration 18: \(J_{21}^{+}\), \(\overleftarrow{S}_{32}\), \(J_{34}^{+}\), \(\overleftarrow{R}_{41}\), and Configuration 19 from Section 5.4.
As one can see, we obtain significant error improvements with both methods. Comparing
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|c|} \hline & \multicolumn{8}{c|}{Configuration 11} \\ \hline & WENO-Z & WENO-DS (C2) & ratio & WENO-DS (C3) & ratio & WENO-DS (C16) & ratio \\ \hline \hline \(\rho\) & 0.007492 & 0.008000 & 0.97 & **0.006783** & **1.15** & 0.007538 & 1.03 \\ \hline \(u\) & 0.008003 & 0.008701 & 0.92 & 0.007846 & 1.02 & **0.007840** & **1.02** \\ \hline \(v\) & 0.007692 & 0.008300 & 0.93 & **0.007161** & **1.07** & 0.007370 & 1.04 \\ \hline \(p\) & 0.005883 & 0.006467 & 0.91 & **0.005115** & **1.15** & 0.005776 & 1.02 \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of \(L_{1}\) error of WENO-Z and WENO-DS methods trained on data in Sections 5.1, 5.2 and 5.3 for the solution of the Euler system with the initial condition (57), \(I\times J=100\times 100\), \(T=0.3\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \multicolumn{8}{c|}{Configuration 19} \\ \hline & WENO-Z & WENO-DS (C2) & ratio & WENO-DS (C3) & ratio & WENO-DS (C16) & ratio \\ \hline \hline \(\rho\) & 0.014844 & 0.014463 & 1.03 & **0.013702** & **1.08** & 0.013841 & 1.07 \\ \hline \(u\) & 0.003749 & **0.003562** & **1.05** & 0.003689 & 1.02 & 0.003574 & 1.05 \\ \hline \(v\) & 0.009891 & 0.009502 & 1.04 & 0.009791 & 1.01 & **0.009245** & **1.07** \\ \hline \(p\) & 0.006123 & 0.005922 & 1.03 & **0.005595** & **1.09** & 0.005844 & 1.05 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of \(L_{1}\) error of WENO-Z and WENO-DS methods trained on data in Sections 5.1, 5.2 and 5.3 for the solution of the Euler system with the initial condition (58), \(I\times J=100\times 100\), \(T=0.3\).
both methods, even better results are obtained when the CNN was trained on the data set from Section 5.2, i.e. on a configuration with four shock waves. Compared to Table 2, the improvement for Configuration 3 is smaller but still significant. However, the method is able to generalize much better to unknown configurations. For example, for Configuration 14, we obtain an improvement ratio of 1.30 averaged over all four variables. In addition, we use WENO-DS (C3c) to illustrate the density contour plots and absolute pointwise errors in Figures 15, 16, and 17. Here we also include Configuration 3, on which the model was actually trained, for comparison.
The WENO-DS (C3c) method achieves large error improvements not only for problems from the same configuration, but also for problems from significantly different configurations. Since we used a larger CNN, the question arises of the actual numerical cost of these improvements. We illustrate the computational costs in Figure 18. As can be seen from the shift of the red dots to the right, the method involves larger computational costs. However, in most cases it is still more efficient than, or at least comparable to, the original method. We would like to emphasize that here we are comparing results on initial problems significantly different from those on which the method was actually trained; machine learning models are generally not expected to give much better results on unseen problems.
## 6 Conclusion

This study demonstrates the superiority of the WENO-DS approach through an extensive examination of various test problems, including scenarios featuring shocks and rarefaction waves. The results consistently show that the approach outperforms the traditional fifth-order WENO schemes, especially when dealing with challenges like excessive diffusion or overshooting around shocks.
The introduction of machine learning into the realm of solving partial differential equations (PDEs) has brought about promising improvements in numerical methods. However,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{**Configuration 16**} & \multicolumn{3}{|c|}{Configuration 6} & \multicolumn{3}{|c|}{Configuration 8} & \multicolumn{3}{|c|}{Configuration 11} \\ \hline & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio \\ \hline \(\rho\) & 0.004834 & 0.004127 & **1.17** & 0.038616 & 0.036329 & 1.06 & 0.005711 & 0.004777 & 1.20 & 0.007792 & 0.007695 & 1.01 \\ \hline \(u\) & 0.005989 & 0.004981 & **1.20** & 0.019662 & 0.019575 & 1.00 & 0.004848 & 0.007056 & 1.20 & 0.008003 & 0.007824 & 1.02 \\ \hline \(v\) & 0.006609 & **0.005776** & **1.14** & 0.022582 & 0.019974 & 1.13 & 0.008488 & 0.007056 & 1.20 & 0.007692 & 0.007482 & 1.03 \\ \hline \(p\) & 0.004236 & **0.003663** & **1.16** & 0.0105226 & 0.010216 & 1.03 & 0.005350 & 0.004624 & 1.16 & 0.005883 & 0.006295 & 0.93 \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Configuration 14} & \multicolumn{3}{|c|}{Configuration 18} & \multicolumn{3}{|c|}{Configuration 19} \\ \hline & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio & WENO-Z & WENO-DS & ratio \\ \hline \(\rho\) & 0.013169 & 0.011718 & 1.12 & 0.014918 & 0.013447 & 1.11 & 0.014841 & 0.013198 & 1.12 \\ \hline \(u\) & 0.004835 & 0.004021 & 1.00 & 0.005354 & 0.002975 & 1.19 & 0.003749 & 0.003526 & 1.15 \\ \hline \(v\) & 0.021299 & 0.0203803 & 1.05 & 0.010345 & 0.008302 & 1.11 & 0.008981 & 0.008796 & 1.12 \\ \hline \(p\) & 0.034996 & 0.034088 & 0.97 & 0.006795 & 0.006355 & 1.04 & 0.006123 & 0.005752 & 1.06 \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of \(L_{1}\) error of WENO-Z and WENO-DS (C16c) methods for the solution of the Euler system with various initial configurations, \(I\times J=100\times 100\).
Figure 15: Density contour plots and absolute pointwise errors for the solution of the Riemann problem with initial Configuration 6, \(I\times J=100\times 100\), \(T=0.3\).
it is crucial to strike a balance between these data-driven insights and the foundational mathematical principles underpinning the numerical scheme. This study successfully maintains this equilibrium, building upon the physical principles of the Euler equations while incorporating deep learning enhancements.
In summary, the WENO-DS approach represents a significant advancement in the field of numerical methods for hyperbolic conservation laws, where the incorporation of deep learning techniques has not only enhanced the accuracy but also improved the qualitative behavior of solutions, both in smooth regions and near discontinuities. This research paves the way for future developments in the intersection of traditional numerical methods and machine learning, offering a promising direction for further advancements in solving complex PDEs like the Euler equations.
|
2308.16897 | Distinguishing models with $μ\to e $ observables | Upcoming experiments will improve the reach for the lepton flavour violating
(LFV) processes $\mu \to e \gamma$, $\mu \to e \bar{e} e$ and $\mu A \to e A$
by orders of magnitude. We investigate whether this upcoming data could rule
out some popular TeV-scale LFV models (the type II seesaw, the inverse seesaw
and a scalar leptoquark) using a bottom-up EFT approach involving twelve Wilson
coefficients that can in principle all be determined by experimental
measurements. In this 12-dimensional coefficient space, each model can only
predict points in a specific subspace; for instance, flavour change involving
singlet electrons is suppressed in the seesaw models, and the leptoquark
induces negligible coefficients for 4-lepton scalar operators. Using the fact
that none of these models can populate the whole region accessible to upcoming
experiments, we show that $\mu \to e$ experiments have the ability to rule them
out. | Marco Ardu, Sacha Davidson, Stéphane Lavignac | 2023-08-31T17:57:43Z | http://arxiv.org/abs/2308.16897v2 | # Distinguishing models with \(\mu\to e\) observables
###### Abstract
Upcoming experiments will improve the reach for the lepton flavour violating (LFV) processes \(\mu\to e\gamma\), \(\mu\to e\bar{e}e\) and \(\mu A\to eA\) by orders of magnitude. We investigate whether this upcoming data could rule out some popular TeV-scale LFV models (the type II seesaw, the inverse seesaw and a scalar leptoquark) using a bottom-up EFT approach involving twelve Wilson coefficients that can in principle all be determined by experimental measurements. In this 12-dimensional coefficient space, each model can only predict points in a specific subspace; for instance, flavour change involving singlet electrons is suppressed in the seesaw models, and the leptoquark induces negligible coefficients for 4-lepton scalar operators. Using the fact that none of these models can populate the whole region accessible to upcoming experiments, we show that \(\mu\to e\) experiments have the ability to rule them out.
+
Footnote †: preprint:
## I Introduction and review
### Introduction
Searches for New Physics (NP) in the lepton sector are of great interest, because such NP is required by neutrino masses, because it could fit some current anomalies (such as \((g-2)_{\mu}\)[1] and observations in \(B\) meson physics [2; 3; 4; 5; 6]), and because leptons do not have strong interactions, so the observables are relatively clean. In this manuscript, we assume that this leptonic New Physics is heavy, and parametrise it in EFT [7; 8; 9].
Lepton Flavour change (LFV) in the \(\mu\to e\) sector is promising for the discovery of leptonic NP, because the experimental sensitivity is already good, and is expected to improve by several orders of magnitude in the near future (see table 1). However, few processes are constrained, so the current experimental bounds only set about a dozen constraints on Wilson coefficients [10]. One can therefore wonder whether future observations of \(\mu\!\to\!e\) flavour change could distinguish among the multitude of models that predict LFV.
Predictions for \(\mu\!\to\!e\) LFV have been widely studied over several decades in a multitude of models, such as the supersymmetric type I seesaw, the supersymmetric type II seesaw, supersymmetric flavour models, left-right symmetric models, two Higgs doublet models, the inverse seesaw and its supersymmetric version, warped extra dimensions, the
\begin{table}
\begin{tabular}{|l|l|l|} \hline process & current sensitivity & future \\ \hline \(\mu\to e\gamma\) & \(<4.2\times 10^{-13}\)(MEG [11]) & \(\sim 10^{-14}\) (MEG II [12]) \\ \(\mu\to e\bar{e}e\) & \(<1.0\times 10^{-12}\)(SINDRUM [13]) & \(\sim 10^{-16}\) (Mu3e [14]) \\ \(\mu\)Au\(\to e\)Au & \(<7\times 10^{-13}\)(SINDRUM II [15]) & \(?\to 10^{-(18\to 20)}\) (PP/AMF [16; 17]) \\ \(\mu\)Ti\(\to e\)Ti & \(<6.1\times 10^{-13}\)(SINDRUM II [18]) & \(\sim 10^{-16}\) (COMET [19], Mu2e [20]) \\ \hline \(\tau\to l+\ldots\) & \(\lesssim 10^{-8}\)(Babar/Belle) [21; 22] & \(\sim 10^{-(9\to 10)}\)(BelleII) [23] \\ \hline \end{tabular}
\end{table}
Table 1: Current bounds on the branching ratios for various lepton flavour changing processes, and the estimated reach of upcoming experiments.
littlest Higgs model with T parity, unparticle physics, radiative neutrino mass models, spontaneous lepton number violation, low-scale flavour models, and many others (see e.g. Refs. [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37], and for recent reviews Refs. [38; 39]). Top-down analyses - which start from the model to predict observables - frequently show correlations among branching ratios, often resulting from scans over model parameter space. In our bottom-up EFT perspective, starting from the data, we address a different question: can observations distinguish among models?
In this manuscript, we focus on three models with new heavy particles around the TeV scale. The first two are neutrino mass models : the TeV-scale version of the type II seesaw mechanism [40; 41; 42; 43] and the inverse type I seesaw [44; 45; 46; 47; 48; 49; 50; 51], whose predictions for LFV processes have been studied, mainly in the top-down approach, by many authors (see e.g. Refs. [47; 48; 49; 50; 51] for the type II and Refs. [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62] for the inverse seesaw, where Refs. [57; 59; 60; 62] follow an EFT approach). Both these models have the additional attraction of being able to generate the baryon asymmetry of the Universe via leptogenesis [63] (for a review, see Ref. [64]). While, in the type II seesaw case, thermal leptogenesis requires a triplet mass above \(10^{10}\,\mathrm{GeV}\) or so [65; 66; 67], a TeV-scale scalar triplet with non-minimal coupling to gravity can lead to successful leptogenesis [68] through the Affleck-Dine mechanism [69]. The inverse seesaw model, on the other hand, features TeV-scale sterile neutrinos which can generate the baryon asymmetry of the Universe through resonant leptogenesis [70; 71; 72; 73] or ARS leptogenesis [74; 75; 76; 77]. The last model is an SU(2) singlet leptoquark which can fit the \(R_{D}\) anomaly [2; 3; 4; 5; 6], as discussed by many authors (see e.g. Refs. [78; 79; 80; 81]). The leptoquark differs from the neutrino mass models in that at tree level, it generates lepton-2quark operators (which mediate \(\mu\!\rightarrow\!e\) conversion), and in that it couples to SU(2) singlet fermions of the SM.
We apply bottom-up EFT to explore whether \(\mu\!\rightarrow\!e\) LFV can distinguish among models, starting from the observation that the data could determine 12 operator coefficients, and not just the three branching ratios. We consider this 12-dimensional coefficient space, and ask whether the volume accessible to upcoming experiments can be filled by each of three models. So we aim to identify the region of the ellipse that a model cannot occupy; an observation in this region would rule the model out. Our study is performed in an EFT framework inspired by [82], and differs from top-down analyses in that we do not scan over model parameter space (whose measure we do not know), and in that we take the data to be 12 Wilson coefficients. A more complete analysis and technical details will appear in a subsequent publication [83].
Our EFT framework is briefly summarised in the next subsection. In the following three sections, we present and discuss three models of New Physics at the TeV scale: the type II seesaw in section II, the inverse type I seesaw in section III, and a leptoquark in section IV. Section V compares the models and summarises the results.
### Review
We consider three processes - \(\mu\to e\gamma\), \(\mu\to e\bar{e}e\) and Spin-Independent1 (SI) \(\mu A\to eA\) - because they are complementary [82], and because the experimental sensitivity could improve significantly in coming years. They can be parametrised at the experimental scale by the Lagrangian [86]:
Footnote 1: Spin-Dependent \(\mu A\!\rightarrow\!eA\)[84; 85] is also possible, but analogously to WIMP scattering, is relatively suppressed.
\[\delta\mathcal{L} = \frac{1}{v^{2}}\sum_{X\in\{L,R\}}\left[C^{e\mu}_{D,X}(m_{\mu}\overline{e}\sigma^{\rho\sigma}P_{X}\mu)F_{\rho\sigma}+C^{e\mu ee}_{S,XX}(\overline{e}P_{X}\mu)(\overline{e}P_{X}e)+C^{e\mu ee}_{V,XR}(\overline{e}\gamma^{\rho}P_{X}\mu)(\overline{e}\gamma_{\rho}P_{R}e)\right.\] (I.1) \[\left.+C^{e\mu ee}_{V,XL}(\overline{e}\gamma^{\rho}P_{X}\mu)(\overline{e}\gamma_{\rho}P_{L}e)+C_{Alight,X}\mathcal{O}_{Alight,X}+C_{Aheavy,X}\mathcal{O}_{Aheavy,X}\right]+\mathrm{h.c.}\]
where the twelve coefficients \(C\) are dimensionless complex numbers, \(X\in\{L,R\}\), \(\frac{1}{v^{2}}=2\sqrt{2}G_{F}\) (so \(v=\)174 GeV), and \(\mathcal{O}_{Alight,X}\) and \(\mathcal{O}_{Aheavy,X}\) are respectively the four-fermion operator combinations that induce \(\mu A\!\rightarrow\!eA\) on light nuclei
\begin{table}
\begin{tabular}{|l|c|c|l|} \hline coefficient & current bound & future bound & process \\ \hline \(C^{e\mu}_{D,X}\) & \(1.0\times 10^{-8}\) & \(10^{-9}\) & \(\mu\to e\gamma\) \\ \(C^{e\mu ee}_{V,XX}\) & \(0.7\times 10^{-6}\) & \(10^{-8}\) & \(\mu\to e\bar{e}e\) \\ \(C^{e\mu ee}_{V,XY}\) & \(1.0\times 10^{-6}\) & \(10^{-8}\) & \(\mu\to e\bar{e}e\) \\ \(C^{e\mu ee}_{S,XX}\) & \(2.8\times 10^{-6}\) & \(10^{-8}\) & \(\mu\to e\bar{e}e\) \\ \(C_{Alight,X}\) & \(5.0\times 10^{-8}\) & \(10^{-10}\) & \(\mu\mathrm{Ti}\to e\mathrm{Ti}\) \\ \(C_{Aheavy,X}\) & \(2\times 10^{-7}\) & \(10^{-9}\) & \(\mu\mathrm{Au}\to e\mathrm{Au}\) \\ \hline \end{tabular}
\end{table}
Table 2: Current bounds on the operator coefficients of the Lagrangian (I.1) at the experimental scale \(m_{\mu}\) (\(X=L,R\)), and estimated reach of upcoming experiments.
like Titanium or Aluminium, and an operator combination probed by heavy targets like Gold. Expressions for these operators are given in the Appendix.
The non-observation of \(\mu\!\to\!e\) processes constrains the coefficients in eqn (I.1) to sit in a 12-dimensional ellipse at the origin [10] (see Table 2 for the current bounds, and estimated sensitivities of upcoming experiments). Observations could in principle determine the magnitude of each coefficient: if the decaying muon is polarised [86] (which can also be possible for \(\mu A\to eA\)[87; 88]) then the chirality of the \(\mu\to e\) bilinear can be determined, and asymmetries and angular distributions in \(\mu\to e\bar{e}e\) can distinguish among the four-lepton operators that contribute [89]. Some relative phases can also be measured [89]. Finally, changing the target material in \(\mu A\to\!eA\) allows one to probe different combinations of vector and scalar coefficients on protons or neutrons [90; 91]; current theoretical accuracy allows one to obtain independent information from at least two targets [10; 92], so in this manuscript we focus on light targets like Titanium (used by SINDRUMII [15; 18]) or Aluminium (the target for the upcoming Mu2e and COMET experiments). The complementary constraints that can be obtained with Gold (used by SINDRUMII [15]) will be discussed in [83]. With theoretical optimism, we assume the coefficients can be distinguished to the reach of upcoming experiments.
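As a minimal illustration of this coefficient space, one can approximate the allowed region coefficient by coefficient, ignoring the correlations that make it an ellipse; the names and interface below are ours.

```python
# Current per-coefficient bounds of Table 2 (at the experimental scale m_mu).
CURRENT_BOUNDS = {
    "C_D_L": 1.0e-8,  "C_D_R": 1.0e-8,            # mu -> e gamma
    "C_V_LL": 0.7e-6, "C_V_RR": 0.7e-6,           # mu -> e ebar e (vector, XX)
    "C_V_LR": 1.0e-6, "C_V_RL": 1.0e-6,           # mu -> e ebar e (vector, XY)
    "C_S_LL": 2.8e-6, "C_S_RR": 2.8e-6,           # mu -> e ebar e (scalar)
    "C_Alight_L": 5.0e-8, "C_Alight_R": 5.0e-8,   # mu Ti -> e Ti
    "C_Aheavy_L": 2.0e-7, "C_Aheavy_R": 2.0e-7,   # mu Au -> e Au
}

def allowed(point, bounds=CURRENT_BOUNDS):
    """True if a model point (a dict of complex coefficients) satisfies every
    per-coefficient bound; missing coefficients are treated as zero."""
    return all(abs(point.get(k, 0.0)) < b for k, b in bounds.items())
```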
We take the New Physics scale \(\Lambda_{NP}\sim\) TeV for the three models considered here. The coefficients are evolved from the experimental scale \(\sim m_{\mu}\) to \(\Lambda_{NP}\sim\) TeV in the broken electroweak theory, using the "Leading Order" RGEs of QED and QCD [93; 82] (starting respectively at \(m_{\mu}\) and 2 GeV), for the operator basis of [82]. This includes the leading log-enhanced loops of QED and QCD via the RGEs, and loop diagrams with the \(W\),\(Z\) or Higgs can contribute in the matching. We prefer this approach over matching to SMEFT at the weak scale, because it allows to resum QCD2 between the experimental scale and \(\Lambda_{NP}\), and avoids the issue that \(v\)/TeV is not large, implying that the SMEFT expansions in \(1/\Lambda_{NP}^{2}\) and \(\alpha^{n}\log\) may not converge quickly 3. This RG evolution gives the 12-dimensional ellipse at \(\Lambda_{NP}\). We then match onto each of the models in turn (at tree level in the EFT), and explore whether they can fill the ellipse.
Footnote 2: It is convenient to use 5 flavours at all scales, because the results for 5 or 6 flavours are numerically similar.
Footnote 3: This approximation may also double count some electroweak contributions that we think are higher order, as will be discussed in [83].
In relating models to observables, it is convenient to use as stepping stones the coupling constant combinations that appear in Wilson coefficients, because they parametrise the \(\mu\!\to\!e\) LFV. For instance, tree-level exchange of a leptoquark interacting with \(u\) quarks matches onto a coefficient \(\propto\lambda^{eu}\lambda^{\mu u*}\), and the loop diagram of figure 1b is \(\propto[ff^{*}Y_{c}]_{e\mu}\). We refer to these combinations as "invariants" (a la Jarlskog), because they are related to \(S\)-matrix elements, and therefore should be independent of some Lagrangian redefinitions.
The operator coefficients can of course be complex, and in some cases the relative phases are observable (for instance in asymmetries in \(\mu\to e\bar{e}e\)[89]). However, for plotting purposes, it is common to approximate the coefficients as real. In our analysis, the coefficients are complex, but we plot either the absolute values or the real parts; the phases will be discussed in [83].
## II Type II seesaw
The first model we consider is the type II seesaw mechanism [40; 41; 42; 43], which generates neutrino masses via the tree level exchange of an SU(2) triplet scalar \(\Delta\). In this model, the Yukawa matrix is directly proportional to the observed neutrino mass matrix, so it is predictive of flavour structure - for instance fixing some ratios between \(\tau\to l\) and \(\mu\to e\) transitions - and its LFV signatures have been widely studied [47; 48; 49; 50; 51].
We assume the triplet scalar is at the TeV scale, so could be produced at current and future colliders and lead to particular signatures [94; 95; 96; 97; 98; 99]. It could also affect Higgs physics [100; 101] and contribute to electroweak observables such as \(m_{W}\)[102].
The SM Lagrangian is augmented by the following interactions
\[\delta{\cal L}_{\Delta} = (D_{\rho}\Delta^{I})^{\dagger}D^{\rho}\Delta^{I}-M_{\Delta}^{2}| \Delta|^{2}+\frac{1}{2}\left(f_{\alpha\beta}\,\overline{\ell_{\alpha}^{c}}(i \tau_{2})\tau_{I}\ell_{\beta}\Delta^{I}+M_{\Delta}\lambda_{H}\,H^{T}(i\tau_{2 })\tau_{I}H\Delta^{*I}+{\rm h.c.}\right)\] (II.1) \[+\lambda_{3}(H^{\dagger}H)(\Delta^{I*}\Delta^{I})+\lambda_{4}{ \rm Tr}(\Delta^{I*}\tau_{I}\tau_{J}\tau_{K}\Delta^{K})(H^{\dagger}\tau_{J}H)+ \ldots\;\;\;,\]
where \(\Delta\) is the colour-singlet, \(SU(2)\)-triplet scalar of hypercharge \(Y_{\Delta}=+1\), \(\ell\) are the left-handed SU(2) doublets, \(M_{\Delta}\) is the triplet mass which we take \(\sim\) TeV, \(f\) is a symmetric complex \(3\times 3\) matrix proportional to the light neutrino mass matrix and whose indices \(\alpha,\beta\) run over \(\{e,\mu,\tau\}\), \(\{\tau_{I}\}\) are the Pauli matrices, and the \(\lambda\)'s are real dimensionless couplings4. The dots on the right-hand side of Eq. (II.1) stand for scalar interactions that are not relevant for LFV processes. We also find negligible contributions to LFV operators from the triplet-Higgs interactions assuming perturbative \(\lambda_{3,4}\). Consequently, these contributions will not be included in the subsequent discussion.
Footnote 4: \(\lambda_{H}\) can be taken real with no loss of generality.
We match the model to EFT at the scale \(M_{\Delta}\sim\)TeV, generating a neutrino mass matrix \([m_{\nu}]_{\alpha\beta}=U_{\alpha i}m_{i}U_{\beta i}\) via the tree-level exchange of the triplet between pairs of leptons and Higgses:
\[[m_{\nu}]_{\alpha\beta}\simeq 0.03\ \text{eV}\ f_{\alpha\beta}^{*}\frac{ \lambda_{H}}{10^{-12}}\frac{\text{TeV}}{M_{\Delta}}\ \.\] (II.2)
Exchanging the triplet among four leptons matches onto one of the LFV coefficients of eqn (I.1), which induces \(\mu\to e\bar{e}e\):
\[C_{V,LL}^{e\mu ee}\simeq\frac{v^{2}}{2M_{\Delta}^{2}}f_{\mu e}f_{ee}^{*}=\frac{[m_{\nu}^{*}]_{\mu e}[m_{\nu}]_{ee}}{2\lambda_{H}^{2}v^{2}}\,.\] (II.3)
So the small ratio \(m_{\nu}/v\) can be obtained by suppressing \(\lambda_{H}\), while leaving unconstrained \(fv/M_{\Delta}\), which controls the magnitude of LFV. However, the triplet Yukawa matrix \([f]\) is proportional to \([m_{\nu}]\), so its flavour structure can be determined from neutrino oscillation data [103]. The only unknowns are the mass \(m_{min}\) of the lightest neutrino, two Majorana phases, and the Hierarchy (Normal = \(m_{3}>m_{2}>m_{1}\), Inverted = \(m_{3}<m_{1}<m_{2}\)). We use eqn (II.2) in order to express \([f]\) in terms of \([m_{\nu}]\).
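For concreteness, (II.3) can be evaluated from a neutrino mass matrix built out of the PMNS matrix and the mass eigenvalues; a minimal sketch, with flavour indices ordered \((e,\mu,\tau)\) and \(v=174\) GeV expressed in eV.

```python
import numpy as np

V_EW = 174.0e9  # electroweak vev in eV, consistent with m_nu entries in eV

def m_nu_matrix(U, masses):
    """Neutrino mass matrix [m_nu]_{ab} = U_{ai} m_i U_{bi} (masses in eV)."""
    return U @ np.diag(masses) @ U.T

def c_vll_tree(m_nu, lam_H):
    """Tree-level C_{V,LL}^{e mu ee} of (II.3)."""
    return np.conj(m_nu[1, 0]) * m_nu[0, 0] / (2.0 * lam_H**2 * V_EW**2)
```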
The type II seesaw will also induce other LFV coefficients given in the Lagrangian (I.1). Tree-level triplet exchange matches onto \(4\ell\) operators with \(\mu\) and \(\tau\) bilinears, and these combine with eqn (II.3) in a "penguin", as illustrated in figure (a)a, to generate, for instance
\[C_{V,LR}^{e\mu ee}\ =\ \frac{\alpha_{e}}{4\pi}\left[\frac{[m_{\nu}^{\dagger}m_{ \nu}]_{\mu e}}{\lambda_{H}^{2}v^{2}}\log\left(\frac{M_{\Delta}}{m_{\tau}} \right)+\sum_{\alpha\in e,\mu}\frac{[m_{\nu}^{*}]_{\mu\alpha}[m_{\nu}]_{e \alpha}}{\lambda_{H}^{2}v^{2}}\log\left(\frac{m_{\tau}}{m_{\mu}}\right)\right]\] (II.4)
This loop arises in the RGEs, or equivalently, is log-enhanced. The logarithm is cut off at low energy by the experimental scale (\(m_{\mu}\)) or the mass of the lepton in the loop, so the \(\tau\) is not included in the loop between \(m_{\tau}\to m_{\mu}\). This is interesting, because \([m_{\nu}^{\dagger}m_{\nu}]_{\mu e}\), which appears in the first term of eqn (II.4) is determined by neutrino oscillation parameters5
Footnote 5: Here and in the rest of this paper, we assume \(\delta=3\pi/2\), a value consistent with the hints for CP violation in the lepton sector from the T2K experiment [104].
\[[m_{\nu}^{\dagger}m_{\nu}]_{\mu e}\sim i\sin\theta_{13}\Delta_{atm}^{2}\,,\]
so any dependence of \(C_{V,LR}^{e\mu ee}\) on the unknown neutrino mass scale or Majorana phases can only arise from the second term. This same penguin diagram also generates a loop correction to \(C_{VLL}^{e\mu ee}\) of eqn (II.3), and contributes to \(\mu A\to eA\) on the proton
\[\Delta C_{V,LL}^{e\mu ee}=C_{V,LR}^{e\mu ee}\ \,\ \ \ C_{V,L}^{e\mu pp}=-2C_{V,LR}^{e\mu ee}\] (II.5)
where \(C_{Alight,L}^{e\mu pp}=\frac{1}{2}C_{V,L}^{e\mu pp}+\ldots\). The coefficient on neutrons, \(C_{V,L}^{e\mu nn}\), vanishes at the order we calculate.
Figure 1: Loop contributions to \(\mu\to e\) operators and to their mixing in the type II seesaw model.
Finally, the dipole coefficients are induced by one loop matching(see figure 1b), and shrink marginally in running down to the experimental scale, while being regenerated at two loop 6 as illustrated in figure 1c:
Footnote 6: The two-loop diagrams [93; 105; 106; 107] are included here because they are “leading order” in the RGEs, and because they are numerically significant – for instance, in the electroweak contribution to \((g-2)_{\mu}\), the log-enhanced 2-loop contribution is \(\sim 1/4\) of the 1-loop matching part.
\[C_{D,R}^{e\mu}\ \simeq\ \frac{3e}{128\pi^{2}}\Big{[}\frac{[m_{\nu}m_{\nu}^{ \dagger}]_{e\mu}}{\lambda_{H}^{2}v^{2}}\Big{(}1+\frac{32}{27}\frac{\alpha_{e}} {4\pi}\ln\frac{M_{\Delta}}{m_{\tau}}\Big{)}+\frac{116\alpha_{e}}{27\pi}\ln\frac {m_{\tau}}{m_{\mu}}\sum_{\alpha\in e\mu}\frac{[m_{\nu}]_{\mu\alpha}[m_{\nu}^{ \ast}]_{e\alpha}}{|\lambda_{H}|^{2}v^{2}}\Big{]}\] (II.6)
where the first (leading) term is independent of the neutrino mass scale and Majorana phases. The second term of eqn (II.6), which is of \(\mathcal{O}(\alpha_{e})\) with respect to the first, depends on the neutrino mass scale and Majorana phases due to removing the \(\tau\) from the loop below \(m_{\tau}\), as for the penguin diagram.
The other 8 coefficients in the Lagrangian of eqn (I.1) will be discussed further in [83]. The coefficient on Gold, \(C_{Aheavy,L}\), should be predicted in the type II seesaw, where \(\mu A\to eA\) rates are related to the \(n/p\) ratio. The remaining coefficients are suppressed: for instance the dipole \(C_{D,L}^{e\mu}\) should be \(\approx\frac{m_{e}}{m_{\mu}}C_{D,R}^{e\mu}\), as expected in neutrino mass models where the new particles only interact with lepton doublets (the chirality-flip is via SM Yukawa interactions). Similarly, the operators with flavour-change involving singlet leptons (\(C_{S,RR},C_{S,LL},C_{V,RL},C_{V,RR},C_{Alight,R},C_{Aheavy,R}\)) are not discussed here, because they are Yukawa-suppressed. So we already see that the type II seesaw predicts that more than half the coefficients of eqn (I.1) are negligible; however, many of these predictions are generic to models where the New Particles interact only with lepton doublets.
The type II seesaw is expected to predict additional relations between the Wilson coefficients of eqn (I.1), because the flavour structure of LFV is controlled by the neutrino mass matrix. This should allow to predict ratios of coefficients, despite that the overall magnitude of LFV is unknown. We focus on the remaining three coefficients, given in eqns (II.3,II.4,II.6). These formulae suggest the model prefers a hierarchy \(10^{-3}:10^{-2}:1\) between the dipole, penguin-induced and tree-level coefficients; however, we aim to identify regions of coefficient space that the model cannot predict, not what it prefers.
We observe that the tree-level four-lepton coefficient \(C_{V,LL}^{e\mu ee}\) (see eqn II.3) can vanish, either for \([m_{\nu}]_{ee}\to 0\) in NH for \(m_{min}\sim\Delta_{sol}\) (as is familiar from neutrinoless double \(\beta\)-decay), or for \([m_{\nu}]_{e\mu}\to 0\), which can occur for any \(m_{min}\gtrsim\Delta_{sol}\) in NH and IH by a suitable choice of both Majorana phases. If \(C_{V,LL}^{e\mu ee}\) vanishes with \([m_{\nu}]_{\mu e}\), the dipole to penguin ratio is predicted:
\[\frac{C_{D,R}^{e\mu}}{C_{V,LR}^{e\mu ee}}\approx\frac{3e}{32\pi\alpha_{e}\log\frac{M_{\Delta}}{m_{\tau}}}\sim\frac{2}{\pi^{2}}\,{}^{+100\%}_{-30\%}\quad.\] (II.7)
because in this case the Majorana phase and neutrino mass scale dependent terms of the penguin and dipole are proportional to \(|[m_{\nu}]_{\mu e}|\) (see the second terms of eqns II.4 and II.6). This prediction is also approximately obtained when \(C_{V,LL}^{e\mu ee}\) vanishes with \([m_{\nu}]_{ee}\), because the second term of the penguin coefficient (which depends on Majorana phases and the mass scale, see eqn II.4) is \(\lesssim 1/2\) of the first term, whereas the dipole is numerically unaffected.
It is also the case that the "penguin-induced" coefficient of eqn (II.4), as well as the dipole coefficient of eqn (II.6), can separately vanish for specific choices of both Majorana phases and the neutrino mass scale in the appropriate range (however, a high neutrino mass scale \(m_{min}\gtrsim 2\) eV is required for the dipole coefficient to vanish). So in all limits where one of the three coefficients \(C_{V,LL}^{e\mu ee}\), \(C_{V,LR}^{e\mu ee}\) or \(C_{D,R}^{e\mu}\) vanishes, the ratio of the non-vanishing coefficients is constrained7. However, the large coefficient ratios that arise when the dipole or penguin vanishes may be beyond the sensitivity of upcoming experiments.
Footnote 7: The values of the neutrino parameters that lead to cancellations in the coefficients are sensitive to the triplet mass. Therefore, the predictions/expectations for the non-vanishing coefficients ratio may significantly depend on the assumption \(M_{\Delta}\sim 1\) TeV.
In order to graphically represent the area of coefficient space that the type II seesaw model _cannot_ reproduce, we plot the magnitudes \(|C_{D,R}^{e\mu}|\), \(|C_{V,LR}^{e\mu ee}|\) and \(|C_{V,LL}^{e\mu ee}|\) in spherical coordinates, with \(|C_{V,LL}^{e\mu ee}|\propto\cos\theta\) on the \(\hat{z}\) axis. The current bounds and the reach of upcoming experiments are given in table 2, which imply that upcoming experiments could probe
\[\tan\theta\equiv\frac{\sqrt{|C_{D,R}^{e\mu}|^{2}+|C_{V,LR}^{e\mu ee}|^{2}}}{|C _{V,LL}^{e\mu ee}|}:10^{-3}\to 10\quad,\quad\tan\phi\equiv\frac{|C_{D,R}^{e\mu}|}{|C_{V,LR}^{ e\mu ee}|}:10^{-2}\to 10\] (II.8)
Figure 2 illustrates (as empty) the regions of the tree/penguin/dipole coefficient space that are inaccessible to the type II seesaw model. The vertical bar represents the correlation between the dipole and penguin when the tree contribution shrinks, given in eqn (II.7). For large tree contribution, the penguin contribution can shrink when
the second term of eqn (II.4), \(\propto|[m_{\nu}]_{\mu e}|\), cancels the first. This happens for values of the unconstrained neutrino parameters (the lightest neutrino mass and the Majorana phases) that enhance the tree-level coefficient \(|C_{V,LL}^{e\mu ee}|\), so that \(\tan\theta\lesssim 10^{-3}\) - this gives the upper bound of the red region. Finally, for generic values of the Majorana phases, the tree coefficient is large with respect to the penguin-induced coefficients and the dipole, which corresponds to the blue region at \(\tan\theta\to 0\) and \(\tan\phi\lesssim 2/\pi^{2}\). In this manuscript, we leave the neutrino mass scale free, so we can obtain \(\tan\phi\to 10^{-2}\) by increasing \(m_{min}\) to \(\gtrsim 0.2\) eV; we will study the impact of complementary observables - such as the cosmological bound on the neutrino mass scale - in Ref. [83].
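The coordinates (II.8) of a model point are straightforward to evaluate; a one-function sketch:

```python
import numpy as np

def ellipse_angles(C_D_R, C_V_LR, C_V_LL):
    """The ratios tan(theta) and tan(phi) of (II.8)."""
    tan_theta = np.sqrt(abs(C_D_R)**2 + abs(C_V_LR)**2) / abs(C_V_LL)
    tan_phi = abs(C_D_R) / abs(C_V_LR)
    return tan_theta, tan_phi
```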
## III Inverse type I seesaw
In this section, we consider the inverse type I seesaw model [44; 45; 46], which generates neutrino masses via the exchange of heavy gauge-singlet fermions. Like the type II seesaw, the model can generate LFV without Lepton Number Violation, so LFV rates are not suppressed by small neutrino masses. However, unlike the type II case, the flavour-changing couplings are disconnected from the neutrino mass matrix, and several heavy new particles are added, with potentially different masses.
We add to the SM \(n\) pairs of gauge singlet fermions \(N,S\) of opposite chirality, with the interactions
\[\delta\mathcal{L}_{NS}=i\overline{N}\not{\partial}N+i\overline{S}\not{ \partial}S-\left(Y_{\nu}^{\alpha a}(\overline{\ell}_{\alpha}\tilde{H}N_{a})+M _{ab}\overline{S}_{a}N_{b}+\frac{1}{2}\mu_{ab}\overline{S}_{a}S_{b}^{c}+{\rm h.c}\right),\] (III.1)
where \(Y_{\nu}\) is a complex \(3\times n\) dimensionless matrix and \(M,\mu\) are \(n\times n\) mass matrices. If Lepton Number is attributed to \(\ell,N\) and \(S\), then only \(\mu\) is Lepton Number Violating. Upon the spontaneous breaking of the electroweak symmetry, the neutrino mass Lagrangian reads (suppressing flavour indices)
\[\mathcal{M}_{\nu N}=\overline{\left(\nu_{L}\ \ N^{c}\ \ S\right)}\left(\begin{array} []{ccc}0&m_{D}&0\\ m_{D}^{T}&0&M^{T}\\ 0&M&\mu\end{array}\right)\left(\begin{array}{c}\nu_{L}^{c}\\ N\\ S^{c}\end{array}\right)+{\rm h.c.}\] (III.2)
which, in the seesaw limit (\(Y_{\nu}v=m_{D}\ll M\)), gives the following active neutrino mass matrix
\[m_{\nu}=m_{D}(M^{-1})\mu(M^{T})^{-1}m_{D}^{T}.\] (III.3)
while for \(M\gg\mu\) the \(N,S\) pairs have pseudo-Dirac masses determined by the eigenvalues of \(M\). Neutrino masses and oscillation parameters can be obtained by adjusting the lepton number breaking matrix \(\mu\) for arbitrary choices of the
Figure 2: The white regions indicate ratios of operator coefficients that the type II seesaw _cannot_ predict, as discussed after eqn (II.8), where \(\tan\theta\) and \(\tan\phi\) are defined. Upcoming experiments are sensitive to the plotted ranges of the ratios. These estimates are independent of the neutrino mass hierarchy and mass scale; the star and circle are respectively in the regions predicted in NH and IH for \(m_{min}=0\).
Yukawas \(Y_{\nu}\) and sterile masses \(M\). This contrasts with the "vanilla" type I seesaw expectation of GUT scale sterile neutrinos or suppressed Yukawas [108; 109; 110; 111; 112], which give negligible contributions to LFV observables.
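A direct numerical transcription of (III.3), with \(m_D=Y_\nu v\) (all mass matrices in GeV; a sketch, with conventions as above):

```python
import numpy as np

def inverse_seesaw_mnu(Y_nu, M, mu, v=174.0):
    """Light neutrino mass matrix (III.3): m_D M^{-1} mu (M^T)^{-1} m_D^T."""
    m_D = Y_nu * v
    M_inv = np.linalg.inv(M)
    return m_D @ M_inv @ mu @ M_inv.T @ m_D.T
```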
In the following, we consider \(M\sim\text{TeV}\) and allow \(Y_{\nu}\) to vary in the parameter space allowed by current LFV searches and other experimental constraints. Low-scale type I seesaw models can be directly probed via the production of the heavy neutral leptons at colliders [113; 114; 115; 116; 117; 118; 119], or indirectly through the active-sterile neutrino mixing (or the associated non-unitarity of the effective \(3\times 3\) lepton mixing matrix), which affect electroweak precision observables, universality ratios and lepton flavor violating processes [120; 121; 122; 123].
### \(\mu\to e\) LFV
Large lepton flavour violating transitions are among the distinctive features of the inverse seesaw [50; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. In this manuscript, we focus on the contact interactions that are relevant for \(\mu\to e\) observables and aim at determining the region of the EFT coefficient space that the model cannot reach.
The LFV transitions we are interested in occur in this model via loops, as we illustrate in Figure 3. The four-fermion operator coefficients are obtained by matching out the heavy singlets in penguin and box diagrams. The vector four-fermion coefficients \(C^{e\mu ff}_{V,LX}\) receive contributions from the penguin diagrams shown in Figures 3(a) and 3(b), which are respectively \(\mathcal{O}(Y_{\nu}Y_{\nu}^{\dagger})\) and \(\mathcal{O}(Y_{\nu}Y_{\nu}^{\dagger}Y_{\nu}Y_{\nu}^{\dagger})\). We include the second diagram following [62], who observed that the contributions \(\propto Y_{\nu}^{4}\) could be relevant for \(Y_{\nu}\sim\mathcal{O}(1)\). The box diagrams of Fig. 3(c) also match onto vector four-lepton operators, while the diagrams of Fig. 3(d) match onto the \(\mu\to e\) dipole. Similarly to the type II seesaw model of section II, the new states couple to the left-handed doublets, so the operators featuring LFV currents with electron singlets are suppressed by the electron Yukawa. As a result, the model matches onto four8 of the operators
Figure 3: Matching contributions to \(\mu\to e\) operators in the inverse seesaw. The diagrams illustrate the relevant interactions that are generated, but other diagrams may also contribute to the same operators
in Eq. (I.1):
\[C^{e\mu ee}_{V,LR} \simeq v^{2}\frac{\alpha_{e}}{4\pi}\left(1.5[Y_{\nu}M_{a}^{-2}\left(\frac{11}{6}+\log\left(\frac{m_{W}^{2}}{M_{a}^{2}}\right)\right)Y_{\nu}^{\dagger}]_{e\mu}-2.7[Y_{\nu}(Y_{\nu}^{\dagger}Y_{\nu})_{ab}\frac{1}{M_{a}^{2}-M_{b}^{2}}\log\left(\frac{M_{a}^{2}}{M_{b}^{2}}\right)Y_{\nu}^{\dagger}]_{e\mu}+{\cal O}\left(\frac{\alpha_{e}}{4\pi}\right)\right)\] \[C^{e\mu}_{Alight,L} \simeq v^{2}\frac{\alpha_{e}}{4\pi}\left(-0.6[Y_{\nu}M_{a}^{-2}\left(\frac{11}{6}+\log\left(\frac{m_{W}^{2}}{M_{a}^{2}}\right)\right)Y_{\nu}^{\dagger}]_{e\mu}+1.1[Y_{\nu}(Y_{\nu}^{\dagger}Y_{\nu})_{ab}\frac{1}{M_{a}^{2}-M_{b}^{2}}\log\left(\frac{M_{a}^{2}}{M_{b}^{2}}\right)Y_{\nu}^{\dagger}]_{e\mu}+{\cal O}\left(\frac{\alpha_{e}}{4\pi}\right)\right)\] \[C^{e\mu ee}_{V,LL} \simeq v^{2}\frac{\alpha_{e}}{4\pi}\bigg{(}-1.8[Y_{\nu}M_{a}^{-2}\left(\frac{11}{6}+\log\left(\frac{m_{W}^{2}}{M_{a}^{2}}\right)\right)Y_{\nu}^{\dagger}]_{e\mu}+2.7[Y_{\nu}(Y_{\nu}^{\dagger}Y_{\nu})_{ab}\frac{1}{M_{a}^{2}-M_{b}^{2}}\log\left(\frac{M_{a}^{2}}{M_{b}^{2}}\right)Y_{\nu}^{\dagger}]_{e\mu}+\] \[+2.5Y_{\nu}^{ea}Y_{\nu}^{*\mu a}Y_{\nu}^{eb}Y_{\nu}^{*eb}\frac{1}{M_{a}^{2}-M_{b}^{2}}\log\left(\frac{M_{a}^{2}}{M_{b}^{2}}\right)+{\cal O}\left(\frac{\alpha_{e}}{4\pi}\right)\bigg{)}\] \[C^{e\mu}_{D,R} \simeq -\frac{v^{2}}{2}\left(\frac{\alpha_{e}}{4\pi e}\right)[Y_{\nu}M^{-2}Y_{\nu}^{\dagger}]_{e\mu},\] (III.4)
where \(a,b\) are summed over the number \(n\) of sterile neutrinos. We include the finite part of the penguin diagrams shown in Fig. 3(a) because the ratios of the sterile masses to the electroweak scale involved in the logarithms are not large (as we discuss in the introduction). Higher-order terms in the \(\alpha_{e}/(4\pi)\) expansion are neglected because they are small and would require including some two-loop diagrams for a consistent treatment. Consequently, the results presented in Eq. (III.4) are reliable at the \(\lesssim{\rm few}\times{\cal O}(1\%)\) level.
The four operator coefficients generated by the model are linear combinations of four contractions of the neutrino Yukawas and sterile masses. As the number of relevant \(\mu\to e\) observables equals the number of invariant matrix elements upon which the observables depend, it seems that the model can predict any observation (i.e., any point in the four-dimensional space of the operator coefficients) with suitable choices of the \(Y_{\nu}\) and \(M\) matrices. The number of invariants could be reduced if the sterile neutrinos are nearly degenerate\({}^{9}\). In this limit, the combination entering the \({\cal O}(Y_{\nu}Y_{\nu}^{\dagger})\) penguin contributions aligns with the matrix elements parameterizing the dipole coefficient. Indeed, by expanding \(M_{a}^{2}/M^{2}=1+x_{a}\) for small \(x_{a}\), we have that
Footnote 9: A motivation for considering this limit comes from the baryon asymmetry of the Universe, which can be generated from resonant leptogenesis with highly degenerate TeV-scale sterile neutrinos (see e.g. Refs. [70; 71; 72; 73]), or from the CP-violating oscillations of nearly degenerate sterile neutrinos (a mechanism known as ARS leptogenesis [74]) with masses in the GeV [75] to multi-TeV [76] range.
\[\frac{1}{M_{a}^{2}}\left(\frac{11}{6}+\log\left(\frac{m_{W}^{2}}{M_{a}^{2}} \right)\right)=\frac{1}{M^{2}(1+x_{a})}\left(\frac{11}{6}+\log\left(\frac{m_{W }^{2}}{M^{2}}\right)-\log(1+x_{a})\right)=\frac{1}{M^{2}}\left(\frac{11}{6}+ \log\left(\frac{m_{W}^{2}}{M^{2}}\right)+{\cal O}(x_{a})\right).\] (III.5)
If the mass-squared splitting between the heavy singlets is \(\lesssim v^{2}\), the error introduced by the degenerate approximation is a dimension-eight, \(v^{2}/M^{2}\)-suppressed contribution that, for TeV-scale sterile masses, would be approximately of the same order as the neglected \({\cal O}(\alpha_{e}/4\pi)\) corrections. Similarly, the leading-order term in the \(x_{a}\) expansion of the mass function that enters in the \({\cal O}(Y_{\nu}Y_{\nu}^{\dagger}Y_{\nu}Y_{\nu}^{\dagger})\) penguin and in the boxes is
\[\frac{1}{M_{a}^{2}-M_{b}^{2}}\log\left(\frac{M_{a}^{2}}{M_{b}^{2}}\right)= \frac{1}{M^{2}}\left(1+{\cal O}(x_{a},x_{b})\right),\] (III.6)
so that in the nearly degenerate limit, we find\({}^{10}\)
Footnote 10: We recall that the operator coefficients depend logarithmically on the scale of the new states, which we take to be \(\sim\) TeV.
\[C^{e\mu}_{D,R}(m_{\mu}) \simeq -10^{-3}\frac{v^{2}}{M^{2}}(Y_{\nu}Y_{\nu}^{\dagger})_{e\mu}\] \[C^{e\mu}_{Alight,L}(m_{\mu}) \simeq \frac{v^{2}}{M^{2}}\left(1\times 10^{-3}(Y_{\nu}Y_{\nu}^{\dagger})_{e \mu}+6.6\times 10^{-4}(Y_{\nu}Y_{\nu}^{\dagger}Y_{\nu}Y_{\nu}^{\dagger})_{e\mu}\right)\] \[C^{e\mu ee}_{V,LR}(m_{\mu}) \simeq \frac{v^{2}}{M^{2}}\left(-2.8\times 10^{-3}(Y_{\nu}Y_{\nu}^{ \dagger})_{e\mu}-1.6\times 10^{-3}(Y_{\nu}Y_{\nu}^{\dagger}Y_{\nu}Y_{\nu}^{ \dagger})_{e\mu}\right)\] \[C^{e\mu ee}_{V,LL}(m_{\mu}) \simeq \frac{v^{2}}{M^{2}}\left(3.3\times 10^{-3}(Y_{\nu}Y_{\nu}^{ \dagger})_{e\mu}(1+0.56(Y_{\nu}Y_{\nu}^{\dagger})_{ee})+1.55\times 10^{-3}(Y_{\nu}Y_{\nu}^{ \dagger}Y_{\nu}Y_{\nu}^{\dagger})_{e\mu}\right)\] (III.7)
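To make the use of these expressions concrete, the degenerate-limit coefficients can be evaluated directly from the two Yukawa contractions. The following is a minimal numerical sketch of our own (the invariant values, the common sterile mass and \(v=246\) GeV are illustrative inputs, not fitted values):

```python
# Sketch: evaluate the degenerate-limit coefficients of Eq. (III.7).
# I2, I4, Iee and M are illustrative inputs, not fitted values.
v = 246.0      # Higgs vev [GeV]
M = 1000.0     # common sterile mass [GeV], assumed ~TeV
I2 = 1e-4      # (Y Y^dag)_{e mu}, taken real for simplicity
I4 = 1e-5      # (Y Y^dag Y Y^dag)_{e mu}
Iee = 0.5      # (Y Y^dag)_{ee}, real, positive, <~ 1 for perturbativity

r = v**2 / M**2
C_D_R  = -1e-3 * r * I2
C_Al_L = r * (1.0e-3 * I2 + 6.6e-4 * I4)
C_V_LR = r * (-2.8e-3 * I2 - 1.6e-3 * I4)
C_V_LL = r * (3.3e-3 * I2 * (1 + 0.56 * Iee) + 1.55e-3 * I4)

for name, c in [("C_D,R", C_D_R), ("C_Alight,L", C_Al_L),
                ("C_V,LR", C_V_LR), ("C_V,LL", C_V_LL)]:
    print(f"{name:12s} = {c:+.3e}")
```

Varying the two invariants (complex, in general) traces out the restricted subspace discussed below.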
Despite the large number of free parameters in the inverse seesaw model, in the degenerate limit the coefficients of the \(\mu\to e\) operators are determined by just two invariant contractions of the neutrino Yukawa matrix. Being linear combinations of two invariants, the correlations of the operator coefficients that the model can predict are restricted: by measuring two (complex) coefficients, it would be possible to predict the others. Focusing on the first three operators of Eq. (III.7), we find that
\[C^{e\mu ee}_{V,LR}(m_{\mu})=-2.4C^{e\mu}_{Alight,L}(m_{\mu})+0.02C^{e\mu}_{D,R}(m_{ \mu})\] (III.8)
In the purely left-handed \(\mu\to 3e\) vector coefficient, the magnitude of the term multiplying the matrix element \((Y_{\nu}Y_{\nu}^{\dagger})_{e\mu}\) depends on the real and positive parameter \((Y_{\nu}Y_{\nu}^{\dagger})_{ee}\) arising from the box diagram contribution. However, since the Yukawas are assumed to be perturbative, \((Y_{\nu}Y_{\nu}^{\dagger})_{ee}\lesssim 1\), we can similarly find that
\[C^{e\mu ee}_{V,LL}(m_{\mu})=2.4C^{e\mu}_{Alight,L}(m_{\mu})+c_{d}C^{e\mu}_{D,R}( m_{\mu})\] (III.9)
where \(-1.99\lesssim c_{d}\lesssim-0.57\). The correlations described by Eqs. (III.8) and (III.9) hold, within the accuracy of our calculations, for general complex coefficients. To visually represent the parameter space accessible to the inverse seesaw model, we consider the real parts of the coefficients and plot the corresponding planes in the 3D space of low-energy coefficients. By normalizing each coefficient to the upper limit imposed by current experimental searches, the allowed region of parameter space corresponds to the interior of a sphere. The inverse seesaw model (with nearly degenerate sterile neutrinos) can sit in the intersection of this region with the planes defined by Eq. (III.8) and Eq. (III.9), as illustrated in Figure 4. Since the dipole coefficient in Eq. (III.9) is unknown but bounded, the model can cover the volume enclosed by the two extreme planes.
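These correlations can be turned into a simple numerical exclusion test (a sketch of our own; the "measured" coefficient values below are invented for illustration and share a common normalization):

```python
# Given measured C_Alight,L and C_D,R, Eq. (III.8) fixes C_V,LR and
# Eq. (III.9) confines C_V,LL to a band, since c_d depends on the
# unknown (Y Y^dag)_{ee} <~ 1.
C_Al = 0.3   # hypothetical measured C_Alight,L
C_D  = -0.1  # hypothetical measured C_D,R

C_VLR_pred = -2.4 * C_Al + 0.02 * C_D                          # Eq. (III.8)
C_VLL_band = sorted(2.4 * C_Al + c_d * C_D for c_d in (-1.99, -0.57))

print(f"predicted C_V,LR = {C_VLR_pred:+.3f}")
print(f"allowed   C_V,LL in [{C_VLL_band[0]:+.3f}, {C_VLL_band[1]:+.3f}]")
# A measurement outside these predictions would exclude the
# (degenerate) inverse seesaw.
```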
Figure 4: Parameter space covered by the inverse seesaw (with degenerate sterile neutrinos) in the low-energy operator coefficient space.
## IV Leptoquark
This section studies the \(\mu\!\to\!e\) predictions of an SU(2) singlet leptoquark of hypercharge \(Y=1/3\) that could fit the R\({}_{D}\) anomaly [78; 79; 80; 81; 2; 6; 90; 91], which is an excess of \(b\to c\overline{\tau}\nu\) events. Requiring the leptoquark to fit \(R_{D}\) fixes the mass to be \(\mathcal{O}\)(TeV) and restricts the quantum numbers, but our \(\mu\!\to\!e\) interactions are independent of the couplings that contribute to \(R_{D}\). Unlike the models of the previous sections, the leptoquark couples to both lepton doublets and singlets, and can mediate \(\mu A\!\to\!eA\) at tree level, but does not generate neutrino masses.
The SU(2)-singlet leptoquark is denoted \(S_{1}\)[124] (not to be confused with the singlet fermions \(\{S_{a}\}\) of the previous section), with interactions:
\[\mathcal{L}_{S} = (D_{\rho}S_{1})^{\dagger}D^{\rho}S_{1}-m_{LQ}^{2}S_{1}^{\dagger}S_{1}+(-\lambda_{L}^{\alpha j}\overline{\ell}_{\alpha}i\tau_{2}q_{j}^{c}+\lambda_{R}^{\alpha j}\overline{e}_{\alpha}u_{j}^{c})S_{1}+(\lambda_{L}^{\alpha j*}\overline{q_{j}^{c}}\,i\tau_{2}\ell_{\alpha}+\lambda_{R}^{\alpha j*}\overline{u_{j}^{c}}\,e_{\alpha})S_{1}^{\dagger}\]
\[+\text{ Higgs interactions}\]
where the leptoquark mass is \(m_{LQ}\simeq\) TeV, the generation indices are \(\alpha\in\{e,\mu,\tau\}\) and \(j\in\{u,c,t\}\), and the sign of the doublet contraction is taken to give \(+\lambda_{L}^{\alpha j}\overline{e}_{L}(u_{L})^{c}S_{1}\). As in the type II model of section II, the leptoquark-Higgs interactions are neglected because their contributions to LFV observables are negligible assuming perturbative couplings.
Leptoquarks are strongly interacting, so they can be readily produced at hadron colliders; current LHC searches impose \(m_{LQ}\gtrsim 1\)-\(2\) TeV [103]. Also, their peculiar Yukawa interactions connecting quarks to leptons can induce diverse quark and/or lepton flavour-changing processes. For instance, non-zero \(\lambda_{X}^{eu}\), \(\lambda_{X}^{\mu u}\), \(\lambda_{X}^{ec}\) and \(\lambda_{X}^{\mu c}\) induce \(\mu\to e\) processes on \(u\) and \(c\) quark currents (which we study here), and also induce LFV \(D\) decays with \(e^{\pm}\mu^{\mp}\) in the final state. In addition, \(S_{1}\) will mediate \(\Delta F=2\) four-quark operators via box diagrams which can contribute to meson-anti-meson mixing [125]. We did not find relevant constraints on the LFV interactions of \(S_{1}\) from quark flavour physics, but will discuss the complementarity of quark and lepton observables in more detail in [83].
In matching the leptoquark onto the QCD\(\times\)QED-invariant EFT at \(m_{LQ}\), vector (\(\propto\lambda_{R}^{\star}\lambda_{R}\), \(\lambda_{L}^{\star}\lambda_{L}\)) and scalar/tensor (\(\propto\lambda_{R}^{\star}\lambda_{L}\), \(\lambda_{L}^{\star}\lambda_{R}\)) operators are generated at tree level. We only consider the subset which is quark-flavour-diagonal and \(\mu\!\to\!e\) flavour-changing. The model matches onto vector four-fermion operators of the form \((\bar{e}\gamma^{\rho}P_{X}\mu)(\bar{f}\gamma_{\rho}P_{Y}f)\) (where \(X,Y\in\{L,R\}\) and \(f\) is any lepton or quark) via "penguin" diagrams (see Figure 5(a)), and can also generate vector four-lepton operators via box diagrams, as in Figure 5(b). Finally, the dipole operators can be generated via the last diagram of Figure 5. This collection of operators at the leptoquark mass scale is schematically represented in Figure 6 as the top row of boxes and ovals.
Several of the operators generated in matching out the leptoquark are present in the Lagrangian of Eq. (I.1). For instance, \(S_{1}\) matches onto vector and/or scalar \(\bar{e}\)-\(\mu\)-\(\bar{u}\)-\(u\) operators, which give large contributions to \(\mu A\!\rightarrow\!eA\). In addition, the log-enhanced loops change the predictions significantly: the coefficients of scalar and tensor quark operators respectively grow and shrink due to QCD, and QED loops can cause some \(\mathcal{O}(1)\) mixing, such as the top and charm tensors into the dipole, or the \(u\)-tensor into the \(u\)-scalar. The effect of the RGEs is represented by lines in Figure 6.
At the experimental scale, the contribution of the \(S_{1}\) leptoquark to the dipole coefficients is
\[\frac{m_{LQ}^{2}}{v^{2}}C^{e\mu}_{D,R}(m_{\mu}) \simeq \frac{e[\lambda_{L}\lambda_{L}^{\dagger}]_{e\mu}}{128\pi^{2}}\left(1-16\frac{\alpha_{e}}{4\pi}\ln\frac{m_{LQ}}{m_{\mu}}\right)+\frac{2\alpha_{e}^{2}}{9\pi^{2}e}[\lambda_{L}\ln\frac{m_{LQ}}{m_{Q}}\lambda_{L}^{\dagger}]_{e\mu}-\frac{\alpha_{e}}{2\pi ey_{\mu}}\left([\lambda_{L}[Y_{u}\widetilde{f}^{Q}\ln\frac{m_{LQ}}{m_{Q}}]\lambda_{R}^{\dagger}]_{e\mu}\right)\] (IV.1)
where the first term is the matching contribution (times its QED running), the second term is the 2-loop mixing of tree vector operators into the dipole, the last term is the RG-mixing of tensor operators to dipoles, and the
Figure 5: Representative diagrams for the matching of the leptoquark onto four-fermion operators, and the dipole.
scale serving as lower cutoff for the logarithms (here and further in the manuscript) is \(\max\{m_{Q},2\text{ GeV}\}\)\({}^{11}\). For \(C_{D,L}^{e\mu}\), one interchanges \(R\leftrightarrow L\). The QCD running of the quark tensor operator is intertwined with the QED mixing into the dipole [127; 128], which induces a quark-flavour-dependent rescaling \(\tilde{f}^{Q}\simeq\{1,1.4\}\) for \(\{t,c\}\) quarks.
Footnote 11: We neglect the estimates of [126].
The leptoquark also generates vector four-lepton operators (for \(X\in\{L,R\}\))
\[\frac{m_{LQ}^{2}}{v^{2}}C_{V,LX}^{e\mu ee}(m_{\mu}) \simeq -\frac{N_{c}}{64\pi^{2}}[\lambda_{L}\lambda_{L}^{\dagger}]_{e\mu}[\lambda_{X}\lambda_{X}^{\dagger}]_{ee}\left(1\mp 12\frac{\alpha_{e}}{4\pi}\ln\frac{m_{LQ}}{m_{\mu}}\right)+\frac{\alpha_{e}}{3\pi}[\lambda_{L}\ln\frac{m_{LQ}}{m_{Q}}\lambda_{L}^{\dagger}]_{e\mu}\] \[-g_{X}^{e}\frac{N_{c}}{16\pi^{2}}[\lambda_{L}Y_{u}Y_{u}^{\dagger}\lambda_{L}^{\dagger}]_{e\mu}\log\frac{m_{LQ}}{m_{t}}\]
where \(g_{L}^{e}=-1+2s_{W}^{2}\) and \(g_{R}^{e}=2s_{W}^{2}\). The first term represents the box diagram at \(m_{LQ}\) (and its QED running to \(m_{\mu}\), with \(-/+\) for \(X=Y\) / \(X\neq Y\)), which is represented as the \(V,4l\) oval at the top of Figure 6 connecting to the \(V_{XY}\) box at the bottom; the second term is the log-enhanced photon penguin that mixes the tree operators \({\cal O}^{QQ}_{VLL}\) (for \(Q\in\{u,c,t\}\)) into four-lepton operators (represented in Figure 6 as a thin wavy line between the \(V,2l2u\) and \(V_{XY}\) boxes); and the last term is the contribution of the \(Z\)-penguins shown in Figure 5(a) (the \(V2l2f\) oval of Figure 6), not including the negligible effect of the RGEs.
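For orientation, the \(Z\) couplings entering the last term are numerically as follows (a trivial evaluation of our own, assuming \(s_{W}^{2}\simeq 0.231\), a value not quoted in the text):

```python
# Z couplings of the charged leptons entering the Z-penguin term.
s2w = 0.231                 # sin^2(theta_W), assumed input
g_L_e = -1 + 2 * s2w        # ~ -0.538
g_R_e = 2 * s2w             # ~ +0.462
print(f"g_L^e = {g_L_e:+.3f}, g_R^e = {g_R_e:+.3f}")
```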
The scalar 4-lepton coefficient \(C_{SXX}^{e\mu ee}\) can be generated via a box diagram, with Higgs insertions on the internal quark lines (so the coefficient can be significant for internal top quarks); however, the coupling constant combination that appears on the flavour-changing line is already strictly constrained by \(\mu\to e\gamma\). So this coefficient has a very small contribution to \(\mu\to e\) processes, and we neglect it.
A classic signature of leptoquarks is \(\mu A\!\rightarrow\!eA\), which can be mediated at tree level via scalar or vector operators
Figure 6: A schematic representation of how the leptoquark generates \(\mu\to e\gamma\), \(\mu\to e\bar{e}e\) and \(\mu A\!\rightarrow\!eA\). The top row represents _classes_ of coefficients, generated in matching out the leptoquark, of given Lorentz structure and particle type, for any flavours and chiralities. The boxes correspond to operators with coefficients of \({\cal O}(\lambda^{2}/m_{LQ}^{2})\), whereas the ovals have suppressed coefficients \(\sim{\cal O}(\lambda^{2}/[16\pi^{2}m_{LQ}^{2}])\) or \(\sim{\cal O}(\lambda^{4}/[16\pi^{2}m_{LQ}^{2}])\). The bottom row of boxes are the six observable coefficients (for fixed \(e\) chirality in the \(\mu\!\rightarrow\!e\) bilinear) of Eq. (I.1). Lines represent the transformation between \(m_{LQ}\) and the experimental scale; a straight line means the observable coefficient can be directly obtained in matching. Operator mixing is represented as wavy lines: a thick line indicates an \({\cal O}(1)\) contribution of at least one operator from the class to the observable; a thin line indicates a more suppressed \({\cal O}(\alpha)\) contribution.
involving first generation quarks. The branching ratio on light targets like Titanium or Aluminium can be written
\[\sqrt{\frac{BR_{Ti}^{exp}}{250}}\ \gtrsim\ \bigg{|}0.250\,C_{D,R}(m_{\mu})+0.37\,\lambda_{L}^{eu}\lambda_{L}^{\mu u*}\left(1+\frac{2\alpha}{\pi}\log\right)+0.39\left(\frac{g^{2}}{64\pi^{2}}\lambda_{L}^{eu}\lambda_{L}^{\mu u*}\log\frac{m_{LQ}}{m_{W}}\right)+\ldots\bigg{|}\]
If the leptoquark couples to a single generation \(Q\) of quarks, the coefficients become functions of a few coupling combinations of the form \(\lambda_{X}^{eQ}\lambda_{Y}^{\mu Q*}\) (expressions for the coefficients are given in Appendix B). This leads to correlations among operator coefficients, as occurred in the inverse seesaw. In Figure 7, we illustrate this correlation by plotting ratios of coefficients, rather than the 3D plots of section III (notice that the horizontal axis is on a \(\log_{10}\) scale, but runs from negative to positive values, so small values of \(C_{Al,X}\) have been deleted at the origin).
One sees that "generically", a leptoquark coupled to the \(t\) gives a large dipole, whereas a large \(\mu A\to eA\) rate is expected for leptoquarks interacting with the up quark. However, neither of these expectations is an unambiguous footprint of the quark flavour dominantly coupled to the leptoquark, because \(C_{Al,X}\) (resp. \(C_{D,X}\)) can vanish for leptoquarks interacting with \(u\) (resp. \(t\)) quarks. Therefore, the observation of \(\mu\to 3e\) without \(\mu\to e\gamma\) would not exclude an \(S_{1}\) leptoquark coupling mostly to top quarks; it would just exclude generic values of the parameters of that model (i.e., values of the parameters that do not lead to cancellation in some Wilson coefficients). Similarly, the observation of \(\mu\to e\gamma\) but not \(\mu\to e\) conversion on light nuclei would not exclude an \(S_{1}\) leptoquark coupling only to up quarks.
## V Discussion and Summary
In this manuscript, we explored whether a bottom-up EFT analysis (outlined in section I) can give a perspective on LFV models that is complementary to top-down studies. We emphasize that in EFT, the data for \(\mu\to e\gamma\), \(\mu\to e\bar{e}e\) and \(\mu A\to eA\) consists of twelve Wilson coefficients (given in Eq. (I.1)), and not just three branching ratios. The current experimental null results confine the coefficients to the interior of a 12-D ellipse centered at the origin, and the aim of this manuscript was to determine whether a future observation could exclude models. To address this question, we searched for the regions of coefficient space accessible-to-future-experiments that each model cannot reach, as an observation in that part of the ellipse would rule the model out.
We studied three TeV-scale\({}^{12}\) models: the type II seesaw, the inverse type I seesaw and a singlet scalar leptoquark added to the SM. We chose the first two because they can explain neutrino masses (which are the best motivation for LFV), while we considered the scalar leptoquark in light of the charged current anomaly observed in \(b\to cl\nu\) transitions. The model predictions depend on combinations of NP and SM parameters which we refer to as "invariants", see _e.g._ Eqs. (III.4) and (III.7).
Footnote 12: Although the EFT calculations are only logarithmically sensitive to the choice \(\Lambda_{\rm NP}\sim\) TeV, our results may depend on this assumption, especially in the cases where cancellations between different contributions are envisaged.
The type II and inverse seesaw models generate Majorana neutrino masses via the tree-level exchange of heavy new particles, respectively a scalar triplet and fermion singlets. Large lepton-flavour-changing rates are possible because the models can contain LFV without lepton number violation, avoiding any suppression by small neutrino masses. In both models the new particles interact with lepton doublets, so the coefficients of operators with flavour-changing currents involving singlet charged leptons are suppressed by the lepton Yukawas\({}^{13}\) and neglected here:
Footnote 13: This Yukawa also appears in \(C^{e\mu}_{D,R}\), but since the operator is defined with the muon mass (see Eq. (I.1)), the Yukawa does not "count" as a suppression in this case.
\[C_{D,L},C^{e\mu ee}_{V,RR},C^{e\mu ee}_{V,RL},C^{e\mu ee}_{S,RR},C^{e\mu}_{ Alight,R},C^{e\mu}_{Aheavy,R},C^{e\mu ee}_{S,LL}\sim 0\quad,\quad(\mbox{type II, inverse seesaw})\]
So in the twelve-dimensional space that can be probed by experiment, these models can only occupy 5 dimensions: should one of the above coefficients be observed (in the absence of the unsuppressed ones), then these models would be excluded. In addition, these vanishing coefficients imply that \(\mu A\to eA\) only occurs via the dipole and vector interactions.
Section II shows that in the type II model, three of the remaining coefficients, \(C^{e\mu}_{Alight,L}\), \(C^{e\mu}_{Aheavy,L}\) and \(C^{e\mu ee}_{V,LR}\) (given in Eq. 4), arise from the same loop diagrams and are all proportional to the same combination of invariants. This implies that the model occupies a line in the three-dimensional space of these three coefficients, and that any one of the three rates for \(\mu\to e_{L}\gamma\), \(\mu\to e_{L}\overline{e}e_{R}\) and \(\mu A\to e_{L}A\) is predicted by the other two. The coefficients \(C^{e\mu}_{D,R}\) and \(C^{e\mu ee}_{V,LL}\) can be expressed in terms of two other invariants, all of which are constructed with the Yukawa matrix of the triplet scalar. This is proportional to the neutrino mass matrix, and hence known up to the overall magnitude, the neutrino mass hierarchy, the lightest mass \(m_{\rm min}\) and the Majorana phases \(\alpha_{1,2}\). So although the model generates \(C^{e\mu ee}_{V,LL}\) at tree level, suggesting \(\mu\to e\bar{e}e\) as the channel to discover the type II seesaw, this coefficient can vanish (for specific values of the Majorana phases and a range of \(m_{\rm min}\)), as could \(C^{e\mu}_{D,R}\) or \(C^{e\mu ee}_{V,LR}\). When this occurs, the ratio of the remaining two coefficients is restricted, so there are combinations of observations that the type II seesaw cannot predict. This is illustrated in Figure 2, where coefficient ratios (that correspond to angular coordinates in the three remaining dimensions of the original ellipse) are varied over the ranges accessible to upcoming experiments. We find that at least one of the four-fermion coefficients is always larger than the dipole, so that observing \(\mu\to e\gamma\) with a branching
ratio \(Br(\mu\to e\gamma)\gtrsim 10^{-14}\) without detecting \(\mu\to 3e\) in upcoming searches with \(Br(\mu\to 3e)\gtrsim 10^{-16}\) can rule out the type II seesaw model studied here.
Section III studies the inverse seesaw model, and shows that \(C^{e\mu}_{D,R},C^{e\mu ee}_{V,LL},C^{e\mu ee}_{V,LR},C^{e\mu}_{Alight,L},C^{e\mu}_{Aheavy,L}\) are functions of four invariants constructed from the neutrino Yukawa and sterile neutrino mass matrices, as given in Eq. (III.4). This implies that \(Br(\mu Au\to e_{L}Au)\) could be predicted, given the rates for \(\mu Al\to e_{L}Al\), \(\mu\to e_{L}\bar{e}_{L}e_{L}\), \(\mu\to e_{L}\bar{e}_{R}e_{R}\) and \(\mu\to e_{L}\gamma\). The relevant contributions to these \(\mu\to e\) coefficients arise via loop diagrams in matching (no four-SM-fermion operators are generated at tree level), and are non-linear functions of the non-degenerate singlet masses. The RGEs of QED just renormalise these coefficients by a few percent, but do not generate additional invariants. We observe that the number of invariants is reduced if the sterile neutrino masses can be approximated as degenerate, as occurs for mass differences of \(\mathcal{O}(v)\) (see Eq. (III.5)). In this limit, the five non-negligible coefficients are controlled by two invariants constructed from the neutrino Yukawas: \((Y_{\nu}Y_{\nu}^{\dagger})_{e\mu}\) and \((Y_{\nu}Y_{\nu}^{\dagger}Y_{\nu}Y_{\nu}^{\dagger})_{e\mu}\). This implies that when two coefficients are known, the remaining three are predicted, implying, e.g., that the model predicts \(Br(\mu A\to eA)\) from the rates for \(\mu\to e\gamma\) and \(\mu\to e\bar{e}e\). In the twelve-dimensional ellipse, our inverse seesaw model with degenerate sterile neutrinos therefore occupies a two-dimensional subspace (see Eqs. (III.8) and (III.9)), which we illustrate in Figure 4 by plotting the model prediction for the real part of three coefficients.
Finally, in section IV, we investigate the \(\mu\to e\) predictions of a singlet scalar leptoquark, selected to fit the excess of \(b\to c\bar{\tau}\nu\) events observed in the \(R_{D}\) ratio. The model contributes to all but two of the \(\mu\to e\) observable coefficients with different coupling combinations, implying that it could entirely fill 10 dimensions of the 12-D ellipse. Only the observation of a non-zero \(\mu\to 3e\) scalar coefficient \(C^{e\mu ee}_{S,XX}\) could not be explained by the leptoquark. On the other hand, the model is more predictive when the leptoquark only interacts with one quark generation. In this case, all the invariants become \(\propto\lambda_{X}^{eQ}\lambda_{L}^{\mu Q*}\) or \(\lambda_{X}^{eQ}\lambda_{R}^{\mu Q*}\), so once four coefficients are measured, the remaining eight can be predicted. For a specific chirality of the outgoing electron in the LFV current, this resembles the degenerate inverse seesaw case, and the equations relating the coefficients are given in Appendix B. The relations between coefficient ratios that are expected when the leptoquark only interacts with one quark generation are illustrated in Fig. 7.
The results of this manuscript will be extended in a subsequent publication, where some technical details of our EFT calculations will also be discussed. We will explore the impact of complementary observables and the uses of invariants, and we will discuss the consequences of relative complex phases for the operator coefficients.
In summary, we find that there are observations of \(\mu\to e\) processes that could rule out the three models we considered. The type II seesaw model predicts coefficients in part of a 3-dimensional subspace of the 12-d coefficient space accessible to experiments. The inverse seesaw maps onto a 4-d subspace of the 12-d space in the case of non-degenerate sterile neutrinos, but is more predictive for (nearly) degenerate steriles, where it is restricted to a 2-dimensional subspace. The singlet scalar leptoquark model does not generate sizable scalar four-lepton operators but can give arbitrary contributions to all other Wilson coefficients, thus completely filling 10 dimensions of the 12-d ellipse. However, if the leptoquark couplings to the electron and muon involve a single quark generation, the model predictions are restricted to a 4-dimensional subspace.
###### Acknowledgements
We thank Luca Silvestrini for a helpful suggestion. MA was supported by a doctoral fellowship from the IN2P3. The work of SL is supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 860881-HIDDeN.
## Appendix A: the \(\mu A\to eA\) operators
The Spin-Independent \(\mu\to e\) conversion rate, normalised to the \(\mu\) capture rate [90; 129] can be written [90]
\[\text{BR}_{SI}(\mu A\to eA) = \frac{32G_{F}^{2}m_{\mu}^{5}}{\Gamma_{cap}}\Big{[}\big{|}\widetilde{C}^{pp}_{V,R}I^{(p)}_{A,V}+\widetilde{C}^{pp}_{S,L}I^{(p)}_{A,S}+\widetilde{C}^{nn}_{V,R}I^{(n)}_{A,V}+\widetilde{C}^{nn}_{S,L}I^{(n)}_{A,S}+C_{D,L}\frac{I_{A,D}}{4}\big{|}^{2}+\{L\leftrightarrow R\}\ \Big{]}\] (A.1)
where \(I^{(N)}_{A,V}\), \(I^{(N)}_{A,S}\) and \(I_{A,D}\) are target(\(A\))-dependent "overlap integrals" inside the nucleus of the lepton wavefunctions and the appropriate nucleon density. This shows that a target probes a linear combination of coefficients identified by the overlap integrals. With current theoretical uncertainties on the overlap integrals, at least two independent combinations of the coefficients-on-nucleons \(\{\widetilde{C}\}\) could be constrained [10]. We will take these two combinations to correspond to light and heavy nuclei.
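Schematically, Eq. (A.1) translates into the following evaluation for one lepton chirality (a sketch; the numerical overlap integrals and capture rate below are placeholders of roughly the expected magnitude for a light target, not values quoted in this appendix):

```python
# Sketch of the spin-independent conversion rate, Eq. (A.1), for one
# lepton chirality. All numerical inputs are placeholders.
GF, m_mu = 1.166e-5, 0.1057          # GeV^-2, GeV
Gamma_cap = 4.6e-19                  # muon capture rate [GeV], placeholder
IV_p, IS_p, IV_n, IS_n, ID = 0.016, 0.016, 0.017, 0.017, 0.036  # placeholders

def BR_SI(CV_pp, CS_pp, CV_nn, CS_nn, CD):
    # linear combination of nucleon coefficients weighted by overlap integrals
    amp = CV_pp*IV_p + CS_pp*IS_p + CV_nn*IV_n + CS_nn*IS_n + CD*ID/4
    return 32 * GF**2 * m_mu**5 / Gamma_cap * abs(amp)**2

print(BR_SI(1e-8, 0.0, 1e-8, 0.0, 0.0))
```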
For light targets like Aluminium or Titanium, all the four-fermion overlap integrals are comparable, so the four-fermion operator that is probed is approximately
\[{\cal O}_{Alight,X}\approx\frac{1}{2}\Big{(}(\overline{e}P_{X}\mu)(\overline{p}p) +(\overline{e}\gamma^{\alpha}P_{X}\mu)(\overline{p}\gamma_{\alpha}p)+( \overline{e}P_{X}\mu)(\overline{n}n)+(\overline{e}\gamma^{\alpha}P_{X}\mu)( \overline{n}\gamma_{\alpha}n)\Big{)}\] (A.2)
or more precisely, the KKO calculation says that the combination of coefficients probed by Aluminium is [10]
\[\tilde{C}_{Alight,X} = 0.455\widetilde{C}^{pp}_{S,X}+0.473\widetilde{C}^{pp}_{V,Y}+0.490\widetilde{C}^{nn}_{S,X}+0.508\widetilde{C}^{nn}_{V,Y}\quad.\]
For Gold, the coefficient combination is slightly misaligned:
\[\tilde{C}_{Aheavy,X} = 0.289\widetilde{C}^{pp}_{S,X}+0.458\widetilde{C}^{pp}_{V,Y}+0.432\widetilde{C}^{nn}_{S,X}+0.686\widetilde{C}^{nn}_{V,Y}\quad,\]
indeed, the operator probed by heavy targets can be written as \({\cal O}_{Aheavy,X}=\cos\phi{\cal O}_{Alight,X}+\sin\phi{\cal O}_{Aheavy \perp,X}\). Measuring the coefficient of \({\cal O}_{Aheavy\perp,X}\) is the new information that can be obtained from heavy targets, but not light ones.
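The size of this misalignment can be estimated from the two combinations quoted above, treating them as vectors of coefficients and computing the angle between them (a back-of-envelope check of our own):

```python
import math

# Misalignment between the combinations probed by light (Al) and
# heavy (Au) targets, using the numbers quoted above.
light = [0.455, 0.473, 0.490, 0.508]
heavy = [0.289, 0.458, 0.432, 0.686]
dot = sum(a * b for a, b in zip(light, heavy))
norm = math.sqrt(sum(a*a for a in light)) * math.sqrt(sum(b*b for b in heavy))
phi = math.acos(dot / norm)
print(f"phi ~ {math.degrees(phi):.1f} deg")   # ~15 degrees
```

The modest but non-zero angle quantifies the extra coefficient combination accessible with heavy targets.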
The definition of \({\cal O}_{Aheavy\perp,X}\) depends on whether it is constructed in the nucleon EFT, or the quark EFT relevant above a scale of 2 GeV. This is because there is information loss in matching nucleons to quarks, because the scalar densities of both \(u\) and \(d\) quarks in the neutron and proton are all comparable, so the scalar \(u\) and \(d\) coefficients \(C^{qq}_{SX}\) are indistinguishable unless the scalar nucleon coefficients \(\widetilde{C}^{NN}_{S,X}\) are accurately measured. In addition, there is a several-\(\sigma\) discrepancy between the determinations of the scalar quark densities in the nucleon from the lattice and pion data.
So in this manuscript we focus on \(\mu A\to eA\) on light targets, because only the leptoquark induces scalar quark coefficients, and we prefer to avoid the quark-scalar uncertainties associated with defining \({\cal O}_{Aheavy\perp,X}\). We will consider the complementary information from heavy targets in [83].
## Appendix B: If the leptoquark interacts only with one generation of quarks
In this appendix, we give formulae for the operator coefficients in the leptoquark model, for the case where the leptoquark interacts only with one generation of quarks.
If the leptoquark only interacts with top quarks, one obtains:
\[\frac{m_{LQ}^{2}}{v^{2}}C_{DR}(m_{\mu}) \simeq 2.3\times 10^{-4}[\lambda_{L}\lambda_{L}^{\dagger}]_{e\mu}-12[\lambda_{L}\lambda_{R}^{\dagger}]_{e\mu}\] (B.1) \[\frac{m_{LQ}^{2}}{v^{2}}C^{e\mu ee}_{V,XY}(m_{\mu}) \simeq -4.45\times 10^{-3}[\lambda_{X}\lambda_{X}^{\dagger}]_{e\mu}[\lambda_{Y}\lambda_{Y}^{\dagger}]_{ee}+(1.36\times 10^{-3}-0.033\,g_{Y}^{e})[\lambda_{X}\lambda_{X}^{\dagger}]_{e\mu}\] (B.2) \[\frac{m_{LQ}^{2}}{v^{2}}C_{Alight,L} \simeq -6.8\times 10^{-4}\left[\lambda_{L}\lambda_{L}^{\dagger}\right]^{e\mu}-0.0084[\lambda_{L}\lambda_{L}^{\dagger}]^{e\mu}+2.4\times 10^{-4}\lambda_{L}^{et}\lambda_{R}^{\mu t*}\]
where \(X,Y\in\{L,R\}\) for the vector four-lepton coefficients, for which the contributions due to boxes are included. The three terms of the four-fermion contribution to \(\mu A\to eA\) are induced by the photon and \(Z\) penguins, and the top-loop contribution to the \(\mu\to e\) Yukawa coupling.
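The possibility, noted in section IV, that the dipole can vanish for a top-coupled leptoquark while \(\mu A\to eA\) survives is easy to exhibit numerically from Eq. (B.1) (a sketch of our own; the coupling values are arbitrary inputs):

```python
# Illustration of a cancellation in Eq. (B.1) for a top-coupled
# leptoquark: tuning lambda_L^{et} lambda_R^{mu t*} makes the dipole
# vanish while C_Alight,L stays non-zero.
lam_LL = 1e-2                     # lambda_L^{et} lambda_L^{mu t*}
lam_LR = 2.3e-4 / 12 * lam_LL     # tuned so that C_DR ~ 0

C_DR = 2.3e-4 * lam_LL - 12 * lam_LR
C_Al = (-6.8e-4 - 0.0084) * lam_LL + 2.4e-4 * lam_LR
print(f"C_DR = {C_DR:.2e}, C_Alight,L = {C_Al:.2e}")
# -> mu A -> eA could be observed without mu -> e gamma, so a null
#    dipole search alone cannot exclude the top-coupled leptoquark.
```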
If the leptoquark only interacts with charm quarks, then:
\[\frac{m_{LQ}^{2}}{v^{2}}C_{DR}(m_{\mu}) \simeq 2.3\times 10^{-4}[\lambda_{L}\lambda_{L}^{\dagger}]_{e\mu}-0.42[\lambda_{L}\lambda_{R}^{\dagger}]_{e\mu}\] (B.3) \[\frac{m_{LQ}^{2}}{v^{2}}C^{e\mu ee}_{V,XY}(m_{\mu}) \simeq -4.5\times 10^{-3}[\lambda_{X}\lambda_{X}^{\dagger}]_{e\mu}[\lambda_{Y}\lambda_{Y}^{\dagger}]_{ee}+(4.8\times 10^{-3}-1.9\times 10^{-6}\,g_{Y}^{e})[\lambda_{X}\lambda_{X}^{\dagger}]_{e\mu}\] \[\frac{m_{LQ}^{2}}{v^{2}}C_{Alight,L} \simeq -2.4\times 10^{-3}\left[\lambda_{L}\lambda_{L}^{\dagger}\right]^{e\mu}-0.02\,\lambda_{L}^{ec}\lambda_{R}^{\mu c*}\]
where \(X,Y\in\{L,R\}\) in the vector four-lepton coefficients, for which the \(Z\)-penguin (the last term) is suppressed \(\propto m_{c}^{2}/v^{2}\), and any box can contribute because \((g-2)_{e}\) only constrains \(\lambda_{L}^{ec}\lambda_{R}^{ec}<0.7\).
And finally for a leptoquark that only has \(\mu\!\rightarrow\!e\) interactions on \(u\) quarks, one obtains:
\[\frac{m_{LQ}^{2}}{v^{2}}C_{DR}(m_{\mu}) \simeq 2.3\times 10^{-4}[\lambda_{L}\lambda_{L}^{\dagger}]_{e\mu}-7.3\times 10^{-4}[\lambda_{L}\lambda_{R}^{\dagger}]_{e\mu}\] (B.4) \[\frac{m_{LQ}^{2}}{v^{2}}C_{V,XY}^{e\mu ee}(m_{\mu}) \simeq -4.5\times 10^{-3}[\lambda_{X}\lambda_{X}^{\dagger}]_{e\mu}[\lambda_{Y}\lambda_{Y}^{\dagger}]_{ee}+4.8\times 10^{-3}[\lambda_{X}\lambda_{X}^{\dagger}]_{e\mu}\] \[\frac{m_{LQ}^{2}}{v^{2}}C_{Alight,L} \simeq 0.38\,\lambda_{L}^{eu}\lambda_{L}^{\mu u*}-2.0\,\lambda_{L}^{eu}\lambda_{R}^{\mu u*}\] (B.5)
|
2305.19580 | Monofluorinated Ether Electrolyte with Acetal Backbone for
High-Performance Lithium Metal Batteries | High degree of fluorination for ether electrolytes has resulted in improved
cycling stability of lithium metal batteries (LMBs) due to stable SEI formation
and good oxidative stability. However, the sluggish ion transport and
environmental concerns of high fluorination degree drive the need to develop
less fluorinated structures. Here, we introduce bis(2-fluoroethoxy)methane
(F2DEM) which features monofluorination of the acetal backbone. High coulombic
efficiency (CE) and stable long-term cycling in Li||Cu half cells can be
achieved with F2DEM even under fast Li metal plating conditions. The
performance of F2DEM is further compared with diethoxymethane (DEM) and
2-[2-(2,2-Difluoroethoxy)ethoxy]-1,1,1-Trifluoroethane (F5DEE). The structural
similarity of DEM allows us to better probe the effects of monofluorination,
while F5DEE is chosen as one of the best-performing LMB electrolytes for
reference. The monofluorine substitution provides improved oxidation stability
compared to non-fluorinated DEM, as demonstrated in the linear sweep
voltammetry (LSV) and voltage holding experiments in Li||Pt and Li||Al cells.
Higher ionic conductivity compared to F5DEE is also observed due to the
decreased degree of fluorination. Furthermore, 1.75 M lithium
bis(fluorosulfonyl)imide (LiFSI) / F2DEM displays significantly lower
overpotential compared with the two reference electrolytes, which improves
energy efficiency and enables its application in high-rate conditions.
Comparative studies of F2DEM with DEM and F5DEE in anode-free (LiFePO4) LFP
pouch cells and high-loading LFP coin cells with 20 {\mu}m excess Li further
show improved capacity retention of F2DEM electrolyte. | Elizabeth Zhang, Yuelang Chen, Zhiao Yu, Yi Cui, Zhenan Bao | 2023-05-31T05:59:23Z | http://arxiv.org/abs/2305.19580v1 | # Monofluorinated Ether Electrolyte with Acetal Backbone for High-Performance Lithium Metal Batteries
###### Abstract
High degree of fluorination for ether electrolytes has resulted in improved cycling stability of lithium metal batteries (LMBs) due to stable SEI formation and good oxidative stability. However, the sluggish ion transport and environmental concerns of high fluorination degree drive the need to develop less fluorinated structures. Here, we introduce bis(2-fluoroethoxy)methane (F2DEM) which features monofluorination of the acetal backbone. High coulombic efficiency (CE) and stable long-term cycling in Li\(\parallel\)Cu half cells can be achieved with F2DEM even under fast Li metal plating conditions. The performance of F2DEM is further compared with diethoxymethane (DEM) and 2-[2-(2,2-Difluoroethoxy)ethoxy]-1,1,1-Trifluoroethane (F5DEE). The structural similarity of DEM allows us to better probe the effects of monofluorination, while F5DEE is chosen as one of the best-performing LMB electrolytes for reference. The monofluorine substitution provides improved oxidation stability compared to non-fluorinated DEM, as demonstrated in the linear sweep voltammetry (LSV) and voltage holding experiments in Li\(\parallel\)Pt and Li\(\parallel\)Al cells. Higher ionic conductivity compared to F5DEE is also observed due to the decreased degree of fluorination. Furthermore, 1.75 M lithium bis(fluorosulfonyl)imide (LiFSI) / F2DEM displays significantly lower overpotential compared with the two reference electrolytes, which improves energy efficiency and enables its application in high-rate conditions. Comparative studies of F2DEM with DEM and F5DEE in anode-free (LiFePO\({}_{4}\)) LFP pouch cells and high-loading LFP coin cells with 20 \(\upmu\)m excess Li further show improved capacity retention of F2DEM electrolyte.
## Introduction
Lithium metal has emerged as a highly promising battery anode material with its high theoretical specific capacity (3860 mAh g\({}^{-1}\)) and low standard reduction potential (-3.04 V vs standard hydrogen electrode).[1, 2, 3] Despite their potential benefits, lithium metal batteries (LMB) still suffer from low coulombic efficiency (CE) and poor cycling stability.[4, 5, 6] Generally, if 1000 stable cycles with more than 90% capacity retention is desired, the average CE would have to be at least 99.99%.[7] One major factor that significantly impacts the CE is the formation of a stable SEI layer on the surface of the anode. The SEI layer is critical for preventing further reactions between the anode and electrolyte.[8, 9] However, the SEI is prone to cracking during cycling, which results in mossy Li growth, the formation of "dead Li," irreversible loss of lithium inventory, and excess SEI formation.[6]
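The quoted CE requirement follows from simple compounding; a one-line check (our own, assuming the per-cycle capacity loss is dominated by the CE) is:

```python
# Average CE needed for n cycles at capacity retention R, since the
# remaining capacity after n cycles scales roughly as CE**n.
n, R = 1000, 0.90
CE_required = R ** (1 / n)
print(f"required average CE ~ {CE_required:.5%}")   # ~99.99%
```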
Among the different strategies to modify the SEI formation and improve CE, rational electrolyte design is essential.[4, 10, 11, 12] Some of the electrolyte engineering strategies that have been extensively investigated in recent years include high concentration electrolytes,[13] localized high concentration electrolytes,[14, 15] additive design,[16] mixed solvent systems,[17, 18] dual-salt-dual-solvent electrolytes,[19, 20] and single-salt-single-solvent electrolytes.[21, 22, 23] Among the various electrolyte systems, fluoroethers have emerged as a promising class of solvent for lithium metal batteries.[22, 23, 24, 25, 26, 27, 28] Solvent fluorination not only modifies solvation structure for favorable SEI formation, but also improves oxidation stability with high-voltage cathodes. However, there are two major drawbacks. First, ionic conductivity in fluoroether electrolytes is significantly lower than the commercial carbonate electrolytes, which limits their operation to low charging rates. Second, the environmental concerns over the perfluorinated methyl and methylene carbons could limit their wide application in the battery industry.[29] Therefore, it is necessary to explore alternative ether derivatives with low degree of fluorination and high conductivity while maintaining favorable solvation structure for electrochemical stability.
To address the aforementioned issues, we designed and synthesized bis(2-fluoroethoxy)methane (F2DEM). The weakly solvating acetal backbone enables high Li CE. The monofluorine substitution on the end carbons improves oxidation stability and ionic conductivity, and further enhances the Li CE. Compared with the two reference electrolytes, 1.7 M LiFSI / diethoxymethane (DEM) and 1.2 M LiFSI / 2-[2-(2,2-Difluoroethoxy)ethoxy]-1,1,1-Trifluoroethane (F5DEE), 1.75 M LiFSI / F2DEM showed excellent CE and stable long-term cycling in Li\(\parallel\)Cu half cells. A significantly lower overpotential was also observed with F2DEM, which improves the energy efficiency and enables its application in high-rate conditions. Additionally, F2DEM-based anode-free LFP pouch cells cycled under various charging and discharging conditions showed excellent performance, even under fast charge and slow discharge. Improved capacity retention was also observed with F2DEM in high-loading LFP coin cells with 20 \(\upmu\)m excess Li, surpassing that of our previously reported high-performing F5DEE electrolyte.
#### Molecular design
Ether solvents have demonstrated noticeable potential as electrolyte solvents in LMB due to their better stability with the Li anode as compared to carbonates [21, 30, 31, 32]. However, their low oxidative stability poses challenges to applications in high-voltage batteries. To improve the cathode stability of ether electrolytes, incorporating electron-withdrawing fluorine atoms was found to deepen the HOMO energy level [22, 23, 24, 25, 26, 27, 28]. While a higher degree of fluorination (e.g., CF\({}_{2}\), CF\({}_{3}\)) is expected to yield a higher oxidative stability, it is often accompanied by slower ion transport, which limits application under faster-rate conditions [22]. Environmental concerns about highly fluorinated compounds also drive the need to develop monofluorinated alternatives [29].
Monofluorinated ethers have been reported with an ethylene glycol ether backbone, namely 1,2-bis(2-fluoroethoxy)ethane (FDEE) [28]. However, when used as the single solvent with LiFSI, poor Li cycling stability was observed. To stabilize the CE of monofluorinated ethers, a previous approach relied on a localized high concentration electrolyte (LHCE) design to reduce solvent participation in interfacial reactions and derive a more stable inorganic-rich SEI [28]. However, the LHCE approach not only requires highly fluorinated compounds that raise environmental concerns, but the addition of highly fluorinated diluents can also hinder ion transport and lower the overall conductivity of the electrolyte. These drawbacks motivate us to test the feasibility of designing a single-solvent system that achieves excellent Li cycling performance even with a low degree of fluorination.
Our recent work suggested that the acetal structure can effectively weaken the solvation power. We hypothesize that this will enable more stable cycling even in monofluorinated systems, without having to increase the concentration of the electrolyte or introduce a highly fluorinated diluent. Based on these design principles, F2DEM is synthesized with monofluorination of the acetal backbone (Figure 1), which ensures stability at both the anode and the cathode side, in addition to maintaining good ionic conductivity.
#### Electrolyte characterization
Adequate ion transport is essential to a high-performing LMB system, and Li salt concentrations can affect the ionic conductivity [33]. Therefore, it is necessary to first determine the optimal salt concentration before proceeding with cell cycling and characterizations. Electrolyte solutions were obtained by mixing LiFSI with F2DEM at 1.2 to 3 mol LiFSI / L solvent and ionic conductivities
Figure 1: Molecular design of F2DEM that harvests the benefits of enhanced oxidative stability of monofluorination and weakened solvation of acetal backbone.
were measured without and with a separator. The setup without a separator allows us to measure the intrinsic conductivity of the electrolytes, while the conductivity with separator better mimics the conditions in realistic cells. For the setup with separator, SS\(\parallel\)SS coin cells were assembled with Celgard 2325 separator swelled by different concentrations of F2DEM. To measure the conductivity without separator, Swagelok cells were used. The ionic conductivity with separator peaked at around 2 mol LiFSI / L F2DEM, which corresponds to 1.75 M LiFSI / F2DEM (Figure 2a). In the separator-free Swagelok setup, 1.75 M remained the optimal concentration for high ionic conductivity (Figure 2b). The transport numbers of different concentrations were also measured for F2DEM. A higher transport number is generally desired, as a higher fraction of the current is carried by Li\({}^{+}\). As shown in Figure 2c, the measured values for all concentrations (\(>\) 0.4) are comparable to the transport numbers commonly reported in ether electrolytes.[12, 22, 27]
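For context, in both setups the conductivity follows from the bulk electrolyte resistance via the usual cell-constant relation; the sketch below uses hypothetical cell dimensions and resistance (they are not our measured values):

```python
# sigma = t / (R * A): t = electrode spacing, A = electrode area,
# R = bulk resistance from the high-frequency EIS intercept.
t = 25e-4     # cm, e.g. the 25-um Celgard 2325 separator thickness
A = 2.0       # cm^2, illustrative electrode area
R = 2.5       # ohm, illustrative bulk resistance
sigma = t / (R * A)                   # S cm^-1
print(f"sigma = {sigma * 1e3:.2f} mS cm^-1")
```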
Overpotential is another critical parameter in LMB as it is directly related to the high-rate performance and energy efficiency of the system. To assess the overpotential of F2DEM, Li\(\parallel\)Li symmetric cells were made with thick Li (750 \(\upmu\)m) on both sides. The cells were cycled under various current densities. With a capacity of 3 mAh cm\({}^{-2}\) for each cycle, the current density was gradually increased from 1 mA cm\({}^{-2}\) to 8 mA cm\({}^{-2}\), where the cells were cycled 10 times under each current density. As shown in Figure 2d, the overpotential of 1.75 M LiFSI / F2DEM is roughly 50% less than that of 1.2 M LiFSI / F5DEE under all current densities. Under a current density of 1 mA cm\({}^{-2}\), an overpotential of around 55 mV was observed for F5DEE, similar to our previous report.[22] The overpotential of F2DEM, however, is only around 30 mV. This significant drop in overpotential suggests a much higher energy efficiency of F2DEM over F5DEE. This setup also simultaneously assesses the fast-charging capability of the F2DEM electrolyte. Zooming in on the voltage profile under 6 mA cm\({}^{-2}\), soft shorts were observed for both F2DEM and F5DEE, indicating that the system may be unstable at fast rates over 6 mA cm\({}^{-2}\).
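A rough estimate shows why this matters for energy efficiency (our own back-of-envelope, assuming a nominal ~3.4 V LFP plateau and counting only a symmetric plating/stripping overpotential):

```python
# Round-trip energy efficiency ~ (V - eta) / (V + eta) when the dominant
# loss is a symmetric overpotential eta on the anode.
V = 3.4                                   # assumed LFP plateau [V]
for name, eta in [("F2DEM", 0.030), ("F5DEE", 0.055)]:
    ee = (V - eta) / (V + eta)
    print(f"{name}: energy efficiency ~ {ee:.2%}")
```

Under these assumptions, the ~25 mV difference in overpotential translates to roughly a one-percentage-point gain in round-trip energy efficiency.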
### Electrochemical stability
Next, we investigate the performance of 1.75 M LiFSI / F2DEM in Li\(\parallel\)Cu half cells as compared to 1.2 M LiFSI / F5DEE and 1.7 M LiFSI / DEM. The average coulombic efficiency (CE) was first evaluated by the Aurbach method in Li\(\parallel\)Cu half cells[34, 35] (Figure 3a). Based on this standard protocol, 5 mAh cm\({}^{-2}\) of Li was first deposited onto the Cu foil as a Li reservoir. This was followed by 10 subsequent cycles of plating and stripping at 0.5 mA cm\({}^{-2}\) for 1 mAh cm\({}^{-2}\). Finally, all deposited Li was stripped from Cu, and the total capacity recovered was divided by the amount deposited to obtain the CE. The average CE of F2DEM was measured as 99.53 \(\pm\) 0.03% for four cells, which is higher than the 99.46 \(\pm\) 0.01% (3 cells) previously observed with F5DEE[22] and 99.19 \(\pm\) 0.07% (4 cells) with DEM. The high CE of F2DEM is indicative of the excellent stability at the Li anode.
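The bookkeeping behind such protocol-averaged CE values is the standard one for reservoir-type measurements (a sketch; the final stripping charge below is a hypothetical value chosen to reproduce a ~99.5% CE):

```python
# Average CE from the modified Aurbach protocol: reservoir Q_res,
# n cycles of capacity Q_cyc, and a final strip of charge Q_strip.
Q_res, Q_cyc, n = 5.0, 1.0, 10     # mAh cm^-2, mAh cm^-2, cycles
Q_strip = 4.93                     # mAh cm^-2, hypothetical measured value

CE_avg = (n * Q_cyc + Q_strip) / (n * Q_cyc + Q_res)
print(f"average CE = {CE_avg:.2%}")   # -> 99.53%
```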
Figure 2: The transport properties of F2DEM. (a) Ionic conductivity of F2DEM measured with Celgard 2325 separator in SS \(\parallel\) SS cells across four different concentrations. The ionic conductivity values are averaged among three repeated cells and the standard deviations are shown on the plot. (b) The intrinsic ionic conductivity without a separator measured with Swagelok cells across four different concentrations. (c) Transport numbers measured in Li\(\parallel\)Li symmetric cells for the four different concentrations. The transport numbers are averaged among three repeated cells and the standard deviations are shown on the plot. (d) Li\(\parallel\)Li symmetric cells cycled at increasing current densities (1 mA cm\({}^{-2}\) to 8 mA cm\({}^{-2}\)) and 3 mAh cm\({}^{-2}\) capacity. The cells are cycled 10 times under each current density.
Furthermore, we corroborate these observations through long-term cycling of Li\(\parallel\)Cu half cells. To probe the long-term cycling stability, the Cu surface was first conditioned by cycling between 0 and 1 V at 0.2 mA cm\({}^{-2}\) for 10 cycles. For long-term cycling, 1 mAh cm\({}^{-2}\) of Li was plated onto Cu at 0.5 mA cm\({}^{-2}\), and stripped to 1 V at a rate of 0.5 mA cm\({}^{-2}\). Zooming in on the first 50 cycles (Figure 3b), fast activation (a small number of cycles required to reach 99%) was observed for F2DEM, indicating that less capacity was lost during the initial cycles toward the establishment of a stable SEI. This is highly desirable for anode-free cells with limited Li supply. Excluding the activation cycles, the average CE of F2DEM in the first 5 to 50 cycles (99.44%) was also higher than that of F5DEE and DEM (both at 99.29%). Similar experiments were carried out under a faster charge and slower discharge condition (1 mA cm\({}^{-2}\) plating, 0.4 mA cm\({}^{-2}\) stripping) with a higher capacity (2 mAh cm\({}^{-2}\)) to further evaluate the fast-charging capabilities of F2DEM. Under this harsher condition, a more distinct difference can be observed among the three different electrolytes. F2DEM showed superior cycling stability with a higher CE over 250 cycles compared with F5DEE and DEM (Figure 3c). Compared to the long-term cycling at 0.5 mA cm\({}^{-2}\), this fast plating and slow stripping condition requires more cycles to reach stable cycling at 99% for all three electrolytes, and the differences among the electrolytes become even more pronounced. While F2DEM only took around 12 cycles to reach 99% CE, F5DEE took over 25 cycles to establish a stable SEI, and DEM failed to achieve stable cycling above 99%. These results suggest that F2DEM is more stable against the Li anode under higher rate conditions.
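For clarity, the quoted averages are computed over cycles 5 to 50 only, so that the activation cycles do not bias the comparison; a minimal sketch of this bookkeeping on a synthetic CE trace is:

```python
import numpy as np

# Mean CE over cycles 5-50 (1-indexed), excluding activation cycles.
ce = np.random.default_rng(0).normal(0.994, 0.002, 50)  # synthetic trace
avg = ce[4:50].mean()   # indices 4..49 correspond to cycles 5..50
print(f"average CE (cycles 5-50) = {avg:.2%}")
```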
Considering the modified solvation environment and the addition of electron-withdrawing fluorine atoms in F2DEM molecules, we would expect an improved oxidative stability of the 1.75 M F2DEM. To corroborate the oxidative stability enhancement of F2DEM upon fluorination, we carried out linear sweep voltammetry (LSV) on the three different electrolytes. To best mimic the realistic full cell cycling environment, Al was first chosen as the working electrode. By sweeping up to 7 V at 1 mV s\({}^{-1}\), we can evaluate the corrosion of the Al current collector from the resulting leakage current.[36, 37] With this setup, we observed a leakage current onset at around 6 V for both F2DEM and DEM (Figure 3d). The delayed voltage onset around 6.5 V for F5DEE is likely because of its increased degree of fluorination, which results in a higher oxidative stability.
In addition to LSV, the electrode/electrolyte stability is further assessed through voltage-holding experiments in the Li\(\parallel\)Al cell setup. In this case, the voltage was slowly increased from the open-circuit voltage to 4.4 V at a scan rate of 1 mV s\({}^{-1}\), then the cells were held at 4.4 V for an extended period of time, and the corresponding leakage current was monitored. Accordingly, no significant leakage current should be observed while holding at 4.4 V if a stable passivation layer was established on the Al surface.[21] As shown in Figure 3e, we did not observe a significant leakage current for all three electrolytes. Overall, the passivation of Al is relatively stable at 4.4 V for the tested systems.
LSV experiments were also conducted with a Pt working electrode. Unlike the Al cathode current collector, no passivation reactions occurred on the Pt surface. With the absence of a passivating
layer, the onset voltage is expected to decrease for all three electrolytes. As shown in Figure 3f, the leakage current was observed at an onset voltage of around 4 V for F2DEM, which is higher than the 3 V observed with DEM. With its higher degree of fluorination, F5DEE expectedly shows a higher onset voltage at around 4.5 V. This trend corroborates our hypothesis that increasing the degree of fluorination enhances the oxidative stability and agrees well with previous research [22].
Figure 3: Electrochemical stability of 1.75 M LiFSI / F2DEM compared with 1.7 M LiFSI / DEM and 1.2 M LiFSI / F5DEE. (a) The CE measurements of the three electrolytes based on the modified Aurbach method [35]. The CE was averaged among four cells for F2DEM and DEM and the standard deviations are shown on the plot. For F5DEE, the CE was averaged between two cells and the standard deviation is shown on the plot. (b) Li\(\parallel\)Cu CE of all three electrolytes. The cells were cycled at 0.5 mA cm\({}^{-2}\) for 1 mAh cm\({}^{-2}\) and stripped to 1 V at 0.5 mA cm\({}^{-2}\). Note that prior to cycling, the copper surface is pre-conditioned by cycling between 0 and 1 V at 0.2 mA cm\({}^{-2}\) for 10 cycles. The figure shows the CE of the first 50 cycles. The average CE values were calculated based on cycle 5 to 50, excluding the activation cycles. Four repeated cells for F2DEM and DEM, and two repeated cells for F5DEE are made, showing the consistent trend. (c) Li\(\parallel\)Cu CE over 250 cycles under a fast plating (1 mA cm\({}^{-2}\)) and slow stripping (0.4 mA cm\({}^{-2}\)) condition. Four repeated cells for F2DEM and DEM, and two repeated cells for F5DEE are made, showing the consistent trend. (d) The linear sweep voltammetry (LSV) in Li\(\parallel\)Al cells. The leakage current of the three electrolytes were measured by sweeping up to 7 V at 1 mV s\({}^{-1}\). (e) Leakage current measured in Li\(\parallel\)Al cells. LSV was first applied to the cells to sweep from open circuit voltage to 4.4 V at 1 mV s\({}^{-1}\). The cells were then held at 4.4 V for over 60 hours. Two repeated cells are tested for each
electrolyte and a similar trend is observed. (f) LSV in Li\(\parallel\)Pt cells showing improved oxidative stability compared to DEM. The leakage currents of the three electrolytes were measured by sweeping up to 7 V at 1 mV s\({}^{-1}\).
### LFP pouch cell performance
The performance of 1.75 M LiFSI / F2DEM was further assessed in Cu\(\parallel\)LFP anode-free pouch cells, with a voltage range from 2.5 V to 3.65 V. Various charging and discharging rates were implemented to fully assess the pouch cell performance of F2DEM (1C = 200 mA or 2 mA cm\({}^{-2}\)). Note that the nominal capacity at a C/3 charge rate was 210 mAh, or 2.1 mAh cm\({}^{-2}\), and the electrolyte loading was 0.5 mL. We first studied the performance of 1.75 M LiFSI / F2DEM under C/2 charge and 2C discharge, since slow charge and fast discharge conditions are generally implemented to enhance the Li morphology. This was compared with the 1.2 M LiFSI / F4DEE and 1.2 M LiFSI / F5DEE electrolytes. Among the three electrolytes, F2DEM showed a higher capacity utilization and a slower capacity loss (Figure 4a-b). This improved capacity retention of F2DEM can be partially attributed to its low overpotential and higher ionic conductivity, which is consistent with our previous observation in Li\(\parallel\)Li symmetric cells (Figure 2d). With C/2 charge and a slower C/5 discharge rate, significant improvement in discharge capacity and CE was observed in 1.75 M LiFSI / F2DEM compared to 1.2 M LiFSI / F5DEE and F4DEE (Figure 4c-d). This is the more demanding condition, since it has been reported that slower discharge than charge can lead to higher-surface-area Li morphology [38]. The improvement seen in F2DEM under a slower discharge rate potentially indicates that F2DEM can facilitate the formation of a more stable SEI. Under symmetric C/2 charge and C/2 discharge rates, 1.75 M LiFSI / F2DEM yielded cycling performance similar to the two reference electrolytes (Figure 4e-f). This is likely because the advantage of low overpotential cannot be clearly observed under slow conditions. Overall, 1.75 M LiFSI / F2DEM showed excellent performance in the anode-free LFP pouch cells under all tested rates, and particularly under fast charge and slow discharge conditions where a more stable SEI is required to enhance the cycling stability.
### Coin cell performance
The cycling performance of 1.75 M LiFSI / F2DEM was also evaluated in Li\(\parallel\)LFP coin cells with a 20-\(\upmu\)m-thick Li anode and a high-loading 3.5 mAh cm\({}^{-2}\) LFP cathode. The additional lithium source
Figure 4: Performance of Cu\(\parallel\)LFP pouch cell cycling between 2.5 V and 3.65 V. The nominal capacity at C/3 is 210 mAh, or 2.1 mAh cm-\({}^{2}\). The electrolyte loading is 0.5 mL. The 1.75 M LiFSI / F2DEM is compared to 1.2 M LiFSI / F5DEE and F4DEE under various charging and discharging conditions. (a, b) C/2 charge and 2C discharge capacity (1C = 200 mA or 2 mA cm-\({}^{2}\)) and CE profile over 80 cycles. Four repeated cells for F2DEM are shown. (c, d) C/2 charge and C/5 discharge capacity and CE profile over 150 cycles. Two repeated cells for F5DEE and four repeated cells for F2DEM are shown. (e, f) C/2 charge and C/2 discharge capacity and CE profile over 100 cycles. Two repeated cells for F5DEE and F4DEE, and four repeated cells for F2DEM are shown. The data for F4DEE and F5DEE are taken from Ref. [22].
will allow us to probe the long-term cycling performance of Li\(\parallel\)LFP cells. Various charge and discharge current densities (0.75 mA cm\({}^{-2}\) charge and 1.5 mA cm\({}^{-2}\) discharge, 1.5 mA cm\({}^{-2}\) charge and 3 mA cm\({}^{-2}\) discharge, 0.4 mA cm\({}^{-2}\) charge and 2 mA cm\({}^{-2}\) discharge, and 1 mA cm\({}^{-2}\) charge and 2 mA cm\({}^{-2}\) discharge) were applied with a 3.8 V cutoff. With 0.75 mA cm\({}^{-2}\) charge and 1.5 mA cm\({}^{-2}\) discharge, a higher capacity retention was observed for F2DEM than for F5DEE and DEM (Figure 5a-b). From the charge and discharge curves of all three electrolytes (Figure 6a-c), the voltage plateau increased over cycling, indicating an increase in overpotential. However, since voltage divergence can still be observed after 150 cycles, overpotential increase should not be the only failure mechanism. Lithium inventory loss may also contribute to the capacity loss. Therefore, the improved capacity retention of F2DEM is likely due to its low and stable overpotential, in combination with its formation of a more stable SEI that reduces lithium inventory loss, prolonging the cycle life of F2DEM-based cells. Further increasing the current densities to 1.5 mA cm\({}^{-2}\) charge and 3 mA cm\({}^{-2}\) discharge, F2DEM can retain 80% of the initial capacity over 125 cycles, while the capacity of F5DEE dropped below 80% retention after 100 cycles (Figure 5c-d). The charge and discharge curves (Figure 6d-e) showed a similar trend to the 0.75 mA cm\({}^{-2}\) charge and 1.5 mA cm\({}^{-2}\) discharge condition, where F2DEM showed superior capacity retention with a lower overpotential increase and less lithium inventory loss. DEM was not included in this comparison due to its poor performance even under slower charging conditions. The performance of F2DEM was also assessed under 0.4 mA cm\({}^{-2}\) charge and 2 mA cm\({}^{-2}\) discharge, as well as 1 mA cm\({}^{-2}\) charge and 2 mA cm\({}^{-2}\) discharge conditions. Under a relatively fast discharge rate of 2 mA cm\({}^{-2}\), F2DEM can retain 80% of its first cycle capacity over 350 cycles with a 0.4 mA cm\({}^{-2}\) charge rate (Figure 5e-f). When the charge rate was increased to 1 mA cm\({}^{-2}\), the cells can still achieve stable CE and 80% capacity retention over 200 cycles (Figure 5g-h). These observations corroborate F2DEM's excellent stability even under high-rate conditions.
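The cycle counts quoted for 80% retention follow from a simple threshold search (a sketch; per the Figure 5 caption, the 100% reference is the cell with the highest first-cycle discharge capacity):

```python
# Last cycle at which the discharge capacity is still >= 80% of the
# reference first-cycle capacity.
def cycles_above_80(capacities, reference=None):
    ref = reference if reference is not None else capacities[0]
    last = 0
    for cycle, q in enumerate(capacities, start=1):
        if q >= 0.8 * ref:
            last = cycle
    return last

# Example on a synthetic linear fade from 3.5 to 2.4 mAh cm^-2:
caps = [3.5 - 1.1 * i / 199 for i in range(200)]
print(cycles_above_80(caps))
```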
Figure 5: Performance of Li\(\parallel\)LFP cells with 20-\(\upmu\)m Li anode and high-loading 3.5 mAh cm\({}^{-2}\) LFP cathode for 1.75 M LiFSI / F2DEM, 1.2 M LiFSI / F5DEE, and 1.7 M LiFSI / DEM. The cells are cycled between 2.5 V to 3.8 V. The 80% capacity retention is defined by setting the cell with highest first cycle discharge capacity as the 100% capacity reference. (a, b) Discharge capacity and CE profile of cells cycled under 0.75 mA cm\({}^{-2}\) charge and 1.5 mA cm\({}^{-2}\) discharge (with comparison to F5DEE and DEM). Two repeated cells for F5DEE and DEM, and four repeated cells for F2DEM are shown. (c, d) Discharge capacity and CE profile of cells cycled under 1.5 mA cm\({}^{-2}\) charge and 3 mA cm\({}^{-2}\) discharge (with comparison to F5DEE). Two repeated cells for F5DEE and four repeated cells for F2DEM are shown. (e, f) Discharge capacity and CE profile of F2DEM cells cycled under 0.4 mA cm\({}^{-2}\) charge and 2 mA cm\({}^{-2}\) discharge. Four repeated cells for F2DEM
are shown. (g, h) Discharge capacity and CE profile of F2DEM cells cycled under 1 mA cm-2 charge and 2 mA cm-2 discharge. Four repeated cells for F2DEM are shown.
## Conclusion
In summary, we found that a monofluorinated ether electrolyte (F2DEM) is effective in improving the coulombic efficiency (CE) and cycling stability of LMBs. The weakly solvating acetal backbone promotes stable solid electrolyte interphase (SEI) formation and a lower overpotential, while the monofluorine substitution on the end carbons further improves oxidation stability, ionic conductivity, and Li passivation. These modifications enabled high Li CEs and stable long-term cycling in Li\(\parallel\)Cu half cells, even under fast plating and slow stripping conditions. Compared to
Figure 6: Charge / discharge curves of Li\(\parallel\)LFP cells with 20-\(\upmu\)m Li anode and high-loading 3.5 mAh cm\({}^{-2}\) LFP cathode. All cells are cycled between 2.5 V and 3.8 V. (a, b, c) Charge / discharge curves of cycles 10, 100, and 150 for 1.7 M LiFSI / DEM, 1.2 M LiFSI / F5DEE, and 1.75 M LiFSI / F2DEM, respectively. The cells were cycled under 0.75 mA cm\({}^{-2}\) charge and 1.5 mA cm\({}^{-2}\) discharge. The same trend is observed for the two repeated cells for F5DEE and DEM, and four repeated cells for F2DEM. The plot is based on the cell with the best capacity retention for all three electrolytes. (d, e) Charge / discharge curves of cycles 10, 100, and 150 for 1.2 M LiFSI / F5DEE and 1.75 M LiFSI / F2DEM, respectively. The cells were cycled under 1.5 mA cm\({}^{-2}\) charge and 3 mA cm\({}^{-2}\) discharge. The same trend is observed for the two repeated cells for F5DEE and four repeated cells for F2DEM. The plot is based on the cell with the best capacity retention for the two electrolytes.
reference electrolytes (1.2 M LiFSI / F5DEE and 1.7 M LiFSI / DEM), 1.75 M LiFSI / F2DEM exhibited good ionic conductivity, high transport number, and significantly lower overpotential. Furthermore, comparative studies in anode-free LFP pouch cells and high-loading LFP coin cells with 20 \(\upmu\)m excess Li demonstrated that F2DEM-based systems exhibit improved capacity retention compared to the reference electrolytes under various charging and discharging rates. These results indicate the potential of monofluorinated DEM for enhancing the performance of LMBs.
### General Materials
F5DEE was provided by Feon Energy. 2-Fluoroethanol was purchased from Matrix Scientific. 2,2-Difluoroethanol was purchased from SynQuest. NaOH, tetraglyme, dibromomethane, DEM and other general reagents were purchased from Sigma Aldrich or Fisher. The separator Celgard 2325 (25-\(\upmu\)m thick, polypropylene/polyethylene/polypropylene) was purchased from Celgard. Thick Li foil (roughly 750-\(\upmu\)m thick) and Cu current collector (25-\(\upmu\)m thick) were purchased from Alfa Aesar. Thin Li foils (roughly 50- and 20-\(\upmu\)m thick, supported on Cu substrate) were purchased from China Energy Lithium. Commercial LFP cathode sheets were purchased from Targray. Industrial dry Cu\(\parallel\)NMC532 and Cu\(\parallel\)LFP pouch cells were purchased from Li-Fun Technology. Other battery materials, such as 2032-type coin-cell cases, springs and spacers, were all purchased from MTI. All materials were used as received.
### Electrolyte synthesis
To a 500 mL round flask were added 64 g of 2-fluoroethanol, 85 g of dibromomethane, 43 g of NaOH and 200 mL tetraglyme. The suspension was stirred at room temperature for 2 h under air and then heated to 40 \({}^{\circ}\)C and stirred overnight. The suspension turned brownish with a fine yellow powder. The suspension was further heated to 70 \({}^{\circ}\)C and stirred overnight. The suspension was directly distilled under vacuum (vapor temperature \(\sim\)70-75 \({}^{\circ}\)C at \(\sim\)1 kPa) to obtain a colorless liquid as the product. The crude product was distilled under vacuum four times to ensure purity.
### Electrochemical Measurements
All battery components used in this work were commercially available and all electrochemical tests were carried out using 2032-type coin cells. The cells were fabricated in an argon-filled glovebox, and one layer of Celgard 2325 was used as a separator for all batteries. Thick Li foil (750 \(\upmu\)m) with a diameter of 7/16 in. was used for cell assembly unless otherwise specified, and 40 \(\upmu\)L of electrolyte was injected into all cells, with Cu\(\parallel\)LFP as an exception. Both the ionic conductivity and transport number of the electrolytes were measured with a Biologic VSP system. Ionic conductivity was derived from bulk impedance in symmetric cells with two stainless steel electrodes and an electrolyte-soaked separator. The Li\({}^{+}\) transport number was obtained from a Li\(\parallel\)Li symmetric cell under a polarization potential of 10 mV. Li\(\parallel\)Cu, Li\(\parallel\)Li, Li\(\parallel\)LFP cells were tested on
Land battery testing station, and Cu\(\parallel\)LFP pouch cells were tested on Arbin. The Li\(\parallel\)Li cells were cycled under different charge and discharge current densities (1 mA cm\({}^{-2}\), 4 mA cm\({}^{-2}\), 6 mA cm\({}^{-2}\), 8 mA cm\({}^{-2}\)). The CEs of the electrolytes were measured based on a modified Aurbach method[35] in Li\(\parallel\)Cu cells, where 5 mAh cm\({}^{-2}\) of Li was first deposited onto the Cu foil as a Li reservoir. This was followed by 10 subsequent cycles of plating and stripping at 0.5 mA cm\({}^{-2}\) for 1 mAh cm\({}^{-2}\). Finally, all deposited Li was stripped from Cu, and the total capacity recovered was divided by the amount deposited to obtain the CE. The long-term Li\(\parallel\)Cu cycling was carried out by first conditioning the Cu surface through cycling between 0 and 1 V at 0.2 mA cm\({}^{-2}\) for 10 cycles. Upon preconditioning the Cu, 1 mAh cm\({}^{-2}\) of Li was plated on the Cu at 0.5 mA cm\({}^{-2}\) and stripped to 1 V at the same rate. A faster plating and slower stripping condition was also used to further evaluate the cells, where 2 mAh cm\({}^{-2}\) of Li was plated on Cu at 1 mA cm\({}^{-2}\) and stripped to 1 V at 0.4 mA cm\({}^{-2}\). The linear sweep voltammetry (LSV) was also carried out in Li\(\parallel\)Al and Li\(\parallel\)Pt cells using Biologic VSP300. In both setups, the voltage was swept from the open-circuit value to 7 V vs Li\({}^{+}\)/Li at 1 mV s\({}^{-1}\). The leakage currents were evaluated by dividing the measured values by the electrode area of 2.11 cm\({}^{2}\). Al corrosion tests were carried out in Li\(\parallel\)Al cells, where the voltage was held at 4.4 V for over 60 hours. The Cu\(\parallel\)LFP pouch cells were tested with 0.5 mL electrolyte injected into the purchased cells. The pouch cells were clamped in woodworking vises to ensure an estimated pressure of 1000 kPa and cycled under various charge and discharge conditions between 2.5 V and 3.65 V. The Li\(\parallel\)LFP coin cells were assembled with 20-\(\upmu\)m Li and 3.5 mAh cm\({}^{-2}\) LFP. The cells were cycled between 2.5 V and 3.8 V under different rates after a formation cycle at 0.4 mA cm\({}^{-2}\) charge and 1.5 mA cm\({}^{-2}\)/3 mA cm\({}^{-2}\) discharge currents. Al-clad cathode cases were used for the Li\(\parallel\)LFP coin cells; any defects in the Al cladding are expected to be minimized by the Al foil inserted in the cathode case.
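As a concrete illustration of the protocol above, the modified Aurbach CE reduces to the total Li capacity recovered divided by the total deposited; a minimal sketch with hypothetical capacity values (units mAh cm\({}^{-2}\)):

```python
def aurbach_ce(q_reservoir=5.0, q_cycle=1.0, n_cycles=10, q_final_strip=4.6):
    """Average CE from the modified Aurbach method: total Li capacity
    recovered divided by total Li capacity deposited."""
    deposited = q_reservoir + n_cycles * q_cycle
    recovered = n_cycles * q_cycle + q_final_strip
    return recovered / deposited

print(f"CE = {aurbach_ce():.4f}")  # ~0.9733 for these hypothetical numbers
```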
## Author information
**Corresponding Authors**
Zhenan Bao - Department of Chemical Engineering, Stanford University, Stanford, California 94305, United States; orcid.org/0000-0002-0972-1715; Email: [email protected]
Yi Cui - Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, United States; Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, California 94305, United States; orcid.org/0000-0002-6103-6352; Email: [email protected]
## Authors
Elizabeth Zhang - Department of Chemical Engineering, Stanford University, Stanford, California 94305, United States; Department of Materials Science and Engineering, Stanford University, Stanford, California 94305, United States; orcid.org/0000-0002-1117-8635
Yuelang Chen - Department of Chemical Engineering, Stanford University, Stanford, California 94305, United States; Department of Chemistry, Stanford University, Stanford, California 94305, United States; orcid.org/ 0000-0002-5249-0596
* Department of Chemical Engineering, Stanford University, Stanford, California 94305, United States; Department of Chemistry, Stanford University, Stanford, California 94305, United States; orcid.org/0000-0001-8746-1640
## Author Contributions
E.Z. and Y.Chen. contributed equally to this work.
## Acknowledgment
The work was supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Vehicle Technologies of the U.S. Department of Energy under the Battery 500 Consortium program. E. Zhang acknowledges the support from National Science Foundation Graduate Research Fellowships Program (NSF GRFP). Y.Chen acknowledges the support from Chevron Fellowship. Part of this work was performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under Award ECCS-2026822. We thank Feon Energy for providing F5DEE.
## Conflict of Interest
This work has been filed as PCT application US23/21234.
|
2309.08758 | Physics of the Majorana-superconducting qubit hybrids | Manipulation of decoupled Majorana zero modes (MZMs) could enable
topologically-protected quantum computing. However, the practical realization
of a large number of perfectly decoupled MZMs needed to perform nontrivial
quantum computation has proven to be challenging so far. Fortunately, even a
small number of imperfect MZMs can be used to qualitatively extend the behavior
of standard superconducting qubits, allowing for new approaches for noise
suppression, qubit manipulation and read-out. Such hybrid devices take
advantage of interplay of Cooper pair tunneling, coherent single electron
tunneling, and Majorana hybridization. Here we provide a qualitative
understanding of this system, give analytical results for its ground state
energy spanning full parameter range, and describe potential sensing
applications enabled by the interplay between Majorana and Cooper pair
tunneling. | D. B. Karki, K. A. Matveev, Ivar Martin | 2023-09-15T20:57:16Z | http://arxiv.org/abs/2309.08758v1 | # Physics of the Majorana-superconducting qubit hybrids
###### Abstract
Manipulation of decoupled Majorana zero modes (MZMs) could enable topologically-protected quantum computing. However, the practical realization of a large number of perfectly decoupled MZMs needed to perform nontrivial quantum computation has proven to be challenging so far. Fortunately, even a small number of imperfect MZMs can be used to qualitatively extend the behavior of standard superconducting qubits, allowing for new approaches for noise suppression, qubit manipulation and read-out. Such hybrid devices take advantage of the interplay of Cooper pair tunneling, coherent single electron tunneling, and Majorana hybridization. Here we provide a qualitative understanding of this system, give analytical results for its ground state energy spanning the full parameter range, and describe potential sensing applications enabled by the interplay between Majorana and Cooper pair tunneling.
## I Introduction
Majorana zero modes (MZMs) are the key element of proposed topological quantum computers [1; 2]. In the presence of MZMs, the ground state acquires topological degeneracy, and manipulation of MZMs allows one to effect non-trivial transformations in the multidimensional ground-state manifold [2]. While MZMs are naturally expected to appear at the edges or inside vortex cores of topological superconductors [3; 4; 5; 6], the scarcity of such bulk materials has led to alternative theoretical proposals for realization and manipulation of MZMs based on the superconducting proximity effect [7; 8; 9; 10; 11; 12]. Despite significant experimental advances, which report certain features consistent with MZMs [13; 14; 15; 16; 17; 18; 19; 20], convincingly realizing MZMs remains a challenge [21].
For topological quantum computation, MZMs have to be well spatially isolated from each other. However, in practice MZMs are expected to be exponentially localized on the scale of the superconducting coherence length [22; 23], which can be quite long, particularly when topological superconductivity is induced via proximity effect. Achieving decoupled MZMs thus requires large clean systems, which is very difficult to achieve in practice. Implementing MZMs in smaller systems appears feasible; however, finite Majorana hybridization cannot be ignored in that case.
When MZMs are not decoupled, they cannot be easily used for topological quantum computing, even though certain purification procedures can be applied to dynamically decouple them [24]. Fortunately, introducing even imperfect (coupled) MZMs into other quantum devices can lead to new interesting phenomena and functionalities. Indeed, hybrids of MZMs with the more standard superconducting quantum technology have been gaining attention recently [25; 26; 27; 28]. Various ideas have been put forward, differing mainly in the type of base qubit, ranging from the use of flux qubit [29; 30], charge qubit [31], fluxonium [32] and transmon [33; 34].
When combined with superconducting (SC) qubits, MZMs significantly extend their functionality by modifying the current-phase relationship and introducing a new degree of freedom, the fermion parity. The extra tunability of this hybrid setup allows for additional means of suppression of environmental noises and new qubit manipulation and readout protocols [35; 36; 37; 38]. However, due to the complexity of the interplay between coherent single electron tunneling, pair tunneling and the MZMs hybridization in the MZMs-SC qubit hybrid, this problem has been tackled mostly numerically in previous works [35; 36; 37].
In this work, we focus on the qualitative aspects of the hybrid MZMs-SC qubit setup. We provide a simple physical way to understand this system based purely on the coherent charge dynamics, which avoids the need for the more subtle considerations of the wave-function boundary conditions used in previous works.
Figure 1: (a) The schematic representation of MZMs-SC qubit hybrid setup under consideration (see the text). (b) The diagrammatic illustration of the model Hamiltonian given in Eq. (1). Orange and blue are different number states in odd and even parity sectors respectively, separated by an energy \(2h\), with \(h\) being the Majorana overlap. Josephson (Cooper pair) tunneling \(E_{J}\) preserves the parity sector, connecting states \(\ket{n}\) and \(\ket{n+2}\); Majorana tunneling \(v\) connects states \(\ket{n}\) and \(\ket{n+1}\) in the two sectors.
We also develop an analytical method for obtaining the ground state energy of the system. We find that this hybrid device has a rich variety of operational regimes. We discuss their characteristic signatures and the crossovers between them. We highlight the practical applications of the proposed hybrid device and show how it could contribute to the unambiguous detection of Majorana zero modes.
The organization of the paper is as follows. The section II introduces general features of MZMs-SC qubit hybrid along with its Hamiltonian formulation. Different parity sectors of this setup are discussed in Sec III. Here we provide a simple physical way to understand periodic and anti-periodic boundary conditions invoked in Ref. [33] to correctly solve the Hamiltonian of MZMs-SC qubit setup. We describe the qualitative features of the exact spectrum in Sec. IV. The section V is devoted to the presentation of analytical calculations for the lowest energy state of the hybrid setup. In Sec. VI, we discuss the emergent Josephson physics in the proposed hybrid device and suggest some practical applications. Finally, we conclude in Sec. VII. Mathematical details of our calculations are deferred to the Appendices.
## II Setup and model
We consider a "floating" superconducting island with charging energy \(E_{C}\), gated capacitively by voltage \(V_{g}\) as shown in Fig. 1(a). The island is tunnel coupled to the superconducting lead via the Josephson coupling \(E_{J}\), which allows Cooper pairs to coherently tunnel between the island and the lead. In addition, there are two nanowires, each hosting a pair of MZMs \(\gamma_{1,2}\) and \(\gamma_{3,4}\), attached to the island and the lead, respectively. The parities of the fermion number of wires are given by \(i\gamma_{1}\gamma_{2}\) and \(i\gamma_{3}\gamma_{4}\). Tunnel coupling \(v\) between \(\gamma_{2}\) and \(\gamma_{3}\) allows a coherent transfer of individual electrons between the island and the lead. While it flips the individual wire parities, it preserves the global fermion parity of the system, \(\gamma_{1}\gamma_{2}\gamma_{3}\gamma_{4}\), which can be either odd or even, forming two disconnected sectors. In each sector, we can define Pauli operators that span the remaining two-dimensional Hilbert space, \(\sigma_{z}=i\gamma_{1}\gamma_{2}\), \(\sigma_{x}=i\gamma_{2}\gamma_{3}\), and \(\sigma_{y}=i\gamma_{3}\gamma_{1}\).
Using this representation, the Hamiltonian is
\[H = E_{C}(n-n_{g})^{2}+h\sigma_{z} \tag{1}\] \[-\frac{v}{2}\sigma_{x}\left(\left|n+1\right\rangle\left\langle n \right|+\left|n\right\rangle\left\langle n+1\right|\right)\] \[-\frac{E_{J}}{2}\left(\left|n+2\right\rangle\left\langle n \right|+\left|n\right\rangle\left\langle n+2\right|\right).\]
Here \(n\) counts the number of electrons on the island and \(n_{g}\) is the offset charge, which is proportional to the gate voltage \(V_{g}\). The term proportional to \(h\) takes into account the hybridization between \(\gamma_{1}\) and \(\gamma_{2}\) on the wire attached to the island (due to the conservation of the total parity, the hybridization between \(\gamma_{3}\) and \(\gamma_{4}\) has the same form). The Hamiltonian (1) can be equivalently expressed in the eigenbasis of the phase operator, conjugate to the particle number, \(n=-i\partial_{\varphi}\), such that \(\left\langle n\right|\left.\varphi\right\rangle=e^{in\varphi}\). In this basis the tunneling is diagonal; however, the charging energy becomes a differential operator,
\[H=E_{C}\left(-i\partial_{\varphi}-n_{g}\right)^{2}+h\sigma_{z}-v\sigma_{x} \cos\varphi-E_{J}\cos 2\varphi. \tag{2}\]
Compared to the standard Hamiltonian for a Cooper pair box, in addition to the pair tunneling, now _coherent_ single electron tunneling is also allowed. Note that in conventional superconductors single-electron tunneling is always incoherent since it corresponds to the creation or destruction of an unpaired quasiparticle in the continuous spectrum. The presence of MZMs enables coherent single electron tunneling. The spectral isolation of MZMs implies that the state of the system can be completely described by the number of electrons on the island and the internal state of the two-level system associated with the four MZMs. This is a qualitative difference between the standard Josephson qubit devices and the Majorana-enriched devices, which leads to new opportunities in spectral engineering and quantum state control.
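For concreteness, Eq. (1) can be diagonalized directly in a truncated charge basis; the following minimal sketch (truncation \(N\) and parameter values are illustrative choices, not taken from the paper) reproduces spectra of the kind shown in Fig. 2 below.

```python
import numpy as np

def hybrid_hamiltonian(ng, Ec=1.0, h=0.0, v=0.0, Ej=0.0, N=20):
    """Eq. (1) in the truncated charge basis n = -N..N, tensored with the
    two-level MZM space (sigma_z = i gamma_1 gamma_2). Note that the full
    matrix contains both sectors sigma_z(-1)^n = +/-1 discussed below."""
    n = np.arange(-N, N + 1)
    dim = 2 * N + 1
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    charge = np.diag(Ec * (n - ng) ** 2)
    hop1 = np.eye(dim, k=1) + np.eye(dim, k=-1)  # |n+1><n| + h.c.
    hop2 = np.eye(dim, k=2) + np.eye(dim, k=-2)  # |n+2><n| + h.c.
    return (np.kron(charge, np.eye(2)) + np.kron(np.eye(dim), h * sz)
            - (v / 2) * np.kron(hop1, sx) - (Ej / 2) * np.kron(hop2, np.eye(2)))

# lowest levels versus offset charge, for (h, v, Ej) = (4, 2, 2) as in Fig. 2(d)
for ng in np.linspace(0.0, 2.0, 5):
    print(ng, np.linalg.eigvalsh(hybrid_hamiltonian(ng, h=4.0, v=2.0, Ej=2.0))[:5])
```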
## III Parity sectors and boundary conditions
From the Hamiltonian (1) it is clear that there is a relationship between \(\sigma_{z}\) and the parity of island charge \(n\), with \(\sigma_{z}(-1)^{n}\) being conserved by the Hamiltonian. For instance, if we assume that we are in the sector \(\sigma_{z}(-1)^{n}=1\), then, in the absence of tunneling, the spectrum is given by \(E_{n}=E_{C}(n-n_{g})^{2}+(-1)^{n}h\) and the wave functions for the two parities of \(n\) by \(\left|2k\right\rangle\left|\uparrow\right\rangle\) and \(\left|2k+1\right\rangle\left|\downarrow\right\rangle\).
The parity constraints in representation (2) are more obscure. To reveal them, consider again the limit of zero MZM and Josephson tunneling, i.e., \(v=E_{J}=0\). The eigenstates are expected to be \(e^{in\varphi}\left|\sigma_{z}\right\rangle\), with the energies \(E_{n}=E_{C}(n-n_{g})^{2}+h\sigma_{z}\). However, as we saw above, the value of \(\sigma_{z}\) is pinned to the parity of \(n\). Therefore, the physical wave functions have the form \(\left(\begin{array}{c}e^{i2k\varphi}\\ 0\end{array}\right)\) and \(\left(\begin{array}{c}0\\ e^{i(2k+1)\varphi}\end{array}\right)\). Note that both functions have period \(2\pi\); however, the former is also periodic on an interval of length \(\pi\) and the latter is antiperiodic on that interval. When both the Majorana and Josephson tunnelings are turned on, the wave function becomes a superposition of all allowed charge states and thus acquires the form \(\Psi_{\varphi}=\left(\begin{array}{c}g_{\varphi}\\ f_{\varphi}\end{array}\right)\), with general \(g_{\varphi+\pi}=g_{\varphi}\) and \(f_{\varphi+\pi}=-f_{\varphi}\). This observation, also made previously using different reasoning [33; 35; 36; 37], is important for avoiding non-physical states: imposing only the periodicity on the full interval \(2\pi\) retains both \(\sigma_{z}(-1)^{n}=\pm 1\) sectors, which is not physical.
### Hamiltonian in a fixed \(\sigma_{z}(-1)^{n}=-1\) sector
In this section we go a step further, and show that given the structure of the Hamiltonian (1) which pins the parity of charge and the spin associated with the MZMs, it is possible to eliminate the spin degree of freedom and associated redundancy completely. Without loss of generality, in the sector \(\sigma_{z}(-1)^{n}=-1\), Hamiltonian becomes
\[H =E_{C}(n-n_{g})^{2}-h(-1)^{n}\] \[-\frac{v}{2}\left(\left|n+1\right\rangle\left\langle n\right|+ \left|n\right\rangle\left\langle n+1\right|\right)\] \[-\frac{E_{J}}{2}\left(\left|n+2\right\rangle\left\langle n \right|+\left|n\right\rangle\left\langle n+2\right|\right). \tag{3}\]
The staggered potential evokes an analogy with a charge density wave ordering in electronic systems. The charging energy \(E_{C}\) breaks the translational invariance in \(n\) space, superimposing a parabolic confining potential.
The wave function \(\left|\psi\right\rangle=\sum_{n}\psi_{n}\left|n\right\rangle\) satisfies the standard Schrodinger equation, \(E\psi=H\psi\),
\[E\psi_{n} =\left[E_{C}\left(n-n_{g}\right)^{2}-h(-1)^{n}\right]\psi_{n}\] \[\quad-\frac{v}{2}\left(\psi_{n+1}+\psi_{n-1}\right)\] \[\quad-\frac{E_{J}}{2}\left(\psi_{n+2}+\psi_{n-2}\right). \tag{4}\]
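Equation (4) is a one-dimensional hopping problem on the charge lattice with a staggered on-site potential, and is easy to diagonalize after truncating to \(|n|\leq N\); below is a minimal self-contained sketch (truncation and parameter values are illustrative) which also extracts the splitting \(\Delta=(E_{0}(1)-E_{0}(0))/2\) used in the numerical comparison of Sec. V.

```python
import numpy as np

def fixed_sector_hamiltonian(ng, Ec=1.0, h=0.0, v=0.0, Ej=0.0, N=30):
    """Eq. (4) in the sector sigma_z(-1)^n = -1, truncated to n = -N..N."""
    n = np.arange(-N, N + 1)
    dim = 2 * N + 1
    H = np.diag(Ec * (n - ng) ** 2 - h * (-1.0) ** n)       # staggered potential
    H -= (v / 2) * (np.eye(dim, k=1) + np.eye(dim, k=-1))   # single-electron hops
    H -= (Ej / 2) * (np.eye(dim, k=2) + np.eye(dim, k=-2))  # Cooper-pair hops
    return H

def e0(ng, **kw):
    return np.linalg.eigvalsh(fixed_sector_hamiltonian(ng, **kw))[0]

delta = 0.5 * (e0(1.0, h=0.05, v=9.0) - e0(0.0, h=0.05, v=9.0))
print(delta)
```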
Instead of the particle number basis, we can specify the wave function in the conjugate phase space. We choose the transformation between the two bases as
\[\psi_{n}=\frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi}d\varphi\psi(\varphi)e^{-i(n-n_ {g})\varphi}. \tag{5}\]
It is convenient to explicitly include the \(n_{g}\) shift in the exponent, which corresponds to imposing the twisted boundary conditions, \(\psi(\varphi+2\pi)=e^{-2\pi in_{g}}\psi(\varphi)\); this eliminates \(n_{g}\) from the Hamiltonian. In terms of the function \(\psi(\varphi)\), the Schrodinger equation (4) takes the form
\[E\psi(\varphi)= -\left(E_{C}\partial_{\varphi}^{2}+v\cos\varphi+E_{J}\cos 2 \varphi\right)\psi(\varphi)\] \[-h\psi\left(\varphi-\pi\mathrm{sgn}\varphi\right)e^{-i\pi n_{g} \mathrm{sgn}\varphi}, \tag{6}\]
where we assume \(-\pi\leq\varphi\leq\pi\). It is now apparent that the role of the staggered potential \(h\) is to introduce coupling between the Fourier components \(\varphi\) and \(\varphi+\pi\) of \(\psi(\varphi)\), in analogy with the charge density waves.
For further simplification, we define a two-component function \(\Psi(\varphi)\) as
\[\Psi(\varphi)=\begin{pmatrix}\Psi_{\uparrow}(\varphi)\\ \Psi_{\downarrow}(\varphi)\end{pmatrix}, \tag{7}\]
where \(\Psi_{\uparrow,\downarrow}\) are defined in the half interval \(0\leq\varphi\leq\pi\), and are related to \(\psi\) by
\[\Psi_{\uparrow}(\varphi) =\psi(\varphi+\pi)e^{i\pi n_{g}},\] \[\Psi_{\downarrow}(\varphi) =\psi(\varphi). \tag{8}\]
The boundary conditions are now imposed on the \([0,\pi]\) interval, mixing \(\Psi_{\uparrow,\downarrow}\) such that
\[\Psi_{\uparrow}(\varphi+\pi) =\Psi_{\downarrow}(\varphi)e^{-i\pi n_{g}},\] \[\Psi_{\downarrow}(\varphi+\pi) =\Psi_{\uparrow}(\varphi)e^{-i\pi n_{g}}. \tag{9}\]
In this representation, Eq. (II) takes the convenient form
\[-\left(E_{C}\partial_{\varphi}^{2}+E_{J}\cos 2\varphi-v\cos \varphi\right)\Psi_{\uparrow}-h\Psi_{\downarrow}=E\Psi_{\uparrow}\] \[-\left(E_{C}\partial_{\varphi}^{2}+E_{J}\cos 2\varphi+v\cos \varphi\right)\Psi_{\downarrow}-h\Psi_{\uparrow}=E\Psi_{\downarrow}.\]
More compactly, these equations can be expressed in the form
\[\mathbb{M}\Psi=E\Psi, \tag{10}\]
where the matrix \(\mathbb{M}\) is given by
\[\mathbb{M}=-\left(E_{C}\partial_{\varphi}^{2}+E_{J}\cos 2\varphi\right)+v \cos\varphi\sigma_{z}-h\sigma_{x}. \tag{11}\]
In the following, we will seek the solution of the Schrodinger equation (10) with the boundary conditions (9). This representation eliminates the redundancy associated with the disconnected sectors \(\sigma_{z}(-1)^{n}=\pm 1\), which are both present in Eq. (2). In Appendix A we provide a correspondence between this approach and the one followed in [33] and related works.
## IV Qualitative features of the exact spectrum
Before presenting the analytical results, in this brief section we discuss some representative features of the MZMs-SC qubit system. For our numerical calculations we use Eq. (1). Several examples of the spectra are shown in Fig. 2. All energies and parameters are normalized by the charging energy \(E_{C}\). In panel (a) the only other nonzero parameter (apart from \(E_{C}\)) is the Majorana tunneling \(v\). The spectrum as a function of \(n_{g}\) has the overall characteristic form of Coulomb parabolas \((n-n_{g})^{2}\), \(n\) being all integers, with avoided crossings whose gaps decay exponentially as \(\sim(v/E_{C})^{|n_{1}-n_{2}|}\). When \(E_{J}\neq 0\) and \(v=0\), as in panel (b), the avoided crossings appear only between parabolas that correspond to even differences of \(n\). The two sets of bands that correspond to different parities of \(n\) are present simultaneously (solid and dashed lines, respectively), without hybridization. That is, unless the island parity is allowed to be flipped by finite \(v\), the dispersion has period 2 in \(n_{g}\).
Panel (c) illustrates the effect of finite MZMs hybridization within the island, \(h\), assuming that tunneling between the island and the lead is turned off. These are simple Coulomb parabolas, but now the odd and even parabolas are offset in energy.
Panels (d)-(f) show what happens when the MZMs hybridization \(h\) is large compared to the charging energy. The period 2 in \(n_{g}\) becomes apparent. Another
notable feature is that for sufficiently strong \(E_{J},|v|\gtrsim 1\), there are two weakly dispersing bands, one associated with the ground state of each parity sector (approximately separated by \(2h\) in energy). This is in contrast to the standard transmons, where there is only one such band, at the lowest energy. Within a given parity sector, the system approximately behaves as a transmon, but with an effective Josephson coupling \(\tilde{E}_{J}=E_{J}-\sigma_{z}v^{2}/4h\), and the lowest band width scaling as \(e^{-\sqrt{8\tilde{E}_{J}/E_{C}}}\). Interestingly, in the upper band, \(\tilde{E}_{J}\) can vanish when the MZM-mediated tunneling exactly offsets the standard Josephson pair tunneling. This corresponds to a unique situation in which the fermion parity on one side of the junction can drastically affect the Josephson critical current, which potentially can be used as a smoking gun for the presence of MZMs, or for charge/photon sensing applications. Further discussion of this effect is presented in Sec. VI.
## V Analytical calculations for the lowest energy state
In this section we turn our attention to the properties of the lowest-energy band, \(E_{0}(n_{g})\). To isolate the unique features associated with MZMs, we will set \(E_{J}=0\), which eliminates the standard Cooper pair tunneling. For \(h=0\) in this case we should obtain \(E_{0}(n_{g}+1)=E_{0}(n_{g})\). This charge-translation symmetry is broken, however, by any finite \(h\), leaving only the \(E_{0}(n_{g}+2)=E_{0}(n_{g})\) symmetry intact. This is the same symmetry as in the conventional superconductor transmons and Cooper-pair boxes, even though we are not allowing standard pair tunneling; instead, it is a result of the Majorana hybridization _within_ the island, which makes the energies of the even and odd charge states different.
Our focus will be on the case \(v/E_{C}\gg 1\), which corresponds to small charging energy. In this case, the effects of gate charge fluctuations are strongly screened, making this an attractive regime for the MZMs-SC qubit. The dominant energy scale is associated with the Majorana tunneling, which tends to pin the phase near \(\varphi=0\), see Eq. (6). The role of the nonzero charging energy \(E_{C}\) is to provide fluctuations around this value, and to make the energy sensitive to the twist in the boundary conditions, \(\psi(\varphi+2\pi)=e^{-2\pi in_{g}}\psi(\varphi)\). This creates a finite bandwidth for \(E_{0}(n_{g})\). In the language of tight-binding, the charging energy gives a finite mass to the particles, allowing them to tunnel between the potential minima, located at \(\varphi=2\pi n\), with \(n\) integer. The gate charge \(n_{g}\) plays the role of the quasimomentum associated with this lattice, as can be seen from the form of the boundary conditions.
For \(h=0\), there is no formal difference between the Majorana-based model and the standard Cooper-pair transmon, apart from rescaling all charges by a factor of two (single electron charge vs. Cooper pair charge). At finite \(h\), the situation changes dramatically. There is no natural analog of \(h\) in the case of Cooper pairs; it is a very special feature of a system with Majoranas that the energy can depend not on charge, but solely on the _parity_ of charge. This leads to the appearance of several qualitatively distinct regimes, separated by crossovers.
### Transmon regime: zero Majorana overlap
In the ideal case of zero Majorana overlap \(h\), the Schrodinger equation becomes the standard Mathieu equation with cosine potential,
\[E\psi(\varphi)=-\left(E_{C}\partial_{\varphi}^{2}+v\cos\varphi\right)\psi( \varphi). \tag{12}\]
Figure 3: Illustration of instanton tunneling in the cosine potential. The frequency \(\Omega\) of small amplitude oscillation around the minimum \(\varphi=\pm\pi\) is defined by \(m\Omega^{2}=v\). In the limit of \(v\gg E_{C}\), the right and left turning points of the potential \(v\cos\varphi\) are \(\varphi=\pm\pi\mp z\), with \(z^{2}\simeq\hbar/(m\Omega)\). The zero point energy is represented by the dashed line.
Figure 2: The first five energy levels of the spectrum as a function of offset charge obtained from the numerical solution of Eq. (1). The plots (a)–(f) correspond to \((h/E_{C},\,v/E_{C},\,E_{J}/E_{C})\)=(0, 0.2, 0), (0, 0, 0.2), (0.2, 0, 0), (4, 2, 2), (4, 5, 5), (4, 2, 0.25) respectively.
Transparent analytical results for the ground state energy can be obtained in the strong barrier limit \(v/E_{C}\gg 1\), by using the semiclassical Wentzel-Kramers-Brillouin (WKB) method. The WKB regime can also be understood in terms of the instanton tunneling [39] events between the neighboring minima of the cosine potential as illustrated in Fig. 3. The phase in the twisted boundary conditions then acquires the meaning of the lattice quasimomentum. The instanton tunneling produces the well-known energy splitting \(\Delta^{(0)}\), which is given by [40]
\[\frac{\Delta^{(0)}}{E_{C}}=\frac{2^{\frac{13}{4}}}{\sqrt{\pi}}\left(\frac{v}{E_{C}}\right)^{\frac{3}{4}}\exp\left(-4\sqrt{2}\sqrt{\frac{v}{E_{C}}}\right). \tag{13}\]
This tunneling matrix element determines the width of the lowest energy band. Explicitly, reintroducing the "quasimomentum" \(n_{g}\), ground state energy takes the form
\[E_{0}=-\Delta^{(0)}\cos(2\pi n_{g})+\text{const}. \tag{14}\]
The energy has period \(1\) in \(n_{g}\), thanks to the single-electron tunneling enabled by the presence of Majorana fermion states. This should be contrasted with the case of the standard superconducting junctions where only Cooper pairs can tunnel coherently and hence the energy has period \(2\) as a function of \(n_{g}\).
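The asymptotic formula (13) can be checked against exact diagonalization of the charge-basis Hamiltonian; here is a small self-contained sketch (truncation and the values of \(v/E_{C}\) are illustrative, energies in units of \(E_{C}\)):

```python
import numpy as np

def band_edge(ng, v, Ec=1.0, N=30):
    """Lowest eigenvalue of E_C (n - n_g)^2 - (v/2)(|n+1><n| + h.c.)."""
    n = np.arange(-N, N + 1)
    H = np.diag(Ec * (n - ng) ** 2) - (v / 2) * (np.eye(2 * N + 1, k=1)
                                                 + np.eye(2 * N + 1, k=-1))
    return np.linalg.eigvalsh(H)[0]

for v in [5.0, 10.0, 20.0]:
    exact = 0.5 * (band_edge(0.5, v) - band_edge(0.0, v))  # Delta^(0) via Eq. (14)
    wkb = (2 ** 3.25 / np.sqrt(np.pi)) * v ** 0.75 * np.exp(-4 * np.sqrt(2 * v))
    print(v, exact, wkb)  # agreement improves with increasing v/E_C
```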
### Perturbative regime: small Majorana overlap
In this section, we account for the small Majorana overlap \(h\) perturbatively. To this end, we use Eq. (10) and treat the second term as a perturbation to get the linear in \(h\) correction to the ground state energy
\[\delta E_{0}=h\left.\frac{\partial\left\langle\mathbb{M}\right\rangle}{ \partial h}\right|_{h\to 0}. \tag{15}\]
Using equations (15), (11) and (8), it is straightforward to show that
\[\delta E_{0}=-h\int_{-\pi}^{\pi}d\varphi\psi_{0}^{*}(\varphi)\psi _{0}(\varphi-\pi\text{sgn}\varphi)e^{-i\pi n_{g}\text{sgn}\varphi}. \tag{16}\]
Here the subscript in \(\psi_{0}\) represents the limiting case of \(h=0\). In the limit of \(v/E_{C}\gg 1\), the wave function \(\psi_{0}\) can be calculated using the semiclassical approximation. As detailed in Appendix B, the leading-order contribution to \(\delta E_{0}\) takes the form
\[\delta E_{0}=-\Delta^{(1)}\cos\pi n_{g}, \tag{17}\]
where
\[\frac{\Delta^{(1)}}{E_{C}}=\left(\frac{h}{E_{C}}\right)2^{\frac{11}{4}}(\sqrt{2}-1)\exp\left[-4(\sqrt{2}-1)\sqrt{\frac{v}{E_{C}}}\right]. \tag{18}\]
Using equations (13) and (17), we get the ground state energy
\[E_{0}=-\Delta^{(0)}\cos 2\pi n_{g}-\Delta^{(1)}\cos\pi n_{g}+\text{const}. \tag{19}\]
Notably, while the first term has period \(1\) in \(n_{g}\), the second contribution due to \(h\) has period \(2\). From equations (13) and (18), it is readily seen that for exponentially small Majorana overlap
\[h/E_{C}\simeq(v/E_{C})^{3/4}\exp\left(-4\sqrt{v/E_{C}}\right), \tag{20}\]
the tunnel splittings \(\Delta^{(0)}\) and \(\Delta^{(1)}\) are of the same order of magnitude. Therefore, already for exponentially small \(h\), the energy dependence on the gate charge has the same period \(2\) as in the case of conventional superconductors. This can make the determination of the presence of MZMs in the experimental devices of this kind challenging.
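To make this scale concrete, take a moderately deep well, \(v/E_{C}=25\); then Eq. (20) gives \[\frac{h}{E_{C}}\simeq\left(\frac{v}{E_{C}}\right)^{3/4}e^{-4\sqrt{v/E_{C}}}=25^{3/4}\,e^{-20}\approx 2\times 10^{-8},\] so already a minute residual hybridization restores the conventional period-2 dependence on the gate charge.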
We note that the splitting relation (18) has been derived using first-order perturbation theory. The regime of its applicability is not yet obvious at this point. This issue is addressed below in Sec. V.4.
### WKB regime: intermediate and large Majorana overlap
The Hamiltonian (11) can be interpreted as a spin-\(1/2\) interacting with a non-linear oscillator. When the oscillator dynamics is slow compared to the "Zeeman" field acting on the spin, the latter simply follows the field. This allows one to integrate out the spin, assuming that it is either aligned or antialigned with the Zeeman field. The corresponding equation of motion for the oscillator is
\[\frac{\partial^{2}\Psi(\varphi)}{\partial\varphi^{2}}+\frac{1}{E_{C}}\left[E -\mathbb{V}(\varphi)\right]\Psi(\varphi)=0. \tag{21}\]
The effective potential \(\mathbb{V}\) obtained from Eq. (11) consists of two branches
\[\mathbb{V}_{\pm} =v\left(\sqrt{\alpha^{2}+1}\pm\sqrt{\alpha^{2}+(\sin\varphi)^{2} }\right), \tag{22}\] \[\alpha =h/v, \tag{23}\]
Figure 4: Illustration of WKB regime where the Majorana overlap \(h\) is sufficiently large such that the upper potential branch can be disregarded. The potential \(\mathbb{V}_{-}\) has right and left classical turning points at \(\varphi=\pm\pi/2\mp l\). The zero point energy is represented by the dashed line (see text for the details).
where we set \(E_{J}=0\) as before, shifted the phase variable by \(\pi/2\) and added the constant \(\sqrt{h^{2}+v^{2}}\) for convenience, see Fig. 4. The separation between the potential branches is the above-mentioned phase-dependent Zeeman field. It reaches its minimum value of \(2h\) at \(\varphi=0\). We thus expect the adiabatic approximation to hold for sufficiently large \(h\), which is to be determined in the following.
In the adiabatic limit, the upper potential branch \(\mathbb{V}_{+}\) can be ignored for the calculation of the ground state splitting. In the strong barrier limit, the usual WKB approach can be exploited for the evaluation of the tunnel splitting \(\Delta^{(2)}\), which is given by (see Appendix C for details)
\[\frac{\Delta^{(2)}}{E_{C}}=2^{\frac{1}{4}}\sqrt{\pi}\left(\frac{v}{E_{C}} \right)^{\frac{3}{4}}\left(\frac{1}{1+\alpha^{2}}\right)^{\frac{3}{8}}e^{ \mathcal{S}_{0}}\ e^{\mathcal{S}_{1}}. \tag{24}\]
Here \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\) are expressed in terms of standard Hypergeometric functions as
\[\mathcal{S}_{0} =-2\sqrt{\frac{v}{E_{C}}}\bigg{(}\frac{1}{4(1{+}\alpha^{2})} \bigg{)}^{1/4}{}_{3}F_{2}\left(\frac{1}{4},\frac{3}{4},1;\frac{3}{2},\frac{3} {2};\frac{1}{\alpha^{2}{+}1}\right),\] \[\mathcal{S}_{1} =\log\left(\frac{4}{\pi}\right)-\!\frac{1}{8(1{+}\alpha^{2})}\, _{4}F_{3}\!\left(\frac{3}{4},1,1,\frac{5}{4};\frac{3}{2},\frac{3}{2},2;\frac{1 }{\alpha^{2}{+}1}\right). \tag{25}\]
Equation (24) in the limit of large \(\alpha\) yields the splitting of the form
\[\frac{\Delta_{a}^{(2)}}{E_{C}}=\frac{2^{\frac{9}{4}}}{\sqrt{\pi}}\left(\frac{v ^{2}}{hE_{C}}\right)^{\frac{3}{4}}\exp\left(-\sqrt{\frac{2v^{2}}{hE_{C}}} \right). \tag{26}\]
This result is closely related to the band width of the lowest state in the conventional transmon, if we replace the Josephson energy by \(v^{2}/4h\). This is indeed expected since for large \(\alpha\), transferring an electron from the island costs energy \(h\), which has to be offset either by returning the electron to the island or by tunneling a second electron from the island to the lead. This is in direct analogy to the Josephson coupling, which originates from processes in which Cooper pairs are transiently broken during tunneling; however, there the energy of the virtual state is given by the superconducting gap instead of \(h\).
The small \(\alpha\) limit of Eq. (24) is given by
\[\frac{\Delta_{b}^{(2)}}{E_{C}}=\frac{2^{\frac{13}{4}}(\sqrt{2}-1)}{\sqrt{\pi}}\left(\frac{v}{E_{C}}\right)^{\eta+\frac{3}{4}}\left[4\left(\sqrt{2}-1\right)^{2}\right]^{2\eta}\] \[\times e^{\eta}\eta^{-\eta}\ \exp\left(-4(\sqrt{2}-1)\sqrt{\frac{v}{E_{C}}}\right), \tag{27}\]
where we introduced an additional control parameter \(\eta\), defined by
\[\eta=\frac{h^{2}}{4\sqrt{E_{C}}v^{3}}. \tag{28}\]
It is readily seen that the small \(\alpha\) limit of the splitting given by Eq. (27) is not the same as the linear in \(h\) result obtained earlier in Eq. (18). This is expected since for the WKB calculation to remain valid, \(h\) needs to be sufficiently large such that the upper potential branch can be neglected. Therefore, there should exist a crossover regime that connects the WKB and perturbative regimes. In the following section, we study this crossover regime and show that the parameter \(\eta\) defined in Eq. (28) indeed serves as the crossover parameter.
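The closed form (24)-(25) is convenient for numerical evaluation; below is a sketch using mpmath, whose hyper routine evaluates the generalized hypergeometric series (the parameter values in the check are illustrative):

```python
import mpmath as mp

def splitting_wkb(h, v, Ec=1.0):
    """Delta^(2) from Eqs. (24)-(25); mp.hyper evaluates 3F2 and 4F3."""
    x = 1 / (1 + (mp.mpf(h) / v) ** 2)  # 1/(1 + alpha^2)
    S0 = -2 * mp.sqrt(v / mp.mpf(Ec)) * (x / 4) ** mp.mpf(0.25) * \
        mp.hyper([mp.mpf(1) / 4, mp.mpf(3) / 4, 1],
                 [mp.mpf(3) / 2, mp.mpf(3) / 2], x)
    S1 = mp.log(4 / mp.pi) - (x / 8) * \
        mp.hyper([mp.mpf(3) / 4, 1, 1, mp.mpf(5) / 4],
                 [mp.mpf(3) / 2, mp.mpf(3) / 2, 2], x)
    return (Ec * 2 ** mp.mpf(0.25) * mp.sqrt(mp.pi) * (v / mp.mpf(Ec)) ** mp.mpf(0.75)
            * x ** mp.mpf(0.375) * mp.exp(S0 + S1))

# deep in the large-alpha regime this should approach Eq. (26)
h, v = 200.0, 100.0
eq26 = (2 ** 2.25 / mp.sqrt(mp.pi)) * (v ** 2 / h) ** 0.75 * mp.exp(-mp.sqrt(2 * v ** 2 / h))
print(splitting_wkb(h, v), eq26)
```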
### Crossover between the perturbative and WKB regimes
In the previous subsection, we considered the case of sufficiently large \(h\) and neglected the upper potential branch \(\mathbb{V}_{+}\). In this case, the WKB wave function corresponding to the potential \(\mathbb{V}_{-}\) in the limit of \(h/v<|\varphi|\ll 1\) for \(\varphi<0\) is given by (see appendix D)
\[\Psi^{\text{WKB}}=\Bigg{(}\frac{2^{5/4}(\sqrt{2}-1)}{\sqrt{\pi}} \Bigg{)}^{1/2}\!\bigg{(}\frac{v}{E_{C}}\bigg{)}^{1/8}\left(\frac{4\left(\sqrt {2}{-}1\right)^{2}}{|\varphi|}\right)^{\eta}\] \[\times\exp\left[-2(\sqrt{2}{-}1)\sqrt{\frac{v}{E_{C}}}\right] \!\exp\left[\sqrt{\frac{v}{E_{C}}}\left(|\varphi|{-}\frac{|\varphi|^{2}}{4} \right)\right]. \tag{29}\]
For \(h\) sufficiently small, the adiabatic approximation exploited in the derivation of Eq. (29) is no longer valid. Instead, we need to solve the Schrodinger equation \(\mathbb{M}\Psi=E\Psi\), where the matrix \(\mathbb{M}\) is defined in Eq. (11) with the shift \(\varphi\rightarrow\varphi-\pi/2\). We then linearize \(\mathbb{M}\) in the vicinity of \(\varphi=0\) as illustrated in Fig. 5. For \(h\ll v\), the resulting Schrodinger equation takes the form
\[E_{C}\frac{\partial^{2}\Psi}{\partial\varphi^{2}}-\left(v\varphi\sigma_{z}-h \sigma_{x}\right)\Psi=v\Psi. \tag{30}\]
In the following, we will be interested in the special limit
\[E_{C}=\text{const},\ \ h\rightarrow\infty,\ \ v\rightarrow\infty,\ \ \eta\rightarrow\text{const}. \tag{31}\]
In this case, the tail of the wave function follows the semiclassical approximation and the components of \(\Psi\) take
Figure 5: Illustration of the crossover between the perturbative and the WKB regimes. The dashed box represents the region of linearization for \(|\varphi|\ll 1\) and \(h\ll v\) (see text for details).
a form similar to that of the WKB wave function (29). We first focus on the linearized regime with negative \(\varphi\) and define the wave function \(\Psi_{L}\) as
\[\Psi_{L}(\varphi)=\begin{pmatrix}\Psi_{\uparrow}(\varphi)\\ \Psi_{\downarrow}(\varphi)\end{pmatrix},\;\;-\frac{\pi}{2}\ll\varphi<0. \tag{32}\]
Inspired by the behavior (29) of the wave function at small \(\varphi\), we write \(\Psi_{\uparrow,\downarrow}\) as
\[\Psi_{\uparrow,\downarrow}(y)=\exp\left[-\left(v/E_{C}\right)^{ \frac{1}{4}}y\right]\chi_{\uparrow,\downarrow}(y), \tag{33}\]
where we introduced the new variable \(y=(v/E_{C})^{1/4}\varphi\). We now turn to determining the form of the functions \(\chi_{\uparrow,\downarrow}\). Using the new variables, Eq. (30) can be expressed as
\[\Psi_{\uparrow,\downarrow}(y)=-\frac{1}{2\sqrt{\eta}}\left[ \left(\frac{v}{E_{C}}\right)^{-\frac{1}{4}}\frac{\partial^{2}}{\partial y^{2} }\pm y-\left(\frac{v}{E_{C}}\right)^{\frac{1}{4}}\right]\Psi_{\downarrow, \uparrow}(y). \tag{34}\]
Substituting Eq. (33), in the limit (31), we obtain the following two coupled differential equations for \(\chi_{\uparrow,\downarrow}\)
\[\chi_{\uparrow,\downarrow}=-\frac{1}{\sqrt{\eta}}\left(-\frac{ \partial}{\partial y}\pm\frac{y}{2}\right)\chi_{\downarrow,\uparrow}. \tag{35}\]
Importantly, the right hand side of Eq. (35) does not contain the second derivative \(\partial^{2}/\partial y^{2}\) present in Eq. (34). This significant simplification occurs under the limiting procedure (31) and enables the subsequent analytic treatment of the problem. Combining the above two equations for \(\chi_{\uparrow,\downarrow}\) results in the standard Weber equation
\[\frac{\partial^{2}\chi_{\uparrow}}{\partial y^{2}}+\left(n+\frac{ 1}{2}-\frac{y^{2}}{4}\right)\chi_{\uparrow}=0,\;\;\;n=-\eta. \tag{36}\]
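As a quick numerical sanity check, scipy's parabolic cylinder routine pbdv can be used to verify that \(D_{-\eta}(-y)\) solves Eq. (36) (finite-difference second derivative; the parameter values are illustrative):

```python
import numpy as np
from scipy.special import pbdv  # pbdv(v, x) returns (D_v(x), D_v'(x))

eta, dy = 0.7, 1e-3
for y in [-2.0, -0.5, 1.0]:
    d2 = (pbdv(-eta, -(y + dy))[0] - 2 * pbdv(-eta, -y)[0]
          + pbdv(-eta, -(y - dy))[0]) / dy ** 2
    residual = d2 + (-eta + 0.5 - y ** 2 / 4) * pbdv(-eta, -y)[0]
    print(y, residual)  # should vanish to discretization accuracy
```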
Comparing \(\Psi_{\uparrow}\) from Eq. (33) and the WKB wave function \(\Psi^{\text{WKB}}\) given by Eq. (29), we see that in the limit \(|y|\gg\sqrt{\eta}\), \(\chi_{\uparrow}\) takes the form \(\chi_{\uparrow}(y)\sim e^{-y^{2}/4}|y|^{-\eta}\). The only solution of Eq. (36) that has this asymptotic behavior at \(y\to-\infty\) is \(D_{-\eta}(-y)\), where \(D_{n}(y)\) is the standard parabolic cylinder function. Therefore, we arrive at the result that \(\chi_{\uparrow}(y)\propto D_{-\eta}(-y)\). Substituting this form of \(\chi_{\uparrow}\) into the coupled differential equation (35) and using the properties of standard parabolic cylinder functions, we arrive at \(\chi_{\downarrow}(y)\propto-\sqrt{\eta}D_{-\eta-1}(-y)\). We thus find
\[\chi_{\uparrow}(y)=\mathcal{A}D_{-\eta}(-y),\;\chi_{\downarrow} (y)=-\mathcal{A}\sqrt{\eta}D_{-\eta-1}(-y), \tag{37}\]
where \(\mathcal{A}\) is a constant. Substitution of \(\chi_{\uparrow,\downarrow}\) from Eq. (37) into Eq. (33) finally provides the required expressions for the wave functions \(\Psi_{\uparrow,\downarrow}\) in terms of the yet unknown parameter \(\mathcal{A}\).
To find the coefficient \(\mathcal{A}\), we compare the asymptotic behavior at \(y\to-\infty\) of \(\Psi_{\uparrow}\) given by Eqs. (37) and (33) with the WKB wave function (29). This procedure gives
\[\mathcal{A} =\left(\frac{2^{\frac{5}{4}}\left(\sqrt{2}-1\right)}{\sqrt{\pi}} \right)^{\frac{1}{2}}\left(\frac{v}{E_{C}}\right)^{\frac{1}{8}+\frac{\eta}{2}}\times\] \[\left[4\left(\sqrt{2}-1\right)^{2}\right]^{\eta}\;\exp\left[-2( \sqrt{2}-1)\sqrt{\frac{v}{E_{C}}}\right]. \tag{38}\]
So far we have obtained complete information about \(\Psi_{L}\) defined in Eq. (32). The evaluation of the tunnel splitting in the crossover regime also requires an expression for the wave function \(\Psi_{R}\) defined for \(1\gg\varphi>0\). From the symmetry of Eq. (30), it is straightforward to write \(\Psi_{R}(\varphi)=\sigma_{x}\Psi_{L}(-\varphi)\). Having derived the expressions for \(\Psi_{L/R}\), we are now in a position to calculate the tunnel splitting using the standard technique outlined in Ref. [41]. As detailed in Appendix E, our final result for the tunnel splitting \(\Delta^{(3)}\) in the crossover regime is given by
\[\frac{\Delta^{(3)}}{E_{C}}=8\mathcal{A}^{2}\sqrt{\frac{v}{E_{C}}} \sqrt{\frac{\pi}{2\eta}}\frac{1}{\Gamma(\eta)}, \tag{39}\]
where \(\Gamma(\eta)\) is the standard Gamma function. It is readily seen that \(\Delta^{(3)}\) given by Eq. (39) recovers the corresponding perturbative expression (18) in the limit of \(\eta\to 0\) and also the WKB expression (27) in the limit of \(\eta\gg 1\). The regime \(\eta\sim 1\) is thus the actual crossover between the perturbative and the WKB regimes.
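For completeness, Eq. (39) with \(\mathcal{A}\) taken from Eq. (38) can be evaluated directly; the sketch below (parameter values illustrative) also checks the \(\eta\to 0\) limit against the perturbative result (18):

```python
import numpy as np
from scipy.special import gamma

def splitting_crossover(h, v, Ec=1.0):
    """Delta^(3) from Eqs. (38)-(39); assumes v/Ec >> 1 with eta = O(1)."""
    eta = h ** 2 / (4.0 * np.sqrt(Ec * v ** 3))  # Eq. (28)
    s = np.sqrt(2.0) - 1.0
    A2 = (2 ** 1.25 * s / np.sqrt(np.pi)) * (v / Ec) ** (0.25 + eta) \
        * (4 * s ** 2) ** (2 * eta) * np.exp(-4 * s * np.sqrt(v / Ec))  # |A|^2
    return Ec * 8 * A2 * np.sqrt(v / Ec) * np.sqrt(np.pi / (2 * eta)) / gamma(eta)

h, v = 0.05, 25.0  # eta << 1 here
pert = h * 2 ** 2.75 * (np.sqrt(2.0) - 1.0) * np.exp(-4 * (np.sqrt(2.0) - 1.0) * np.sqrt(v))
print(splitting_crossover(h, v), pert)
```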
### Comparison of analytical and numerical results
In the previous subsections, we analytically calculated the tunnel splitting \(\Delta\), assuming \(E_{C}\ll v\). It encodes the dependence of ground state energy on the gate charge,
\[E_{0}(n_{g})=-\Delta^{(0)}\cos 2\pi n_{g}-\Delta\cos\pi n_{g}+\text{const}. \tag{40}\]
Different regimes were identified based on the relation between the strengths of the Majorana overlap \(h\), the single particle tunneling \(v\) and the charging energy \(E_{C}\). These three parameters are further connected by the crossover parameter \(\eta\) defined by \(\eta=h^{2}/\sqrt{16E_{C}v^{3}}\). The perturbative regime requires \(\eta\ll 1\) or, equivalently, \(h\ll(E_{C}v^{3})^{1/4}\) and corresponds to the tunnel splitting \(\Delta^{(1)}\). The WKB regime is achieved at \((E_{C}v^{3})^{1/4}\ll h\ll v^{2}/E_{C}\). In this regime, the tunnel splitting is fully characterized by \(\Delta^{(2)}\), which approaches \(\Delta^{(2)}_{a}\) and \(\Delta^{(2)}_{b}\) in the limits \(h\gg v\) and \(h\ll v\)
Figure 6: Summary of tunnel splitting in various regimes.
respectively. The perturbative regime and the WKB regime are connected by the crossover regime, in which \(h\sim(E_{C}v^{3})^{1/4}\). The tunnel splitting in the crossover regime is given by \(\Delta^{(3)}\). In the case of \(h\gtrsim v^{2}/2E_{C}\), our system is trivially described by the Mathieu equation. These regimes are illustrated in Fig. 6. The value of the tunnel splitting \(\Delta\) can be obtained numerically by evaluating the ground state energy of the Hamiltonian (3) at \(n_{g}=0\) and \(1\). According to Eq. (40), \(\Delta=(E_{0}(1)-E_{0}(0))/2\). This procedure confirms our analytical results, see Fig. 7.
It is important to note that the splitting (39) has non-monotonic features as a function of the crossover parameter \(\eta\). While the splitting in the vicinity of \(\eta=1\) increases with \(\eta\), it also manifests a decaying tail in the limit of \(\eta\gg 1\). It could be difficult to numerically access the complete non-monotonic feature of the tunnel splitting since the conditions \(\eta\gg 1\) and \(\alpha\ll 1\) must be fulfilled simultaneously. Nevertheless, this condition might actually be relevant in the context of real experimental setups.
## VI Emergent Josephson effect
In Sec. V we focused on the analytical calculation of the lowest energy level at \(E_{J}=0\) in the limit of large \(v\) as a function of MZMs hybridization \(h\). The general case of finite \((h,v,E_{J})\) can be easily studied numerically following the discussion of Sec. III, and certain features of the spectrum have already been discussed in Sec. IV. In the following, we focus on an interesting situation that arises when \(h\) is the largest energy scale and all parameters \((h,v,E_{J})\) are finite.
As discussed previously, the two sectors that correspond to the even and odd \(n\) on the island get effectively decoupled when Majorana hybridization \(h\) exceeds all other energy scales. In this case, the Majorana tunneling events amount to virtual processes as illustrated in Fig. 8. These processes induce a contribution to effective Josephson coupling in each sector. As mentioned in the subsection V.3, when \(h\gg v\), the strength of effective induced Josephson coupling is \(\sim v^{2}/h\). Interestingly, the sign of this contribution is the opposite in even and odd \(n\) sectors since the intermediate state has a higher energy than the initial state in the even sector and lower in the odd. Therefore, the induced Josephson coupling is given by
\[E_{J}^{\rm even,odd}=E_{J}\pm\frac{v^{2}}{4h}. \tag{41}\]
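The sign structure of Eq. (41) can be read off from a second-order perturbative step (a sketch of the estimate, with the charging energy neglected relative to \(h\)): transferring a pair \(n\to n+2\) through the virtual state \(\left|n+1\right\rangle\) of the opposite parity contributes \[-\frac{E_{J}^{\rm eff}}{2}=-\frac{E_{J}}{2}+\frac{(v/2)^{2}}{E_{n}-E_{n+1}}=-\frac{E_{J}}{2}\mp\frac{v^{2}}{8h},\qquad E_{n}-E_{n+1}=\mp 2h,\] with the upper (lower) sign in the even (odd) sector, which reproduces \(E_{J}^{\rm even,odd}=E_{J}\pm v^{2}/4h\).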
Because the two sectors are essentially decoupled in the large \(h\) limit, one can define the pseudo-ground state in the odd \(n\) sector, in addition to the true ground state in the even parity sector. In the pseudo-ground state, the induced Josephson coupling can offset, nullify, or switch the sign of the Josephson contribution \(E_{J}\) originating from the tunneling of the Cooper pairs. The sign change of \(E_{J}\) makes the standard Josephson junction into a \(\pi\)-junction [42; 43], a desirable element of some superconducting quantum circuits that naturally leads to bistability.
This effect could also be useful for sensing applications, including single microwave photon detection. This can be achieved by coupling a microwave photon field to the MZM tunneling \(v\) via a gate, as in the gatemon qubit [44]. From Eq. (11) it is seen that such modulation will enable photon-assisted transitions between the ground and the pseudo-ground states of the system, when the photon energy matches their splitting, \(2h\). Transition into the pseudo-ground state reduces the Josephson critical current on the junction; thus a possible detection scheme corresponds to current-biasing the junction below the value of the critical current in the ground state but above that in the pseudo-ground state. Then, photon absorption in the junction would cause the junction to jump into the resistive state, where it would stay until the system is reset back into the ground state.
We finally note that the emergent Josephson physics expressed by Eq. (41) arises due to the blockade of MZM tunneling by the large parity-splitting energy. This effect is very particular to MZMs, and
Figure 8: Illustration of energy parabolas in odd and even sectors. When Majorana hybridization becomes the largest energy scale, virtual tunneling between the two sectors produces Josephson-like coupling. The sign of this coupling is opposite in the two sectors.
Figure 7: (a) Tunnel splitting in perturbative and crossover regimes as a function of Majorana hybridization \(h\). (b) Comparison between the analytically obtained tunnel splitting in the WKB regime with that calculated numerically as a function of Majorana hybridization. Different plots show that with increasing the single particle tunneling strength \(v/E_{C}\), WKB results asymptotically coincides with that obtained numerically.
therefore can be a useful indicator for the presence of MZMs in real experimental systems, complementary to other proposed methods as in Ref. [37]. We note that related parity-sensitive effects have been discussed recently in a rather different setup hosting Majorana Kramers pairs [45; 46].
## VII Conclusion
To conclude, we developed analytical as well as numerical methods for obtaining the energy spectra of the Majorana-SC qubit hybrid. We showed that various competing effects in this hybrid device result in several operational regimes. Unique features associated with different regimes have been explored using both qualitative and quantitative approaches. Moreover, we studied the crossovers among these regimes and derived compact expressions for the crossover scales.
We demonstrated that the competition between intra-wire (\(h\)) and inter-wire (\(v\)) Majorana tunneling gives rise to the "Majorana-blockade" effect. This effect may serve as a smoking-gun for the presence of Majorana modes and can also be exploited for sensing applications.
The main qualitative feature of the Majorana-SC qubit hybrid is the possibility of coherent single electron tunneling between the island and the lead. It allows the system's energy to be a periodic function of the gate charge \(n_{g}\) with period 1 when \(E_{J}=h=0\) (as compared to period 2 for conventional superconducting devices). This feature parallels the doubling of the period in the current-phase relationship, which is considered one of the hallmarks of Josephson junctions containing MZMs [47]. It has been recognized, however, that the observation of this effect requires the junction parity to be conserved on the timescale of the experiment. Indeed, any hybridization of the junction MZMs (\(\gamma_{2}\) and \(\gamma_{3}\) in Fig. 1) with the "external" MZMs (\(\gamma_{1}\) and \(\gamma_{4}\)) allows the junction parity \(i\gamma_{2}\gamma_{3}\) to adjust to minimize energy, resulting in the standard periodicity of the current-phase relationship. In contrast, from Eq. (19), we find that the single-charge (Majorana) periodicity dominates up to a finite \(h\), given by Eq. (20). The fact that a sufficiently large \(h\) is necessary to wash out the Majorana signatures highlights the importance of the finite charging energy, which introduces dynamics. Such dynamics is absent in the derivation of the current-phase relationship, in which case the charging energy is set to zero.
## Acknowledgements
We are thankful to A. Shnirman, M. Vavilov, and J. Koch for fruitful discussions. This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
## Appendix A Equivalence between the approach of Sec. III and Ref. [33]
In this appendix we show the relationship between our approach presented in Sec. III and the approach of Ref. [33]. To this end, we choose the transformation between the electron-number and phase representations in the form
\[\psi_{n}=\frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi}d\varphi\psi(\varphi)e^{-in \varphi}. \tag{10}\]
Unlike the main text, here we do not include the \(n_{g}\) shift in the exponent of Eq. (10), and thus the standard boundary condition applies, i.e., \(\psi(\varphi+2\pi)=\psi(\varphi)\). In terms of the function \(\psi(\varphi)\), the Schrodinger equation (4) can be written as
\[E\psi(\varphi)= \left[E_{C}(-i\partial_{\varphi}-n_{g})^{2}-v\cos\varphi-E_{J} \cos 2\varphi\right]\psi(\varphi)\] \[-h\psi\left(\varphi-\pi\mathrm{sgn}\varphi\right). \tag{11}\]
When the gate voltage is explicitly included into the Hamiltonian, two components of the function \(\Psi\) defined in Eq. (7) satisfy the boundary conditions
\[\Psi_{\uparrow}(\varphi+\pi) = \Psi_{\downarrow}(\varphi),\] \[\Psi_{\downarrow}(\varphi+\pi) = \Psi_{\uparrow}(\varphi). \tag{12}\]
Equation (11) then takes the form
\[\mathbb{L}\Psi=E\Psi, \tag{13}\]
where
\[\mathbb{L}=E_{C}(-i\partial_{\varphi}-n_{g})^{2}-E_{J}\cos 2\varphi+v\cos \varphi\sigma_{z}-h\sigma_{x}. \tag{14}\]
The combination of Hamiltonian (14) and boundary conditions (12) is physically equivalent to the combination of Hamiltonian and boundary conditions in Sec. III. However, the former is more convenient for demonstrating the correspondence with the approach presented in Ref. [33].
Indeed, we can define the unitary matrix
\[\mathbb{U}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}, \tag{15}\]
which rotates the basis into
\[\begin{pmatrix}\Psi_{\mathrm{e}}\\ \Psi_{\mathrm{o}}\end{pmatrix}=\mathbb{U}\begin{pmatrix}\Psi_{\uparrow}\\ \Psi_{\downarrow}\end{pmatrix}. \tag{16}\]
The rotated Hamiltonian \(\mathbb{L}^{\prime}=\mathbb{U}\mathbb{L}\mathbb{U}^{\dagger}\) is equivalent to the Hamiltonian studied in Ref. [33] after the trivial modification \(\varphi\to 2\varphi\). This difference accounts for the fact that in this work we measure charge in units of single electrons, while Ref. [33] counts charge in units of Cooper pairs; this also results in \(n_{g}\to n_{g}/2\). The basis \(\Psi_{\mathrm{e/o}}\) satisfies, respectively, periodic and anti-periodic boundary conditions with period \(\pi\), i.e., \(\Psi_{\mathrm{e/o}}(\varphi+\pi)=\pm\Psi_{\mathrm{e/o}}(\varphi)\). This establishes a formal equivalence of our treatment and that of Ref. [33].
## Appendix B Derivation of tunnel splitting in perturbative regime
In this appendix, we outline the derivation of Eq. (18) starting from Eq. (16). For the sake of simplicity of presentation, we introduce the wave functions \(\Psi_{1,2}\) defined by \(\psi_{0}(x)=\Psi_{1}(x)\) and \(\psi_{0}(x-\pi)=\Psi_{2}(x)\). The potentials corresponding to \(\Psi_{1,2}\) then take the form
\[\mathbb{V}_{1,2}(x)=v\left(1\pm\cos x\right), \tag{19}\]
where we added the constant \(v\) to \(\mathbb{V}_{1,2}\). The wave functions \(\Psi_{1,2}\) under the semiclassical WKB approximation can be written as
\[\Psi_{1}(x) =\bigg{(}\frac{\Omega^{2}}{4\pi e}\bigg{)}^{1/4}\!\bigg{(}\! \frac{m}{|p_{1}(x)|}\bigg{)}^{1/2}\!\exp\left[-\frac{1}{\hbar}\int_{x}^{a_{1}}\! \!dy\;|p_{1}(y)|\right],\] \[\Psi_{2}(x) =\bigg{(}\frac{\Omega^{2}}{4\pi e}\bigg{)}^{1/4}\!\bigg{(}\! \frac{m}{|p_{2}(x)|}\bigg{)}^{1/2}\!\exp\left[-\frac{1}{\hbar}\int_{a_{2}}^{x} \!\!dy\;|p_{2}(y)|\right]. \tag{20}\]
To arrive at Eq. (20) we exploited the fact that near the minima of potentials (19), the corresponding wave functions are well approximated by the ground state harmonic oscillator wave functions [41]. The frequency \(\Omega\) of small amplitude oscillation around the minima of potentials (19) is given by \(m\Omega^{2}=v\), where \(m=\hbar^{2}/2E_{C}\). The classical turning points \(a_{1,2}\) are defined in terms of the parameter \(z\) satisfying \(z^{2}\simeq\hbar/m\Omega\) as
\[a_{1}=\pi-z,\ \ a_{2}=z,\ \ z\ll 1. \tag{21}\]
The semiclassical momenta \(p_{\sigma}\) in Eq. (20) can be expressed in the form
\[p_{\sigma}(x)=\sqrt{2m\left[\mathbb{V}_{\sigma}(x)-\mathbb{V}_{\sigma}(x=a_{ \sigma})\right]},\ \ \sigma=1,2. \tag{22}\]
In the limit of \(v\gg E_{C}\), the leading order contribution to the tunnel splitting given by Eq. (17) takes the form
\[\Delta^{(1)}=2mh\left(\frac{\Omega^{2}}{4\pi e}\right)^{1/2}\!\!\int_{0}^{ \pi}\!\frac{dx}{\sqrt{|p_{1}(x)||p_{2}(x)|}}\exp\left[-\frac{\mathcal{F}(x)}{ \hbar}\right], \tag{23}\]
where the function \(\mathcal{F}\) is defined by
\[\mathcal{F}(x)=-\int_{a_{1}}^{x}\!dy\;|p_{1}(y)|+\int_{a_{2}}^{x}dy\;|p_{2}(y )|. \tag{24}\]
To evaluate the integral in Eq. (23), we proceed with the saddle point method. Noting that the integrand of Eq. (23) attains its maximum at \(x=\pi/2\), we expand \(\mathcal{F}\) around this point, which gives
\[\mathcal{F}(x)\simeq\mathcal{F}\left(\frac{\pi}{2}\right)+\frac{\hbar}{z^{2} \sqrt{2}}\left(x-\frac{\pi}{2}\right)^{2}. \tag{25}\]
From equations (23) and (25), we obtain the expression for \(\Delta^{(1)}\) to the leading order in \(z\)
\[\Delta^{(1)}=h\frac{2^{1/4}z}{\sqrt{2e}}\exp\left[-\frac{\mathcal{F}(\pi/2)}{ \hbar}\right]. \tag{26}\]
In the limit of \(z\ll 1\), the evaluation of the exponential factor in the above equation yields
\[\exp\left[-\frac{\mathcal{F}(\frac{\pi}{2})}{\hbar}\right] = \frac{8\sqrt{e}}{z(\sqrt{2}\!+\!1)}\exp\!\left[-4(\sqrt{2}\!-\!1) \sqrt{\frac{v}{E_{C}}}\right]. \tag{27}\]
Using Eqs. (26) and (27), we obtain the required expression for the tunnel splitting in the perturbative regime, quoted in Eq. (18) of the main text.
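The steps above are easy to check numerically. The sketch below (illustrative units \(\hbar=E_{C}=1\), hence \(m=1/2\); the values of \(v\) and \(h\) are arbitrary choices of ours) evaluates \(\mathcal{F}(\pi/2)\) from Eqs. (22) and (24) by quadrature and compares \(e^{-\mathcal{F}(\pi/2)/\hbar}\) with the asymptotic form (27); the agreement improves as \(v/E_{C}\) grows.

```python
import numpy as np
from scipy.integrate import quad

hbar, EC, v, h = 1.0, 1.0, 200.0, 0.01      # illustrative, deep in v >> E_C
m = hbar**2 / (2.0 * EC)
Omega = np.sqrt(v / m)                      # m Omega^2 = v
z = np.sqrt(hbar / (m * Omega))
a1, a2 = np.pi - z, z                       # turning points, Eq. (21)

V1 = lambda x: v * (1.0 + np.cos(x))        # Eq. (19)
V2 = lambda x: v * (1.0 - np.cos(x))
p1 = lambda x: np.sqrt(2.0 * m * (V1(x) - V1(a1)))   # Eq. (22)
p2 = lambda x: np.sqrt(2.0 * m * (V2(x) - V2(a2)))

# F(pi/2) of Eq. (24): -int_{a1}^{pi/2} |p1| + int_{a2}^{pi/2} |p2|
F = quad(p1, np.pi / 2, a1)[0] + quad(p2, a2, np.pi / 2)[0]

exact = np.exp(-F / hbar)
asymp = 8.0 * np.sqrt(np.e) / (z * (np.sqrt(2) + 1)) \
        * np.exp(-4.0 * (np.sqrt(2) - 1) * np.sqrt(v / EC))   # Eq. (27)
Delta1 = h * 2**0.25 * z / np.sqrt(2.0 * np.e) * exact        # Eq. (26)
print(exact, asymp, Delta1)
```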
## Appendix C Derivation of tunnel splitting in WKB regime
For the evaluation of the tunnel splitting formula in the WKB regime, we focus on the double-well regime of the potential \(\mathbb{V}_{-}\) defined in Eq. (23), as shown in Fig. 4. The frequency \(\omega\) of small-amplitude oscillations around the minimum \(\varphi=\pi/2\) of the potential \(\mathbb{V}_{-}\) is given by
\[\omega^{2}=\left.\frac{d^{2}\mathbb{V}_{-}(\varphi)}{d\varphi^{2}}\right|_{ \varphi=\pi/2}=\frac{v}{m}\frac{1}{\sqrt{1+\alpha^{2}}},\ \ m=\frac{\hbar^{2}}{2E_{C}}. \tag{28}\]
The potential \(\mathbb{V}_{-}\) has left and right classical turning points at \(\varphi=\pm\pi/2\mp l\), where the potential coincides with the ground-state energy:
\[\mathbb{V}_{-}(\varphi=\pm\pi/2\mp l)=E_{0}=\frac{1}{2}\hbar\omega. \tag{29}\]
For \(l\ll\pi/2\), we use the semi-classical approximation \(l^{2}\simeq\hbar/m\omega\). In this case, the tunnel splitting corresponding to the potential \(\mathbb{V}_{-}\) in Eq. (23) is given by [41; 48]
\[\frac{\Delta^{(2)}}{E_{C}}=\frac{\hbar\omega}{E_{C}\sqrt{e\pi}}\exp\left[-\sqrt {\frac{v}{E_{C}}}\int_{-\frac{\pi}{2}+l}^{\frac{\pi}{2}-l}d\varphi\;U(\varphi) \right], \tag{30}\]
where
\[U(\varphi)=\sqrt{\sqrt{\alpha^{2}+(\cos l)^{2}}-\sqrt{\alpha^{2}+(\sin\varphi) ^{2}}}. \tag{31}\]
In the strong barrier limit \(l\ll\pi/2\), exploiting the fact that \(\mathbb{V}_{-}(\varphi=\pm\pi/2)=0\), Eq. (30) can be expressed in the form [48]
\[\frac{\Delta^{(2)}}{E_{C}}=\frac{\hbar\omega}{E_{C}}\left(\frac{m\omega\pi}{ \hbar}\right)^{1/2}\exp\mathcal{S}_{0}\ \exp\mathcal{S}_{1}, \tag{32}\]
where
\[\mathcal{S}_{0} =-\frac{1}{\hbar}\int_{-\pi/2}^{\pi/2}\sqrt{2m\mathbb{V}_{-}( \varphi)}\;d\varphi, \tag{33}\] \[\mathcal{S}_{1} =\lim_{\delta\to 0^{+}}\int_{0}^{\pi/2-\delta}\left(\frac{m\omega}{ \sqrt{2m\mathbb{V}_{-}(\varphi)}}-\frac{1}{\frac{\pi}{2}-\varphi}\right)\;d\varphi. \tag{34}\]
We calculate these integrals and substitute them into Eq. (32) to arrive at the expression for the tunnel splitting, Eq. (24).
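Alternatively, the splitting can be obtained directly from Eqs. (28)-(31) by numerical quadrature, without expanding \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\). A minimal sketch of ours (units \(\hbar=E_{C}=1\); the values of \(v\) and \(\alpha\) are illustrative, with \(\alpha\) the small parameter entering \(\mathbb{V}_{-}\) and Eq. (31)):

```python
import numpy as np
from scipy.integrate import quad

hbar, EC, v, alpha = 1.0, 1.0, 100.0, 0.05
m = hbar**2 / (2.0 * EC)
omega = np.sqrt(v / (m * np.sqrt(1.0 + alpha**2)))   # Eq. (28)
l = np.sqrt(hbar / (m * omega))                      # l^2 ~ hbar / (m omega)

# Eq. (31); real on (-pi/2 + l, pi/2 - l) with integrable endpoint zeros
U = lambda p: np.sqrt(np.sqrt(alpha**2 + np.cos(l)**2)
                      - np.sqrt(alpha**2 + np.sin(p)**2))

S = quad(U, -np.pi / 2 + l, np.pi / 2 - l)[0]
# Eq. (30): WKB tunnel splitting
Delta2 = hbar * omega / np.sqrt(np.e * np.pi) * np.exp(-np.sqrt(v / EC) * S)
print(Delta2)
```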
## Appendix D Derivation of the WKB wave function
The WKB wave function corresponding to the potential \(\mathbb{V}_{-}\), which is sketched in Fig. 4, for \(\varphi\leq 0\) can be written as [41]
\[\Psi^{\rm WKB}(\varphi)=\left(\frac{\omega^{2}}{4\pi e}\right)^{1/4}\left(\frac {m}{|p(\varphi)|}\right)^{1/2}\exp\mathscr{M}(\varphi), \tag{107}\]
where the function \(\mathscr{M}\) is defined by
\[\mathscr{M}(\varphi)=-\frac{1}{\hbar}\int_{-\frac{\pi}{2}+l}^{\varphi}\,dx|p (x)|. \tag{108}\]
Here the semiclassical momentum \(p(x)\) takes the usual form
\[p(x)=\sqrt{2m\left(\mathbb{V}_{-}(x)-\mathbb{V}_{-}(x=-\frac{\pi}{2}+l) \right)}. \tag{109}\]
We now express \(\mathscr{M}\) in the form
\[\mathscr{M}=\mathscr{M}_{0}+\mathscr{M}_{1}, \tag{110}\]
with
\[\mathscr{M}_{0}(\varphi) =-\sqrt{\frac{v}{E_{C}}}\int_{-\frac{\pi}{2}+l}^{\varphi}\,dx\sqrt {\cos l-|\sin x|}, \tag{111}\] \[\mathscr{M}_{1}(\varphi) =-\sqrt{\frac{v}{E_{C}}}\int_{-\frac{\pi}{2}+l}^{\varphi}\,dx \Bigg{[}-\sqrt{\cos l-|\sin x|}\] \[+\sqrt{\sqrt{\alpha^{2}+(\cos l)^{2}}-\sqrt{\alpha^{2}+(\sin x)^ {2}}}\Bigg{]}. \tag{112}\]
To the leading order in the small parameters \(\alpha\) and \(l\), \(\mathscr{M}_{0}\) and \(\mathscr{M}_{1}\) in the limit of \(\alpha<|\varphi|\ll 1\) take the form
\[\mathscr{M}_{0}(\varphi) =-\sqrt{\frac{v}{E_{C}}}\Bigg{[}2(\sqrt{2}-1)-|\varphi|+\frac{| \varphi|^{2}}{4}\] \[\qquad-\frac{l^{2}}{4\sqrt{2}}-\frac{l^{2}}{2\sqrt{2}}\log\left( \frac{8(\sqrt{2}-1)}{l}\right)\Bigg{]}, \tag{113}\]
and
\[\mathscr{M}_{1}(\varphi)=\eta\log\left(\frac{4\left(\sqrt{2}-1\right)^{2}}{| \varphi|}\right). \tag{114}\]
For \(\alpha\ll 1\) and \(l\ll 1\), we make further approximations
\[\frac{l^{2}}{\sqrt{2}}\sqrt{\frac{v}{E_{C}}}=1,\,\omega=\sqrt{\frac{2E_{C}v}{\hbar^{2}}},\,\,\frac{m}{|p(\varphi)|}=\frac{\hbar}{2}\sqrt{\frac{1}{E_{C}v}}. \tag{115}\]
Substituting Eqs. (110) and (113)\(-\)(115) into Eq. (107), we obtain the required expression for the wave function quoted in Eq. (29) of the main text.
## Appendix E Tunnel splitting in the crossover regime
To apply the standard method of evaluating the tunnel splitting outlined in Ref. [41] to our problem, we first need to generalize it to the case of two-component wave functions \(\Psi_{L/R}\). To this end, we define the symmetric and anti-symmetric combinations of the wave functions \(\Psi_{L/R}\) as
\[\Psi_{\pm}(\varphi)=\frac{\Psi_{L}(\varphi)\pm\Psi_{R}(\varphi)}{\sqrt{2}}, \tag{116}\]
with corresponding energies \(\epsilon_{\pm}\). Since both \(\Psi_{L}\) and \(\Psi_{+}\) obey Eq. (30), we write
\[E_{C}\frac{\partial^{2}\Psi_{L}}{\partial\varphi^{2}}-\left(v\varphi\sigma_{z} -h\sigma_{x}\right)\Psi_{L}=v\Psi_{L}, \tag{117}\]
and
\[E_{C}\frac{\partial^{2}\Psi_{+}}{\partial\varphi^{2}}-\left(v\varphi\sigma_{z} -h\sigma_{x}\right)\Psi_{+}=\epsilon_{+}\Psi_{+}. \tag{118}\]
We proceed by multiplying Eq. (117) from the left by \(\Psi_{+}\) and Eq. (118) from the left by \(\Psi_{L}\), and subtracting the resulting expressions. Integrating the result from \(-\infty\) to \(0\) gives
\[\frac{v-\epsilon_{+}}{E_{C}} =\sqrt{2}\int_{-\infty}^{0}\!\!\!d\varphi\left(\Psi_{+}\cdot \frac{\partial^{2}\Psi_{L}}{\partial\varphi^{2}}\!-\!\Psi_{L}\cdot\frac{ \partial^{2}\Psi_{+}}{\partial\varphi^{2}}\right)\] \[=2\left.\left(\Psi_{\downarrow}\frac{\partial\Psi_{\uparrow}}{ \partial\varphi}+\Psi_{\uparrow}\frac{\partial\Psi_{\downarrow}}{\partial \varphi}\right)\right|_{\varphi=0}. \tag{119}\]
To arrive at Eq. (119), we exploited the identity
\[\int_{-\infty}^{0}d\varphi\,\Psi_{L}\cdot\Psi_{+}\simeq\frac{1}{\sqrt{2}}. \tag{120}\]
The evaluation of \((v-\epsilon_{-})/E_{C}\) proceeds similarly and thus the tunnel splitting \(\Delta^{(3)}\equiv\epsilon_{-}-\epsilon_{+}\) can be written as
\[\frac{\Delta^{(3)}}{E_{C}}=4\left.\left(\Psi_{\downarrow}\frac{\partial\Psi_{ \uparrow}}{\partial\varphi}+\Psi_{\uparrow}\frac{\partial\Psi_{\downarrow}}{ \partial\varphi}\right)\right|_{\varphi=0}. \tag{121}\]
From Eqs. (33) and (37), \(\Psi_{\uparrow}\) takes the form
\[\Psi_{\uparrow}(\varphi)=\mathcal{A}\exp\left(-\sqrt{\frac{v}{E_{C}}}\varphi \right)D_{-\eta}\left[-\left(\frac{v}{E_{C}}\right)^{\frac{1}{4}}\!\varphi \right]. \tag{122}\]
Similarly, \(\Psi_{\downarrow}\) can be written as
\[\Psi_{\downarrow}(\varphi)= -\mathcal{A}\exp\left(-\sqrt{\frac{v}{E_{C}}}\varphi\right) \sqrt{\eta}\] \[\times D_{-\eta-1}\left[-\left(\frac{v}{E_{C}}\right)^{\frac{1}{4}} \varphi\right]. \tag{123}\]
Substitution of Eqs. (122) and (123) into Eq. (121) gives the required formula for the tunnel splitting in the crossover regime. In the regime of our interest, \(v\gg E_{C}\), the main contribution to the derivatives in Eq. (121) comes from the exponential factor of the wave functions \(\Psi_{\uparrow,\downarrow}\). Therefore, the leading-order term of Eq. (121) can be evaluated by neglecting the derivative of the parabolic cylinder functions. The resulting expression for the tunnel splitting takes the compact form
\[\frac{\Delta^{(3)}}{E_{C}}=8\mathcal{A}^{2}\sqrt{\frac{v}{E_{C}}}D_{-\eta}(0) \sqrt{\eta}D_{-\eta-1}(0). \tag{13}\]
Using the properties of parabolic cylinder functions, Eq. (13) can equivalently be expressed in the form of Eq. (39).
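The parabolic cylinder values entering Eq. (13) can be evaluated with standard special-function libraries; at zero argument they also admit the closed form \(D_{\nu}(0)=2^{\nu/2}\sqrt{\pi}/\Gamma\big{(}\frac{1-\nu}{2}\big{)}\). A short sketch of ours (the value of \(\eta\) is illustrative, and the normalization \(\mathcal{A}\) is kept symbolic):

```python
import numpy as np
from scipy.special import pbdv, gamma

def D(nu, x=0.0):
    """Parabolic cylinder function D_nu(x); pbdv returns (value, derivative)."""
    return pbdv(nu, x)[0]

eta = 0.3   # illustrative value
# cross-check against the closed form D_nu(0) = 2^(nu/2) sqrt(pi) / Gamma((1-nu)/2)
for nu in (-eta, -eta - 1):
    closed = 2 ** (nu / 2) * np.sqrt(np.pi) / gamma((1 - nu) / 2)
    assert np.isclose(D(nu), closed)

# parabolic-cylinder factor of Eq. (13); the prefactor 8 A^2 sqrt(v/E_C)
# involves the WKB normalization A and is omitted here
factor = np.sqrt(eta) * D(-eta) * D(-eta - 1)
print(factor)
```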
|
2309.06775 | Long-time instability of planar Poiseuille-type flow in compressible fluid | It is well-known that at the high Reynolds number, the linearized Navier-Stokes equations around the inviscid stable shear profile admit growing mode solutions due to the destabilizing effect of the viscosity. This phenomenon, called Tollmien-Schlichting instability, has been rigorously justified by Grenier-Guo-Nguyen [Adv. Math. 292 (2016); Duke J. Math. 165 (2016)] for Poiseuille flows and boundary layers in the incompressible fluid. To reveal this intrinsic instability mechanism in the compressible setting, in this paper, we study the long-time instability of the Poiseuille flow in a channel. Note that this instability arises in a low-frequency regime instead of a high-frequency regime for the Prandtl boundary layer. The proof is based on the quasi-compressible-Stokes iteration introduced by Yang-Zhang in [50] and subtle analysis of the dispersion relation for the instability. Note that we do not require symmetric conditions on the background shear flow or perturbations. | Andrew Yang, Zhu Zhang | 2023-09-13T07:50:00Z | http://arxiv.org/abs/2309.06775v1 | # Long-time instability of planar Poiseuille-type flow in compressible fluid
###### Abstract
It is well-known that at the high Reynolds number, the linearized Navier-Stokes equations around the inviscid stable shear profile admit growing mode solutions due to the destabilizing effect of the viscosity. This phenomenon, called Tollmien-Schlichting instability, has been rigorously justified by Grenier-Guo-Nguyen [Adv. Math. 292 (2016); Duke J. Math. 165 (2016)] for Poiseuille flows and boundary layers in the incompressible fluid. To reveal this intrinsic instability mechanism in the compressible setting, in this paper, we study the long-time instability of the Poiseuille flow in a channel. Note that this instability arises in a low-frequency regime instead of a high-frequency regime for the Prandtl boundary layer. The proof is based on the quasi-compressible-Stokes iteration introduced by Yang-Zhang in [50] and subtle analysis of the dispersion relation for the instability. Note that we do not require symmetric conditions on the background shear flow or perturbations.
## 1 Introduction
Stability/instability properties of laminar flows at high Reynolds numbers are a fundamental problem in _Hydrodynamic Stability Theory_. Significant progress was made by the pioneering works of Kelvin [29], Orr [42], Rayleigh [44], and Heisenberg [24], among many others. The mathematical theory in this area has developed rapidly in the last decade.
For inviscid flow, when the Reynolds number is infinite, the main stabilizing mechanism is inviscid damping. That is, due to the mixing between the vorticity and the background flow, the velocity field asymptotically converges to a laminar flow that is close to its background. This phenomenon has been rigorously justified for Couette flow in linear/nonlinear settings, for perturbations in high-regularity spaces, cf. [4; 26; 51; 52]. On the other hand, if the perturbation is less smooth, some instabilities arise that prevent the solution from converging to the shear flow. In this direction, Lin-Zeng [35] constructed time-periodic solutions close to the Couette flow in \(H^{\frac{3}{2}-}\)-space. Nonlinear instability in Gevrey \(2^{-}\) space of Couette flow was shown by Deng-Masmoudi [13]. For shear flows other than the Couette, the dynamics is more complicated even at the linear level [2; 20; 26; 34; 48; 49], and the stability in the full nonlinear setting was shown recently by Ionescu-Jia [25] and Masmoudi-Zhao [40]. We also refer to [19; 31] for nonlinear instability results on generic shear flows.
In the high Reynolds number regime, the role of viscosity is mixed. On the one hand, the viscosity can help stabilize the shear flow with the aid of an enhanced dissipation mechanism. Such a mechanism can also be used to characterize the critical size of perturbations for the nonlinear stability of the corresponding shear flows. We refer to the list of references
[3, 5, 6, 7, 9, 10, 11, 16, 20, 39, 40, 45, 49] for the research in this direction. On the other hand, small viscosity may destabilize the flow. This phenomenon, called Tollmien-Schlichting instability, cf. [18, 46, 47], typically occurs in laminar flows with boundary layers. Precisely, for the linearized incompressible Navier-Stokes equations around a Prandtl boundary layer, at high tangential frequency there exists an unstable eigenmode corresponding to the Gevrey function space with index \(\frac{3}{2}\). Substantial progress on this topic was made by Grenier-Guo-Nguyen [22], in which this phenomenon was justified with mathematical rigor. In addition, the validity of the Prandtl expansion with the critical Gevrey index \(\frac{3}{2}\) was obtained by Gerard-Varet-Maekawa-Masmoudi in [23]. For plane Poiseuille flow, this mechanism induces a long-wave type of instability, cf. [21].
One of the powerful analytic tools in the above studies for incompressible flow is the Orr-Sommerfeld equation, which describes the time evolution of the Fourier modes of the stream function. In the absence of viscosity, the Orr-Sommerfeld equation reduces to the Rayleigh equation for inviscid flow. Thus, it can be regarded as a singular perturbation of the Rayleigh equation at high Reynolds numbers. The derivation of this famous equation relies on the stream function representation of the velocity field, so that it is effective in the two-dimensional incompressible setting. However, for compressible flow, eigenvalue problems for stability cannot be reduced to a single equation like the Orr-Sommerfeld. Therefore, despite many works from the physical perspective [8, 14, 32, 33, 37, 43], mathematical results on hydrodynamic stability in compressible fluids are very few. The works that we are aware of in this direction are [1, 53] for the Couette flow in two and three-dimensional spaces (without boundary) respectively. See also [27, 28, 41] for stability of Poiseuille flow in different regimes.
In order to capture the Orr-Sommerfeld structure in the compressible setting, in [50], the authors introduced a quasi-compressible system so that the Orr-Sommerfeld type equation can be derived and used to approximate the compressible Navier-Stokes for isentropic flow. This newly derived formulation allows one to use the "incompressible techniques" to treat problems for compressible flows, and it has been proved helpful in the subsonic regime through the quasi-compressible and Stokes iteration to investigate both the Tollmien-Schlichting instability and validity of Prandtl ansatz in suitable function spaces in various physical situations. We also refer to the recent work [38] for the whole subsonic regime.
As a further exploration of the compressible flow, particularly in a channel, we will study the long-time instability of the planar Poiseuille flow in this paper. Note that although this kind of instability phenomenon has been studied in [21] for Poiseuille flows in the incompressible fluid, there is no corresponding work for compressible models. Motivated by this, we consider the two-dimensional compressible Navier-Stokes equations for isentropic flow in a channel \(\Omega=\{(x,y)\mid x\in\mathbb{R},y\in[-1,1]\}\) :
\[\begin{cases}\partial_{t}\rho+\nabla\cdot(\rho\vec{U})=0,\\ \rho\partial_{t}\vec{U}+\rho\vec{U}\cdot\nabla\vec{U}+\nabla P(\rho)=\mu\epsilon \Delta\vec{U}+\lambda\epsilon\nabla(\nabla\cdot\vec{U})+\rho\vec{F},\\ \vec{U}|_{y=\pm 1}=\vec{0}.\end{cases} \tag{1.1}\]
Here, \(\rho\), \(\vec{U}=(u,v)\) and \(P(\rho)\) represent the density, velocity field and pressure of the fluid respectively. The vector field \(\vec{F}\) is a given external force. The constants \(\mu>0,\lambda\geq 0\) are the rescaled shear and bulk viscosity, and \(0<\epsilon\ll 1\) is a small parameter which is proportional to the reciprocal of the Reynolds number. Without loss of generality, the constant \(\mu\) is set to 1 throughout the paper.
As well known, the planar Poiseuille flow is a typical steady solution profile to the Navier-Stokes equations when the tangential derivative of the pressure function is balanced by the normal diffusion of the velocity field. In the compressible setting under consideration, we can fix a class of Poiseuille-type profile as
\[(\rho_{s},\vec{\mathbf{U}}_{s})\stackrel{{\text{\tiny def}}}{{=}}(1,U_{s}(y),0),\ y\in(-1,1), \tag{1.2}\]
which is a steady solution to (1.1) with a small constant force \(\vec{F}=(-2U_{s}^{\prime\prime}\mu,0)\). Here we assume that \(U_{s}\) satisfies
\[U_{s}(\pm 1)=0,\ U_{s}^{\prime}(-1)>0,\ U_{s}^{\prime}(1)<0,\ \text{and}\ U_{s}^{ \prime\prime}(y)<0,\ \text{ for }y\in(-1,1). \tag{1.3}\]
A typical example of this class of profile is the plane Poiseuille flow \(U_{s}(y)=1-y^{2}\).
To study the stability/instability of the Poiseuille flow, we consider a small perturbation of the Poiseuille profile
\[\rho=1+\tilde{\rho},\ u=U_{s}(y)+\tilde{u},\ v=\tilde{v}.\]
By linearization, we have the following system for \((\rho,\vec{U})\)
\[\begin{cases}\partial_{t}\rho+U_{s}\partial_{x}\rho+\nabla\cdot\vec{U}=0,\ t>0,\ (x,y)\in\mathbb{R}\times(-1,1),\\ \partial_{t}\vec{U}+U_{s}\partial_{x}\vec{U}+m^{-2}\nabla\rho+v\partial_{y}U_ {s}\vec{e}_{1}-\epsilon\Delta\vec{U}-\lambda\epsilon\nabla(\nabla\cdot\vec{U} )-\rho\vec{F}=0,\\ \vec{U}|_{y=\pm 1}=\vec{0}.\end{cases} \tag{1.4}\]
Here we have dropped the tildes for notational simplicity. Note that in our setting, \(m=\frac{1}{\sqrt{P^{\prime}(1)}}\) is the Mach number because the maximum magnitude of the Poiseuille profile is \(1\) and \(\sqrt{P^{\prime}(1)}\) is the background sound speed.
For stability analysis, we look for solution \((\rho,u,v)\) to the linearized compressible Navier-Stokes system (1.4) in the following form
\[(\rho,u,v)=e^{ik(x-ct)}(\hat{\rho},\hat{u},\hat{v})(y), \tag{1.5}\]
where \(k>0\) is frequency and \(c\in\mathbb{C}\) is the spectral parameter. Substituting (1.5) into (1.4), we obtain the following eigenvalue problem for the profile function \((\hat{\rho},\hat{u},\hat{v})\)
\[\begin{cases}ik(U_{s}-c)\hat{\rho}+\text{div}_{k}(\hat{u},\hat{v})=0,\ \ y\in(-1,1),\\ -\epsilon\Delta_{k}\hat{u}-\lambda ik\epsilon\text{div}_{k}(\hat{u},\hat{v}) +ik(U_{s}-c)\hat{u}+(ikm^{-2}+\epsilon U_{s}^{\prime\prime})\hat{\rho}+\hat{ v}U_{s}^{\prime}=0,\\ -\epsilon\Delta_{k}\hat{v}-\lambda\epsilon\partial_{y}\text{div}_{k}(\hat{u},\hat{v})+ik(U_{s}-c)\hat{v}+m^{-2}\partial_{y}\hat{\rho}=0,\end{cases} \tag{1.6}\]
with no-slip boundary conditions on two boundaries \(y=\pm 1\)
\[\hat{u}|_{y=\pm 1}=\hat{v}|_{y=\pm 1}=0. \tag{1.7}\]
Here, in (1.6), \(\Delta_{k}=(\partial_{y}^{2}-k^{2})\) and \(\text{div}_{k}(u,v)=iku+\partial_{y}v\) denote the Fourier transform of Laplacian and divergence operators respectively. For convenience, we denote by \(\mathcal{L}(\hat{\rho},\hat{u},\hat{v})\) the linear operator (1.6).
The stability analysis relies on the solvability of the eigenvalue problem (1.6) with boundary conditions (1.7). In fact, if for some complex number \(c\in\mathbb{C}\) with \(\text{Im}c>0\) and frequency \(k\in\mathbb{R}\), the ODE system (1.6) with (1.7) has a non-zero solution, then the Poiseuille flow (1.2) is spectrally unstable. Otherwise, it is spectrally stable.
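Before turning to the analysis, we note that the eigenvalue problem (1.6)-(1.7) can also be explored numerically: since \(c\) enters (1.6) linearly, a Chebyshev collocation discretization yields a generalized eigenvalue problem \(AX=cBX\), whose finite eigenvalues with \(\mathrm{Im}c>0\) signal instability. The sketch below is a minimal illustration of ours, not part of the proof; it assumes the plane Poiseuille profile \(U_{s}=1-y^{2}\), and the values of \(\epsilon\), \(T_{0}\), \(m\), \(\lambda\) and the resolution \(N\) are arbitrary choices that should be varied to check convergence.

```python
import numpy as np
from scipy.linalg import eig

def cheb(N):
    """Chebyshev differentiation matrix D and grid y on [-1, 1] (Trefethen)."""
    y = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    Y = np.tile(y, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (Y - Y.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, y

N = 120
D, y = cheb(N)
D2, I, Z = D @ D, np.eye(N + 1), np.zeros((N + 1, N + 1))

eps, T0, m, lam = 1e-5, 2.0, 0.3, 1.0     # illustrative parameters
k = T0 * eps ** (1.0 / 7.0)
ik = 1j * k
Us, Usp, Uspp = np.diag(1 - y**2), np.diag(-2 * y), -2.0

# A X = c B X with X = (rho, u, v); each c-term of (1.6) is -ik c (rho, u, v)
A = np.block([
    [ik * Us, ik * I, D],
    [(ik / m**2 + eps * Uspp) * I,
     -eps * (D2 - k**2 * I) + lam * eps * k**2 * I + ik * Us,
     -lam * ik * eps * D + Usp],
    [D / m**2, -lam * ik * eps * D,
     -(1 + lam) * eps * D2 + eps * k**2 * I + ik * Us],
])
B = ik * np.block([[I, Z, Z], [Z, I, Z], [Z, Z, I]])

# no-slip conditions u = v = 0 at y = +-1 (grid endpoints are indices 0 and N)
for j in (N + 1, 2 * N + 1, 2 * (N + 1), 3 * N + 2):
    A[j, :], B[j, :] = 0.0, 0.0
    A[j, j] = 1.0

c = eig(A, B, right=False)
c = c[np.isfinite(c) & (np.abs(c) < 10)]   # drop eigenvalues from BC rows
print("largest Im c:", c[np.argmax(c.imag)])
```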
In this paper we concentrate on the case when the background profile is the planar shear flow that satisfies (1.3), and the main result is stated in the following theorem.
**Theorem 1.1**.: _Let the Mach number \(m\in\left(0,\frac{1}{\sqrt{3}}\right)\). For sufficiently small \(0<\epsilon\ll 1\), there exist a frequency \(k\sim\epsilon^{\frac{1}{7}}\) and a wave speed \(c\in\mathbb{C}\) with \(\mathrm{Im}c\sim\epsilon^{\frac{2}{7}}\), such that the linearized compressible Navier-Stokes system admits a solution in the form of_
\[(\rho,u,v)(t,x,y)=e^{ik(x-ct)}(\hat{\rho},\hat{u},\hat{v})(y).\]
We have several remarks on the above result.
**Remark 1.1**.: _Unlike the case of the boundary layer profile [22, 50], for the planar Poiseuille flow the unstable mode localizes at low frequency \(k\sim\epsilon^{\frac{1}{7}}\) rather than in a high-frequency regime, and it grows exponentially in time like \(e^{k\,\mathrm{Im}c\,t}\sim e^{\epsilon^{\frac{3}{7}}t}\). Note that the slow growth is consistent with the linear inviscid stability for Poiseuille flow, see Lees-Lin's criterion [30]._
**Remark 1.2**.: _In the work [27] by Kagei-Nishida, the instability of planar Poiseuille flows is shown in the high Mach number regime, which is different from the regime studied in this paper. Even though instabilities investigated in both papers occur at low frequencies, the mechanisms are different. In [27], the unstable eigenmode bifurcates from the zero eigenvalue of hyperbolic part of linearized operator at zero frequency, while in this paper, the instability arises from the interaction between the inviscid perturbation and boundary layers._
**Remark 1.3**.: _The dispersion relation is derived from full boundary conditions (1.7), instead of building upon symmetry assumptions on background flows and perturbations as in [21]. Thus, the analysis can be used for more generic shear flows without symmetry._
**Remark 1.4**.: _The above theorem is proved in the subsonic regime, covering the incompressible case when the Mach number is \(0\). Note that the subsonic condition is essential because the ellipticity of compressible Orr-Sommerfeld equation does not hold when the Mach number \(m\geq 1\). Hence, how to establish mathematical theory for the cases in transonic and sonic regimes remains very challenging and unsolved._
Now we briefly present the ideas and strategies for the proof of the main theorem. To match the full boundary conditions in (1.7), we need to construct four independent solutions to the eigenvalue problem (1.6). As in the case of boundary layer profiles [22, 50] and the case of Poiseuille flow under incompressible perturbations [21], the first step is to construct approximate solutions consisting of inviscid and viscous parts. The inviscid component is constructed by using the Rayleigh approximate operator in the low-frequency regime, and is usually called the slow mode. In contrast, the viscous component takes effect in a region close to the two boundaries \(y=\pm 1\) to capture interactions between the viscosity and the no-slip boundary conditions.
The next step is to construct four exact solutions to (1.6) near these approximations, or equivalently to solve the remainder system (3.1). The difficulty in the analysis comes from the lack of a stream function formulation for (1.6), so the resolvent problem (3.5) cannot be reduced to the Orr-Sommerfeld equation as in the incompressible case. To overcome this difficulty, we apply the quasi-compressible-Stokes iteration introduced in [50] for studying boundary layers to the study of flows in the channel. Roughly speaking, we introduce the quasi-compressible operator, for which an intrinsic effective stream function can be defined by taking the compressibility of the fluid into account. The Orr-Sommerfeld type equation can be derived accordingly. On the other hand, to recover the loss of regularity due to the quasi-compressible approximation, the Stokes operator is used to capture the elliptic structure of the system. The convergence of the iteration scheme will be presented in Section 3.3. The proof is based on energy estimates, which is different from the Green's function approach used in [21]. The concavity of the Poiseuille flow plays an important role in the analysis.
Finally, in Section 4, we solve the dispersion relation for the instability of the Poiseuille flow. We remark that the dispersion relation is more complicated than the one studied in [50] for the instability of the boundary layer because, in the latter case, one can choose the slow and fast modes decaying at infinity so that the far-field conditions are automatically satisfied. In this paper, to remove the symmetry assumptions made in [21], the dispersion relation is derived from the full boundary conditions (1.7): we look for the solution to the eigenvalue problem (1.6) in the following form
\[(\rho,u,v)=(\rho_{+}^{s},u_{+}^{s},v_{+}^{s})+\alpha_{1}(\rho_{-}^{s},u_{-}^{s },v_{-}^{s})+\alpha_{2}(\rho_{-}^{f},u_{-}^{f},v_{-}^{f})+\alpha_{3}(\rho_{+}^ {f},u_{+}^{f},v_{+}^{f}),\]
where \((\rho_{\pm}^{s},u_{\pm}^{s},v_{\pm}^{s})\) are slow modes and \((\rho_{\pm}^{f},u_{\pm}^{f},v_{\pm}^{f})\) are fast modes that involve boundary layer structures near \(y=\pm 1\) respectively, and constants \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) are chosen such that the following three boundary conditions are fulfilled: \(v(-1)=v(1)=u(-1)=0\). In this way, the zero point problem for the \(4\times 4\) matrix of dispersion relation is reduced into a single equation for matching the last boundary condition \(u(1)=0\). Inspired by [12; 15], we solve this by applying the Rouche theorem.
The rest of the paper is organized as follows. In the next section, we construct the approximate solutions consisting of the inviscid modes and the boundary layers, corresponding to the inviscid and viscous effects respectively. Then, in the third section, the remainders of these approximate solutions are estimated by studying the resolvent problem; the quasi-compressible and Stokes iteration is used there. The dispersion relation for the instability is solved in Section 4. Finally, we list several useful estimates in the Appendices.
_Regime of parameters:_ For the sake of readers' convenience, we list the regimes of the parameters considered in this paper. Let \(\epsilon\) be the viscosity coefficient, which is sufficiently small. We set the tangential frequency \(k=T_{0}\epsilon^{\frac{1}{7}}\), where the constant \(T_{0}\gg 1\) is sufficiently large and of \(O(1)\) with respect to \(\epsilon\). We also define the rescaled frequency \(n\stackrel{{\text{\tiny def}}}{{=}}k/\epsilon=T_{0}\epsilon^{-\frac{6}{7}}\). The wave speed is written as \(c=\text{Re}c+i\text{Im}c\) and satisfies
\[\text{Re}c\approx T_{0}^{2}\epsilon^{\frac{2}{7}},\text{ and }\text{Im}c \approx T_{0}^{-\frac{3}{2}}\epsilon^{\frac{2}{7}}. \tag{1.8}\]
The precise range of \(c\) is given in (4.29).
_Notations:_ Throughout the paper, \(C\) denotes a positive constant independent of \(\epsilon\) and may vary from line to line. \(A\lesssim B\) and \(A=O(1)B\) mean that there exists a positive constant \(C\), not depending on \(\epsilon\), such that \(A\leq CB\). Similarly, \(A\gtrsim B\) means \(A\geq CB\) for some positive constant \(C\). \(A=O(1)\epsilon^{\infty}\) means \(A\leq C_{N}\epsilon^{N}\) for any positive integer \(N\). We use the notation \(A\approx B\) if \(A\lesssim B\) and \(A\gtrsim B\). We denote by \(\|\cdot\|_{L^{2}}\) and \(\|\cdot\|_{L^{\infty}}\) respectively the standard \(L^{2}\) and \(L^{\infty}\) norm in the interval \((-1,1)\). For any non-negative integer \(s\) and \(p\in[1,\infty]\), the standard Sobolev space \(W^{s,p}=\{f\in L^{p}(-1,1)\mid\partial_{y}^{j}f\in L^{p}(-1,1),\ j=1,2,\cdots,s\}\) are used. In particular, \(H^{s}\stackrel{{\text{\tiny def}}}{{=}}W^{s,2}\).
## 2 Approximate solutions
The approximate solutions consisting of inviscid approximations and boundary layers will be given in the following two subsections. The estimates on errors generated by these approximations will be given in Section 2.3.
### Inviscid solutions
We start from the following inviscid system corresponding to (1.6)
\[\begin{cases}ik(U_{s}-c)\rho+\text{div}_{k}(u,v)=0,\ \ y\in(-1,1),\\ ik(U_{s}-c)u+m^{-2}ik\rho+vU_{s}^{\prime}=0,\\ ik(U_{s}-c)v+m^{-2}\partial_{y}\rho=0.\end{cases} \tag{2.1}\]
As in [30; 50], the solution \((\rho,u,v)\) to the inviscid system (2.1) can be represented by using a new function \(\varphi=\frac{i}{k}v\). In fact, the first equation in (2.1) gives
\[u=\partial_{y}\varphi-(U_{s}-c)\rho. \tag{2.2}\]
Substituting this into the second equation of (2.1), we can obtain the following relation between \(\rho\) and \(\varphi\):
\[-m^{-2}A(y)\rho=(U_{s}-c)\partial_{y}\varphi-\varphi U_{s}^{\prime},\]
where
\[A(y)\overset{\text{\tiny def}}{=}1-m^{2}(U_{s}-c)^{2}. \tag{2.3}\]
Note that for the Poiseuille flow in the _subsonic_ regime, that is, \(m\in(0,1)\), \(A(y)\) is invertible when the modulus \(|c|\) is sufficiently small. Thus, we can represent \(\rho\) in terms of \(\varphi\) as follows:
\[\rho=-m^{2}A^{-1}(y)\left[(U_{s}-c)\partial_{y}\varphi-\varphi U_{s}^{\prime} \right]. \tag{2.4}\]
Plugging (2.4) into the third line of (2.1), we derive the following Lees-Lin equation (Rayleigh's type) for \(\varphi\), cf. [30; 50]:
\[\text{Ray}_{\text{CNS}}(\varphi)\overset{\text{\tiny def}}{=} \partial_{y}\left[A^{-1}\left[(U_{s}-c)\partial_{y}\varphi-U_{s}^{\prime} \varphi\right]\right]-k^{2}(U_{s}-c)\varphi=0,\ \ y\in(-1,1). \tag{2.5}\]
Compared with the classical Lees-Lin equation around the Prandtl boundary layer profile studied in [50], here we need to analyze (2.5) in a finite interval \((-1,1)\) so that the boundary conditions for both \(y=1\) and \(y=-1\) need to be taken into account. For this, we construct two independent approximate solutions to the Lees-Lin equation (2.5) at low frequency \(k\ll 1\). We start from \(k=0\). In this case, the equation (2.5) reduces to
\[\partial_{y}\left\{A^{-1}\left[(U_{s}-c)\partial_{y}\varphi-U_{s}^{\prime} \varphi\right]\right\}=0,\ y\in(-1,1),\]
and it admits following two independent solutions
\[\varphi_{+}(y) =U_{s}(y)-c, \tag{2.6}\] \[\varphi_{-}(y) =(U_{s}(y)-c)\int_{0}^{y}\frac{1}{(U_{s}(x)-c)^{2}}\mathrm{d}x-m ^{2}\left(U_{s}(y)-c\right)y. \tag{2.7}\]
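Both formulas can be checked by noting that, at \(k=0\), the flux \(A^{-1}\left[(U_{s}-c)\partial_{y}\varphi-U_{s}^{\prime}\varphi\right]\) must be constant in \(y\); a direct computation gives the constant \(0\) for \(\varphi_{+}\) and \(1\) for \(\varphi_{-}\). A minimal numerical sketch of this check (our own illustration, with the plane Poiseuille profile and illustrative values of \(c\) and \(m\)):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

Us = lambda y: 1.0 - y**2          # plane Poiseuille (illustrative)
Usp = lambda y: -2.0 * y
c, m = 0.05 + 0.02j, 0.3           # illustrative complex wave speed, Mach number

y = np.linspace(-0.99, 0.99, 4001) # stay away from y = +-1 where U_s - c = -c
w = Us(y) - c
A = 1.0 - m**2 * w**2              # Eq. (2.3)

# g(y) = int_0^y (U_s - c)^(-2) dx; grid midpoint (index 2000) sits at y = 0
G = cumulative_trapezoid(1.0 / w**2, y, initial=0.0)
g = G - G[2000]

phi_minus = w * g - m**2 * w * y   # Eq. (2.7)
dphi = np.gradient(phi_minus, y)
flux = (w * dphi - Usp(y) * phi_minus) / A

# prints a small number (discretization error only): the flux equals 1
print(np.abs(flux - 1.0).max())
```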
Based on \(\varphi_{\pm}\), we construct two approximate solutions to (2.5) for \(k\ll 1\).
Define the approximate Green's function:
\[G(x,y)=-(U_{s}(x)-c)^{-1}\begin{cases}\varphi_{+}(y)\varphi_{-}(x),\ x<y,\\ \varphi_{+}(x)\varphi_{-}(y),\ x>y,\end{cases}\]
and two correctors at \(k^{2}\)-order:
\[\varphi_{\pm,k}(y)\stackrel{{\text{\tiny def}}}{{=}}\int_{-1}^{1}G(x, y)(U_{s}(x)-c)\varphi_{\pm}(x)\mathrm{d}x. \tag{2.8}\]
Set
\[\varphi_{\pm}^{s}=\varphi_{\pm}+k^{2}\varphi_{\pm,k}. \tag{2.9}\]
Plugging (2.9) into (2.5) gives
\[\mathrm{Ray}_{\mathrm{CNS}}(\varphi_{\pm}^{s})=-k^{4}(U_{s}-c)\varphi_{\pm,k}. \tag{2.10}\]
Thus, \(\varphi_{\pm}^{s}\) solves the Lees-Lin equation (2.5) up to \(O(k^{4})\).
For later use, in the following lemma we summarize the boundary values of \(\varphi_{\pm}^{s}\) at \(y=\pm 1\).
**Lemma 2.1**.: _There exists a positive constant \(\gamma_{1}\), such that for \(k\leq\gamma_{1}\) and \(|c|\leq\gamma_{1}\), the boundary values of \(\varphi_{\pm}^{s}\) have the following asymptotic behavior:_
\[\varphi_{+}^{s}(-1) =-c+k^{2}\frac{1}{U_{s}^{\prime}(-1)}\int_{-1}^{1}U_{s}^{2}(x) \mathrm{d}x+O(1)k^{2}|c\log\mathrm{Im}c|, \tag{2.11}\] \[\partial_{y}\varphi_{+}^{s}(-1) =U_{s}^{\prime}(-1)+O(1)k^{2}|\log\mathrm{Im}c|,\] (2.12) \[\varphi_{+}^{s}(1) =-c\left(1+O(1)k^{2}\right),\] (2.13) \[\partial_{y}\varphi_{+}^{s}(1) =U_{s}^{\prime}(1)+O(1)k^{2},\] (2.14) \[\varphi_{-}^{s}(\pm 1) =-\frac{1}{U_{s}^{\prime}(\pm 1)}+O(1)\bigg{(}|c\log\mathrm{Im}c|+k^{2} \bigg{)},\] (2.15) \[\partial_{y}\varphi_{-}^{s}(\pm 1) =O(1)|\log\mathrm{Im}c|. \tag{2.16}\]
Proof.: Consider \(\varphi_{+}^{s}\) first. By the definition of \(\varphi_{+}^{s}\) in (2.9) and using (2.8), we obtain the following explicit formula:
\[\varphi_{+}^{s}(y)=\varphi_{+}(y)-k^{2}\varphi_{+}(y)\bigg{(}\int_{-1}^{y} \varphi_{+}(x)\varphi_{-}(x)\mathrm{d}x\bigg{)}-k^{2}\varphi_{-}(y)\bigg{(} \int_{y}^{1}\varphi_{+}^{2}(x)\mathrm{d}x\bigg{)}. \tag{2.17}\]
Evaluating (2.17) at \(y=-1\) and using the boundary value \(\varphi_{-}(-1)\) given in (5.1), it holds that
\[\varphi_{+}^{s}(-1) =-c-k^{2}\varphi_{-}(-1)\bigg{(}\int_{-1}^{1}(U_{s}(x)-c)^{2} \mathrm{d}x\bigg{)}\] \[=-c-k^{2}\bigg{(}\frac{-1}{U_{s}^{\prime}(-1)}\int_{-1}^{1}U_{s}^ {2}(x)\mathrm{d}x+O(1)|c\log\mathrm{Im}\ c|\bigg{)}\] \[=-c+\frac{k^{2}}{U_{s}^{\prime}(-1)}\int_{-1}^{1}U_{s}^{2}(x) \mathrm{d}x+O(1)k^{2}|c\log\mathrm{Im}\ c|,\]
which is (2.11). Similarly, we evaluate \(\varphi_{+}^{s}(y)\) at \(y=1\) to obtain
\[\varphi_{+}^{s}(1) =-c+ck^{2}\bigg{[}\int_{-1}^{1}(U_{s}(x)-c)\varphi_{-}(x)\mathrm{ d}x\bigg{]}\] \[=-c\left(1+O(1)k^{2}\right).\]
Here in the last line we have used the inequality \(\left|\int_{-1}^{1}(U_{s}-c)\varphi_{-}(x)\mathrm{d}x\right|\leq C\|\varphi_{-}\| _{L^{\infty}}\leq C\), with the aid of (5.9). Thus, (2.13) follows.
Next we compute boundary values of \(\partial_{y}\varphi_{+}^{s}\) at \(y=\pm 1\). Differentiate (2.17) to obtain
\[\partial_{y}\varphi_{+}^{s}(y)=U_{s}^{\prime}(y)-k^{2}U_{s}^{\prime}(y)\int_{ -1}^{y}(U_{s}(x)-c)\varphi_{-}(x)\mathrm{d}x-k^{2}\varphi_{-}^{\prime}(y)\int _{y}^{1}(U_{s}(x)-c)^{2}\mathrm{d}x. \tag{2.18}\]
Evaluating (2.18) at \(y=-1\) and using (5.2) in Lemma 5.1, we deduce that
\[\partial_{y}\varphi_{+}^{s}(-1) = U_{s}^{\prime}(-1)-k^{2}\varphi_{-}^{\prime}(-1)\bigg{(}\int_{ -1}^{1}(U_{s}(x)-c)^{2}\mathrm{d}x\bigg{)}\] \[= U_{s}^{\prime}(-1)+O(1)k^{2}|\log\mathrm{Im}c|,\]
which yields (2.12). Similarly, we can compute
\[\partial_{y}\varphi_{+}^{s}(1) = U_{s}^{\prime}(1)-k^{2}U_{s}^{\prime}(1)\left(\int_{-1}^{1}(U_{ s}(x)-c)\varphi_{-}(x)\mathrm{d}x\right)\] \[= U_{s}^{\prime}(1)-O(1)k^{2},\]
which is (2.14). Therefore, we have completed estimates on boundary data of \(\varphi_{+}^{s}\).
Now we turn to consider \(\varphi_{-}^{s}\). By definition (2.9), it holds that
\[\varphi_{-}^{s}(y)=\varphi_{-}(y)-k^{2}\bigg{(}\int_{-1}^{y}\varphi_{-}^{2}(x )\mathrm{d}x\bigg{)}\varphi_{+}(y)-k^{2}\bigg{(}\int_{y}^{1}\varphi_{+}(x) \varphi_{-}(x)\mathrm{d}x\bigg{)}\varphi_{-}(y). \tag{2.19}\]
Then evaluating \(\varphi_{-}^{s}\) at \(y=-1\) and using (5.1) again, we have
\[\varphi_{-}^{s}(-1) = \varphi_{-}(-1)\left(1-k^{2}\int_{-1}^{1}\varphi_{+}(x)\varphi_{- }(x)\mathrm{d}x\right) \tag{2.20}\] \[= \left(\frac{-1}{U_{s}^{\prime}(-1)}+O(1)|c\log\mathrm{Im}c|\right) \left(1+O(1)k^{2}\right)\] \[= \frac{-1}{U_{s}^{\prime}(-1)}+O(1)\left(|c\log\mathrm{Im}c|+k^{2} \right).\]
Similarly, the value of \(\varphi_{-}^{s}(1)\) can be computed as follows.
\[\varphi_{-}^{s}(1) = \varphi_{-}(1)-k^{2}\bigg{(}\int_{-1}^{1}\varphi_{-}^{2}(x) \mathrm{d}x\bigg{)}\varphi_{+}(1) \tag{2.21}\] \[= \frac{-1}{U_{s}^{\prime}(1)}+O(1)\left(|c\log\mathrm{Im}c|+k^{2} \right).\]
Combining (2.20) and (2.21) together yields (2.15).
Finally we estimate the boundary values of derivative \(\partial_{y}\varphi_{-}^{s}\). Differentiating (2.19), we obtain
\[\partial_{y}\varphi_{-}^{s}(y)=\varphi_{-}^{\prime}(y)-k^{2}\bigg{(}\int_{-1}^ {y}\varphi_{-}^{2}(x)\mathrm{d}x\bigg{)}\varphi_{+}^{\prime}(y)-k^{2}\bigg{(} \int_{y}^{1}\varphi_{+}(x)\varphi_{-}(x)\mathrm{d}x\bigg{)}\varphi_{-}^{\prime }(y). \tag{2.22}\]
Evaluating (2.22) at \(y=-1\) and using Lemma 5.1, we have
\[\partial_{y}\varphi_{-}^{s}(-1)=\varphi_{-}^{\prime}(-1)\bigg{(}1-k^{2}\int_{ -1}^{1}\varphi_{-}(x)\varphi_{+}(x)\mathrm{d}x\bigg{)}=O(1)|\log\mathrm{Im}c|.\]
Similarly, we have
\[\partial_{y}\varphi_{-}^{s}(1)=\varphi_{-}^{\prime}(1)-k^{2}\bigg{(}\int_{-1}^{1} \varphi_{-}^{2}(x)\mathrm{d}x\bigg{)}U_{s}^{\prime}(1)=O(1)|\log\mathrm{Im}c|.\]
Combining these two equalities gives (2.16). Therefore, we have shown (2.11)-(2.16). The proof of Lemma 2.1 is completed.
The following estimates on \(\varphi_{\pm}^{s}\) can be derived from explicit formula in (2.6)-(2.9) and bounds on \(\varphi_{-}\) given in Lemma 5.2.
**Lemma 2.2**.: _Let the parameters \(k=T_{0}\epsilon^{\frac{1}{7}}\) and \(c\) satisfy (1.8). The approximate Lees-Lin solutions \(\varphi_{\pm}^{s}\) satisfy the following estimates, uniformly with respect to \(\epsilon\)._
\[\|\varphi_{+}^{s}\|_{W^{2,\infty}}+\epsilon^{\frac{1}{7}}\| \partial_{y}^{3}\varphi_{+}^{s}\|_{L^{2}}+\epsilon^{\frac{2}{7}}\|\partial_{y }^{3}\varphi_{+}^{s}\|_{L^{\infty}}\leq C, \tag{2.23}\] \[\|\varphi_{-}^{s}\|_{L^{\infty}}+\frac{\|\partial_{y}\varphi_{-} ^{s}\|_{L^{\infty}}}{\left|\log\epsilon\right|}+\|\partial_{y}\varphi_{-}^{s} \|_{L^{2}}+\sum_{j=2}^{4}\epsilon^{\frac{2j-3}{7}}\|\partial_{y}^{j}\varphi_{- }^{s}\|_{L^{2}}+\sum_{j=2}^{4}\epsilon^{\frac{2j-1}{7}}\|\partial_{y}^{j} \varphi_{-}^{s}\|_{L^{\infty}}\leq C. \tag{2.24}\]
Proof.: First we estimate \(\varphi_{+}^{s}\). Differentiating (2.17) yields
\[\partial_{y}\varphi_{+}^{s} =\varphi_{+}^{\prime}-k^{2}\varphi_{+}^{\prime}\left(\int _{-1}^{y}\varphi_{+}\varphi_{-}\mathrm{d}x\right)-k^{2}\varphi_{-}^{\prime} \left(\int_{y}^{1}\varphi_{+}^{2}\mathrm{d}x\right), \tag{2.25}\] \[\partial_{y}^{2}\varphi_{+}^{s} =\varphi_{+}^{\prime\prime}-k^{2}\varphi_{+}^{\prime\prime} \left(\int_{-1}^{y}\varphi_{+}\varphi_{-}\mathrm{d}x\right)-k^{2}\varphi_{-}^ {\prime\prime}\left(\int_{y}^{1}\varphi_{+}^{2}\mathrm{d}x\right)-k^{2} \varphi_{+}^{\prime}\varphi_{+}\varphi_{-}+k^{2}\varphi_{-}^{\prime}\varphi_{ +}^{2},\] \[\partial_{y}^{3}\varphi_{+}^{s} =\varphi_{+}^{\prime\prime\prime}-k^{2}\varphi_{+}^{\prime\prime \prime}\left(\int_{-1}^{y}\varphi_{+}\varphi_{-}\mathrm{d}x\right)-k^{2} \varphi_{-}^{\prime\prime\prime}\left(\int_{y}^{1}\varphi_{+}^{2}\mathrm{d}x\right)\] \[\quad+2k^{2}\varphi_{-}^{\prime\prime}\varphi_{+}^{2}-2k^{2} \varphi_{+}^{\prime\prime}\varphi_{+}\varphi_{-}+k^{2}\varphi_{+}\varphi_{+}^ {\prime}\varphi_{-}^{\prime}-k^{2}(\varphi_{+}^{\prime})^{2}\varphi_{-}.\]
By the explicit formula (2.25), using the bounds \(\|\partial_{y}^{j}\varphi_{+}\|_{L^{\infty}}\leq C\), for \(j=0,1,2,3\), and (5.9) in Lemma 5.2, we can obtain, for \(k\approx\epsilon^{\frac{1}{7}}\) and \(\mathrm{Im}c\approx\epsilon^{\frac{2}{7}}\), that
\[\|\varphi_{+}^{s}\|_{W^{2,\infty}} \leq C\left(1+k^{2}\|\varphi_{-}\|_{W^{2,\infty}}\right)\leq C \left(1+k^{2}|\mathrm{Im}c|^{-1}\right)\leq C,\] \[\|\partial_{y}^{3}\varphi_{+}^{s}\|_{L^{2}} \leq C\left(1+k^{2}\|\varphi_{-}\|_{H^{3}}\right)\leq C\left(1+k^{ 2}|\mathrm{Im}c|^{-\frac{3}{2}}\right)\leq C\epsilon^{-\frac{1}{7}},\] \[\|\partial_{y}^{3}\varphi_{+}^{s}\|_{L^{\infty}} \leq C\left(1+k^{2}\|\varphi_{-}\|_{W^{3,\infty}}\right)\leq C \left(1+k^{2}|\mathrm{Im}c|^{-2}\right)\leq C\epsilon^{-\frac{2}{7}}.\]
Combining these inequalities together yields (2.23). Similarly, for \(\varphi_{-}^{s}\), we differentiate (2.19) up to the fourth order and deduce that
\[\partial_{y}\varphi_{-}^{s} =\varphi_{-}^{\prime}-k^{2}\left(\int_{-1}^{y}\varphi_{-}^{2} \mathrm{d}x\right)\varphi_{+}^{\prime}-k^{2}\left(\int_{y}^{1}\varphi_{+} \varphi_{-}\mathrm{d}x\right)\varphi_{-}^{\prime}, \tag{2.26}\] \[\partial_{y}^{2}\varphi_{-}^{s} =\varphi_{-}^{\prime\prime}-k^{2}\varphi_{+}^{\prime\prime} \left(\int_{-1}^{y}\varphi_{-}^{2}\mathrm{d}x\right)-k^{2}\varphi_{-}^{\prime\prime} \left(\int_{y}^{1}\varphi_{+}\varphi_{-}\mathrm{d}x\right)-k^{2}\varphi_{+}^{ \prime}\varphi_{-}^{2}+k^{2}\varphi_{-}^{\prime}\varphi_{+}\varphi_{-},\] \[\partial_{y}^{3}\varphi_{-}^{s} =\varphi_{-}^{\prime\prime\prime}-k^{2}\varphi_{+}^{\prime\prime \prime}\left(\int_{-1}^{y}\varphi_{-}^{2}\mathrm{d}x\right)-k^{2}\varphi_{-}^{ \prime\prime\prime}\left(\int_{y}^{1}\varphi_{+}\varphi_{-}\mathrm{d}x\right)\] \[\quad-2k^{2}\varphi_{+}^{\prime\prime}\varphi_{-}^{2}+2k^{2} \varphi_{-}^{\prime\prime}\varphi_{+}\varphi_{-}+k^{2}\varphi_{+}(\varphi_{-}^{ \prime})^{2}-k^{2}\varphi_{-}^{\prime}\varphi_{+}^{\prime}\varphi_{-},\] \[\partial_{y}^{4}\varphi_{-}^{s} =\partial_{y}^{4}\varphi_{-}-k^{2}\partial_{y}^{4}\varphi_{+}\left( \int_{-1}^{y}\varphi_{-}^{2}\mathrm{d}x\right)-k^{2}\partial_{y}^{4}\varphi_{-} \left(\int_{y}^{1}\varphi_{+}\varphi_{-}\mathrm{d}x\right)-3k^{2}\varphi_{+}^{ \prime\prime\prime}\varphi_{-}^{2}+3k^{2}\varphi_{-}^{\prime\prime\prime} \varphi_{+}\varphi_{-}\] \[\quad-5k^{2}\varphi_{+}^{\prime\prime}\varphi_{-}\varphi_{-}^{ \prime}+4k^{2}\varphi_{-}^{\prime\prime}\varphi_{-}^{\prime}\varphi_{+}+k^{2} \varphi_{-}^{\prime\prime}\varphi_{+}^{\prime}\varphi_{-}.\]
Then from the explicit formula in (2.26), it holds, for \(j=0,1,2,3,4\), that
\[\|\partial_{y}^{j}\varphi_{-}^{s}\|_{L^{p}}\leq C\left(\| \varphi_{-}\|_{W^{j,p}}+1\right),\ p=2\ \text{or}\ \infty. \tag{2.27}\]
The estimate (2.24) follows from (2.27) and (5.9). Therefore, the proof of Lemma 2.2 is completed.
Based on the approximate solutions \(\varphi_{\pm}^{s}\) to the Lees-Lin equation (2.5), we can define the following two inviscid modes: \(\vec{\Xi}_{\pm,\text{app}}^{s}=(\rho_{\pm,\text{app}}^{s},u_{\pm,\text{app}}^{ s},v_{\pm,\text{app}}^{s})\), where
\[v_{\pm,\text{app}}^{s} =-ik\varphi_{\pm}^{s}, \tag{2.28}\] \[\rho_{\pm,\text{app}}^{s} =-m^{2}A^{-1}(y)\bigg{[}(U_{s}-c)\partial_{y}\varphi_{\pm}^{s}- \varphi_{\pm}^{s}U_{s}^{\prime}\bigg{]},\] (2.29) \[u_{\pm,\text{app}}^{s} =\partial_{y}\varphi_{\pm}^{s}-(U_{s}-c)\rho_{\pm,\text{app}}^{s}. \tag{2.30}\]
From (2.10), we can see that \(\vec{\Xi}_{\pm,\text{app}}^{s}\) solve the inviscid system (2.1) up to \(k^{4}\).
From Lemma 2.1, the following asymptotic expansions of boundary data for \(u_{\pm,\text{app}}^{s}\) and \(v_{\pm,\text{app}}^{s}\) are obtained.
**Lemma 2.3**.: _Under the same assumption on \(k\) and \(c\) as in Lemma 2.1, it holds that_
\[v_{+,app}^{s}(-1) =ik\left(c-\frac{k^{2}}{U_{s}^{\prime}(-1)}\int_{-1}^{1}U_{s}^{2} (x)\mathrm{d}x\right)+O(1)k^{3}|c\log\mathrm{Im}c|, \tag{2.31}\] \[v_{+,app}^{s}(1) =ikc+O(1)k^{3}|c|,\] (2.32) \[v_{-,app}^{s}(\pm 1) =\frac{ik}{U_{s}^{\prime}(\pm 1)}+O(1)k\left(|c|+k^{2}\right)| \log\mathrm{Im}c|,\] (2.33) \[u_{+,app}^{s}(\pm 1) =U_{s}^{\prime}(\pm 1)+O(1)\left(|c|+k^{2}\right)|\log\mathrm{Im}c|,\] (2.34) \[u_{-,app}^{s}(\pm 1) =O(1)|\log\mathrm{Im}c|. \tag{2.35}\]
Proof.: With Lemma 2.1 for the boundary values of \(\varphi_{\pm}^{s}\), the expansions (2.31)-(2.33) for \(v_{\pm,\text{app}}^{s}\) directly follow from (2.28). Next we show (2.34) and (2.35). From Lemma 2.1, the relation (2.29) and boundary conditions \(U_{s}(\pm 1)=0\), we have
\[\rho_{+,\text{app}}^{s}(\pm 1) =m^{2}A^{-1}(\pm 1)\left[-c\partial_{y}\varphi_{+}^{s}(\pm 1)- \varphi_{+}^{s}(\pm 1)U_{s}^{\prime}(\pm 1)\right]\] \[=O(1)\left(k^{2}+|c|\right), \tag{2.36}\]
and
\[\rho_{-,\text{app}}^{s}(\pm 1) =m^{2}A^{-1}(\pm 1)\left[-c\partial_{y}\varphi_{-}^{s}(\pm 1)- \varphi_{-}^{s}(\pm 1)U_{s}^{\prime}(\pm 1)\right]=O(1). \tag{2.37}\]
Thus from (2.30) for \(u_{+,\text{app}}^{s}\) and the boundary values (2.12), (2.14) and (2.36), we obtain
\[u_{+,\text{app}}^{s}(\pm 1) =\partial_{y}\varphi_{+}^{s}(\pm 1)+O(1)|c|\left|\rho_{+,\text{app} }^{s}(\pm 1)\right|\] \[=U_{s}^{\prime}(\pm 1)+\left(k^{2}+|c|\right)|\log\mathrm{Im}c|,\]
which implies (2.34). Similarly, from (2.30) for \(u_{-,\text{app}}^{s}\), (2.16) and (2.37), there holds
\[u_{-,\text{app}}^{s}(\pm 1) =\partial_{y}\varphi_{-}^{s}(\pm 1)+O(1)|c|\left|\rho_{-,\text{app} }^{s}(\pm 1)\right|=O(1)|\log\mathrm{Im}c|.\]
This yields (2.35). Therefore, the proof of Lemma 2.3 is completed.
Using Lemma 2.2 and (2.28)-(2.30), we can obtain the following estimates on \(\vec{\Xi}_{\pm,\text{app}}^{s}\).
**Corollary 2.1**.: _Under the same assumptions on \(k\) and \(c\) as in Lemma 2.2, the inviscid modes \(\vec{\Xi}_{\pm,\text{app}}^{s}=(\rho_{\pm,\text{app}}^{s},u_{\pm,\text{app}}^{s},v_{\pm,\text{app}}^{s})\) have the following bounds:_
\[\|v_{+,\text{app}}^{s}\|_{W^{2,\infty}}+\left\|\left(m^{-2}\rho_{+,\text{app}}^ {s},u_{+,\text{app}}^{s}\right)\right\|_{W^{1,\infty}}+\epsilon^{\frac{1}{7}} \left\|\left(m^{-2}\partial_{y}^{2}\rho_{+,\text{app}}^{s},\partial_{y}^{2}u_ {+,\text{app}}^{s}\right)\right\|_{L^{2}}\leq C, \tag{2.38}\]
_and_
\[\|v_{-,\text{app}}^{s}\|_{H^{2}} +\epsilon^{\frac{2}{7}}\|\partial_{y}^{3}v_{-,\text{app}}^{s}\| _{L^{2}}+\left\|\left(m^{-2}\rho_{-,\text{app}}^{s},u_{-,\text{app}}^{s} \right)\right\|_{L^{2}}\] \[+\sum_{j=1}^{3}\epsilon^{\frac{2j-1}{7}}\left\|\left(m^{-2} \partial_{y}^{j}\rho_{-,\text{app}}^{s},\partial_{y}^{j}u_{-,\text{app}}^{s} \right)\right\|_{L^{2}}\leq C. \tag{2.39}\]
**Remark 2.1**.: _Note that in the above estimates, compared with \(\vec{\Xi}_{+,\text{app}}^{s}\), we need a bound on one more derivative of \(\vec{\Xi}_{-,\text{app}}^{s}\) in order to get better estimates on the boundary values of the remainder for \(\vec{\Xi}_{-,\text{app}}^{s}\). See Section 4 for details._
Proof.: The bounds for \(v_{\pm,\text{app}}^{s}\) are straightforward. That is, by using (2.28) and bounds for \(\varphi_{\pm}^{s}\) in Lemma 2.2, we obtain
\[\|v_{+,\text{app}}^{s}\|_{W^{2,\infty}}\leq C\|\varphi_{+}^{s}\|_ {W^{2,\infty}}\leq C, \tag{2.40}\] \[\|v_{-,\text{app}}^{s}\|_{H^{2}}+\epsilon^{\frac{2}{7}}\|\partial _{y}^{3}v_{-,\text{app}}^{s}\|_{L^{2}}\leq\|\varphi_{-}^{s}\|_{H^{2}}+\epsilon ^{\frac{3}{7}}\|\partial_{y}^{3}\varphi_{-}^{s}\|_{L^{2}}\leq C. \tag{2.41}\]
For the density and tangential velocity component, we have from the relation (2.29) and (2.30) that
\[m^{-2}\rho_{\pm,\text{app}}^{s},\ \ u_{\pm,\text{app}}^{s}\approx|\partial_{y} \varphi_{\pm}^{s}|+|\varphi_{\pm}^{s}|.\]
Thus
\[\left\|\left(m^{-2}\rho_{+,\text{app}}^{s},u_{+,\text{app}}^{s}\right)\right\| _{W^{1,\infty}}\leq C\|\varphi_{+}^{s}\|_{W^{2,\infty}}\leq C,\]
and
\[\left\|\left(m^{-2}\partial_{y}^{2}\rho_{+,\text{app}}^{s},\partial_{y}^{2}u_ {+,\text{app}}^{s}\right)\right\|_{L^{2}}\leq C\|\varphi_{+}^{s}\|_{H^{3}} \leq C\epsilon^{-\frac{1}{7}}.\]
Combining above estimates with (2.40) yields the bound (2.38) for \(\vec{\Xi}_{+,\text{app}}^{s}\). Similarly, for \(\rho_{-,\text{app}}^{s}\) and \(u_{-,\text{app}}^{s}\), we have
\[\left\|\left(m^{-2}\rho_{-,\text{app}}^{s},u_{-,\text{app}}^{s} \right)\right\|_{L^{2}} +\sum_{j=1}^{3}\epsilon^{\frac{2j-1}{7}}\left\|\left(m^{-2} \partial_{y}^{j}\rho_{-,\text{app}}^{s},\partial_{y}^{j}u_{-,\text{app}}^{s} \right)\right\|_{L^{2}}\] \[\leq C\|\varphi_{-}^{s}\|_{H^{1}}+C\sum_{j=2}^{4}\epsilon^{\frac{2 j-3}{7}}\|\partial_{y}^{j}\varphi_{-}^{s}\|_{L^{2}}\leq C.\]
Here we have used (2.24) in the last inequality. Combining this with (2.41) for \(v_{-,\text{app}}^{s}\) gives (2.39). Therefore, the proof of Corollary 2.1 is completed.
### Boundary layers
To match the exact boundary conditions in (1.7), we need to construct two boundary layers near \(y=\pm 1\). Firstly, consider the boundary layer on the bottom \(y=-1\). For this, we introduce the fast variable
\[z=\frac{1+y}{\delta}\]
where \(\delta>0\) is the scale of boundary layer to be determined later. The approximate solution to (1.6) is in the following form
\[\Xi_{-,\text{app}}^{f}=(\underline{\rho}^{-},\underline{u}^{-},ik\delta \underline{v}^{-})(z). \tag{2.42}\]
Substituting the ansatz (2.42) into (1.6), we obtain the following system in the new variable \(z\):
\[\begin{cases}(U_{s}-c)\underline{\rho}^{-}+\underline{u}^{-}+ \partial_{z}\underline{v}^{-}=0,\\ -\partial_{z}^{2}\underline{u}^{-}+\frac{ik\delta^{2}}{\epsilon}(U_{s}-c) \underline{u}^{-}+\frac{ik\delta^{3}}{\epsilon}\underline{v}^{-}U_{s}^{\prime }\\ +\frac{\delta^{2}}{\epsilon}(ikm^{-2}+\epsilon U^{\prime\prime}) \underline{\rho}^{-}+(\lambda+1)k^{2}\delta^{2}\underline{u}^{-}+\lambda k^{ 2}\delta^{2}\partial_{z}\underline{v}^{-}=0,\\ -\partial_{z}^{2}\underline{v}^{-}+\frac{ik\delta^{2}}{\epsilon}(U_{s}-c) \underline{v}^{-}+\frac{m^{-2}}{ik\epsilon}\partial_{z}\underline{\rho}^{-}+ k^{2}\delta^{2}\underline{v}^{-}-\lambda\partial_{z}(\underline{u}^{-}+ \partial_{z}\underline{v}^{-})=0.\end{cases} \tag{2.43}\]
To obtain the leading order profile \((\underline{\rho}^{-},\underline{u}^{-},\underline{v}^{-})\) near \(y=-1\), we expand \(U_{s}(y)\) as follows:
\[U_{s}(y)-c =U_{s}(-1)-c+U_{s}^{\prime}(-1)(1+y)+O(1)(1+y)^{2}\] \[=U_{s}^{\prime}(-1)\delta z-c+O(1)|\delta z|^{2}\] \[=\delta U_{s}^{\prime}(-1)(z+z_{0})+O(1)|\delta z|^{2}, \tag{2.44}\]
where the boundary condition \(U_{s}(-1)=0\) of Poiseuille flow has been used. Here,
\[z_{0}=\frac{-c}{U_{s}^{\prime}(-1)\delta}. \tag{2.45}\]
We can also expand \(U_{s}^{\prime}(y)\) at \(y=-1\) as
\[U_{s}^{\prime}(y)=U_{s}^{\prime}(-1)+O(1)|\delta z|. \tag{2.46}\]
Now we plug (2.44) and (2.46) into (2.43) and choose the scale
\[\delta=\frac{e^{-\frac{1}{6}\pi i}}{\left[U_{s}^{\prime}(-1)n\right]^{\frac{ 1}{3}}},\text{ where }n=\frac{k}{\epsilon}, \tag{2.47}\]
to balance the first three terms in the second equation of (2.43). Then, by taking the leading order of (2.43), the following system for the boundary layer profile \((\underline{\rho}^{-},\underline{u}^{-},\underline{v}^{-})\) is derived.
\[\underline{\rho}^{-}=0,\ \underline{u}^{-}+\partial_{z}\underline{v}^{-}=0,\ - \partial_{z}^{2}\underline{u}^{-}+(z+z_{0})\underline{u}^{-}+\underline{v}^{-}=0. \tag{2.48}\]
From the above system, we can obtain the following equation for \(\underline{v}^{-}\).
\[-\partial_{z}^{4}\underline{v}^{-}+(z+z_{0})\partial_{z}^{2}\underline{v}^{- }=0. \tag{2.49}\]
As in [22, 23, 50], to solve (2.49) we use the function \(\mathrm{Ai}(z)\) which is the solution to the Airy equation:
\[\partial_{z}^{2}\mathrm{Ai}-z\mathrm{Ai}=0.\]
The first and second order primitive functions of \(\mathrm{Ai}(z)\) are denoted by \(\mathrm{Ai}(1,z)\) and \(\mathrm{Ai}(2,z)\) respectively. They satisfy
\[\partial_{z}\mathrm{Ai}(2,z)=\mathrm{Ai}(1,z),\text{ and }\partial_{z} \mathrm{Ai}(1,z)=\mathrm{Ai}(z).\]
Moreover, \(\mathrm{Ai}(z)\), \(\mathrm{Ai}(1,z)\) and \(\mathrm{Ai}(2,z)\) decay at infinity along the straight line \(e^{\frac{1}{6}\pi i}\mathbb{R}_{+}\). Set
\[\underline{v}^{-}=\frac{\mathrm{Ai}(2,z+z_{0})}{\mathrm{Ai}(2,z_{0})},\ \underline{u}^{-}=-\frac{\mathrm{Ai}(1,z+z_{0})}{\mathrm{Ai}(2,z_{0})}.\]
It is straightforward to check that \((\underline{u}^{-},\underline{v}^{-})\) satisfies (2.48). In view of (2.42), we define the boundary layer profile at \(y=-1\) as follows:
\[\vec{\Xi}^{f}_{-,\mathrm{app}}=(\rho_{-}^{f},u_{-}^{f},v_{-}^{f})=\left(0, \underline{u}^{-},ik\delta\underline{v}^{-}\right)(z),\ \ z=e^{\frac{1}{6}\pi i}(1+y)\left[U_{s}^{\prime}(-1)n\right]^{\frac{1}{3}}. \tag{2.50}\]
The boundary layer at \(y=1\) can be constructed similarly. Set the fast variable
\[\tilde{z}=\frac{1-y}{\tilde{\delta}},\]
where the scale
\[\tilde{\delta}=\frac{e^{-\frac{1}{6}\pi i}}{\left[-U_{s}^{\prime}(1)n\right]^ {\frac{1}{3}}} \tag{2.51}\]
of boundary layer can be derived in the same way as (2.47). Following (2.42)-(2.50), we can construct the boundary layer profile near \(y=1\):
\[\vec{\Xi}^{f}_{+,\mathrm{app}}=(\rho_{+}^{f},u_{+}^{f},v_{+}^{f})=\left(0, \underline{u}^{+},ik\tilde{\delta}\underline{v}^{+}\right)(\tilde{z}), \tag{2.52}\]
where
\[\underline{v}^{+}=\frac{\mathrm{Ai}(2,\tilde{z}+\tilde{z}_{0})}{\mathrm{Ai}( 2,\tilde{z}_{0})},\ \underline{u}^{+}=\frac{\mathrm{Ai}(1,\tilde{z}+\tilde{z}_{0})}{\mathrm{Ai}(2, \tilde{z}_{0})},\ \text{ with }\tilde{z}_{0}=\frac{c}{U_{s}^{\prime}(1)\tilde{\delta}}. \tag{2.53}\]
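For numerical purposes, the primitives \(\mathrm{Ai}(1,\cdot)\) and \(\mathrm{Ai}(2,\cdot)\) can be computed by quadrature along the ray of decay: with \(\omega=e^{\frac{1}{6}\pi i}\), the functions \(\mathrm{Ai}(1,z)=-\omega\int_{0}^{\infty}\mathrm{Ai}(z+s\omega)\,\mathrm{d}s\) and \(\mathrm{Ai}(2,z)=\omega^{2}\int_{0}^{\infty}s\,\mathrm{Ai}(z+s\omega)\,\mathrm{d}s\) satisfy \(\partial_{z}\mathrm{Ai}(1,z)=\mathrm{Ai}(z)\) and \(\partial_{z}\mathrm{Ai}(2,z)=\mathrm{Ai}(1,z)\) and decay along \(e^{\frac{1}{6}\pi i}\mathbb{R}_{+}\). The sketch below is our own illustration (\(\epsilon\) and \(T_{0}\) are arbitrary, with \(U_{s}^{\prime}(-1)=2\) for the plane Poiseuille flow, and the \(\approx\) in (1.8) taken as equality); it evaluates the boundary value of the fast mode appearing in Lemma 2.5 below.

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import trapezoid

w = np.exp(1j * np.pi / 6)            # decay direction of Ai(1,.), Ai(2,.)
s = np.linspace(0.0, 25.0, 20001)     # truncated ray parameter

def Ai1(z):
    """Ai(1, z) = -w int_0^inf Ai(z + s w) ds, so d/dz Ai(1, z) = Ai(z)."""
    return -w * trapezoid(airy(z + s * w)[0], s)

def Ai2(z):
    """Ai(2, z) = w^2 int_0^inf s Ai(z + s w) ds, so d/dz Ai(2, z) = Ai(1, z)."""
    return w**2 * trapezoid(s * airy(z + s * w)[0], s)

# illustrative parameters; c from (1.8), delta and z0 from (2.47), (2.45)
eps, T0 = 1e-5, 2.0
n = T0 * eps ** (-6.0 / 7.0)
delta = np.exp(-1j * np.pi / 6) / (2.0 * n) ** (1.0 / 3.0)
c = T0**2 * eps ** (2.0 / 7.0) + 1j * T0 ** (-1.5) * eps ** (2.0 / 7.0)
z0 = -c / (2.0 * delta)

# boundary value of the fast mode at y = -1, cf. Lemma 2.5
print("u_-^f(-1) =", -Ai1(z0) / Ai2(z0))
```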
In the next lemma, we summarize some pointwise estimates on \(\vec{\Xi}^{f}_{\pm,\mathrm{app}}\). The proof follows from Lemma 3.9 in [36]; we omit the details for brevity.
**Lemma 2.4**.: _For \(j=0,1,2\) and \(l\geq 0\), there exists a positive constant \(\tau_{1}\), such that the boundary layers \(\vec{\Xi}^{f}_{\pm,\mathrm{app}}\) satisfy the following pointwise bounds:_
\[\left|(1\pm y)^{l}\partial_{y}^{j}v_{\pm,\mathrm{app}}^{f}\right| \leq C|n|^{\frac{2(j-l)-3}{6}}e^{-\tau_{1}n^{\frac{1}{3}}(1\pm y)}, \tag{2.54}\] \[\left|(1\pm y)^{l}\partial_{y}^{j}u_{\pm,\mathrm{app}}^{f}\right| \leq C|n|^{\frac{j-l}{3}}e^{-\tau_{1}n^{\frac{1}{3}}(1\pm y)}, \tag{2.55}\]
_for any \(y\in(-1,1).\) Here the constant \(C\) is independent of \(\epsilon\)._
By using the explicit formulas (2.50), (2.52) and the pointwise estimates (2.54), (2.55), we obtain the following lemma on the asymptotic behavior of the boundary values of \((u_{\pm,\mathrm{app}}^{f},v_{\pm,\mathrm{app}}^{f})\).
**Lemma 2.5**.: _For sufficiently small \(\epsilon\), boundary values of \((u^{f}_{\pm,\text{app}},v^{f}_{\pm,\text{app}})\) have the following expansions:_
\[u^{f}_{+,\text{app}}(1)=\frac{Ai(1,\tilde{z}_{0})}{Ai(2,\tilde{z}_{0} )},\ v^{f}_{+,\text{app}}(1)=ik\tilde{\delta},\ \ u^{f}_{+,\text{app}}(-1),v^{f}_{+,\text{app}}(-1)=O(1) \epsilon^{\infty}, \tag{2.56}\] \[u^{f}_{-,\text{app}}(-1)=-\frac{Ai(1,z_{0})}{Ai(2,z_{0})},\ v^{f }_{-,\text{app}}(-1)=ik\delta,\ \ u^{f}_{-,\text{app}}(1),v^{f}_{-,\text{app}}(1)=O(1) \epsilon^{\infty}. \tag{2.57}\]
Proof.: We only show (2.56). Boundary conditions for \((u^{f}_{+,\text{app}},v^{f}_{+,\text{app}})\) at \(y=1\) directly follow from the explicit formula (2.52) and (2.53). From pointwise estimates (2.54) and (2.55) for \(j=l=0\), we can obtain
\[\left|v^{f}_{+,\text{app}}(-1)\right| \leq Cn^{-\frac{1}{2}}e^{-2\tau_{1}n^{\frac{1}{3}}}\leq Ce^{-2 \tau_{1}\epsilon^{-\frac{2}{7}}},\] \[\left|u^{f}_{+,\text{app}}(-1)\right| \leq Ce^{-2\tau_{1}n^{\frac{1}{3}}}\leq Ce^{-\tau_{1}\epsilon^{- \frac{2}{7}}}.\]
The proof of (2.57) is similar.
### Error estimates
Based on the approximate slow and fast modes constructed in the previous two subsections, we now summarize the formulation and related estimates on errors generated by the approximations. Recall the inviscid modes \(\vec{\Xi}^{s}_{\pm,\text{app}}\) defined in (2.28)-(2.30), and the viscous modes \(\vec{\Xi}^{f}_{\pm,\text{app}}\) given by (2.50) and (2.52), respectively. Plugging these solutions into (1.6), we obtain the following error functions:
\[\vec{E}^{s}_{\pm}\overset{\text{\tiny def}}{=}\mathcal{L}(\vec{ \Xi}^{s}_{\pm,\text{app}})=\bigg{(}0, -\epsilon\Delta_{k}u^{s}_{\pm,\text{app}}-\lambda\epsilon ik\text{ div}_{k}(u^{s}_{\pm,\text{app}},v^{s}_{\pm,\text{app}})+\epsilon U^{\prime\prime}_{s} \rho^{s}_{\pm,\text{app}},\] \[-\epsilon\Delta_{k}v^{s}_{\pm,\text{app}}-\lambda\epsilon\partial _{y}\text{ div}_{k}(u^{s}_{\pm,\text{app}},v^{s}_{\pm,\text{app}})-k^{4}(U_{s}-c)\varphi_{ \pm,k}\bigg{)}, \tag{2.58}\]
where \(\varphi_{\pm,k}\) are defined in (2.8), and
\[\vec{E}^{f}_{\pm}\overset{\text{\tiny def}}{=}\mathcal{L}(\vec{ \Xi}^{f}_{\pm,\text{app}})=\bigg{(}0, \ k^{2}\epsilon u^{f}_{\pm,\text{app}}+ik\left[U_{s}(y)-U^{\prime}_{s}(\pm 1 )(1\pm y)\right]u^{f}_{\pm,\text{app}}\] \[+\left(U^{\prime}_{s}(y)-U^{\prime}_{s}(\pm 1)\right)v^{f}_{\pm, \text{app}},\ -\epsilon\Delta_{k}v^{f}_{\pm,\text{app}}+ik(U_{s}-c)v^{f}_{\pm, \text{app}}\bigg{)}. \tag{2.59}\]
Based on bounds in Corollary 2.1 and Lemma 2.4, we have the following estimates on error terms \(\vec{E}^{s}_{\pm}\) and \(\vec{E}^{f}_{\pm}\).
**Proposition 2.1**.: _Decompose the error function \(\vec{E}^{s}_{+}\) as_
\[\vec{E}^{s}_{+}=\vec{E}^{s}_{+,1}+\vec{E}^{s}_{+,2}, \tag{2.60}\]
_where \(\vec{E}^{s}_{+,2}=(0,0,-k^{4}(U_{s}-c)\varphi_{+,k})\). Then it holds that_
\[\|\vec{E}^{s}_{+,1}\|_{L^{2}}\leq C\epsilon^{\frac{6}{7}},\ \|\vec{E}^{s}_{+,2}\|_{H^{1}}\leq C\epsilon^{\frac{4}{7}}. \tag{2.61}\]
_The error function \(\vec{E}^{s}_{-}\) satisfies the following uniform estimate:_
\[\|\vec{E}^{s}_{-}\|_{L^{2}}+\epsilon^{\frac{2}{7}}\|\partial_{y}\vec{E}^{s}_{- }\|_{L^{2}}\leq C\epsilon^{\frac{4}{7}}. \tag{2.62}\]
_For the error functions \(\vec{E}^{f}_{\pm}\) generated by the boundary layers, it holds that_
\[\|\vec{E}^{f}_{\pm}\|_{L^{2}}\leq C\epsilon^{\frac{3}{7}}. \tag{2.63}\]
Proof.: We start with the error function \(\vec{E}_{+}^{s}\) defined in (2.58). By using (2.38) in Corollary 2.1, we obtain
\[\|\vec{E}_{+,1}^{s}\|_{L^{2}}\leq C\epsilon\left\|\left(u_{+,\text{app}}^{s},v_{+, \text{app}}^{s}\right)\right\|_{H^{2}}+\epsilon\|\varrho_{+,\text{app}}^{s}\|_ {L^{2}}\leq C\epsilon^{\frac{6}{7}}, \tag{2.64}\]
and
\[\|\vec{E}_{+,2}^{s}\|_{H^{1}}\leq Ck^{4}\left(1+\|\varphi_{+}\|_{H^{1}}\right) \leq C\epsilon^{\frac{4}{7}}. \tag{2.65}\]
Then combining (2.64) and (2.65) yields (2.61).
For \(\vec{E}_{-}^{s}\), we use (2.39) to obtain that
\[\|\vec{E}_{-}^{s}\|_{L^{2}}\leq C\epsilon\left\|\left(u_{-,\text{app}}^{s},v_{-,\text{app}}^{s}\right)\right\|_{H^{2}}+\epsilon\|\varrho_{-,\text{app}}^{s} \|_{L^{2}}+k^{4}\|\varphi_{-}\|_{L^{2}}\leq C\epsilon^{\frac{4}{7}}, \tag{2.66}\]
and
\[\|\partial_{y}\vec{E}_{-}^{s}\|_{L^{2}}\leq C\epsilon\left\|\left(u_{-,\text{ app}}^{s},v_{-,\text{app}}^{s}\right)\right\|_{H^{3}}+C\epsilon\|\varrho_{-, \text{app}}^{s}\|_{H^{1}}+Ck^{4}\|\varphi_{-}\|_{H^{1}}\leq C\epsilon^{\frac{2 }{7}}. \tag{2.67}\]
Thus the inequality (2.62) follows from (2.66) and (2.67).
Finally for \(\vec{E}_{\pm}^{f}\), using pointwise estimates (2.54) and (2.55) gives
\[\|\vec{E}_{\pm}^{f}\|_{L^{2}}\leq Ck^{2}\epsilon\|u_{\pm,\text{app}}^{f}\|_{L^{2}}+Ck\|(1\pm y)^{2}u_{ \pm,\text{app}}^{f}\|_{L^{2}}+C\|(1\pm y)v_{\pm,\text{app}}^{f}\|_{L^{2}}\] \[+C\left(\epsilon\|v_{\pm,\text{app}}^{f}\|_{H^{2}}+k\|(1\pm y)v_{ \pm,\text{app}}^{f}\|_{L^{2}}+|c|\|v_{\pm,\text{app}}^{f}\|_{L^{2}}\right)\] \[\leq C\epsilon^{\frac{3}{7}},\]
which is (2.63). The proof of Proposition 2.1 is completed.
## 3 Control of remainder
In the last section, we have constructed four independent approximate solutions to the eigenvalue problem (1.6). To correct errors generated by the approximation, we need to solve the following remainder system
\[\begin{cases}ik(U_{s}-c)\rho+\text{div}_{k}(u,v)=0,\quad y\in(-1,1),\\ -\epsilon\Delta_{k}u-\lambda\epsilon ik\text{div}_{k}(u,v)+ik(U_{s}-c)u+(ikm^{-2}+ \epsilon U_{s}^{\prime\prime})\rho+vU_{s}^{\prime}=f_{u},\\ -\epsilon\Delta_{k}v-\lambda\epsilon\partial_{y}\text{div}_{k}(u,v)+ik(U_{s}- c)v+m^{-2}\partial_{y}\rho=f_{v},\\ v|_{y=\pm 1}=0,\end{cases} \tag{3.1}\]
where \((f_{u},f_{v})\) are inhomogeneous source terms. Notice that in (3.1), we only impose the boundary condition on the normal velocity \(v\). The system (3.1) is studied in the regime of parameters \(k\approx\epsilon^{\frac{1}{7}}\), with \(c\) satisfying (1.8).
The main result in this section is the following proposition.
**Proposition 3.1**.: _Let the Mach number \(m\in\left(0,\frac{1}{\sqrt{3}}\right)\). If \(f_{u},f_{v}\in L^{2}(-1,1)\), then the remainder system (3.1) admits a solution \((\rho,u,v)\in H^{1}(-1,1)\times\left(H^{2}(-1,1)\right)^{2}\). Moreover, the solution satisfies the following uniform-in-\(\epsilon\) estimates_
\[\epsilon^{\frac{5}{7}}\|(m^{-1}\rho,u,v)\|_{L^{2}}+\epsilon^{\frac{5}{7}}\|(m ^{-2}\partial_{y}\rho,\partial_{y}u,\partial_{y}v)\|_{L^{2}}+\epsilon^{\frac {9}{7}}\|(\partial_{y}^{2}u,\partial_{y}^{2}v)\|_{L^{2}}\leq C\|(f_{u},f_{v}) \|_{L^{2}}. \tag{3.2}\]
_For \(f_{u}\), \(f_{v}\in H^{1}(-1,1)\), we define the operator_
\[\Omega(f_{u},f_{v})=-f_{v}+\frac{1}{ik}\partial_{y}(A^{-1}f_{u}). \tag{3.3}\]
_Then the solution \((\rho,u,v)\) satisfies the following improved estimates:_
\[\epsilon^{\frac{2}{7}}\|(m^{-1}\rho,u,v)\|_{L^{2}}+\epsilon^{\frac {2}{7}}\|(m^{-2}\partial_{y}\rho,\partial_{y}u,\partial_{y}v)\|_{L^{2}}+ \epsilon^{\frac{4}{7}}\|(\partial_{y}^{2}u,\partial_{y}^{2}v)\|_{L^{2}}\] \[\qquad\leq C\|\Omega(f_{u},f_{v})\|_{L^{2}}+C\epsilon^{\frac{1}{7 }}\|(f_{u},f_{v})\|_{L^{2}}+C\epsilon^{\frac{2}{7}}\|div_{k}(f_{u},f_{v})\|_{L ^{2}}. \tag{3.4}\]
_Furthermore, the boundary values \(u(\pm 1;c)\) are analytic in \(c\)._
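Let us briefly indicate, at a purely formal level, where the operator \(\Omega\) comes from: eliminating the density between the two momentum equations of (3.1) (apply \(\frac{1}{ik}\partial_{y}(A^{-1}\,\cdot\,)\), with \(A\) as in (2.3), to the tangential equation and subtract the normal one) leaves exactly the combination

\[\Omega(f_{u},f_{v})=-f_{v}+\frac{1}{ik}\partial_{y}\left(A^{-1}f_{u}\right)\]

as the effective source of the stream-function formulation derived in Section 3.1 below (see (3.8)). This explains why the improved estimate (3.4) is governed by \(\Omega(f_{u},f_{v})\) rather than by \((f_{u},f_{v})\) itself.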
### Quasi-compressible approximation
To solve (3.1), we introduce the following quasi-compressible approximate problem:
\[\begin{cases}ik(U_{s}-c)\varrho+\mathrm{div}_{k}(\mathfrak{u}, \mathfrak{v})=0,\;y\in(-1,1),\\ -\epsilon\Delta_{k}(\mathfrak{u}+(U_{s}-c)\varrho)+ik(U_{s}-c)\mathfrak{u}+ \mathfrak{v}U_{s}^{\prime}+ikm^{-2}\varrho=s_{1},\\ -\epsilon\Delta_{k}\mathfrak{v}+ik(U_{s}-c)\mathfrak{v}+m^{-2}\partial_{y} \varrho=s_{2},\\ \mathfrak{v}|_{y=\pm 1}=0.\end{cases} \tag{3.5}\]
Here \((s_{1},s_{2})\) is a given source term. We denote the operator on the left hand side of (3.5) by \(\mathfrak{Q}\).
As in [50], we can decouple \((\varrho,\mathfrak{u},\mathfrak{v})\) and reformulate the system (3.5) into an Orr-Sommerfeld type equation. In fact, by the first equation in (3.5), it is natural to define the effective stream function \(\Phi\), which satisfies:
\[\partial_{y}\Phi=\mathfrak{u}+(U_{s}-c)\varrho,\;-ik\Phi=\mathfrak{v},\;\; \Phi|_{y=\pm 1}=0. \tag{3.6}\]
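One checks directly that (3.6) is compatible with the continuity equation (3.5)\({}_{1}\): since \(\mathrm{div}_{k}(\mathfrak{u},\mathfrak{v})=ik\mathfrak{u}+\partial_{y}\mathfrak{v}\),

\[ik(U_{s}-c)\varrho+\mathrm{div}_{k}(\mathfrak{u},\mathfrak{v})=ik(U_{s}-c)\varrho+ik\big{(}\partial_{y}\Phi-(U_{s}-c)\varrho\big{)}-ik\partial_{y}\Phi=0.\]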
Then plugging (3.6) into the second equation of (3.5), we can express \(\varrho\) in terms of \(\Phi\) as
\[m^{-2}\varrho=-A^{-1}(y)\bigg{[}\frac{i}{n}\Delta_{k}\partial_{y}\Phi+(U_{s}- c)\partial_{y}\Phi-U_{s}^{\prime}\Phi-(ik)^{-1}s_{1}\bigg{]}, \tag{3.7}\]
where the function \(A(y)\) is defined in (2.3). Substituting (3.7) into the third equation, we derive the following equation for \(\Phi\):
\[\mathrm{OS}_{\mathrm{CNS}}(\Phi)=\frac{i}{n}\Lambda(\Delta_{k}\Phi)+(U_{s}-c )\Lambda(\Phi)-\partial_{y}(A^{-1}U_{s}^{\prime})\Phi=\Omega(s_{1},s_{2}),\;y \in(-1,1). \tag{3.8}\]
Here \(n=\frac{k}{\epsilon}\) is the rescaled frequency, and
\[\Lambda:H^{2}(-1,1)\cap H^{1}_{0}(-1,1)\to L^{2}(-1,1),\] \[\Lambda(\Phi)=\partial_{y}(A^{-1}\partial_{y}\Phi)-k^{2}\Phi\]
is the modified vorticity, and the operator \(\Omega\) is defined in (3.3). To solve (3.8), the following boundary conditions are imposed:
\[\Lambda(\Phi)|_{y=\pm 1}=\Phi|_{y=\pm 1}=0.\]
If \(\Phi\) is a solution to (3.8) with the above boundary conditions, then \((\varrho,\mathfrak{u},\mathfrak{v})\), recovered from (3.6) and (3.7), is a solution to the quasi-compressible system (3.5).
According to the above argument, it is sufficient to study the following Orr-Sommerfeld type equation:
\[\begin{cases}\mathrm{OS}_{\mathrm{CNS}}(\Phi)=h,\ y\in(-1,1),\\ \Phi|_{y=\pm 1}=\Lambda(\Phi)|_{y=\pm 1}=0.\end{cases} \tag{3.9}\]
Here \(h\in L^{2}(-1,1)\) is a given source. Once the solution \(\Phi\) is obtained, the fluid quantities \((\varrho,\mathfrak{u},\mathfrak{v})\) can be recovered from (3.6) and (3.7).
For the solvability of (3.9), we have the following lemma.
**Lemma 3.1**.: _Let the Mach number \(m\in\left(0,\frac{1}{\sqrt{3}}\right)\). There exists a unique solution \(\Phi\in H^{4}(-1,1)\cap H^{1}_{0}(-1,1)\) to the Orr-Sommerfeld equation (3.9). Moreover, the solution satisfies_
\[\|\partial_{y}\Phi,k\Phi\|_{L^{2}}+\|\Lambda\Phi\|_{L^{2}} \leq C\epsilon^{-\frac{2}{7}}\|h\|_{L^{2}}, \tag{3.10}\] \[\|\partial_{y}\Lambda(\Phi),k\Lambda(\Phi)\|_{L^{2}} \leq C\epsilon^{-\frac{4}{7}}\|h\|_{L^{2}}. \tag{3.11}\]
Proof.: We only show the a priori estimates (3.10) and (3.11). The existence and uniqueness parts can be proved in the same way as in Lemma 3.5 of [50]. Set the following weight function:
\[w(y)=-[\partial_{y}(A^{-1}\partial_{y}U_{s})]^{-1}. \tag{3.12}\]
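Here and below we use that the weight is well defined: in view of the structural properties of the background profile (cf. (5.13) and (5.14)), one has, at least for the profiles under consideration,

\[\partial_{y}\big{(}A^{-1}\partial_{y}U_{s}\big{)}\neq 0\quad\text{on }[-1,1],\]

so that \(w\in L^{\infty}(-1,1)\).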
Multiplying (3.9) by \(-w\overline{\Lambda(\Phi)}\) and integrating the resultant equation over \((-1,1)\), we have:
\[\underbrace{-\frac{i}{n}\int_{-1}^{1}w\Lambda(\Delta_{k}\Phi) \overline{\Lambda(\Phi)}\mathrm{d}y}_{I_{1}} +\underbrace{\int_{-1}^{1}-(U_{s}-c)w|\Lambda(\Phi)|^{2}\mathrm{ d}y}_{I_{2}}\] \[+\underbrace{\int_{-1}^{1}-\Phi\overline{\Lambda(\Phi)}\mathrm{ d}y}_{I_{3}}+\underbrace{\int_{-1}^{1}hw\overline{\Lambda(\Phi)}\mathrm{d}y}_{I_{4}}=0. \tag{3.13}\]
Now we estimate \(I_{i}\), \(i=1,2,3,4\), term by term. First we consider \(I_{1}\). Denote \([\Delta_{k},\Lambda]=\Delta_{k}(\Lambda(\Phi))-\Lambda(\Delta_{k}(\Phi))\) and write \(I_{1}\) as
\[I_{1}=-\frac{i}{n}\int_{-1}^{1}w\Delta_{k}\left(\Lambda(\Phi)\right)\overline {\Lambda(\Phi)}\mathrm{d}y+\frac{i}{n}\int_{-1}^{1}w[\Delta_{k},\Lambda](\Phi )\overline{\Lambda(\Phi)}\mathrm{d}y=I_{11}+I_{12}. \tag{3.14}\]
For the first term \(I_{11}\), integrating by parts and using boundary conditions \(\Lambda(\Phi)|_{y=\pm 1}=0\) give
\[I_{11}=\frac{i}{n}\int_{-1}^{1}w(|\partial_{y}\Lambda(\Phi)|^{2}+k^{2}|\Lambda (\Phi)|^{2})\mathrm{d}y+\frac{i}{n}\int_{-1}^{1}w^{\prime}\partial_{y}\Lambda( \Phi)\overline{\Lambda(\Phi)}\mathrm{d}y, \tag{3.15}\]
in which the last integral is bounded by
\[\left|\frac{i}{n}\int_{-1}^{1}w^{\prime}\partial_{y}\Lambda(\Phi)\overline{ \Lambda(\Phi)}\mathrm{d}y\right|\leq C\epsilon^{\frac{6}{7}}\|\partial_{y} \Lambda(\Phi)\|_{L^{2}}\|\Lambda(\Phi)\|_{L^{2}}. \tag{3.16}\]
Concerning \(I_{12}\), we first represent the commutator \([\Delta_{k},\Lambda]\Phi\) in terms of \(\Lambda(\Phi)\), \(\partial_{y}\Phi\) and \(\Phi\):
\[[\Delta_{k},\Lambda]\Phi =\partial_{y}^{3}(A^{-1}\partial_{y}\Phi)-\partial_{y}(A^{-1} \partial_{y}^{3}\Phi)\] \[=2\partial_{y}(A^{-1})\partial_{y}^{3}\Phi+3\partial_{y}^{2}(A^{-1 })\partial_{y}^{2}\Phi+\partial_{y}^{3}(A^{-1})\partial_{y}\Phi. \tag{3.17}\]
Then by using the relations
\[\partial_{y}^{2}\Phi =A\Lambda(\Phi)+A^{-1}\partial_{y}A\partial_{y}\Phi+k^{2}A\Phi, \tag{3.18}\] \[\partial_{y}^{3}\Phi =A\partial_{y}\Lambda(\Phi)+2\Lambda(\Phi)\partial_{y}A+\partial_ {y}\Phi(A^{-1}\partial_{y}^{2}A+k^{2}A)+2k^{2}\Phi\partial_{y}A, \tag{3.19}\]
we obtain \(|\partial_{y}^{2}\Phi|\approx|\Lambda(\Phi)|+\left|\left(\partial_{y} \Phi,k\Phi\right)\right|\), and \(|\partial_{y}^{3}\Phi|\approx\left|\left(\partial_{y}\Lambda(\Phi),\Lambda( \Phi)\right)\right|+\left|\left(\partial_{y}\Phi,k\Phi\right)\right|\). Plugging this into the commutator (3.17) and taking the \(L^{2}\) norm, we obtain
\[\|[\Delta_{k},\Lambda]\Phi\|_{L^{2}}\leq C\left(\left\|\left(\partial_{y} \Lambda(\Phi),\Lambda(\Phi)\right)\right\|_{L^{2}}+\|(\partial_{y}\Phi,k\Phi) \|_{L^{2}}\right).\]
Thus, \(I_{12}\) is bounded as
\[|I_{12}| \leq\frac{C}{n}\|[\Delta_{k},\Lambda]\Phi\|_{L^{2}}\|\Lambda( \Phi)\|_{L^{2}}\] \[\leq C\epsilon^{\frac{6}{7}}\|\Lambda(\Phi)\|_{L^{2}}\left(\left\| \left(\partial_{y}\Lambda(\Phi),\Lambda(\Phi)\right)\right\|_{L^{2}}+\|( \partial_{y}\Phi,k\Phi)\|_{L^{2}}\right). \tag{3.20}\]
By substituting estimates (3.15), (3.16) for \(I_{11}\) and (3.20) for \(I_{12}\) into (3.14), then taking real and imaginary parts of the result respectively, we deduce that
\[|\mathrm{Re}I_{1}|\leq \frac{C\|\mathrm{Im}\,w\|_{L^{\infty}}}{n}\left\|\partial_{y} \Lambda(\Phi)\right\|_{L^{2}}^{2}+C\epsilon^{\frac{6}{7}}\|\Lambda(\Phi)\|_{L^ {2}}\left(\left\|\left(\partial_{y}\Lambda(\Phi),\Lambda(\Phi)\right)\right\| _{L^{2}}+\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}}\right)\] \[\leq C\epsilon^{\frac{8}{7}}\left\|\partial_{y}\Lambda(\Phi)\right\|_ {L^{2}}^{2}+C\epsilon^{\frac{6}{7}}\|\Lambda(\Phi)\|_{L^{2}}\left(\left\| \left(\partial_{y}\Lambda(\Phi),\Lambda(\Phi)\right)\right\|_{L^{2}}+\|( \partial_{y}\Phi,k\Phi)\|_{L^{2}}\right), \tag{3.21}\]
and
\[\mathrm{Im}I_{1}\geq \frac{1}{n}\int_{-1}^{1}\mathrm{Re}\,w\left|\left(\partial_{y} \Lambda(\Phi),k\Lambda(\Phi)\right)\right|^{2}\mathrm{d}y-C\epsilon^{\frac{6} {7}}\|\Lambda(\Phi)\|_{L^{2}}\bigg{(}\left\|\left(\partial_{y}\Lambda(\Phi), \Lambda(\Phi)\right)\right\|_{L^{2}}+\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}} \bigg{)}\] \[\geq \epsilon^{\frac{6}{7}}\left\|\left(\partial_{y}\Lambda(\Phi),k \Lambda(\Phi)\right)\right\|_{L^{2}}^{2}-C\epsilon^{\frac{6}{7}}\|\Lambda(\Phi)\|_{L^ {2}}\bigg{(}\left\|\left(\partial_{y}\Lambda(\Phi),\Lambda(\Phi)\right)\right\| _{L^{2}}+\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}}\bigg{)}. \tag{3.22}\]
In (3.21) and (3.22), we have used the facts that \(|\mathrm{Im}\,w|\lesssim|c|\lesssim\epsilon^{\frac{2}{7}}\) and \(\mathrm{Re}\,w\gtrsim 1\), respectively.
Next we estimate \(I_{2}\). By using (5.13) and (5.14), we can obtain
\[\left|\mathrm{Re}\Big{(}(U_{s}-c)w\Big{)}\right|\lesssim 1,\]
and
\[\mathrm{Im}\bigg{(}-(U_{s}-c)w\bigg{)}=(w_{0}-U_{s}w_{1})\,\mathrm{Im}c+O(1)|c |^{2}\gtrsim\gamma_{0}\mathrm{Im}c+O(1)|c|^{2}\gtrsim\frac{\gamma_{0}}{2} \epsilon^{\frac{2}{7}}.\]
Therefore, by taking real and imaginary part of \(I_{2}\) respectively, we deduce that:
\[|\mathrm{Re}I_{2}|\leq\left\|\mathrm{Re}\Big{(}(U_{s}-c)w\Big{)}\right\|_{L^{ \infty}}\|\Lambda(\Phi)\|_{L^{2}}^{2}\leq C\|\Lambda(\Phi)\|_{L^{2}}^{2}, \tag{3.23}\]
and
\[\mathrm{Im}I_{2}\geq\int_{-1}^{1}\mathrm{Im}\bigg{(}-(U_{s}-c)w\bigg{)}\,| \Lambda(\Phi)|^{2}\,\mathrm{d}y\geq\frac{\gamma_{0}}{2}\epsilon^{\frac{2}{7}}\| \Lambda(\Phi)\|_{L^{2}}^{2}. \tag{3.24}\]
For \(I_{3}\), integrating by parts and using the boundary conditions \(\Phi|_{y=\pm 1}=0\) give
\[I_{3}=\int_{-1}^{1}\bar{A}^{-1}|\partial_{y}\Phi|^{2}+k^{2}|\Phi|^{2}\mathrm{d}y.\]
Then we have
\[\bar{A}^{-1}=(1-m^{2}U_{s}^{2})^{-2}\left(1-m^{2}U_{s}^{2}-2m^{2}U_{s}\bar{c}+O( 1)|c|^{2}\right),\]
which implies that \(\mathrm{Re}\left(\bar{A}^{-1}\right)\sim 1\) and \(\mathrm{Im}\left(\bar{A}^{-1}\right)\gtrsim\left(m^{2}\mathrm{Im}c\right)U_{s }-|c|^{2}\). Thus, we obtain
\[\mathrm{Re}I_{3}=\int_{-1}^{1}\mathrm{Re}\left(\bar{A}^{-1}\right)|\partial_{y }\Phi|^{2}+k^{2}|\Phi|^{2}\mathrm{d}y\gtrsim\|(\partial_{y}\Phi,k\Phi)\|_{L^{ 2}}^{2}, \tag{3.25}\]
and
\[\mathrm{Im}I_{3} =\int_{-1}^{1}\mathrm{Im}\left(\bar{A}^{-1}\right)|\partial_{y} \Phi|^{2}\mathrm{d}y\gtrsim m^{2}\mathrm{Im}c\left(\int_{-1}^{1}U_{s}| \partial_{y}\Phi|^{2}\mathrm{d}y\right)-|c|^{2}\|\partial_{y}\Phi\|_{L^{2}}^{2}\] \[\gtrsim-\epsilon^{\frac{4}{7}}\|\partial_{y}\Phi\|_{L^{2}}^{2}. \tag{3.26}\]
Finally, by the Cauchy-Schwarz inequality, it holds that
\[|I_{4}|\lesssim\|h\|_{L^{2}}\|\Lambda(\Phi)\|_{L^{2}}.\]
Thus, we have completed the estimates for \(I_{1}\) to \(I_{4}\). By plugging them into (3.13) and taking real and imaginary parts respectively, we obtain
\[\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}}^{2}\leq C\|\Lambda(\Phi)\|_{L^{2}}^{2}+C\epsilon^{\frac{8}{7}}\left\| \partial_{y}\Lambda(\Phi)\right\|_{L^{2}}^{2}+C\|h\|_{L^{2}}^{2}, \tag{3.27}\]
and
\[\epsilon^{\frac{4}{7}}\left\|\left(\partial_{y}\Lambda(\Phi),k \Lambda(\Phi)\right)\right\|_{L^{2}}^{2}+\|\Lambda(\Phi)\|_{L^{2}}^{2}\] \[\quad\leq C\epsilon^{\frac{4}{7}}\|\Lambda(\Phi)\|_{L^{2}}\left( \left\|\left(\partial_{y}\Lambda(\Phi),\Lambda(\Phi)\right)\right\|_{L^{2}}+ \|(\partial_{y}\Phi,k\Phi)\|_{L^{2}}\right)\] \[\quad\quad+C\epsilon^{\frac{2}{7}}\|\partial_{y}\Phi\|_{L^{2}}^{2} +C\epsilon^{-\frac{2}{7}}\|h\|_{L^{2}}\|\Lambda(\Phi)\|_{L^{2}}\] \[\quad\leq o(1)\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}}^{2}+C\epsilon^ {-\frac{4}{7}}\|h\|_{L^{2}}^{2}. \tag{3.28}\]
Set
\[\mathfrak{B}:=\epsilon^{\frac{4}{7}}\left\|\left(\partial_{y}\Lambda(\Phi),k \Lambda(\Phi)\right)\right\|_{L^{2}}^{2}+\|\Lambda(\Phi)\|_{L^{2}}^{2}+\|( \partial_{y}\Phi,k\Phi)\|_{L^{2}}^{2}.\]
Combining (3.27) and (3.28) suitably, we have:
\[\mathfrak{B}\leq o(1)\mathfrak{B}+C\epsilon^{-\frac{4}{7}}\|h\|_{L^{2}}^{2}.\]
For sufficiently small \(\epsilon\ll 1\), it holds that
\[\mathfrak{B}\leq C\epsilon^{-\frac{4}{7}}\|h\|_{L^{2}}^{2}.\]
This implies the inequalities (3.10) and (3.11). The proof of Lemma 3.1 is completed.
With the solution \(\Phi\) to the Orr-Sommerfeld type equation (3.8), we can recover the fluid quantities \((\varrho,\mathfrak{u},\mathfrak{v})\) in the following corollary.
**Corollary 3.1**.: _For any \(s_{1},s_{2}\in H^{1}(-1,1)\), there exists a solution \((\varrho,\mathfrak{u},\mathfrak{v})\in H^{2}(-1,1)\) to the quasi-compressible system (3.5). The solution satisfies the following estimates:_
\[\|\mathfrak{u}\|_{H^{1}} +\|(m^{-2}\varrho,\mathfrak{v})\|_{H^{2}}+\epsilon^{-\frac{1}{7}}\|div_{k}(\mathfrak{u},\mathfrak{v} )\|_{H^{1}}\] \[\leq C\epsilon^{-\frac{2}{7}}\|\Omega(s_{1},s_{2})\|_{L^{2}}+C \epsilon^{-\frac{1}{7}}\|s_{1}\|_{L^{2}}+C\|s_{2}\|_{L^{2}}+C\|div_{k}(s_{1},s _{2})\|_{L^{2}}, \tag{3.29}\]
_and_
\[\|\partial_{y}^{2}\mathfrak{u}\|_{L^{2}} \leq C\epsilon^{-\frac{4}{7}}\|\Omega(s_{1},s_{2})\|_{L^{2}}+C \epsilon^{-\frac{1}{7}}\|s_{1}\|_{L^{2}}+C\|s_{2}\|_{L^{2}}+C\|div_{k}(s_{1},s _{2})\|_{L^{2}}. \tag{3.30}\]
Proof.: Let \(\Phi\) be the solution to (3.8). From (3.10), (3.11), (3.18) and (3.19), we obtain
\[\|\partial_{y}^{2}\Phi\|_{L^{2}}\leq C\|\Lambda(\Phi)\|_{L^{2}}+C\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}} \leq C\epsilon^{-\frac{2}{7}}\|\Omega(s_{1},s_{2})\|_{L^{2}}, \tag{3.31}\]
and
\[\|\partial_{y}^{3}\Phi\|_{L^{2}} \leq C\|\partial_{y}\Lambda(\Phi)\|_{L^{2}}+C\|\Lambda(\Phi)\|_{L ^{2}}+C\|(\partial_{y}\Phi,k\Phi)\|_{L^{2}}\leq C\epsilon^{-\frac{4}{7}}\| \Omega(s_{1},s_{2})\|_{L^{2}}. \tag{3.32}\]
From \(\mathfrak{v}=-ik\Phi\), it holds that

\[\|\mathfrak{v}\|_{H^{2}} \leq C\|\partial_{y}^{2}\Phi\|_{L^{2}}+C\|(\partial_{y}\Phi,k \Phi)\|_{L^{2}}\leq C\epsilon^{-\frac{2}{7}}\|\Omega(s_{1},s_{2})\|_{L^{2}}.\]

This gives (3.29) for \(\mathfrak{v}\). For the density \(\varrho\), by using the explicit formula (3.7) and the elementary inequality
\[\|\Phi\|_{L^{\infty}} \leq\int_{-1}^{1}|\Phi^{\prime}(x)|\mathrm{d}x\leq\sqrt{2}\| \Phi^{\prime}\|_{L^{2}},\ \text{for any $\Phi\in H^{1}_{0}(-1,1)$}, \tag{3.33}\]

(which follows from \(\Phi(-1)=0\) together with the fundamental theorem of calculus and the Cauchy-Schwarz inequality), we have
\[m^{-2}\|\varrho\|_{L^{2}} \leq\frac{C}{n}\|\partial_{y}^{3}\Phi\|_{L^{2}}+C(1+\frac{k^{2}}{ n})\|\partial_{y}\Phi\|_{L^{2}}+C\|\Phi\|_{L^{\infty}}\|\partial_{y}U_{s}\|_{L^{2}} +\frac{C}{k}\|s_{1}\|_{L^{2}}\] \[\leq\frac{C}{n}\|\partial_{y}^{3}\Phi\|_{L^{2}}+C\|\partial_{y} \Phi\|_{L^{2}}+\frac{C}{k}\|s_{1}\|_{L^{2}}\] \[\leq C\epsilon^{-\frac{2}{7}}\|\Omega(s_{1},s_{2})\|_{L^{2}}+C \epsilon^{-\frac{1}{7}}\|s_{1}\|_{L^{2}}. \tag{3.34}\]
Here we have used (3.31) and (3.32) in the last inequality of (3.34). For the derivatives of \(\varrho\), we first observe that
\[-m^{-2}\partial_{y}\varrho =\text{OS}_{\text{CNS}}(\Phi)+k^{2}\left[\frac{i}{n}\Delta_{k} \Phi+(U_{s}-c)\Phi\right]-\frac{1}{ik}\partial_{y}(A^{-1}s_{1})\] \[=-s_{2}+k^{2}\left[\frac{i}{n}\Delta_{k}\Phi+(U_{s}-c)\Phi\right].\]
Then taking the \(L^{2}\)-norm on both sides of the above equation and using (3.31) give
\[m^{-2}\|\partial_{y}\varrho\|_{L^{2}} \leq C\|s_{2}\|_{L^{2}}+C\|\partial_{y}^{2}\Phi\|_{L^{2}}+Ck\| \Phi\|_{L^{2}}\] \[\leq C\epsilon^{-\frac{2}{7}}\|\Omega(s_{1},s_{2})\|_{L^{2}}+C\|s_ {2}\|_{L^{2}}. \tag{3.35}\]
Moreover, by taking the divergence of the last two equations in (3.5), we can derive the following equation:
\[-m^{-2}\Delta_{k}\varrho=-\mathrm{div}_{k}(s_{1},s_{2})+k^{2}(U_{s}-c)^{2}\varrho+ 2k^{2}\Phi U^{\prime}_{s}.\]
Thus taking \(L^{2}\)-norm and using (3.31) and (3.34) yield that
\[m^{-2}\|\partial_{y}^{2}\varrho\|_{L^{2}} \leq C\|\mathrm{div}_{k}(s_{1},s_{2})\|_{L^{2}}+C(1+m^{-2})\| \varrho\|_{L^{2}}+Ck\|\Phi\|_{L^{2}}\] \[\leq C\|\mathrm{div}_{k}(s_{1},s_{2})\|_{L^{2}}+C\epsilon^{-\frac {2}{\eta}}\|\Omega(s_{1},s_{2})\|_{L^{2}}+C\epsilon^{-\frac{1}{\eta}}\|s_{1}\| _{L^{2}}. \tag{3.36}\]
Combining (3.34), (3.35) and (3.36) yields (3.29) for \(\varrho\). Concerning the divergence field \(\mathrm{div}_{k}(\mathfrak{u},\mathfrak{v})\), we have from the continuity equation (3.5)\({}_{1}\) that
\[\epsilon^{-\frac{1}{\eta}}\|\mathrm{div}_{k}(\mathfrak{u},\mathfrak{v})\|_{H ^{1}}\leq C\|\varrho\|_{H^{1}}\leq C\epsilon^{-\frac{2}{\eta}}\|\Omega(s_{1}, s_{2})\|_{L^{2}}+C\epsilon^{-\frac{1}{\eta}}\|s_{1}\|_{L^{2}}+C\|s_{2}\|_{L^{2}}.\]
Finally, we estimate \(\mathfrak{u}\). From (3.6), it holds that \(|\mathfrak{u}|\sim|\partial_{y}\Phi|+|\varrho|\). Thus, by using (3.31), (3.32) and (3.34)-(3.36), we obtain
\[\|\mathfrak{u}\|_{H^{1}}\leq C\|\partial_{y}\Phi\|_{H^{1}}+C\|\varrho\|_{H^{1 }}\leq C\epsilon^{-\frac{2}{\eta}}\|\Omega(s_{1},s_{2})\|_{L^{2}}+C\epsilon^{- \frac{1}{\eta}}\|s_{1}\|_{L^{2}}+C\|s_{2}\|_{L^{2}},\]
and
\[\|\partial_{y}^{2}\mathfrak{u}\|_{L^{2}}\leq C\|\partial_{y}^{3}\Phi\|_{L^{2} }+C\|\varrho\|_{H^{2}}\leq C\epsilon^{-\frac{4}{\eta}}\|\Omega(s_{1},s_{2})\| _{L^{2}}+C\epsilon^{-\frac{1}{\eta}}\|s_{1}\|_{L^{2}}+C\|s_{2}\|_{L^{2}}+C\| \mathrm{div}_{k}(s_{1},s_{2})\|_{L^{2}}.\]
Therefore, the estimates (3.29) and (3.30) for \(\mathfrak{u}\) are obtained. This completes the proof of the corollary.
It is straightforward to see that the quasi-compressible operator \(\mathfrak{Q}\) generates the error
\[\vec{E}_{\mathfrak{Q}}(\varrho,\mathfrak{u},\mathfrak{v}) \stackrel{{\text{\tiny def}}}{{=}}\mathcal{L}(\varrho, \mathfrak{u},\mathfrak{v})-\mathfrak{Q}(\varrho,\mathfrak{u},\mathfrak{v})\] \[=\bigg{(}0,\epsilon\Delta_{k}\left((U_{s}-c)\varrho\right)- \lambda\epsilon ik\mathrm{div}_{k}(\mathfrak{u},\mathfrak{v})+\epsilon U_{s} ^{\prime\prime}\varrho,-\epsilon\lambda\partial_{y}\mathrm{div}_{k}( \mathfrak{u},\mathfrak{v})\bigg{)}, \tag{3.37}\]
which is not in \(H^{1}(-1,1)\). This fact prevents us from iterating \(\mathfrak{Q}\) directly to solve (3.1). In the next subsection, we will introduce the Stokes operator to recover the regularity.
### Stokes approximation
To smooth out the error term (3.37) generated by the quasi-compressible system (3.5), we introduce the following Stokes approximation:
\[\begin{cases}ik(U_{s}-c)\mathcal{P}+\mathrm{div}_{k}(\mathcal{U},\mathcal{V}) =q_{0},\ y\in(-1,1),\\ -\epsilon\Delta_{k}\mathcal{U}-\lambda\epsilon ik\mathrm{div}_{k}(\mathcal{U}, \mathcal{V})+ik(U_{s}-c)\mathcal{U}+(ikm^{-2}+\epsilon U_{s}^{\prime\prime}) \mathcal{P}=q_{1},\\ -\epsilon\Delta_{k}\mathcal{V}-\lambda\epsilon\partial_{y}\mathrm{div}_{k}( \mathcal{U},\mathcal{V})+ik(U_{s}-c)\mathcal{V}+m^{-2}\partial_{y}\mathcal{P} =q_{2},\\ \partial_{y}\mathcal{U}|_{y=\pm 1}=\mathcal{V}|_{y=\pm 1}=0.\end{cases} \tag{3.38}\]
Here \((q_{0},q_{1},q_{2})\) is any given inhomogeneous source term. Compared with the original resolvent problem (3.1), we remove the stretching term \(\mathcal{V}U_{s}^{\prime}\) in the Stokes system (3.38).
**Lemma 3.2**.: _The Stokes system (3.38) admits a unique solution \((\mathcal{P},\mathcal{U},\mathcal{V})\in H^{1}(-1,1)\times(H^{2}(-1,1))^{2}\). Moreover, \((\mathcal{P},\mathcal{U},\mathcal{V})\) satisfies the following estimates:_
\[\|(m^{-1}\mathcal{P},\mathcal{U},\mathcal{V})\|_{L^{2}} \leq C\epsilon^{-\frac{3}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}}, \tag{3.39}\] \[\|(\partial_{y}\mathcal{U},k\mathcal{U})\|_{L^{2}}+\|(\partial_{y }\mathcal{V},k\mathcal{V})\|_{L^{2}} \leq C\epsilon^{-\frac{5}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}},\] (3.40) \[\|div_{k}(\mathcal{U},\mathcal{V})\|_{H^{1}}+m^{-2}\|\partial_{y }\mathcal{P}\|_{L^{2}} \leq C\epsilon^{-\frac{2}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2} }+C\|\partial_{y}q_{0}\|_{L^{2}},\] (3.41) \[\|(\Delta_{k}\mathcal{U},\Delta_{k}\mathcal{V})\|_{L^{2}} \leq C\epsilon^{-\frac{9}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2} }+C\|\partial_{y}q_{0}\|_{L^{2}}. \tag{3.42}\]
**Remark 3.1**.: _In view of (3.39)-(3.41), the divergence \(div_{k}(\mathcal{U},\mathcal{V})\) and the density \(\mathcal{P}\) have stronger estimates than the other components. This indicates the absence of boundary layers for these two components._
Proof.: We follow the proof in [50]. Multiplying the second and third equations by \(-\bar{\mathcal{U}}\) and \(-\bar{\mathcal{V}}\) respectively and integrating by parts, we obtain:
\[\epsilon\bigg{(}\|\partial_{y}\mathcal{U},k\mathcal{U}\|_{L^{2}}^ {2}+\|\partial_{y}\mathcal{V},k\mathcal{V}\|_{L^{2}}^{2}\bigg{)}+\lambda \epsilon\|\mathrm{div}_{k}(\mathcal{U},\mathcal{V})\|_{L^{2}}^{2}\] \[+\underbrace{ik\int_{-1}^{1}(U_{s}-c)\bigg{(}|\mathcal{U}|^{2}+| \mathcal{V}|^{2}\bigg{)}\mathrm{d}y}_{J_{1}}+\underbrace{m^{-2}\int_{-1}^{1} -\mathcal{P}\overline{\mathrm{div}_{k}(\mathcal{U},\mathcal{V})}\mathrm{d}y}_ {J_{2}}\] \[\qquad=\underbrace{-\int_{-1}^{1}(q_{1}+\epsilon U_{s}^{\prime \prime}\mathcal{P})\bar{\mathcal{U}}+q_{2}\bar{\mathcal{V}}\mathrm{d}y}_{J_{3 }}. \tag{3.43}\]
The real part of \(J_{1}\) gives
\[\mathrm{Re}J_{1}=k\mathrm{Im}c\,\|(\mathcal{U},\mathcal{V})\|_{L^{2}}^{2} \gtrsim\epsilon^{\frac{3}{7}}\|(\mathcal{U},\mathcal{V})\|_{L^{2}}^{2}. \tag{3.44}\]
For \(J_{2}\), by using the continuity equation (3.38)\({}_{1}\), we have
\[\overline{\mathrm{div}_{k}(\mathcal{U},\mathcal{V})}=ik(U_{s}-\bar{c})\bar{ \mathcal{P}}+\bar{q_{0}}.\]
Plugging this into \(J_{2}\) and taking real part give
\[\mathrm{Re}J_{2} =-ikm^{-2}\int_{-1}^{1}(U_{s}-\bar{c})|\mathcal{P}|^{2}\mathrm{d} y-m^{-2}\int_{-1}^{1}\mathcal{P}\bar{q_{0}}\mathrm{d}y\] \[\geq k\mathrm{Im}c\|m^{-1}\mathcal{P}\|_{L^{2}}^{2}-m^{-2}\| \mathcal{P}\|_{L^{2}}\|q_{0}\|_{L^{2}}\] \[\gtrsim\epsilon^{\frac{3}{7}}\|m^{-1}\mathcal{P}\|_{L^{2}}^{2}-m^ {-2}\|\mathcal{P}\|_{L^{2}}\|q_{0}\|_{L^{2}}. \tag{3.45}\]
By Cauchy-Schwarz inequality, \(J_{3}\) is bounded by
\[|J_{3}|\lesssim\|(\mathcal{U},\mathcal{V})\|_{L^{2}}\|(q_{1},q_{2})\|_{L^{2}}+ \epsilon\bigg{(}\|\mathcal{P}\|_{L^{2}}^{2}+\|\mathcal{U}\|_{L^{2}}^{2}\bigg{)}. \tag{3.46}\]
Therefore, by substituting (3.44), (3.45), (3.46) into (3.43) and then taking the real part, we deduce that
\[\epsilon\bigg{(}\|\partial_{y}\mathcal{U},k\mathcal{U}\|_{L^{2}}^{2}+\| \partial_{y}\mathcal{V},k\mathcal{V}\|_{L^{2}}^{2}\bigg{)}+\lambda\epsilon\| \mathrm{div}_{k}(\mathcal{U},\mathcal{V})\|_{L^{2}}^{2}+\epsilon^{\frac{3}{7}} \|(m^{-1}\mathcal{P},\mathcal{U},\mathcal{V})\|_{L^{2}}^{2}\]
\[\leq C\|(m^{-1}{\cal P},{\cal U},{\cal V})\|_{L^{2}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}}\] \[\leq o(1)\epsilon^{\frac{3}{7}}\|(m^{-1}{\cal P},{\cal U},{\cal V}) \|_{L^{2}}^{2}+C\epsilon^{-\frac{3}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}}^{2}.\]
Absorbing the first term on the right hand side to the left gives (3.39) and (3.40).
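For instance, (3.40) is read off as follows: after absorption, the dissipation term gives

\[\epsilon\Big{(}\|(\partial_{y}\mathcal{U},k\mathcal{U})\|_{L^{2}}^{2}+\|(\partial_{y}\mathcal{V},k\mathcal{V})\|_{L^{2}}^{2}\Big{)}\leq C\epsilon^{-\frac{3}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}}^{2},\]

and dividing by \(\epsilon\) and taking square roots yields the rate \(\epsilon^{-\frac{1}{2}-\frac{3}{14}}=\epsilon^{-\frac{5}{7}}\) stated in (3.40).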
Next we estimate the density and divergence part of the solution. For this, we denote the vorticity by
\[\omega=\partial_{y}{\cal U}-ik{\cal V}\]
and the divergence of \(({\cal U},{\cal V})\) by \({\cal D}=\operatorname{div}_{k}({\cal U},{\cal V})\). The diffusion in (3.38) can be expressed in terms of \({\cal D}\) and \(\omega\):
\[\Delta_{k}{\cal U}=\partial_{y}\omega+ik{\cal D},\;\Delta_{k}{\cal V}=-ik \omega+\partial_{y}{\cal D}.\]
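These identities are elementary but worth recording once; with \(\mathcal{D}=ik\mathcal{U}+\partial_{y}\mathcal{V}\), for example,

\[\partial_{y}\omega+ik\mathcal{D}=\partial_{y}(\partial_{y}\mathcal{U}-ik\mathcal{V})+ik(ik\mathcal{U}+\partial_{y}\mathcal{V})=\partial_{y}^{2}\mathcal{U}-k^{2}\mathcal{U}=\Delta_{k}\mathcal{U},\]

and the identity for \(\Delta_{k}\mathcal{V}\) follows in the same way.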
Thus we can rewrite the momentum equations in (3.38) as
\[\begin{cases}ikm^{-2}{\cal P}=\epsilon\partial_{y}\omega+\epsilon(1+\lambda) ik{\cal D}-ik(U_{s}-c){\cal U}-q_{1}-\epsilon U^{\prime\prime}_{s}{\cal P},\\ m^{-2}\partial_{y}{\cal P}=-\epsilon ik\omega+\epsilon(1+\lambda)\partial_{y}{ \cal D}-ik(U_{s}-c){\cal V}-q_{2}.\end{cases} \tag{3.47}\]
By taking inner product of the first and second equations with \(\overline{ik{\cal P}}\) and \(\partial_{y}\bar{\cal P}\) respectively, we obtain:
\[m^{-2}\|(\partial_{y}{\cal P},k{\cal P})\|_{L^{2}}^{2}=\underbrace{ \epsilon\int_{-1}^{1}\partial_{y}\omega\overline{ik{\cal P}}-ik\omega\partial _{y}\bar{\cal P}\mathrm{d}y}_{J_{4}}+\underbrace{\epsilon(1+\lambda)\int_{-1} ^{1}k^{2}{\cal D}\bar{\cal P}+\partial_{y}{\cal D}\partial_{y}\bar{\cal P} \mathrm{d}y}_{J_{5}}\] \[\quad+\underbrace{\int_{-1}^{1}(q_{1}+\epsilon U^{\prime\prime}_ {s}{\cal P})ik\bar{\cal P}-q_{2}\partial_{y}\bar{\cal P}\mathrm{d}y}_{J_{6}}+ \underbrace{\int_{-1}^{1}ik(U_{s}-c)(ik\bar{\cal P}{\cal U}-{\cal V}\partial_{y }\bar{\cal P})\mathrm{d}y}_{J_{7}}. \tag{3.48}\]
Integrating by parts and using the boundary condition \(\omega|_{y=\pm 1}=0\) give
\[J_{4}=-ik\epsilon\bar{\cal P}\omega\Big{|}_{y=-1}^{y=1}=0. \tag{3.49}\]
From the continuity equation (3.38)\({}_{1}\), it holds that \({\cal D}=-ik(U_{s}-c){\cal P}+q_{0}\), and \(\partial_{y}{\cal D}=-ik(U_{s}-c)\partial_{y}{\cal P}-ikU^{\prime}_{s}{\cal P} +\partial_{y}q_{0}\). Plugging these into \(J_{5}\), we obtain
\[J_{5}= -\epsilon(1+\lambda)\int_{-1}^{1}ik(U_{s}-c)(|\partial_{y}{\cal P }|^{2}+k^{2}|{\cal P}|^{2})\mathrm{d}y\] \[-ik\epsilon(1+\lambda)\int_{-1}^{1}U^{\prime}_{s}{\cal P}\partial _{y}{\cal P}\mathrm{d}y+\epsilon(1+\lambda)\int_{-1}^{1}\left(\partial_{y} \bar{\cal P}\partial_{y}q_{0}+k^{2}q_{0}\bar{\cal P}\right)\mathrm{d}y. \tag{3.50}\]
By Cauchy-Schwarz and Young's inequalities, the last two terms on the right hand side of \(J_{5}\) are bounded by
\[\left|ik\epsilon(1+\lambda)\int_{-1}^{1}U^{\prime}_{s}{\cal P}\partial_{y}{ \cal P}dy\right|\leq C\epsilon\|(\partial_{y}{\cal P},k{\cal P})\|_{L^{2}}^{2},\]
and
\[\left|\epsilon(1+\lambda)\int_{-1}^{1}\partial_{y}\bar{\cal P} \partial_{y}q_{0}+k^{2}q_{0}\bar{\cal P}dy\right|\leq C\epsilon\|(\partial_{y}{\cal P},k{\cal P})\|_{L^{2}}\|( \partial_{y}q_{0},kq_{0})\|_{L^{2}}\] \[\leq \frac{m^{-2}}{8}\|(\partial_{y}{\cal P},k{\cal P})\|_{L^{2}}^{2}+ Cm^{2}\epsilon^{2}\|(\partial_{y}q_{0},kq_{0})\|_{L^{2}}^{2}.\]
Therefore, by taking real part in (3.50), we deduce
\[\mathrm{Re}J_{5} \leq-\epsilon k\mathrm{Imc}(1+\lambda)\|(\partial_{y}\mathcal{P},k \mathcal{P})\|_{L^{2}}^{2}+C\bigg{(}\epsilon+\frac{m^{-2}}{8}\bigg{)}\|( \partial_{y}\mathcal{P},k\mathcal{P})\|_{L^{2}}^{2}+Cm^{2}\epsilon^{2}\|( \partial_{y}q_{0},kq_{0})\|_{L^{2}}^{2}\] \[\leq\frac{m^{-2}}{4}\|(\partial_{y}\mathcal{P},k\mathcal{P})\|_{L ^{2}}^{2}+Cm^{2}\epsilon^{2}\|(\partial_{y}q_{0},kq_{0})\|_{L^{2}}^{2}. \tag{3.51}\]
Moreover, by Cauchy-Schwarz and Young's inequalities again, it holds that
\[|J_{6}|+|J_{7}| \leq\bigg{(}\|q_{1},q_{2}\|_{L^{2}}+k\|(\mathcal{U},\mathcal{V}) \|_{L^{2}}\bigg{)}\|(\partial_{y}\mathcal{P},k\mathcal{P})\|_{L^{2}}+C \epsilon^{\frac{6}{7}}\|k\mathcal{P}\|_{L^{2}}^{2}\] \[\leq\frac{m^{-2}}{4}\|(\partial_{y}\mathcal{P},k\mathcal{P})\|_ {L^{2}}^{2}+Cm^{2}\bigg{(}\|(q_{1},q_{2})\|_{L^{2}}^{2}+k^{2}\|(\mathcal{U}, \mathcal{V})\|_{L^{2}}^{2}\bigg{)}. \tag{3.52}\]
By combining the estimates (3.49), (3.51) and (3.52) for \(J_{4}\) to \(J_{7}\), and taking the real part of (3.48), we obtain that
\[m^{-2}\|(\partial_{y}\mathcal{P},k\mathcal{P})\|_{L^{2}} \leq C\|(q_{1},q_{2})\|_{L^{2}}+C\epsilon\|(\partial_{y}q_{0},kq_{ 0})\|_{L^{2}}+Ck\|(\mathcal{U},\mathcal{V})\|_{L^{2}}\] \[\leq C\epsilon^{-\frac{2}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2} }+C\epsilon\|\partial_{y}q_{0}\|_{L^{2}}. \tag{3.53}\]
Here we have used (3.39) for \((\mathcal{U},\mathcal{V})\) in the last line of (3.53). For the divergence, it holds from (3.38)\({}_{1}\) and (3.53) that
\[\|\mathrm{div}_{k}(\mathcal{U},\mathcal{V})\|_{H^{1}} \leq C\|(\partial_{y}\mathcal{P},k\mathcal{P})\|_{L^{2}}+C\|q_{0} \|_{H^{1}}\] \[\leq C\epsilon^{-\frac{2}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2} }+C\|\partial_{y}q_{0}\|_{L^{2}}. \tag{3.54}\]
Putting bounds (3.53) and (3.54) together gives (3.41).
Finally, we estimate second order derivatives of \((\mathcal{U},\mathcal{V})\). By taking inner product of the first and second equations of (3.47) with \(\partial_{y}\bar{\omega}\) and \(\overline{ik\omega}\) respectively, we can deduce
\[\epsilon\|(\partial_{y}\omega,k\omega)\|_{L^{2}}^{2}= \int_{-1}^{1}(q_{1}+\epsilon U_{s}^{\prime\prime}\mathcal{P}) \partial_{y}\bar{\omega}+q_{2}\overline{ik\omega}\mathrm{d}y+\int_{-1}^{1}ik (U_{s}-c)(\mathcal{U}\partial_{y}\bar{\omega}+\mathcal{V}\overline{ik\omega} )\mathrm{d}y\] \[\leq C\bigg{(}\|(q_{1},q_{2})\|_{L^{2}}+\epsilon\|\mathcal{P}\|_{L ^{2}}+k\|(\mathcal{U},\mathcal{V})\|_{L^{2}}\bigg{)}\|(\partial_{y}\omega,k \omega)\|_{L^{2}}. \tag{3.55}\]
Thus by using (3.39) and (3.55), we have
\[\|(\partial_{y}\omega,k\omega)\|_{L^{2}} \leq C\epsilon^{-1}\|(q_{1},q_{2})\|_{L^{2}}+C\epsilon^{-\frac{6} {7}}\|(m^{-1}\mathcal{P},\mathcal{U},\mathcal{V})\|_{L^{2}}\] \[\leq C\epsilon^{-\frac{9}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}}. \tag{3.56}\]
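Together with (3.54) and the identities \(\Delta_{k}\mathcal{U}=\partial_{y}\omega+ik\mathcal{D}\), \(\Delta_{k}\mathcal{V}=-ik\omega+\partial_{y}\mathcal{D}\), this yields

\[\|(\Delta_{k}\mathcal{U},\Delta_{k}\mathcal{V})\|_{L^{2}}\leq\|(\partial_{y}\omega,k\omega)\|_{L^{2}}+\|\mathcal{D}\|_{H^{1}}\leq C\epsilon^{-\frac{9}{7}}\|(m^{-1}q_{0},q_{1},q_{2})\|_{L^{2}}+C\|\partial_{y}q_{0}\|_{L^{2}}.\]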
This proves (3.42). Therefore, the proof of the lemma is completed.
Note that the error generated by the Stokes operator \(\mathfrak{S}\), defined as the left hand side of (3.38), is
\[\vec{E}_{\mathfrak{S}}(\mathcal{P},\mathcal{U},\mathcal{V})\stackrel{{ \text{\tiny def}}}{{=}}\mathcal{L}(\mathcal{P},\mathcal{U},\mathcal{V})- \mathfrak{S}(\mathcal{P},\mathcal{U},\mathcal{V})=(0,\mathcal{V}U_{s}^{ \prime},0)\,. \tag{3.57}\]
By Lemma 3.2, we have \(\vec{E}_{\mathfrak{S}}\in H^{2}(-1,1)\).
### Solvability of the resolvent problem
In this subsection, we solve the resolvent problem (3.1) by alternately iterating the quasi-compressible operator \(\mathfrak{Q}\) and the Stokes operator \(\mathfrak{S}\) introduced above. First of all, we construct the solution \((\rho,u,v)\) to (3.1) for any given inhomogeneous source terms \(f_{u},f_{v}\in L^{2}\), and show the estimate (3.2). At the zeroth step, we introduce the Stokes solution \(\vec{\Xi}_{0}=(\mathcal{P}_{0},\mathcal{U}_{0},\mathcal{V}_{0})\), which solves the following system:
\[\mathfrak{S}(\mathcal{P}_{0},\mathcal{U}_{0},\mathcal{V}_{0})=(0, f_{u},f_{v}). \tag{3.58}\]
Recall the error operator \(\vec{E}_{\mathfrak{S}}\) defined in (3.57). The error generated at this step is
\[\vec{E}_{\mathfrak{S}}(\mathcal{P}_{0},\mathcal{U}_{0},\mathcal{ V}_{0})=(0,\mathcal{V}_{0}U_{s}^{\prime},0)\,. \tag{3.59}\]
Since \(\vec{E}_{\mathfrak{S}}(\mathcal{P}_{0},\mathcal{U}_{0},\mathcal{V}_{0})\in H ^{2}(-1,1)\), we can introduce the solution \((\varrho_{1},\mathfrak{u}_{1},\mathfrak{v}_{1})\) to the quasi-compressible system:
\[\mathfrak{Q}(\varrho_{1},\mathfrak{u}_{1},\mathfrak{v}_{1})=- \vec{E}_{\mathfrak{S}}(\mathcal{P}_{0},\mathcal{U}_{0},\mathcal{V}_{0}). \tag{3.60}\]
The error term generated by \(\mathfrak{Q}\) is
\[\vec{E}_{\mathfrak{Q}}(\varrho_{1},\mathfrak{u}_{1},\mathfrak{v }_{1})=\left(0,\epsilon\Delta_{k}\left[(U_{s}-c)\varrho_{1}\right]-\lambda\epsilon ik \mathrm{div}_{k}(\mathfrak{u}_{1},\mathfrak{v}_{1})+\epsilon U_{s}^{\prime \prime}\varrho_{1},-\epsilon\lambda\partial_{y}\mathrm{div}_{k}(\mathfrak{u}_{ 1},\mathfrak{v}_{1})\right). \tag{3.61}\]
In view of Corollary 3.1, the right hand side of (3.61) is in \(L^{2}(-1,1)\) instead of \(H^{1}(-1,1)\), which prevents us from iterating \(\mathfrak{Q}\) directly. In order to recover the regularity, we introduce \((\mathcal{P}_{1},\mathcal{U}_{1},\mathcal{V}_{1})\) as the solution to the Stokes approximation
\[\mathfrak{S}(\mathcal{P}_{1},\mathcal{U}_{1},\mathcal{V}_{1})=- \vec{E}_{\mathfrak{Q}}(\varrho_{1},\mathfrak{u}_{1},\mathfrak{v}_{1}). \tag{3.62}\]
Now we define the following corrector \(\vec{\Xi}_{1}\) at Step 1:
\[\vec{\Xi}_{1}=(\rho_{1},u_{1},v_{1})\stackrel{{ \text{\tiny def}}}{{=}}(\varrho_{1},\mathfrak{u}_{1},\mathfrak{v}_{1})+( \mathcal{P}_{1},\mathcal{U}_{1},\mathcal{V}_{1}). \tag{3.63}\]
One can check that the error term generated at this step is
\[\vec{\mathcal{E}}_{1} \stackrel{{\text{\tiny def}}}{{=}}\mathcal{L}(\vec {\Xi}_{0}+\vec{\Xi}_{1})-(0,f_{u},f_{v})\] \[=\vec{E}_{\mathfrak{S}}(\mathcal{P}_{1},\mathcal{U}_{1}, \mathcal{V}_{1})=(0,\mathcal{V}_{1}U_{s}^{\prime},0). \tag{3.64}\]
Approximate solutions up to any order can be constructed by induction. Suppose that at the \(N\)-th step, we have the corrector
\[\vec{\Xi}_{N}=(\rho_{N},u_{N},v_{N})\stackrel{{ \text{\tiny def}}}{{=}}(\varrho_{N},\mathfrak{u}_{N},\mathfrak{v}_{N})+( \mathcal{P}_{N},\mathcal{U}_{N},\mathcal{V}_{N}), \tag{3.65}\]
and the error
\[\vec{\mathcal{E}}_{N}\stackrel{{\text{\tiny def}}}{{=}} \mathcal{L}\left(\sum_{j=0}^{N}\vec{\Xi}_{j}\right)-(0,f_{u},f_{v})=(0, \mathcal{V}_{N}U_{s}^{\prime},0). \tag{3.66}\]
Then we define the \((N+1)\)-th step corrector as
\[\vec{\Xi}_{N+1}=(\rho_{N+1},u_{N+1},v_{N+1})\stackrel{{\text{ \tiny def}}}{{=}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})+( \mathcal{P}_{N+1},\mathcal{U}_{N+1},\mathcal{V}_{N+1}), \tag{3.67}\]
where \((\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\) is the solution to the quasi-compressible system
\[\mathfrak{Q}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})=-\vec{\mathcal{E}}_{N}, \tag{3.68}\]
that cancels the error \(\vec{\mathcal{E}}_{N}\) in (3.66) generated at the \(N\)-th step, and \((\mathcal{P}_{N+1},\mathcal{U}_{N+1},\mathcal{V}_{N+1})\) solves the Stokes system
\[\mathfrak{S}(\mathcal{P}_{N+1},\mathcal{U}_{N+1},\mathcal{V}_{N+1})=-\vec{E}_{\mathfrak{Q}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1}), \tag{3.69}\]
with the error operator \(\vec{E}_{\mathfrak{Q}}\) being defined in (3.37). One can check that the new error generated at the \((N+1)\)-th step is
\[\vec{\mathcal{E}}_{N+1} \stackrel{{\text{\tiny def}}}{{=}}\mathcal{L}\left( \sum_{j=0}^{N+1}\vec{\Xi}_{j}\right)-(0,f_{u},f_{v})\] \[=\vec{E}_{\mathfrak{S}}(\mathcal{P}_{N+1},\mathcal{U}_{N+1}, \mathcal{V}_{N+1})=(0,\mathcal{V}_{N+1}U_{s}^{\prime},0). \tag{3.70}\]
Thus, the approximate solutions up to any order have been constructed. Finally, if \(\vec{\Xi}=\sum_{j=0}^{\infty}\vec{\Xi}_{j}\) converges, then this series defines a solution to the resolvent problem (3.1).
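Schematically, the whole construction is the following two-step sweep (a summary of (3.58)-(3.70), with \(\mathfrak{Q}\), \(\mathfrak{S}\) and the error operators \(\vec{E}_{\mathfrak{Q}}\), \(\vec{E}_{\mathfrak{S}}\) as above): for \(j\geq 1\),

\[\mathfrak{S}(\vec{\Xi}_{0})=(0,f_{u},f_{v}),\qquad\mathfrak{Q}(\varrho_{j},\mathfrak{u}_{j},\mathfrak{v}_{j})=-\vec{E}_{\mathfrak{S}}(\mathcal{P}_{j-1},\mathcal{U}_{j-1},\mathcal{V}_{j-1}),\qquad\mathfrak{S}(\mathcal{P}_{j},\mathcal{U}_{j},\mathcal{V}_{j})=-\vec{E}_{\mathfrak{Q}}(\varrho_{j},\mathfrak{u}_{j},\mathfrak{v}_{j}).\]

Each cycle trades the merely-\(L^{2}\) error of the quasi-compressible step for the \(H^{2}\) error of the Stokes step, at the price of a small factor of \(\epsilon\); this smallness is quantified in the proof of Proposition 3.1 below.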
Next, if both \(f_{u}\) and \(f_{v}\) have one order of additional regularity, that is, \((f_{u},f_{v})\in H^{1}(-1,1)\), then at the zeroth step we can introduce \((\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})\), which solves the quasi-compressible system
\[\mathfrak{Q}(\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})=(0,f_{u},f_{v}). \tag{3.71}\]
The error generated by \((\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})\) is given by
\[\vec{\mathcal{E}}_{-1} \stackrel{{\text{\tiny def}}}{{=}}\vec{E}_{\mathfrak{ Q}}(\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})\] \[=\left(0,\epsilon\Delta_{k}\left[(U_{s}-c)\varrho_{0}\right]- \epsilon\lambda ik\text{div}_{k}(\mathfrak{u}_{0},\mathfrak{v}_{0})+\epsilon U_{s}^{\prime\prime} \varrho_{0},-\epsilon\lambda\partial_{y}\text{div}_{k}(\mathfrak{u}_{0},\mathfrak{v}_{0})\right). \tag{3.72}\]
Since \(\vec{\mathcal{E}}_{-1}\in L^{2}(-1,1)\), we can repeat the procedure (3.59)-(3.70) to construct a solution \(\vec{\Upsilon}=(\tilde{\rho},\tilde{u},\tilde{v})\) to the resolvent problem \(\mathcal{L}(\vec{\Upsilon})=-\vec{\mathcal{E}}_{-1}\). Finally, the solution to the original resolvent problem (3.1) is given by \(\vec{\Xi}\stackrel{{\text{\tiny def}}}{{=}}\vec{\Upsilon}+( \varrho_{0},u_{0},v_{0})\).
We are now in a position to prove the convergence of the iteration and the solvability of the resolvent problem.
**Proof of Proposition 3.1**: Recall the definition (3.67) of the \((N+1)\)-th step corrector. First we estimate \((\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\), which solves the quasi-compressible system (3.68) with source terms \(s_{1}=\mathcal{V}_{N}U_{s}^{\prime}\) and \(s_{2}=0\). To use Corollary 3.1, we compute
\[\Omega(s_{1},s_{2})=\frac{1}{ik}\partial_{y}\left(A^{-1}\mathcal{V}_{N}U_{s}^ {\prime}\right)=\frac{1}{ik}\left(\partial_{y}(A^{-1}U_{s}^{\prime})\mathcal{ V}_{N}+A^{-1}U_{s}^{\prime}\partial_{y}\mathcal{V}_{N}\right).\]
By taking \(L^{2}\)-norm of \(\Omega(s_{1},s_{2})\), using the elementary inequality (3.33) and the fact \(\partial_{y}\mathcal{V}_{N}=\text{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})- ik\mathcal{U}_{N}\), we deduce that
\[\|\Omega(s_{1},s_{2})\|_{L^{2}} \leq Ck^{-1}\left(\|\partial_{y}(A^{-1}U_{s}^{\prime})\|_{L^{2} }\|\mathcal{V}_{N}\|_{L^{\infty}}+\|A^{-1}U_{s}^{\prime}\|_{L^{\infty}}\| \partial_{y}\mathcal{V}_{N}\|_{L^{2}}\right)\] \[\leq Ck^{-1}\|\partial_{y}\mathcal{V}_{N}\|_{L^{2}}\leq C \epsilon^{-\frac{1}{7}}\|\text{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+C\|\mathcal{U}_{N}\|_{L^{2}}. \tag{3.73}\]
Similarly, it holds that
\[\|s_{1}\|_{L^{2}}\leq C\|\mathcal{V}_{N}\|_{L^{\infty}}\leq C\left(\|\text{ div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\epsilon^{\frac{1}{\gamma}}\| \mathcal{U}_{N}\|_{L^{2}}\right), \tag{3.74}\]
\[\|\mathrm{div}_{k}(s_{1},s_{2})\|_{L^{2}}\leq Ck\|\mathcal{V}_{N}\|_{L^{ \infty}}\leq C\epsilon^{\frac{1}{7}}\left(\|\mathrm{div}_{k}(\mathcal{U}_{N}, \mathcal{V}_{N})\|_{L^{2}}+\epsilon^{\frac{1}{7}}\|\mathcal{U}_{N}\|_{L^{2}} \right). \tag{3.75}\]
Then by applying Corollary 3.1 to \((\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\) and using the bounds (3.73) to (3.75), we obtain
\[\|\mathfrak{u}_{N+1}\|_{H^{1}}+\|(m^{-2}\varrho_{N+1},\mathfrak{v}_{N+1})\|_{H ^{2}}+\epsilon^{-\frac{1}{7}}\|\mathrm{div}_{k}(\mathfrak{u}_{N+1},\mathfrak{v}_{ N+1})\|_{H^{1}}\] \[\quad\leq C\epsilon^{-\frac{2}{7}}\left(\epsilon^{-\frac{1}{7}} \|\mathrm{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{ N}\|_{L^{2}}\right), \tag{3.76}\]
and
\[\|\partial_{y}^{2}\mathfrak{u}_{N+1}\|_{L^{2}}\leq C\epsilon^{- \frac{4}{7}}\left(\epsilon^{-\frac{1}{7}}\|\mathrm{div}_{k}(\mathcal{U}_{N}, \mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{N}\|_{L^{2}}\right). \tag{3.77}\]
Next we estimate \((\mathcal{P}_{N+1},\mathcal{U}_{N+1},\mathcal{V}_{N+1})\), which solves the Stokes system (3.69). Recall the error operator \(\vec{E}_{\mathfrak{Q}}\) defined in (3.37). Then by using the estimate (3.76) we obtain
\[\|\vec{E}_{\mathfrak{Q}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_ {N+1})\|_{L^{2}}\leq C\epsilon\left(\|\varrho_{N+1}\|_{H^{2}}+\|\mathrm{div}_{k}( \mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\|_{H^{1}}\right)\] \[\leq C\epsilon\left(\|m^{-2}\varrho_{N+1}\|_{H^{2}}+k^{-1}\|\mathrm{ div}_{k}(\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\|_{H^{1}}\right)\] \[\leq C\epsilon^{\frac{5}{7}}\left(\epsilon^{-\frac{1}{7}}\|\mathrm{ div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{N}\|_{L^{2}} \right). \tag{3.78}\]
By applying Lemma 3.2 to \((\mathcal{P}_{N+1},\mathcal{U}_{N+1},\mathcal{V}_{N+1})\) and using the bound (3.78), we deduce
\[\|(m^{-1}\mathcal{P}_{N+1},\mathcal{U}_{N+1},\mathcal{V}_{N+1})\| _{L^{2}} \leq C\epsilon^{-\frac{3}{7}}\|\vec{E}_{\mathfrak{Q}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\|_{L^{2}}\] \[\leq C\epsilon^{\frac{2}{7}}\left(\epsilon^{-\frac{1}{7}}\| \mathrm{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{N} \|_{L^{2}}\right), \tag{3.79}\] \[\epsilon^{-\frac{1}{7}}\left(\|\mathrm{div}_{k}(\mathcal{U}_{N+1 },\mathcal{V}_{N+1})\|_{H^{1}}+m^{-2}\|\partial_{y}\mathcal{P}_{N+1} \|_{L^{2}}\right) \leq C\epsilon^{-\frac{3}{7}}\|\vec{E}_{\mathfrak{Q}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\|_{L^{2}}\] \[\leq C\epsilon^{\frac{2}{7}}\left(\epsilon^{-\frac{1}{7}}\| \mathrm{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{N} \|_{L^{2}}\right),\] (3.80) \[\|\partial_{y}\mathcal{U}_{N+1},k\mathcal{U}_{N+1}\|_{L^{2}}+\| \partial_{y}\mathcal{V}_{N+1},k\mathcal{V}_{N+1}\|_{L^{2}} \leq C\epsilon^{-\frac{5}{7}}\|\vec{E}_{\mathfrak{Q}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\|_{L^{2}}\] \[\leq C\left(\epsilon^{-\frac{1}{7}}\|\mathrm{div}_{k}(\mathcal{U}_{N },\mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{N}\|_{L^{2}}\right),\] (3.81) \[\|(\Delta_{k}\mathcal{U}_{N+1},\Delta_{k}\mathcal{V}_{N+1})\|_{L^{ 2}} \leq C\epsilon^{-\frac{9}{7}}\|\vec{E}_{\mathfrak{Q}}(\varrho_{N+1},\mathfrak{u}_{N+1},\mathfrak{v}_{N+1})\|_{L^{2}}\] \[\leq C\epsilon^{-\frac{4}{7}}\left(\epsilon^{-\frac{1}{7}}\| \mathrm{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\|\mathcal{U}_{N}\|_{ L^{2}}\right). \tag{3.82}\]
Now for \(N=0,1,2,\cdots\), we define
\[E_{N}\stackrel{{\mbox{\tiny def}}}{{=}}\|(m^{-1}\mathcal{P}_{N}, \mathcal{U}_{N},\mathcal{V}_{N})\|_{L^{2}}+\epsilon^{-\frac{1}{7}}\left(\| \mathrm{div}_{k}(\mathcal{U}_{N},\mathcal{V}_{N})\|_{H^{1}}+m^{-2}\|\partial_{ y}\mathcal{P}_{N}\|_{L^{2}}\right). \tag{3.83}\]
From (3.79) and (3.80), it holds that \(E_{N+1}\leq C\epsilon^{\frac{2}{7}}E_{N}\). Then by taking \(\epsilon\ll 1\) sufficiently small, we have
\[\sum_{j=0}^{\infty}E_{j}\leq\sum_{j=0}^{\infty}\left(\frac{1}{2} \right)^{j}E_{0}\leq CE_{0}. \tag{3.84}\]
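Explicitly, once \(\epsilon\) is so small that the contraction constant satisfies \(C\epsilon^{\frac{2}{7}}\leq\frac{1}{2}\), an induction gives \(E_{j}\leq 2^{-j}E_{0}\), and the series is summed by the geometric bound

\[\sum_{j=0}^{\infty}E_{j}\leq E_{0}\sum_{j=0}^{\infty}2^{-j}=2E_{0}.\]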
For the other components, by using the estimates (3.76), (3.77), (3.81) and (3.82), we obtain that
\[\sum_{j=1}^{\infty}\|\mathfrak{u}_{j}\|_{H^{1}}+\|(m^{-2}\varrho_{j },\mathfrak{v}_{j})\|_{H^{2}}+\epsilon^{-\frac{1}{7}}\|\mathrm{div}_{k}( \mathfrak{u}_{j},\mathfrak{v}_{j})\|_{H^{1}}\leq C\epsilon^{-\frac{2}{7}}\! \left(\sum_{j=0}^{\infty}E_{j}\right)\leq C\epsilon^{-\frac{2}{7}}E_{0}, \tag{3.85}\] \[\sum_{j=1}^{\infty}\|\partial_{y}^{2}\mathfrak{u}_{j}\|_{L^{2}} \leq C\epsilon^{-\frac{4}{7}}\!\left(\sum_{j=0}^{\infty}E_{j}\right)\leq C \epsilon^{-\frac{4}{7}}E_{0},\] (3.86) \[\sum_{j=1}^{\infty}\|(\partial_{y}\mathcal{U}_{j},\partial_{y} \mathcal{V}_{j})\|_{L^{2}}\leq C\left(\sum_{j=0}^{\infty}E_{j} \right)\leq CE_{0},\] (3.87) \[\sum_{j=1}^{\infty}\|(\partial_{y}^{2}\mathcal{U}_{j},\partial_{y }^{2}\mathcal{V}_{j})\|_{L^{2}}\leq C\epsilon^{-\frac{4}{7}}\!\left(\sum_{j=0 }^{\infty}E_{j}\right)\leq C\epsilon^{-\frac{4}{7}}E_{0}. \tag{3.88}\]
Next we estimate \(\vec{\Xi}_{0}=(\mathcal{P}_{0},\mathcal{U}_{0},\mathcal{V}_{0})\). Since \(\vec{\Xi}_{0}\) solves the Stokes system (3.58), we can apply Lemma 3.2 to \(\vec{\Xi}_{0}\) with \(q_{0}=0\), \(q_{1}=f_{u}\), and \(q_{2}=f_{v}\). Thus, it holds that
\[E_{0} \leq C\epsilon^{-\frac{3}{7}}\|(f_{u},f_{v})\|_{L^{2}}, \tag{3.89}\] \[\|\partial_{y}\mathcal{U}_{0},k\mathcal{U}_{0}\|_{L^{2}}+\| \partial_{y}\mathcal{V}_{0},k\mathcal{V}_{0}\|_{L^{2}} \leq C\epsilon^{-\frac{5}{7}}\|(f_{u},f_{v})\|_{L^{2}},\] (3.90) \[\|(\partial_{y}^{2}\mathcal{U}_{0},\partial_{y}^{2}\mathcal{V}_{ 0})\|_{L^{2}} \leq C\epsilon^{-\frac{9}{7}}\|(f_{u},f_{v})\|_{L^{2}}. \tag{3.91}\]
Combining estimates (3.84)-(3.91) together gives
\[\|(m^{-1}\rho,u,v)\|_{L^{2}} \leq\sum_{j=0}^{\infty}\|(m^{-1}\mathcal{P}_{j},\mathcal{U}_{j}, \mathcal{V}_{j})\|_{L^{2}}+\sum_{j=1}^{\infty}\|(m^{-1}\varrho_{j},\mathfrak{ u}_{j},\mathfrak{v}_{j})\|_{L^{2}}\] \[\leq C\left(1+\epsilon^{-\frac{2}{7}}\right)E_{0}\leq C\epsilon^{ -\frac{5}{7}}\|(f_{u},f_{v})\|_{L^{2}}, \tag{3.92}\] \[\|m^{-2}\partial_{y}\varrho\|_{L^{2}}+\|\mathrm{div}_{k}(u,v)\|_ {L^{2}} \leq\epsilon^{\frac{1}{7}}\left(\sum_{j=0}^{\infty}E_{j}\right)+ \sum_{j=1}^{\infty}\left(\|m^{-2}\partial_{y}\varrho_{j}\|_{L^{2}}+\| \mathrm{div}_{k}(\mathfrak{u}_{j},\mathfrak{v}_{j})\|_{L^{2}}\right)\] \[\leq C(1+\epsilon^{-\frac{2}{7}})E_{0}\leq C\epsilon^{-\frac{5}{7 }}\|(f_{u},f_{v})\|_{L^{2}},\] (3.93) \[\|(\partial_{y}u,\partial_{y}v)\|_{L^{2}} \leq\sum_{j=0}^{\infty}\|(\partial_{y}\mathcal{U}_{j},\partial_{y }\mathcal{V}_{j})\|_{L^{2}}+\sum_{j=1}^{\infty}\|(\partial_{y}\mathfrak{u}_{j},\partial_{y}\mathfrak{v}_{j})\|_{L^{2}}\] \[\leq C\left(1+\epsilon^{-\frac{2}{7}}\right)E_{0}+C\epsilon^{- \frac{5}{7}}\|(f_{u},f_{v})\|_{L^{2}}\leq C\epsilon^{-\frac{5}{7}}\|(f_{u},f_{ v})\|_{L^{2}},\] (3.94) \[\|(\partial_{y}^{2}u,\partial_{y}^{2}v)\|_{L^{2}} \leq\sum_{j=0}^{\infty}\|(\partial_{y}^{2}\mathcal{U}_{j},\partial _{y}^{2}\mathcal{V}_{j})\|_{L^{2}}+\sum_{j=1}^{\infty}\|(\partial_{y}^{2} \mathfrak{u}_{j},\partial_{y}^{2}\mathfrak{v}_{j})\|_{L^{2}}\] \[\leq C\left(1+\epsilon^{-\frac{4}{7}}\right)E_{0}+C\epsilon^{- \frac{9}{7}}\|(f_{u},f_{v})\|_{L^{2}}\leq C\epsilon^{-\frac{9}{7}}\|(f_{u},f_{ v})\|_{L^{2}}. \tag{3.95}\]
By putting (3.92)-(3.95) together, the inequality (3.2) is obtained.
Finally, we prove the improved estimate (3.4). Suppose that \(f_{u}\) and \(f_{v}\in H^{1}(-1,1)\). As mentioned before, we look for a solution in the form of
\[(\rho,u,v)=(\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})+(\tilde{\rho},\tilde{u },\tilde{v}),\]
where \((\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})\) solves the quasi-compressible system (3.71) that generates the error \(\vec{\mathcal{E}}_{-1}\) defined in (3.72), and \((\tilde{\rho},\tilde{u},\tilde{v})\) solves the resolvent problem \(\mathcal{L}(\tilde{\rho},\tilde{u},\tilde{v})=-\vec{\mathcal{E}}_{-1}\). Now we estimate \((\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})\). Using Corollary 3.1, we have
\[\|\mathfrak{u}_{0}\|_{H^{1}} +\|(m^{-2}\varrho_{0},\mathfrak{v}_{0})\|_{H^{2}}+\epsilon^{-\frac{1}{7}}\| \mathrm{div}_{k}(\mathfrak{u}_{0},\mathfrak{v}_{0})\|_{H^{1}}\] \[\leq C\epsilon^{-\frac{2}{7}}\|\Omega(f_{u},f_{v})\|_{L^{2}}+C \epsilon^{-\frac{1}{7}}\|(f_{u},f_{v})\|_{L^{2}}+C\|\mathrm{div}_{k}(f_{u},f_{v })\|_{L^{2}}, \tag{3.96}\]
and
\[\|\partial_{y}^{2}\mathfrak{u}_{0}\|_{L^{2}}\leq C\epsilon^{-\frac{4}{7}}\| \Omega(f_{u},f_{v})\|_{L^{2}}+C\epsilon^{-\frac{1}{7}}\|(f_{u},f_{v})\|_{L^{2}} +C\|\mathrm{div}_{k}(f_{u},f_{v})\|_{L^{2}}. \tag{3.97}\]
Moreover, to bound \((\tilde{\rho},\tilde{u},\tilde{v})\), we first observe that
\[\|\vec{\mathcal{E}}_{-1}\|_{L^{2}} \leq\|\vec{E}_{\mathfrak{Q}}(\varrho_{0},\mathfrak{u}_{0},\mathfrak{v}_{0})\|_{L^{2}}\leq C \epsilon\|\varrho_{0}\|_{H^{2}}+C\epsilon\|\mathrm{div}_{k}(\mathfrak{u}_{0},\mathfrak{v}_{0})\|_{H ^{1}}\] \[\leq C\epsilon\left(\epsilon^{-\frac{2}{7}}\|\Omega(f_{u},f_{v}) \|_{L^{2}}+\epsilon^{-\frac{1}{7}}\|(f_{u},f_{v})\|_{L^{2}}+\|\mathrm{div}_{ k}(f_{u},f_{v})\|_{L^{2}}\right), \tag{3.98}\]
where we have used (3.96) in the last inequality. Then by applying (3.2) to \((\tilde{\rho},\tilde{u},\tilde{v})\) and using the bound (3.98) on the source \(\vec{\mathcal{E}}_{-1}\), we obtain that
\[\|(m^{-1}\tilde{\rho},\tilde{u},\tilde{v})\|_{L^{2}}+\|(m^{-2} \partial_{y}\tilde{\rho},\partial_{y}\tilde{u},\partial_{y}\tilde{v})\|_{L^{ 2}}+\epsilon^{\frac{4}{7}}\|(\partial_{y}^{2}\tilde{u},\partial_{y}^{2}\tilde{ v})\|_{L^{2}}\] \[\leq C\epsilon^{-\frac{5}{7}}\|\vec{\mathcal{E}}_{-1}\|_{L^{2}} \leq C\|\Omega(f_{u},f_{v})\|_{L^{2}}+C\epsilon^{\frac{1}{7}}\|(f_{u},f_{v})\| _{L^{2}}+C\epsilon^{\frac{2}{7}}\|\mathrm{div}_{k}(f_{u},f_{v})\|_{L^{2}}. \tag{3.99}\]
Putting (3.96), (3.97) and (3.99) together gives
\[\|(m^{-1}\rho,u,v)\|_{L^{2}}+\|(m^{-2}\partial_{y}\rho,\partial_ {y}u,\partial_{y}v)\|_{L^{2}}\] \[\qquad\leq C\epsilon^{-\frac{2}{7}}\|\Omega(f_{u},f_{v})\|_{L^{2}} +C\epsilon^{-\frac{1}{7}}\|(f_{u},f_{v})\|_{L^{2}}+C\|\mathrm{div}_{k}(f_{u},f _{v})\|_{L^{2}}, \tag{3.100}\]
and
\[\|(\partial_{y}^{2}u,\partial_{y}^{2}v)\|_{L^{2}}\leq C\epsilon^{-\frac{4}{7} }\|\Omega(f_{u},f_{v})\|_{L^{2}}+C\epsilon^{-\frac{3}{7}}\|(f_{u},f_{v})\|_{L^ {2}}+C\epsilon^{-\frac{2}{7}}\|\mathrm{div}_{k}(f_{u},f_{v})\|_{L^{2}}. \tag{3.101}\]
Combining (3.100) and (3.101) implies (3.4). The analyticity can be proved as in [50]; we omit the details for brevity. Therefore, the proof of the proposition is completed.
## 4 Dispersion relation
Recall from Sections 2.1 and 2.2 that we have constructed four independent approximate solutions:
\[\vec{\Xi}_{\pm,\mathrm{app}}^{s}\text{ and }\vec{\Xi}_{\pm,\mathrm{app}}^{f}.\]
Here \(\vec{\Xi}_{\pm,\mathrm{app}}^{s}\) are inviscid modes, which are determined in terms of the solutions \(\varphi_{\pm}\) to the Lees-Lin equation (2.5), and \(\vec{\Xi}_{\pm,\mathrm{app}}^{f}\) are viscous modes, which are boundary layers at \(y=\pm 1\). Using Proposition 3.1, we can construct four exact independent solutions to (1.6) near \(\vec{\Xi}_{\pm,\mathrm{app}}^{s}\) and \(\vec{\Xi}_{\pm,\mathrm{app}}^{f}\).
**Proposition 4.1**.: _The eigenvalue problem (1.6) admits four solutions, that is, \(\Xi_{\pm}^{s}=(\rho_{\pm}^{s},u_{\pm}^{s},v_{\pm}^{s})\) and \(\Xi_{\pm}^{f}=(\rho_{\pm}^{f},u_{\pm}^{f},v_{\pm}^{f})\). Moreover, their boundary values satisfy the following asymptotic properties:_
\[v_{+}^{s}(-1)=ik\left(c-\tau k^{2}\right)+O(1)\epsilon^{\frac{5}{7}}|\log \epsilon|,\text{ with }\tau=\frac{1}{U_{s}^{\prime}(-1)}\int_{-1}^{1}U_{s}^{2}(x)\mathrm{d}x, \tag{4.1}\]
\[v^{s}_{+}(1) =ikc+O(1)\epsilon^{\frac{5}{7}}, \tag{4.2}\] \[v^{s}_{-}(\pm 1) =\frac{ik}{U^{\prime}_{s}(\pm 1)}+O(1)\epsilon^{\frac{3}{7}}|\log \epsilon|,\] (4.3) \[u^{s}_{+}(\pm 1) =U^{\prime}_{s}(\pm 1)+O(1)\epsilon^{\frac{1}{7}},\] (4.4) \[u^{s}_{-}(\pm 1) =O(1)|\log\epsilon|,\] (4.5) \[u^{f}_{+}(1) =\frac{Ai(1,\bar{z}_{0})}{Ai(2,\bar{z}_{0})}+O(1)\epsilon^{\frac {1}{7}},\ v^{f}_{+}(1)=ik\bar{\delta},\ v^{f}_{+}(-1)=O(1)\epsilon^{\infty},\ u^{f}_{ +}(-1)=O(1)\epsilon^{\frac{1}{7}},\] (4.6) \[u^{f}_{-}(-1) =-\frac{Ai(1,z_{0})}{Ai(2,z_{0})}+O(1)\epsilon^{\frac{1}{7}},\ v^ {f}_{-}(-1)=ik\delta,\ v^{f}_{-}(1)=O(1)\epsilon^{\infty},\ u^{f}_{-}(1)=O(1) \epsilon^{\frac{1}{7}}. \tag{4.7}\]
_Here \(z_{0}\) and \(\bar{z}_{0}\) are defined in (2.45) and (2.53) respectively._
Proof.: First we construct \(\vec{\Xi}^{s}_{+}\). We look for the solution in the form of
\[\vec{\Xi}^{s}_{+}=\vec{\Xi}^{s}_{+,\text{app}}+\vec{\Xi}^{s}_{+,r},\]
where the approximate solution \(\vec{\Xi}^{s}_{+,\text{app}}\) is defined in (2.28)-(2.30). Recall the error function \(\vec{E}^{s}_{+}\) in (2.58) and decomposition \(\vec{E}^{s}_{+}=\vec{E}^{s}_{+,1}+\vec{E}^{s}_{+,2}\) in (2.60). Then we decompose the remainder accordingly into: \(\vec{\Xi}^{s}_{+,r}=\vec{\Xi}^{s}_{+,r,1}+\vec{\Xi}^{s}_{+,r,2}\). Here, \(\vec{\Xi}^{s}_{+,r,j}=(\rho^{s}_{+,r,j},u^{s}_{+,r,j},v^{s}_{+,r,j})\) satisfies
\[\mathcal{L}(\vec{\Xi}^{s}_{+,r,j})=-\vec{E}^{s}_{+,j},\ \ v^{s}_{+,r,j} \big{|}_{y=\pm 1}=0,\ j=1,2. \tag{4.8}\]
The solvability of \(\vec{\Xi}^{s}_{+,r,j}\) is guaranteed by Proposition 3.1. By using (2.61) and (3.2), we obtain
\[\big{|}u^{s}_{+,r,1}(\pm 1)\big{|}\leq\|u^{s}_{+,r,1}\|_{H^{1}}\leq C \epsilon^{-\frac{5}{7}}\|\vec{E}^{s}_{+,1}\|_{L^{2}}\leq C\epsilon^{\frac{1} {7}}, \tag{4.9}\]
and
\[\big{|}u^{s}_{+,r,2}(\pm 1)\big{|}\leq\|u^{s}_{+,r,2}\|_{H^{1}} \leq C\epsilon^{-\frac{2}{7}}\big{(}\|\Omega(\vec{E}^{s}_{+,2})\| _{L^{2}}+\epsilon^{\frac{1}{7}}\|\vec{E}^{s}_{+,2}\|_{L^{2}}+\epsilon^{\frac{ 2}{7}}\|\text{div}_{k}(\vec{E}^{s}_{+,2})\|_{L^{2}}\big{)}\] \[\leq C\epsilon^{-\frac{2}{7}}\|\vec{E}^{s}_{+,2}\|_{H^{1}}\leq C \epsilon^{\frac{2}{7}}. \tag{4.10}\]
From (2.31), (2.32), (2.34), (4.8), (4.9) and (4.10), it holds that
\[v^{s}_{+}(-1) =v^{s}_{+,\text{app}}(-1)=ik(c-\tau k^{2})+O(1)\epsilon^{\frac{5} {7}}|\log\epsilon|,\] \[v^{s}_{+}(1) =v^{s}_{+,\text{app}}(1)=ikc+O(1)\epsilon^{\frac{5}{7}},\] \[u^{s}_{+}(\pm 1) =u^{s}_{+,\text{app}}(\pm 1)+u^{s}_{+,r,1}(\pm 1)+u^{s}_{+,r,2}(\pm 1 )=U^{\prime}_{s}(\pm 1)+O(1)\epsilon^{\frac{1}{7}}.\]
Thus we have shown the asymptotic properties given in (4.1), (4.2) and (4.4) for the boundary data of \(\vec{\Xi}^{s}_{+}\).
For \(\vec{\Xi}^{s}_{-}\), we recall the approximate solution \(\vec{\Xi}^{s}_{-,\text{app}}\) defined in (2.28)-(2.30). This approximation generates an error \(\vec{E}^{s}_{-}\) which is given in (2.58). To get rid of the error, it is natural to seek the solution \(\vec{\Xi}^{s}_{-}\) in the following form
\[\vec{\Xi}^{s}_{-}=\vec{\Xi}^{s}_{-,\text{app}}+\vec{\Xi}^{s}_{-,r},\]
where the remainder \(\vec{\Xi}^{s}_{-,r}=(\rho^{s}_{-,r},u^{s}_{-,r},v^{s}_{-,r})\) satisfies
\[\mathcal{L}(\vec{\Xi}^{s}_{-,r})=-\vec{E}^{s}_{-},\ \ v^{s}_{-,r}\big{|}_{y= \pm 1}=0. \tag{4.11}\]
By using (3.4) and (2.62), we obtain that
\[\big{|}u^{s}_{-,r}(\pm 1)\big{|}\leq C\|u^{s}_{-,r}\|_{H^{1}} \leq C\epsilon^{-\frac{2}{7}}\left(\|\Omega(\vec{E}^{s}_{-})\|_{L ^{2}}+\epsilon^{\frac{1}{7}}\|\vec{E}^{s}_{-}\|_{L^{2}}+\epsilon^{\frac{2}{7}} \|\mathrm{div}_{k}(\vec{E}^{s}_{-})\|_{L^{2}}\right)\] \[\leq C\epsilon^{-\frac{2}{7}}\|\vec{E}^{s}_{-}\|_{H^{1}}\leq C. \tag{4.12}\]
Then by (2.33), (4.11) and (4.12), we have
\[v^{s}_{-}(\pm 1)=v^{s}_{-,\mathrm{app}}(\pm 1)=\frac{ik}{U^{\prime}_{s}(\pm 1)}+O( 1)\epsilon^{\frac{3}{7}}|\log\epsilon|,\]
and
\[u^{s}_{-}(\pm 1)=u^{s}_{-,\mathrm{app}}(\pm 1)+u^{s}_{-,r}(\pm 1)=O( 1)|\log\epsilon|.\]
This completes the proof of (4.3) and (4.5).
Finally we construct the viscous modes \(\vec{\Xi}^{f}_{\pm}\) near the boundary layer profiles \(\vec{\Xi}^{f}_{\pm,\mathrm{app}}\) defined in (2.50) and (2.52) respectively. Without loss of generality, we consider \(\vec{\Xi}^{f}_{+}\). Recall the error function \(\vec{E}^{f}_{+}\) given in (2.59). We look for the solution in the form of
\[\vec{\Xi}^{f}_{+}=\vec{\Xi}^{f}_{+,\mathrm{app}}+\vec{\Xi}^{f}_{+,r},\]
where \(\vec{\Xi}^{f}_{+,r}\) solves the following resolvent problem:
\[\mathcal{L}(\vec{\Xi}^{f}_{+,r})=-\vec{E}^{f}_{+},\ \ v^{f}_{+,r} \big{|}_{y=\pm 1}=0. \tag{4.13}\]
Using (2.63) and (3.2), we can obtain that
\[|u^{f}_{+,r}(\pm 1)|\leq C\|u^{f}_{+,r}\|_{H^{1}}\leq C\epsilon^{- \frac{2}{7}}\|\vec{E}^{f}_{+}\|_{L^{2}}\leq C\epsilon^{\frac{1}{7}}. \tag{4.14}\]
Then from (2.56), (4.13) and (4.14), it holds that
\[v^{f}_{+}(1)=v^{f}_{+,\mathrm{app}}(1)=ik\bar{\delta},\ v^{f}_{+ }(-1)=v^{f}_{+,\mathrm{app}}(-1)=O(1)\epsilon^{\infty},\] \[u^{f}_{+}(1)=u^{f}_{+,\mathrm{app}}(1)+u^{f}_{+,r}(1)=\frac{ \mathrm{Ai}(1,\bar{z}_{0})}{\mathrm{Ai}(2,\bar{z}_{0})}+O(1)\epsilon^{\frac{ 1}{7}},\] \[u^{f}_{+}(-1)=u^{f}_{+,\mathrm{app}}(-1)+u^{f}_{+,r}(-1)=O(1) \epsilon^{\frac{1}{7}}.\]
Combining these estimates together yields (4.6). The asymptotic expansion (4.7) for \(\vec{\Xi}^{f}_{-}\) can be proved similarly. Therefore, the proof of the proposition is completed.
To match exact boundary conditions at \(y=\pm 1\), we construct a solution \(\vec{\Xi}\) to the eigenvalue problem (1.6) by the following linear combination of \(\vec{\Xi}^{s}_{\pm}\) and \(\vec{\Xi}^{f}_{\pm}\) :
\[\vec{\Xi}=(\rho,u,v)\stackrel{{\mbox{\tiny def}}}{{=}}\vec{\Xi}^ {s}_{+}+\alpha_{1}(c)\vec{\Xi}^{s}_{-}+\alpha_{2}(c)\vec{\Xi}^{f}_{-}+\alpha_{ 3}(c)\vec{\Xi}^{f}_{+}, \tag{4.15}\]
where \(\alpha_{1},\alpha_{2},\alpha_{3}\) are constants. Now we choose \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) so that the following three boundary conditions are satisfied:
\[v(-1) =v_{+}^{s}(-1)+\alpha_{1}v_{-}^{s}(-1)+\alpha_{2}v_{-}^{f}(-1)+ \alpha_{3}v_{+}^{f}(-1)=0, \tag{4.16}\] \[u(-1) =u_{+}^{s}(-1)+\alpha_{1}u_{-}^{s}(-1)+\alpha_{2}u_{-}^{f}(-1)+ \alpha_{3}u_{+}^{f}(-1)=0,\] (4.17) \[v(1) =v_{+}^{s}(1)+\alpha_{1}v_{-}^{s}(1)+\alpha_{2}v_{-}^{f}(1)+ \alpha_{3}v_{+}^{f}(1)=0. \tag{4.18}\]
This reduces to solving the following algebraic system:

\[\mathcal{M}(\alpha_{1},\alpha_{2},\alpha_{3})^{T}=-\bigg{(}v_{+}^{s}(-1),u_{+ }^{s}(-1),v_{+}^{s}(1)\bigg{)}^{T},\]
where the coefficient matrix \(\mathcal{M}\) is given by:
\[\mathcal{M}=\begin{pmatrix}v_{-}^{s}(-1)&v_{-}^{f}(-1)&v_{+}^{f}(-1)\\ u_{-}^{s}(-1)&u_{-}^{f}(-1)&u_{+}^{f}(-1)\\ v_{-}^{s}(1)&v_{-}^{f}(1)&v_{+}^{f}(1)\end{pmatrix}.\]
The following lemma gives the invertibility of \(\mathcal{M}\) and the asymptotic formulas of the coefficients \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) for \(\epsilon\ll 1\). Denote \(\beta=\frac{U_{s}^{\prime}(-1)}{|U_{s}^{\prime}(1)|}\).
**Lemma 4.1**.: _For any \(\epsilon\ll 1\), the matrix \(\mathcal{M}\) is invertible. Moreover, the coefficients \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) in (4.16)-(4.18) have the following asymptotic expansions:_
\[\alpha_{1} =-\delta\left[U_{s}^{\prime}(-1)\right]^{2}\frac{Ai(2,z_{0})}{Ai (1,z_{0})}-U_{s}^{\prime}(-1)(c-\tau k^{2})+O(1)\epsilon^{\frac{3}{7}}, \tag{4.19}\] \[\alpha_{2} =U_{s}^{\prime}(-1)\frac{Ai(2,z_{0})}{Ai(1,z_{0})}+O(1)\epsilon^ {\frac{1}{7}},\] (4.20) \[\alpha_{3} =-\delta^{-1}\left[(1+\beta)\beta^{-\frac{1}{7}}c-\tau\beta^{ \frac{2}{7}}k^{2}\right]-U_{s}^{\prime}(-1)\beta^{\frac{2}{7}}\frac{Ai(2,z_{0} )}{Ai(1,z_{0})}+O(1)\epsilon^{\frac{1}{7}}. \tag{4.21}\]
Proof.: Using the asymptotic expansions of the boundary values in Proposition 4.1, we have
\[\mathcal{M}=\begin{pmatrix}\frac{ik}{U_{s}^{\prime}(-1)}+O(1)\epsilon^{\frac{3}{7}}|\log\epsilon|&ik\delta&O(1)\epsilon^{\infty}\\ O(1)|\log\epsilon|&-\frac{\text{Ai}(1,z_{0})}{\text{Ai}(2,z_{0})}+O(1)\epsilon^{\frac{1}{7}}&O(1)\epsilon^{\frac{1}{7}}\\ \frac{ik}{U_{s}^{\prime}(1)}+O(1)\epsilon^{\frac{3}{7}}|\log\epsilon|&O(1)\epsilon^{\infty}&ik\bar{\delta}\end{pmatrix}.\]
Then we compute the determinant of \(\mathcal{M}\) as follows.
\[\det\mathcal{M}= ik\bar{\delta}\cdot\det\begin{pmatrix}\frac{ik}{U_{s}^{\prime}(-1)}+ O(1)\epsilon^{\frac{3}{7}}|\log\epsilon|&ik\delta\\ O(1)|\log\epsilon|&-\frac{\text{Ai}(1,z_{0})}{\text{Ai}(2,z_{0})}+O(1) \epsilon^{\frac{1}{7}}\end{pmatrix}\] \[+O(1)\epsilon^{\frac{1}{7}}\cdot\det\begin{pmatrix}\frac{ik}{U_{ s}^{\prime}(-1)}+O(1)\epsilon^{\frac{3}{7}}|\log\epsilon|&ik\delta\\ \frac{ik}{U_{s}^{\prime}(1)}+O(1)\epsilon^{\frac{3}{7}}|\log\epsilon|&O(1) \epsilon^{\infty}\end{pmatrix}+O(1)\epsilon^{\infty}\]
The leading-order term of this determinant does not vanish for \(\epsilon\ll 1\), so \(\mathcal{M}\) is invertible, and solving the algebraic system by Cramer's rule yields the expansions (4.19)-(4.21). This proves Lemma 4.1.

It remains to enforce the last boundary condition \(u(1)=0\), which defines a dispersion function \(\mathrm{I}(c)\). We look for a zero of \(\mathrm{I}(c)\) in a small disk \(D\) centered at
\[c_{0}=\frac{\tau\beta T_{0}^{2}}{1+\beta}\epsilon^{\frac{2}{3}}+T_{0}^{-\frac{3}{2}}e^{\frac{1}{2}\pi i}\frac{\left|U_{s}^{\prime}(1)\right|^{\frac{1}{2}}(1+\beta^{\frac{3}{2}})}{\tau^{\frac{1}{2}}\beta^{\frac{1}{2}}(1+\beta)^{\frac{1}{2}}}\epsilon^{\frac{2}{3}}. \tag{4.30}\]
From Proposition 4.1 and (4.19)-(4.21) in Lemma 4.1, we have
\[\mathrm{I}(c)= U_{s}^{\prime}(1)-U_{s}^{\prime}(-1)\beta^{\frac{2}{3}}\frac{ \mathrm{Ai}(2,z_{0})}{\mathrm{Ai}(1,z_{0})}\frac{\mathrm{Ai}(1,\tilde{z}_{0}) }{\mathrm{Ai}(2,\tilde{z}_{0})}\] \[-\delta^{-1}\left[(1+\beta)\beta^{-\frac{1}{3}}c-\tau\beta^{ \frac{3}{2}}k^{2}\right]\frac{\mathrm{Ai}(1,\tilde{z}_{0})}{\mathrm{Ai}(2, \tilde{z}_{0})}+O(1)\epsilon^{\frac{1}{3}}, \tag{4.31}\]
where \(z_{0}\) and \(\tilde{z}_{0}\) are defined in (2.45) and (2.53) respectively. Next we compute the leading order terms of \(\frac{\mathrm{Ai}(1,\tilde{z}_{0})}{\mathrm{Ai}(2,\tilde{z}_{0})}\) and \(\frac{\mathrm{Ai}(2,z_{0})}{\mathrm{Ai}(1,z_{0})}\) as follows. By taking \(T_{0}>1\) sufficiently large, we can find a positive constant \(b_{0}\) independent of \(\epsilon\) and \(T_{0}\), such that for any \(c\in D\), it holds
\[\mathrm{Im}c\geq b_{0}T_{0}^{-\frac{1}{2}}\epsilon^{\frac{2}{3}},\ \ \mathrm{arg}\ c\in\left(0,b_{0}T_{0}^{-\frac{7}{2}}\right),\ \left|c\right|=\frac{\tau\beta T_{0}^{2}}{1+\beta}\epsilon^{\frac{2}{3}}\left(1+O(1)T_{0}^{-\frac{7}{2}}\right). \tag{4.32}\]
From (2.51), we have
\[\tilde{z}_{0}=-\bar{\delta}^{-1}c=-e^{\frac{1}{6}\pi i}\left|U_{s}^{\prime}(1)\right|^{\frac{1}{3}}T_{0}^{\frac{1}{3}}\epsilon^{-\frac{2}{3}}c.\]
If \(c\) is in the disk \(D\), then from (4.32) we can obtain
\[|\tilde{z}_{0}|=\frac{\tau\beta\left|U_{s}^{\prime}(1)\right|^{\frac{1}{3}}}{1+\beta}T_{0}^{\frac{7}{3}}\left(1+O(1)T_{0}^{-\frac{7}{2}}\right),\ \ -\frac{5}{6}\pi<\arg\tilde{z}_{0}<-\frac{5}{6}\pi+b_{0}T_{0}^{-\frac{7}{2}}.\]
Then from the asymptotic behavior of the Airy profile (cf. [22, 23]), it holds that
\[\frac{\mathrm{Ai}(1,\tilde{z}_{0})}{\mathrm{Ai}(2,\tilde{z}_{0})}=-\tilde{z}_{0}^{\frac{1}{2}}+O(1)|\tilde{z}_{0}|^{-1}=-e^{-\frac{5}{12}\pi i}\frac{\tau^{\frac{1}{2}}\beta^{\frac{1}{2}}\left|U_{s}^{\prime}(1)\right|^{\frac{1}{6}}}{(1+\beta)^{\frac{1}{2}}}T_{0}^{\frac{7}{6}}+O(1)T_{0}^{-\frac{7}{3}}. \tag{4.33}\]
Similarly, we have
\[\frac{\mathrm{Ai}(2,z_{0})}{\mathrm{Ai}(1,z_{0})}=\frac{-e^{\frac{5}{12}\pi i}(1+\beta)^{\frac{1}{2}}}{\tau^{\frac{1}{2}}\beta^{\frac{1}{2}}\left[U_{s}^{\prime}(-1)\right]^{\frac{1}{6}}}T_{0}^{-\frac{7}{6}}\left(1+O(1)T_{0}^{-\frac{7}{2}}\right). \tag{4.34}\]
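These Airy asymptotics can be sanity-checked numerically. Here \(\mathrm{Ai}(1,\cdot)\) and \(\mathrm{Ai}(2,\cdot)\) denote iterated primitives of the Airy function (per the paper's earlier sections, not reproduced here); the classical ratio Ai'(z)/Ai(z) obeys the same leading-order law \(-z^{1/2}\) exploited in (4.33)-(4.34), and the sketch below verifies it with SciPy on the same argument range as \(\tilde{z}_{0}\).

```python
import numpy as np
from scipy.special import airy

# Check the leading-order law Ai'(z)/Ai(z) ~ -sqrt(z) for large |z|,
# valid for |arg z| < pi, on arguments with arg z = -5*pi/6.
for r in (10.0, 40.0, 120.0):
    z = r * np.exp(-5j * np.pi / 6)
    ai, aip, _, _ = airy(z)                 # scipy.special.airy accepts complex z
    ratio, leading = aip / ai, -np.sqrt(z)
    print(r, abs(ratio - leading) / abs(leading))  # relative error -> 0
```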
Plugging the asymptotic expansions (4.33) and (4.34) into (4.31), we deduce that
\[\mathrm{I}(c)=\mathrm{I}_{\mathrm{lin}}(c)+O(1)\left(T_{0}^{-\frac{7}{2}}+ \epsilon^{\frac{1}{3}}\right),\]
where the linear function
\[\mathrm{I}_{\mathrm{lin}}(c)=U_{s}^{\prime}(1)-U_{s}^{\prime}(-1)\beta^{\frac{ 1}{2}}+e^{-\frac{1}{2}\pi i}\frac{\tau^{\frac{1}{2}}\left|U_{s}^{\prime}(1) \right|^{\frac{1}{2}}\beta^{\frac{5}{2}}}{(1+\beta)^{\frac{1}{2}}}T_{0}^{ \frac{3}{2}}\left[(1+\beta)\beta^{-\frac{1}{3}}c-\tau\beta^{\frac{2}{3}}k^{2} \right].\]
By using the same argument as in [50], one can show that the function \(\mathrm{I}(c)\) is analytic in the disk \(D\). Moreover, one can check that \(c_{0}\) given in (4.30) is the unique zero of the linear function \(\mathrm{I}_{\mathrm{lin}}(c)\). On the circle \(\partial D\), we have
\[|\mathrm{I}_{\mathrm{lin}}(c)|=\tau^{\frac{1}{2}}\left|U_{s}^{\prime}(1)\right|^ {\frac{1}{2}}\beta^{\frac{1}{2}}(1+\beta)^{\frac{1}{2}}T_{0}^{-\frac{1}{2}},\]
\[|\mathrm{I}(c)-\mathrm{I}_{\mathrm{lin}}(c)|\leq C\left(T_{0}^{-\frac{7}{2}}+ \epsilon^{\frac{1}{7}}\right)\leq\frac{1}{2}\tau^{\frac{1}{2}}\left|U_{s}^{ \prime}(1)\right|^{\frac{1}{2}}\beta^{\frac{1}{2}}(1+\beta)^{\frac{1}{2}}T_{0} ^{-\frac{1}{2}}\leq\frac{1}{2}\left|\mathrm{I}_{\mathrm{lin}}(c)\right|.\]
By Rouché's theorem, the functions \(\mathrm{I}(c)\) and \(\mathrm{I}_{\mathrm{lin}}(c)\) have the same number of zeros in \(D\). Thus, there exists a unique \(c\in D\) such that (4.28) defines a solution of the eigenvalue problem (1.6) with boundary conditions (1.7). Therefore, the proof of Theorem 1.1 is completed.
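The winding-number computation behind this Rouché argument is easy to reproduce numerically. The sketch below counts zeros inside a disk via the argument principle for a toy analytic function standing in for \(\mathrm{I}(c)\); the coefficients are made up for illustration.

```python
import numpy as np

# Count zeros of an analytic f inside |c - c0| <= r via the winding number
# of f along the circle (argument principle). f_lin mimics I_lin; the
# perturbation is small on the circle, so Rouche gives the same count.
def winding_number(f, c0, r, n=4096):
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    arg = np.unwrap(np.angle(f(c0 + r * np.exp(1j * theta))))
    return int(round((arg[-1] - arg[0]) / (2.0 * np.pi)))

c0, r = 0.3 + 0.05j, 0.1
f_lin = lambda c: (5.0 + 1.0j) * (c - c0)               # one zero, at c0
f = lambda c: f_lin(c) + 0.2 * (np.exp(c - c0) - 1.0)   # |pert| < |f_lin| on the circle
print(winding_number(f_lin, c0, r), winding_number(f, c0, r))  # 1 1
```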
## 5 Appendices
First of all, we give in the following lemma the boundary values of \(\varphi_{-}\) defined in (2.7).
**Lemma 5.1**.: _For sufficiently small \(|c|\), the boundary values of \(\varphi_{-}(y)\) satisfy:_
\[\varphi_{-}(\pm 1) =\frac{-1}{U_{s}^{\prime}(\pm 1)}+O(1)|c\log\mathrm{Im}c|, \tag{5.1}\] \[\varphi_{-}^{\prime}(\pm 1) =O(1)|\log\mathrm{Im}c|. \tag{5.2}\]
Proof.: We only show (5.1) and (5.2) for \(y=1\), since the boundary value at \(y=-1\) can be obtained in the same way. Evaluating (2.7) at \(y=1\), we have
\[\varphi_{-}(1) =(U_{s}(1)-c)\int_{0}^{1}\frac{1}{(U_{s}(x)-c)^{2}}\mathrm{d}x-m ^{2}(U_{s}(1)-c)\] \[=-c\int_{0}^{1}\frac{1}{(U_{s}(x)-c)^{2}}\mathrm{d}x+m^{2}c\] \[=-c\int_{\frac{1}{2}}^{1}\frac{1}{(U_{s}(x)-c)^{2}}\mathrm{d}x-c \int_{0}^{\frac{1}{2}}\frac{1}{(U_{s}(x)-c)^{2}}\mathrm{d}x+O(1)|c|. \tag{5.3}\]
For \(x\in\left(0,\frac{1}{2}\right)\), we have \(U_{s}(x)=1-x^{2}\geq\frac{3}{4}\). Then the second integral in (5.3) satisfies
\[\bigg{|}\int_{0}^{\frac{1}{2}}\frac{c}{(U_{s}(x)-c)^{2}}\mathrm{d}x\bigg{|} \leq\frac{|c|}{2\left(\frac{9}{16}-|c|^{2}\right)}\leq O(1)|c|,\text{ for }|c|\ll 1. \tag{5.4}\]
When \(x\in\left(\frac{1}{2},1\right)\), \(U_{s}\) is non-degenerate because \(|U_{s}^{\prime}(x)|=2x\geq 1\). Thus, we can compute the first integral in (5.3) as
\[-\int_{\frac{1}{2}}^{1}\frac{c}{(U_{s}(x)-c)^{2}}\mathrm{d}x=\int _{\frac{1}{2}}^{1}\frac{\mathrm{d}}{\mathrm{d}x}\bigg{(}\frac{c}{U_{s}(x)-c} \bigg{)}\frac{1}{U_{s}^{\prime}(x)}\mathrm{d}x\] \[\qquad=\frac{c}{U_{s}^{\prime}(1)(U_{s}(1)-c)}-\frac{c}{U_{s}^{ \prime}\left(\frac{1}{2}\right)\left(U_{s}\left(\frac{1}{2}\right)-c\right)} -\int_{\frac{1}{2}}^{1}\frac{c}{U_{s}(x)-c}\bigg{(}\frac{1}{U_{s}^{\prime}(x )}\bigg{)}^{\prime}\mathrm{d}x\] \[\qquad=-\frac{1}{U_{s}^{\prime}(1)}+c\int_{\frac{1}{2}}^{1}\frac {\mathrm{d}}{\mathrm{d}x}(\log(U_{s}(x)-c))\frac{U_{s}^{\prime\prime}(x)}{(U_{ s}^{\prime}(x))^{3}}+O(1)|c|\] \[\qquad=-\frac{1}{U_{s}^{\prime}(1)}+c\log(U_{s}(y)-c)\frac{U_{s}^ {\prime\prime}(y)}{(U_{s}^{\prime}(y))^{3}}\bigg{|}_{y=\frac{1}{2}}^{y=1}-c \int_{\frac{1}{2}}^{1}\log(U_{s}(x)-c)\left(\frac{U_{s}^{\prime\prime}}{(U_{s} ^{\prime})^{3}}\right)^{\prime}\mathrm{d}x+O(1)|c|\] \[\qquad=-\frac{1}{U_{s}^{\prime}(1)}+O(1)|c\log\mathrm{Im}c|. \tag{5.5}\]
Plugging (5.4) and (5.5) into (5.3), we obtain:
\[\varphi_{-}(1)=-\frac{1}{U_{s}^{\prime}(1)}+O(1)|c\log\mathrm{Im}c|,\]
which is (5.1). Next we show (5.2). Differentiating (2.7) yields
\[\varphi_{-}^{\prime}(y)=\frac{U_{s}^{\prime}(y)\varphi_{-}(y)+1}{U_{s}(y)-c}-m^{2}(U_{s}(y)-c). \tag{5.6}\]
Evaluating both sides of (5.6) at \(y=1\) gives
\[\varphi_{-}^{\prime}(1) =\frac{U_{s}^{\prime}(1)\varphi_{-}(1)+1}{U_{s}(1)-c}-m^{2}(U_{s}( 1)-c)\] \[=\frac{O(1)|c\log\mathrm{Im}c|}{-c}+m^{2}c\] \[=O(1)|\log\mathrm{Im}c|,\]
which is (5.2). The proof of Lemma 5.1 is completed.
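As a numerical sanity check of (5.1), for the Poiseuille profile \(U_{s}(y)=1-y^{2}\) one has \(-1/U_{s}^{\prime}(1)=1/2\), and the integral in (5.3) can be evaluated directly; the values of \(m\) and \(c\) below are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Check phi_-(1) = -c * int_0^1 (U_s(x) - c)^{-2} dx + m^2 c (see (5.3))
# against -1/U_s'(1) = 1/2 for U_s(y) = 1 - y^2.
m, c = 0.3, 1e-3j                                 # small c with Im c > 0
f = lambda x: 1.0 / (1.0 - x**2 - c) ** 2
integral = (quad(lambda x: f(x).real, 0.0, 1.0, limit=500)[0]
            + 1j * quad(lambda x: f(x).imag, 0.0, 1.0, limit=500)[0])
print(-c * integral + m**2 * c)   # approx 0.5, up to O(|c log Im c|)
```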
The following lemma concerns some bounds on \(\varphi_{-}\).
**Lemma 5.2**.: _There exists a positive constant \(\gamma\), such that if \(c\) lies in the half disk \(\{\mathrm{Im}c>0,|c|\leq\gamma\}\), then \(\varphi_{-}\) satisfies the following estimates._
1. _If_ \(y\in\left(-\frac{1}{2},\frac{1}{2}\right)\)_, we have_ \[\left|\partial_{y}^{k}\varphi_{-}(y)\right|\leq C,\;k=0,1,2,3,4.\] (5.7)
2. _If_ \(y\in\left(-1,1\right)\setminus\left(-\frac{1}{2},\frac{1}{2}\right)\)_, we have the following pointwise estimate_ \[\left|\varphi_{-}(y)\right|\leq C,\;\varphi_{-}^{\prime}(y)=\frac{-U_{s}^{\prime\prime}(y)}{\left(U_{s}^{\prime}(y)\right)^{2}}\log\left(U_{s}(y)-c\right)+O(1),\] \[\varphi_{-}^{\prime\prime}(y)=-\frac{U_{s}^{\prime\prime}(y)}{U_{s}^{\prime}(y)\left(U_{s}(y)-c\right)}-\frac{\left(U_{s}^{\prime\prime}(y)\right)^{2}}{\left(U_{s}^{\prime}(y)\right)^{3}}\log\left(U_{s}(y)-c\right)+O(1),\] (5.8) \[\varphi_{-}^{\prime\prime\prime}(y)=\frac{U_{s}^{\prime\prime}(y)}{\left(U_{s}(y)-c\right)^{2}}+O(1),\;\partial_{y}^{4}\varphi_{-}=-\frac{2U_{s}^{\prime}U_{s}^{\prime\prime}}{\left(U_{s}-c\right)^{3}}.\]
3. _Furthermore, it holds that_ \[\|\varphi_{-}\|_{L^{\infty}}\leq C,\;\|\varphi_{-}^{\prime}\|_{L^{\infty}}\leq C\left|\log\mathrm{Im}c\right|,\;\|\varphi_{-}^{\prime}\|_{L^{2}}\leq C,\] (5.9) \[\|\partial_{y}^{j}\varphi_{-}\|_{L^{\infty}}\leq C\left|\mathrm{Im}c\right|^{-j+1},\;j=2,3,4,\] \[\|\partial_{y}^{j}\varphi_{-}\|_{L^{2}}\leq C\left|\mathrm{Im}c\right|^{-j+\frac{3}{2}},\;j=2,3,4.\]
Proof.: By differentiating (2.7) up to the fourth order, we obtain
\[\varphi_{-}^{\prime} =U_{s}^{\prime}\int_{0}^{y}\frac{1}{(U_{s}-c)^{2}}\mathrm{d}x+\frac{1}{U_{s}-c}-m^{2}\left(yU_{s}^{\prime}+U_{s}-c\right), \tag{5.10}\] \[\varphi_{-}^{\prime\prime} =U_{s}^{\prime\prime}\int_{0}^{y}\frac{1}{(U_{s}-c)^{2}}\mathrm{d}x-m^{2}\left(yU_{s}^{\prime\prime}+2U_{s}^{\prime}\right),\] \[\varphi_{-}^{\prime\prime\prime} =\frac{U_{s}^{\prime\prime}}{(U_{s}-c)^{2}}-3m^{2}U_{s}^{\prime\prime},\;\partial_{y}^{4}\varphi_{-}=-\frac{2U_{s}^{\prime}U_{s}^{\prime\prime}}{(U_{s}-c)^{3}}.\]
Here we have used \(\partial_{y}^{j}U_{s}=0\) for \(j=3,4\). For \(y\in\left(-\frac{1}{2},\frac{1}{2}\right)\), \(|U_{s}-c|\gtrsim 1\), so (5.7) follows from the explicit formulas (5.10). For \(y\in(-1,1)\setminus\left(-\frac{1}{2},\frac{1}{2}\right)\), as in (5.5) we can obtain the following pointwise estimate
\[\int_{0}^{y}\frac{1}{(U_{s}-c)^{2}}\mathrm{d}x=-\frac{1}{U_{s}^{ \prime}(y)(U_{s}(y)-c)}-\log(U_{s}(y)-c)\frac{U_{s}^{\prime\prime}(y)}{(U_{s}^ {\prime}(y))^{3}}+O(1). \tag{5.11}\]
Substituting (5.11) into the explicit formula (5.10), we obtain pointwise estimates (5.8). The bounds (5.9) on \(L^{2}\) and \(L^{\infty}\)-norms can be obtained in the same way as in [36] and we omit the details for brevity. The proof of Lemma 5.2 is completed.
Finally, we have the following lemma concerning the weight function \(w(y)\) defined in (3.12). Set
\[\mathcal{W}(y)=-(1-m^{2}U_{s}^{2})U_{s}^{\prime\prime}-2m^{2}U_{s }|U_{s}^{\prime}|^{2}. \tag{5.12}\]
**Lemma 5.3**.: _Let the Mach number \(m\in\left(0,\frac{1}{\sqrt{3}}\right)\). For sufficiently small \(|c|\ll 1\), the weight function \(w\) has the following expansion_
\[w(y)=w_{0}(y)+cw_{1}(y)+O(1)|c|^{2}, \tag{5.13}\]
_where_
\[w_{0} =(1-m^{2}U_{s}^{2})^{2}\mathcal{W}^{-1},\] \[w_{1} =4m^{2}U_{s}(1-m^{2}U_{s}^{2})\mathcal{W}^{-1}-2m^{2}(1-m^{2}U_{ s}^{2})^{2}\left(|U_{s}^{\prime\prime}|U_{s}+|U_{s}^{\prime}|^{2}\right).\]
_Moreover, there exists a positive constant \(\gamma_{0}\), such that_
\[w_{0}-U_{s}w_{1}\geq\gamma_{0}. \tag{5.14}\]
Proof.: We show that \(\mathcal{W}(y)\) has a strictly positive lower bound, hence \(w_{0}\) and \(w_{1}\) are well-defined. Then the expansion (5.13) follows from a straightforward computation. We take the plane Poiseuille flow \(U_{s}(y)=1-y^{2}\) as an example. Then
\[\mathcal{W}(y)=2(1-m^{2})+6m^{2}y^{2}\left(y^{2}-\frac{2}{3} \right).\]
Therefore, \(\min_{y\in[-1,1]}\mathcal{W}(y)=\mathcal{W}(y)|_{y=\pm\frac{1}{\sqrt{3}}}=2 \left(1-\frac{4m^{2}}{3}\right)\), which is positive when the Mach number \(m\in\left(0,\frac{\sqrt{3}}{2}\right)\). For (5.14), following the same computation as (3.29) in [50], one has \(w_{0}-U_{s}w_{1}\sim 1-3m^{2}U_{s}^{2}\). Therefore, \(w_{0}-U_{s}w_{1}\) has a strictly positive lower bound when the Mach number \(m\in\left(0,\frac{1}{\sqrt{3}}\right)\). The proof of Lemma 5.3 is completed.
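The positivity claims in this proof are easy to confirm numerically; the sketch below checks that \(\min_{y}\mathcal{W}(y)\) equals \(2(1-4m^{2}/3)\) for the Poiseuille profile.

```python
import numpy as np

# Verify min W(y) = 2(1 - 4 m^2 / 3) for U_s = 1 - y^2, cf. (5.12).
y = np.linspace(-1.0, 1.0, 200001)
for m in (0.1, 0.3, 1.0 / np.sqrt(3.0) - 1e-3):
    W = 2.0 * (1.0 - m**2) + 6.0 * m**2 * y**2 * (y**2 - 2.0 / 3.0)
    print(m, W.min(), 2.0 * (1.0 - 4.0 * m**2 / 3.0))  # agree; both > 0
```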
**Acknowledgment.** The research of Yang was supported by the Hong Kong PhD Fellowship. The research of Zhang was supported by the Start-up Fund (P0043862), and by the Research Centre for Nonlinear Analysis of The Hong Kong Polytechnic University. |
2309.14371 | Deep learning based workflow for accelerated industrial X-ray Computed
Tomography | X-ray computed tomography (XCT) is an important tool for high-resolution
non-destructive characterization of additively-manufactured metal components.
XCT reconstructions of metal components may have beam hardening (BH) artifacts such
as cupping and streaking, which make reliable detection of flaws and defects
challenging. Furthermore, traditional workflows based on using analytic
reconstruction algorithms require a large number of projections for accurate
characterization - leading to longer measurement times and hindering the
adoption of XCT for in-line inspections. In this paper, we introduce a new
workflow based on the use of two neural networks to obtain high-quality
accelerated reconstructions from sparse-view XCT scans of single material metal
parts. The first network, implemented using fully-connected layers, helps
reduce the impact of BH in the projection data without the need of any
calibration or knowledge of the component material. The second network, a
convolutional neural network, maps a low-quality analytic 3D reconstruction to
a high-quality reconstruction. Using experimental data, we demonstrate that our
method robustly generalizes across several alloys, and for a range of sparsity
levels without any need for retraining the networks thereby enabling accurate
and fast industrial XCT inspections. | Obaidullah Rahman, Singanallur V. Venkatakrishnan, Luke Scime, Paul Brackman, Curtis Frederick, Ryan Dehoff, Vincent Paquit, Amirkoushyar Ziabari | 2023-09-24T00:43:34Z | http://arxiv.org/abs/2309.14371v1 | # Deep Learning Based Workflow for Accelerated Industrial X-Ray Computed Tomography
###### Abstract
X-ray computed tomography (XCT) is an important tool for high-resolution non-destructive characterization of additively-manufactured metal components. XCT reconstructions of metal components may have beam hardening (BH) artifacts such as cupping and streaking, which make reliable detection of flaws and defects challenging. Furthermore, traditional workflows based on using analytic reconstruction algorithms require a large number of projections for accurate characterization - leading to longer measurement times and hindering the adoption of XCT for in-line inspections. In this paper, we introduce a new workflow based on the use of two neural networks to obtain high-quality accelerated reconstructions from sparse-view XCT scans of single material metal parts. The first network, implemented using fully-connected layers, helps reduce the impact of BH in the projection data without the need for any calibration or knowledge of the component material. The second network, a convolutional neural network, maps a low-quality analytic 3D reconstruction to a high-quality reconstruction. Using experimental data, we demonstrate that our method robustly generalizes across several alloys and for a range of sparsity levels without any need for retraining the networks, thereby enabling accurate and fast industrial XCT inspections.
Obaidullah Rahman, Singanallur V. Venkatakrishnan, Luke Scime, Paul Brackman, Curtis Frederick, Ryan Dehoff, Vincent Paquit, Amirkoushyar Ziabari\({}^{\dagger}\)+
Footnote †: Corresponding author’s email address: [email protected]_. This manuscript has been authored by UT-Battelle, LLC, under contract DE-ACOS-000OR22725 with the US Department of Energy (DOE).Research sponsored by the US Department of Energy, Office of Energy Efficiency and Renewable Energy, Advanced Manufacturing Office and Technology Commercialization Fund (TCF-21-24881), under contract DE-ACOS-00OR22725 with UT-Battelle, LLC. The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([http://energy.gov/download/doc-public-access-plan](http://energy.gov/download/doc-public-access-plan)).
\({}^{\dagger}\)Oak Ridge National Lab (ORNL), Oak Ridge, TN 37830
\({}^{\star}\)Carl Zeiss Industrial Metrology, LLC, Maple Grove, MN 55369, USA
## 1 Introduction
Additive manufacturing (AM), also known as 3D printing, is an important process for printing complex components that cannot be manufactured via traditional machining methods. It is known that during the AM process, flaws such as pores (holes), cracks, and other defects may form in the printed objects [1, 2], which could compromise their performance, and therefore advanced characterization techniques are critical for qualification and certification of the components. XCT is a powerful method for non-destructive characterization of flaws and defects in 3D at high resolution. However, it is challenging to use XCT for the characterization of a large number of metal AM components.
Traditional workflows for XCT imaging based on using analytic reconstruction algorithms such as FDK [3] require a large number of projections - leading to longer measurement times and incurring significant labor and cost. Attempts to accelerate the XCT scans by acquiring only a fraction of the typically-made measurements lead to strong artifacts in the reconstructions and produce incorrect characterization results. Model-based iterative reconstruction (MBIR) algorithms can help reduce scan time by producing high-quality reconstructions from sparse scans [4]; but they require significant computation, making them infeasible to use when characterizing hundreds of components. Another challenge with using industrial XCT systems for metal AM parts is that there can be significant artifacts in the reconstructions due to beam hardening (BH) from the interaction of a polychromatic X-ray source with dense metals. Several deep learning approaches have been developed to address these challenges [5, 6, 7, 4] for metal AM. Those methods handle the beam hardening, sparse scan artifact reduction and denoising using the same network. This could complicate the training process, especially when materials of different density levels are being used during training, and also impact the performance of the network. In addition, the beam hardening correction for training data generation is material and X-ray spectrum dependent.
In this paper we present a two-stage deep learning-based workflow that disentangles beam hardening correction from artifact reduction while enabling high-quality reconstruction from sparse scans. Specifically, we use a neural network (NN) to estimate the beam hardening related parameters, correct the acquired data based on the outputs of the NN, perform a FDK reconstruction on the corrected data and use a convolutional neural network to map the FDK reconstruction to a high-quality reconstruction. The first network is trained purely based on a model (requires no measurements/calibration data), allowing for a material-agnostic beam hardening correction. The second network is trained using a collection of densely-sampled measurements from different materials and scan settings. Furthermore, by removing the bias term from the convolutional neural network [8], we attempt to improve
the generalization. We shed light on the potential generalizability of our approach using several sparse-view XCT data sets of metal parts. We demonstrate that our approach produces high-quality reconstructions across a variety of samples relevant to AM applications, including those that are different from the type of samples used to train the neural networks.
## 2 Method
In this section, we present details of the two-stage neural network-based workflow for BH artifact and sparse-view artifact suppression (see Fig. 1). The first step is the BH correction network (BHCN), a neural network trained on experimentally-driven synthesized values to correct for BH in the projection data. The second step is an artifact-reduction deep neural network that reduces noise and artifacts in the FDK reconstruction and approximates an MBIR-like dense-view reconstruction [9, 10]. In the following subsections, we describe the two neural networks used in the proposed method.
### Beam hardening correction network (BHCN)
Our method to correct for BH in single-material scans, BHCN [11], consists of the following steps:
* Use BHCN to map _each_ projection value (\(p\)) and estimated thickness value (\(d\)) to the BH model parameters
* Average the model parameters estimated by the NN for all the (\(p,d\)) values
* Use the averaged model parameters to compute linearization polynomial
We train the proposed network solely on synthetically-generated data using the bimodal energy model for BH developed by Van de Casteel et al. [12], who demonstrate that BH can be modeled using two dominant X-ray energies, \(E_{1}\) and \(E_{2}\). If \(\mu_{1}\) and \(\mu_{2}\) are the linear attenuation coefficients (LAC) of the material at these energies, the BH-affected projection is modeled as
\[p_{bh}=\mu_{2}d+\ln\frac{1+\alpha}{1+\alpha e^{-(\mu_{1}-\mu_{2})d}} \tag{1}\]
where \(d\) is the distance the X-ray beam traverses within the material (thickness). \(\alpha\) represents the ratio of the product of approximate X-ray spectrum value and detector efficiency at \(E_{1}\) and \(E_{2}\). The ideal (BH-free) projection varies linearly with distance and is given by
\[p_{bhc}=\frac{\alpha\mu_{1}+\mu_{2}}{1+\alpha}d \tag{2}\]
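For concreteness, the two model formulas can be evaluated directly; in the sketch below, the parameter values (alpha, mu1, mu2) and the thickness range are illustrative, not calibrated to any material.

```python
import numpy as np

# Bimodal beam-hardening model (1) and its linearized counterpart (2).
def p_bh(d, alpha, mu1, mu2):
    return mu2 * d + np.log((1.0 + alpha) /
                            (1.0 + alpha * np.exp(-(mu1 - mu2) * d)))

def p_bhc(d, alpha, mu1, mu2):
    return (alpha * mu1 + mu2) / (1.0 + alpha) * d

d = np.linspace(0.0, 20.0, 100)              # traversed thickness
print(p_bh(d, 0.5, 0.8, 0.3)[-1], p_bhc(d, 0.5, 0.8, 0.3)[-1])
```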
#### 2.1.1 Training
We used a NN of fully connected layers, which we call beam hardening correction network (BHCN) shown in Fig. 2.
The input layer consists of 2 nodes for the BH-affected projection value and the associated thickness, and the output layer consists of 3 nodes for the parameters of the model in (1). Samples of \(d^{tr},\ \alpha^{tr},\mu_{1}^{tr},\ \mu_{2}^{tr}\) are uniformly drawn from a realistic range and used to calculate \(p^{tr}\) using Eq. 1. This range was inferred from past measurements of the densest and lightest alloys in our facility, extended about \(20\%\) above and \(15\%\) below their range. We then input the pair (\(p^{tr}\), \(d^{tr}\)) to BHCN and train it by minimizing the error between the output (\(\alpha^{out}\), \(\mu_{1}^{out}\), \(\mu_{2}^{out}\)) and the target (\(\alpha^{tr}\), \(\mu_{1}^{tr}\), \(\mu_{2}^{tr}\)).
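A minimal training sketch in PyTorch is shown below; the layer widths, sampling ranges, batch size, and iteration count are assumptions for illustration (the actual architecture is fixed by Fig. 2).

```python
import torch
import torch.nn as nn

# Minimal BHCN training sketch on synthetic (p, d) -> (alpha, mu1, mu2) pairs.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3))                # -> (alpha, mu1, mu2)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    d = torch.rand(256, 1) * 20.0                    # assumed thickness range
    alpha = torch.rand(256, 1) * 2.0
    mu1 = 0.2 + torch.rand(256, 1)                   # assumed LAC range
    mu2 = mu1 * torch.rand(256, 1)                   # ensures mu2 < mu1
    p = mu2 * d + torch.log((1 + alpha) / (1 + alpha * torch.exp(-(mu1 - mu2) * d)))
    target = torch.cat([alpha, mu1, mu2], dim=1)
    loss = nn.functional.mse_loss(net(torch.cat([p, d], dim=1)), target)
    opt.zero_grad(); loss.backward(); opt.step()
```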
#### 2.1.2 Inference
To obtain the parameters of the Van de Casteel model [12] from the BHCN, we first reconstruct the measured data using the FDK algorithm [3]. Next, we perform a binary segmentation of this reconstruction using Otsu's algorithm [13] and forward project it to obtain an estimate of the thickness corresponding to each measured projection. Then, the BH-affected projection and thickness estimates are fed into the BHCN to obtain estimates of vectors \(\alpha\), \(\mu_{1}\) and \(\mu_{2}\). The mean of the outputs corresponding to each input is used to compute the final linearization polynomial. The projections are then corrected for BH, followed by the use of FDK to obtain a 3D reconstruction.
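The final linearization step can be sketched as follows; the averaged parameter values and the polynomial degree are assumptions.

```python
import numpy as np

# Fit a polynomial mapping BH-affected projections back to linear ones,
# given the averaged (alpha, mu1, mu2) estimated by BHCN.
alpha, mu1, mu2 = 0.5, 0.8, 0.3                  # averaged BHCN outputs
d = np.linspace(0.0, 20.0, 400)
pbh = mu2 * d + np.log((1 + alpha) / (1 + alpha * np.exp(-(mu1 - mu2) * d)))
pbhc = (alpha * mu1 + mu2) / (1 + alpha) * d
coeffs = np.polyfit(pbh, pbhc, deg=3)            # linearization polynomial
corrected = np.polyval(coeffs, pbh)              # apply to measured data
```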
### Deep Learning MBIR (DL-MBIR)
For the artifact and noise suppression network, we use a modified [10] U-Net [14] as shown in Fig. 3.
We 3D-printed components using a CAD design that consists of complex structures with a cylindrical base of 15 mm diameter, and 3 separate fixtures of different sizes, namely fins, inclines, and rods; and made with three materials - steel, Aluminum
Figure 1: Block diagram of our proposed workflow consisting of beam hardening correction based on a network (BHCN), analytic reconstruction (FDK) and a DNN for suppressing sparse-view artifacts and noise (DL-MBIR).
Figure 2: Illustration of the BHCN architecture used for estimating the parameters of the beam hardening model in [12]. The measured projection value and corresponding estimated thickness are provided as inputs and the network estimates the BH model parameters for each measurement in the CT scan. The predicted model parameters are then averaged to obtain a single set of \(\alpha,\mu_{1}\) and \(\mu_{2}\) values which are then used to compute a linearization polynomial.
Cerium (AlCe) and Inconel [15]. We measured each part using a densely-sampled XCT scan, and then chose three of the data sets to generate training data for the neural networks. Detailed measurement settings are provided in Table 1.
For generating training data, pairs of corrupted (with noise, BH and other artifacts) and clean reconstruction data are obtained by using the FDK and MBIR algorithms respectively. To create training data, the samples were scanned densely using 500 to 1000 projections and reconstructed using MBIR with BH correction applied (section 2.1) to create various reference data. Then, we sub-sampled the number of views by a factor of 3 or 4 for each data set and reconstructed it using the FDK algorithm with BH correction applied to create noisy input data for training. The U-Net-based network [10] is then trained on pairs of FDK and MBIR data discussed above to learn to suppress the noise and artifacts from the FDK input and produce a high-quality output approximating the MBIR method with a reduction in BH artifacts. We hypothesize that training the network on a variety of data with different settings, materials and sparsity conditions allows the network to better adapt to in- and out-of-distribution data.
## 3 Results
We used the approach from [4, 7] as the reference beam hardening correction method. This algorithm leverages the available CAD model of the part and a physics-based model to extract the BH parameters of an alloy, which are then used to correct for beam hardening. However, it is not universal and needs to be recalculated for every new alloy. We used 3D-printed components of several alloys such as AlCe, steel, Inconel, and NiCo to test the proposed method.
The standard scan time and number of projections are typically determined by the expert operator using the XCT system to address the trade-off between measurement time (cost) and quality of the reconstruction. The typical durations used for XCT scanning of the materials and geometries discussed in this work are between 30 and 60 minutes. The scan time is calculated by multiplying the number of views, the integration time, and the number of images per view, as noted in Table 1. With our method we are able to accelerate the XCT scans by up to 70% while obtaining reconstructions that are visually and quantitatively similar to the reference method. The description of the images that will be discussed in the next sub-sections can be found in Table 3.
In the comparison figures, we can see that in the left column, which is the uncorrected reconstruction, the defects have low contrast because of beam hardening and noise from the sparse scan. The center column, which is our method, clearly shows an advantage: the defects are more noticeable and the noise is suppressed, which should lead to more accurate detection of the defect structures. The right column is the reference image.
We also used our method on data sets from alloys that were out-of-distribution (OOD) with respect to the training data. Those include NiCo and Inconel, the latter of which was printed with a different 3D printer than the one used during training and scanned at a different setting. Fig. 6 shows the results for the OOD data, highlighting the robustness of the method. While we have not acquired the long dense scan required for creating a reference for the test Inconel part, we can qualitatively observe the improved quality of the reconstruction with respect to the input FDK reconstruction. Specifically, our method suppresses the BH artifact and noise, and highlights the defects. Our method also reduces artifacts on NiCo, which is a significantly denser material compared to everything that was used during training. However, in the NiCo case, we notice that our method misses some pores that are present in the reference image RoI, implying that more comprehensive training is needed to make DL-MBIR more general. Finally, we compare the deviation of the reconstructions w.r.t. the reference using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) [16] in Table 2. Our method has higher PSNR and SSIM than the uncorrected reconstructions across all the alloys, which further demonstrates the effectiveness and robustness of our method.
## 4 Conclusion
In this paper we have presented a neural network-based workflow that enables high-quality XCT reconstructions of dense metal parts from industrial XCT systems while reducing scan time and producing results in a time comparable to traditional reconstruction but with higher quality. The proposed workflow allows for disentangling the beam hardening correction from the denoising (and sparse scan artifact reduction) process. This in turn enables a robust, material-agnostic BH correction, while suppressing the artifacts due to the sparse-scanning strategy that is used to accelerate the scans and enable high-throughput inspections. Our experiments highlight that our workflow generalizes well to out-of-distribution data - a key requirement to be able to deploy deep neural network-based solutions for real-world systems. Future work includes a more comprehensive training of the network, while experimenting across a wide range of materials and measurement settings.
## 5 Acknowledgement
The 3D printing of Inconel data used for training was supported in a research collaboration with Dr. Brian Fisher from Raytheon Technologies. Authors, would like to acknowledge Drs. Michael Kirka, Michael Sprayberry, Peter Wang, and Alex Plotkowski for providing components of different materials for testing the model. The training and test parts were printed with different 3D printing systems at the Manufacturing Demonstration Facility (MDF) at Oak Ridge National Lab (ORNL).
Figure 5: Testing on In-Dist data (See Table 1) from two different materials that were used during training time. (left to right) Uncorrected, our method, reference; Profile plot. The inset image of a smaller patch from the reconstruction demonstrates that defect contrast with the proposed method is higher than that of uncorrected, and comparable to MBIR. The profile plots clearly show reduction in BH with the proposed method.
Figure 6: Testing on out-of-distribution data set. Please refer to Table 1. (Top subfigure left to right) Uncorrected, our method. (Bottom subfigure left to right) Uncorrected, our method, reference. From the inset images of a small patch, we notice that the resulting reconstructions have fewer artifacts and can discern defects better than the baseline method. |
2309.12714 | Unsupervised Representations Improve Supervised Learning in Speech
Emotion Recognition | Speech Emotion Recognition (SER) plays a pivotal role in enhancing
human-computer interaction by enabling a deeper understanding of emotional
states across a wide range of applications, contributing to more empathetic and
effective communication. This study proposes an innovative approach that
integrates self-supervised feature extraction with supervised classification
for emotion recognition from small audio segments. In the preprocessing step,
to eliminate the need for crafting audio features, we employed a self-supervised
feature extractor, based on the Wav2Vec model, to capture acoustic features
from audio data. Then, the output feature maps of the preprocessing step are fed
to a custom-designed Convolutional Neural Network (CNN)-based model to perform
emotion classification. Utilizing the ShEMO dataset as our testing ground, the
proposed method surpasses two baseline methods, i.e. support vector machine
classifier and transfer learning of a pretrained CNN. Comparing the proposed
method to the state-of-the-art methods in the SER task indicates the superiority of
the proposed method. Our findings underscore the pivotal role of deep
unsupervised feature learning in elevating the landscape of SER, offering
enhanced emotional comprehension in the realm of human-computer interactions. | Amirali Soltani Tehrani, Niloufar Faridani, Ramin Toosi | 2023-09-22T08:54:06Z | http://arxiv.org/abs/2309.12714v1 | # Unsupervised Representations Improve Supervised Learning in Speech Emotion Recognition
###### Abstract
Speech Emotion Recognition (SER) plays a pivotal role in enhancing human-computer interaction by enabling a deeper understanding of emotional states across a wide range of applications, contributing to more empathetic and effective communication. This study proposes an innovative approach that integrates self-supervised feature extraction with supervised classification for emotion recognition from small audio segments. In the preprocessing step, to eliminate the need for crafting audio features, we employed a self-supervised feature extractor, based on the Wav2Vec model, to capture acoustic features from audio data. Then, the output feature maps of the preprocessing step are fed to a custom-designed Convolutional Neural Network (CNN)-based model to perform emotion classification. Utilizing the ShEMO dataset as our testing ground, the proposed method surpasses two baseline methods, i.e., a support vector machine classifier and transfer learning of a pretrained CNN. Comparing the proposed method to the state-of-the-art methods in the SER task indicates the superiority of the proposed method. Our findings underscore the pivotal role of deep unsupervised feature learning in elevating the landscape of SER, offering enhanced emotional comprehension in the realm of human-computer interactions.
Speech Emotion Recognition, Self-supervised Learning, Convolutional Neural Network
## I Introduction
Communication establishes the foundation for strong connections between individuals. A community where people feel heard, appreciated, and supported is highly influenced by thoughtful speaking and listening, as well as by taking into account how our words affect others. Speech is one of the most important means through which people communicate with their surroundings. Speaking remains the most natural, prevalent, and effective method of human engagement, despite the fact that modern communication increasingly takes place via keyboards and displays [1]. When we communicate, using the right emotions may affect how people hear and understand what we are saying while also demonstrating our interest, commitment, and sincerity. This makes our genuine intentions clear and makes it possible for receivers to react in a suitable way. Designing technologies for human usage seems to benefit from using emotion detection [34] to improve interaction given that human-computer interaction is being used in a variety of applications [35]. Therefore, a technique and tool for extracting and identifying emotions in speech would be useful. Speech emotion recognition (SER) seeks to automatically identify an individual's emotional or physical state from their voice. Despite not altering linguistic content, emotional state has a big impact on communication since it gives feedback in different situations [2]. The growing number of fields that have profited from SER research, including contact centers, video games, and lie detection, has increased interest in this area [3]. SER can be used in interactive online learning as well. Computer systems can adaptively choose the most effective methods for presenting the remaining course material by detecting a student's emotional state [8]. In therapeutic settings, SER enables professionals to better understand patients' mental states and perhaps uncover hidden emotions [9]. Virtual voice assistants like Siri, Alexa, Cortana, and Google Assistant have proliferated in popularity as human interaction interfaces in recent years [19]. Systems run the danger of being viewed as cold, socially awkward, unreliable, and incompetent if they are unable to recognize or effectively respond to a user's emotional condition [20]. SER is therefore becoming a more crucial capability [19]. To represent emotional cues in voice signals, the SER approach primarily uses feature extraction, which is followed by feature classification to categorize emotions [4]. The first stage of SER is feature extraction, which entails finding emotional cue representations in speech signals. A major problem in SER continues to be identifying the best features to achieve improved performance [15]. Certain aspects must successfully communicate essential emotional information. Source-based excitation features, prosodic features, vocal tract factors, and hybrid feature sets are just a few of the feature types that researchers have developed for speech processing [5]. Speech contains linguistic and auditory information that conveys emotion. Previous research produced embeddings for challenges involving spoken language using methods such as phone representations [21] and Mel-frequency approaches [22].
Energy-related features, pitch frequency features, formant frequencies [26, 27], zero crossing rate (ZCR), linear prediction coefficients (LPC), linear prediction cepstral features, mel-frequency cepstrum coefficients (MFCCs) and their first derivatives [28], RASTA-PLP features [29], and others have all been studied to improve emotion classification. Pitch, intensity, and spectral characteristics were used as audio features in [18] to extract emotion data. For deep learning challenges, new
feature learning methods have recently emerged. Given the lack of labeled data for the majority of the 7,000 languages spoken worldwide, many modern language models use self-supervised rather than supervised learning. For the most part, there aren't many hours worth of annotated voice data. As a result, self-supervised approaches facilitate cross-lingual transfer and language learning for low-resource languages [14, 16]. In order to improve voice recognition, unsupervised pre-training techniques like Wav2Vec have been developed [14]. The 1,000 hours that are typically found in supervised academic datasets have been significantly surpassed by these strategies, which have scaled quickly to 1,000,000 hours of training data [17]. Because models automatically learn from unlabeled voices, this extensive scale is possible. Multilingual transcription systems and other speech communities have long been interested in cross-lingual learning, which tries to create models that use data from different languages to improve performance [12, 13]. Unsupervised pre-training methods have advanced state-of-the-art performance when fine-tuned on benchmark tasks, especially in low-resource environments [14].
Feature classification using both linear and nonlinear classifiers is applied in the second stage of SER systems [6]. Significant interest in voice recognition has been generated by a proposal to improve discrete language models using neural networks [7]. Speech emotion identification has made revolutionary advances in machine learning recently because of Deep Neural Networks (DNNs). Convolutional Neural Networks (CNNs) use frame-level features to identify patterns and local context. CNNs enable integrated feature representation and classification to be trained end-to-end in one optimization process. CNNs and long short-term memory (LSTM) networks are two common DNN architectures that recently emerged for voice emotion recognition [23]. A three-layer LSTM + DNN model for emotional computing was proposed in an early, significant study and trained on the functionals of LLDs [24]. Stuhlsatz et al. [25] used generalized discriminant analysis (GerDA) and restricted Boltzmann machines (RBMs) to directly extract discriminative features from raw data. In [10], after a deep neural network feature extractor, a support vector machine (SVM) was used as the final classifier for emotion recognition. Across multiple acoustic feature groups, this architecture intended to learn relevant representations. The proposed model's reported accuracy of 64% demonstrated improved recognition performance. In contrast to conventional methods that rely on speech segmentation, the fully convolutional network with attention mechanisms presented in [30] can process variable-length speech directly. The accuracy of this design in recognizing emotions was 70.4%. To overcome restrictions caused by insufficient training data, the authors investigated transfer learning. Speech analysis should use careful attention mechanisms since speech contains prosodic cues and tonal dynamics. On the ShEMO dataset, a CNN + BLSTM architecture acting on low-level acoustic descriptors extracted from voice frames obtained 78.29% accuracy [11].
In [19], transfer learning for SER is investigated by using representations from a pre-trained wav2vec 2.0 model and basic neural networks. To combine outputs across layers of the pre-trained model, the authors provide trainable weights that are learned along with the downstream model. Using the IEMOCAP and RAVDESS emotion datasets, the performance of wav2vec 2.0 variations with and without speech recognition fine-tuning is examined.
In this study, we extract acoustic features from audio data using a wav2vec feature extractor. Then, a novel audio classification model inspired by CNNs is fed these features. The ShEMO emotion recognition dataset serves as the training and evaluation benchmark. On this dataset, our proposed method performs well while requiring little to no pre-processing of the raw audio. The adaptation of CNN architectures for feature learning from auditory inputs is a significant contribution of our work. Our results show that this CNN-based model outperforms baseline approaches and is successful at extracting emotional cues from speech. In addition, our model offers a generalizable method for speech emotion recognition by relying on deep feature learning as opposed to hand-crafted audio features.
## II Proposed Method
The three main steps of our suggested methodology are preprocessing, feature extraction, and classification. To prepare the raw audio data, the preprocessing phase first applies initial transformations. Then, a feature extractor identifies representative cues that encode emotional data. In order to predict the target emotions, classification models subsequently examine the auditory representations. Three different classifier architectures are evaluated. The data can be preprocessed before being fed into the classifiers for emotion recognition using this staged pipeline. Support vector machines (SVMs) [45], transfer learning using prominent pre-trained models, and customized deep neural networks are all used in our categorization experiments. The pipeline is shown in Fig.1. We wanted to compare performance and accuracy on the voice emotion recognition task by evaluating various classifier architectures. The main goal was to categorize the emotional state that was exhibited in each input audio clip from our dataset.
### _Preprocessing_
Through zero-padding, our preprocessing standardized all audio clips to 7 seconds. The underrepresented 'fear' class was excluded. All audio is resampled to 16 kHz. XLSR-53 [46], a self-supervised multi-lingual speech representation model that has been pre-trained on 53 languages, was used to extract features. We considered the 24-layer feature maps, each a 349 x 1024 array, produced by XLSR-53. A sample first layer output for "sadness" is shown in Fig.2. We used the mean feature vector from the first layer for the SVM classifier. We utilized the full 349 x 1024 first layer feature map from XLSR-53 for deep neural network classifiers. CNNs received this altered data as an input in the form of a 300 x 300 2D representation. With the help of data preprocessing and the extraction of informative voice features, we improved
the ability of deep networks and SVMs to classify emotions. Diverse emotional cues in the audio data were intended to be captured by the various feature representations.
The preprocessing made it possible to use deep neural networks, which typically anticipate 2D inputs, for the task of recognizing speech emotions. Specifically, the audio data was converted into a format that was compatible with CNNs by reshaping the 349x1024 XLSR feature array into a 300x300 2D representation. These models could be used by matching the input structure to anticipated CNN architectures. Modern deep learning algorithms can be used to solve our audio emotion classification problem by first converting the audio to a matrix-shape format. The audio input was transformed by our preprocessing pipeline into a format that CNNs can use effectively.
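A minimal sketch of this preprocessing with the Hugging Face transformers implementation of XLSR-53 is shown below; note that the text does not specify how the 349 x 1024 map is reduced to 300 x 300, so simple truncation is shown as one possibility.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# A zero-padded 7 s, 16 kHz clip yields a (349, 1024) map per layer.
name = "facebook/wav2vec2-large-xlsr-53"
fe = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name).eval()
wav = torch.zeros(7 * 16000).numpy()              # placeholder waveform
inputs = fe(wav, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states
feats = hidden[1][0]                              # first transformer layer
svm_vec = feats.mean(dim=0)                       # (1024,) for the SVM
cnn_img = feats.flatten()[:300 * 300].reshape(300, 300)  # 2D CNN input
```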
### _Classification_
Reshaping the audio features into a 300x300 matrix representation enabled leveraging deep learning techniques designed for image inputs. CNNs are well-suited for this 2D structured data, as they employ 2D convolutions to extract spatial patterns and relationships. Moreover, transfer learning can be utilized by fine-tuning models pre-trained on image datasets. Additionally, we establish baseline performance by using SVMs on the mean audio features from the first XLSR layer. Although the SVM is not anticipated to match the deep learning algorithms, it offers a good benchmark for comparison. Hierarchical feature learning in CNNs is expected to outperform the SVM model. However, the SVM offers a fundamental baseline for quantifying the gains made by incorporating deep networks into our audio emotion recognition problem.
#### Iii-B1 Support Vector Machines (SVM)
As a baseline model, we utilized a Support Vector Machine (SVM) classifier. The input representation comprised the mean feature vector from the first layer of the XLSR model, summarizing information across time from the original audio. We standardized the data to have zero mean, then applied a radial basis function (RBF) kernel to map the data into a higher dimensional space where a linear decision boundary could separate it. The RBF kernel allowed the SVM model to handle any nonlinear relationships between the audio features and emotion labels. By comparing deep learning approaches to this nonlinear SVM baseline, we aimed to demonstrate the benefits of learned hierarchical representations. The SVM provides a reasonable benchmark for accuracy from a non-deep model applied directly to the raw audio features.
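A minimal scikit-learn sketch of this baseline, with X of shape (n, 1024) holding the mean XLSR vectors, could look as follows.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# RBF-kernel SVM on standardized mean XLSR vectors; X: (n, 1024), y: labels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="auto"))
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```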
#### Iii-B2 Transfer Learning
We leveraged transfer learning as one of our deep learning approaches, utilizing EfficientNet architectures pre-trained on the ImageNet dataset. EfficientNets are a family of CNNs optimized for high accuracy with reduced parameters. We evaluated different scaled versions, including EfficientNetB0 through EfficientNetB7, which have increasing layers and capacities. The base EfficientNet model parameters were frozen, and we added new dense layers tuned for our audio emotion classification task. By reusing the pre-trained feature hierarchies in the convolutional base, we could take advantage of transferred knowledge from a source domain. Fine-tuning only the top layers adapted the model to extract emotionally relevant patterns from our pre-processed spectrograms. Compared to training a deep CNN from scratch, transfer learning provides faster convergence and better generalization with fewer training examples. Evaluating various EfficientNet scaling levels allowed us to explore trade-offs between model size, training time, and accuracy. This transfer learning approach efficiently applies deep convolution networks to recognize emotions from our reshaped audio representations.
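A torchvision sketch of this setup is shown below for the B3 variant used in the experiments; repeating the single-channel input to three channels is an assumption needed to reuse the ImageNet weights.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3, EfficientNet_B3_Weights

# Frozen ImageNet EfficientNet-B3 backbone with a new 5-way emotion head.
model = efficientnet_b3(weights=EfficientNet_B3_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                       # keep the pretrained base
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)
x = torch.zeros(1, 1, 300, 300).repeat(1, 3, 1, 1)
print(model(x).shape)                             # torch.Size([1, 5])
```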
Fig. 1: Structure of the model for SER paradigm
Fig. 2: Output of preprocessing step
#### Ii-A3 Proposed CNN Model
Finally, we propose a custom-designed CNN architecture specifically created for speech emotion classification, achieving improved performance. The model comprises seven 2D convolutional layers with progressively decreasing filter sizes, enabling the network to learn higher-level feature representations from the input spectrograms in a hierarchical manner. Batch normalization was used after each convolutional layer to increase training stability. In order to downsample the feature maps and uncover strong spatial patterns, max-pooling layers were carefully placed. After the pooling layers, dropout and regularization were also used to reduce overfitting on the training data. Before a softmax classification layer, the final convolution layer outputs were flattened and routed through fully connected dense layers to compress the features, as shown in TABLE I. Our CNN model can learn a deep hierarchy of emotionally significant audio elements by increasing the filter depths and modifying the overall architecture. Our particular design seeks to achieve greater accuracy compared to traditional CNNs and transfer learning models by adjusting the network topology and hyperparameters to the specifics of our innovative preprocessed spectrogram input representation. In addition to the SVM baseline, the customized model offers a powerful deep learning strategy for generalizing across the various human emotions in the audio dataset. Fig.3 depicts the suggested CNN model architecture.
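A PyTorch sketch consistent with TABLE I is given below; the pooling placement, dropout rate, and the global pooling before the dense layers are assumptions where the table layout is ambiguous.

```python
import torch
import torch.nn as nn

# Sketch of the proposed CNN following TABLE I (some details assumed).
def block(cin, cout, n=2):
    layers = [nn.MaxPool2d(2)]                       # halves the spatial size
    for i in range(n):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.BatchNorm2d(cout), nn.ReLU()]
    return nn.Sequential(*layers)

model = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    block(64, 128), block(128, 256), block(256, 512), block(512, 512),
    nn.Conv2d(512, 1024, 1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Dropout(0.4), nn.Linear(1024, 64), nn.ReLU(), nn.Linear(64, 5))
print(model(torch.zeros(1, 1, 300, 300)).shape)      # torch.Size([1, 5])
```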
## III Experiment and Result
### _Experimental Setup_
#### Iii-A1 Dataset
The ShEMO dataset [31] is made up of dialogues from 87 individuals that were taken from the radio and spans a duration of around 3 hours and 25 minutes. Of these speakers, 31 are women and 56 are men. The dataset consists of 3000 mono-channel utterances sampled at 44.1 kHz; the occurrence of each emotion is given in TABLE II. We employed the following five emotions in this work: surprise, happiness, sadness, neutral, and anger. We also excluded waveforms with fear labels since there were only 38 of them (1.26 % of the total). Additionally, audio waves longer than 7 seconds were clipped to this duration, while those shorter than 7 seconds were zero-padded to provide uniform sequence lengths; a 3-second sample of each emotion is illustrated in Fig.4.
#### Iii-A2 Parameters
In the SVM experiment, we employed the RBF kernel [40] and the gamma was set to auto. For optimization, we employed stochastic gradient descent (SGD) [37] to train the EfficientNet-B3 network [41], and Adam [38] with a fixed learning rate of 10^-3 and a scheduler in our proposed model experiments. A batch size of 4 was used during training. Model evaluation was performed on full audio recordings using wav2vec representations as input, without relying on segmentation. Additionally, regularization was employed to avoid overfitting, and dropout [39]--which was applied with probabilities of 0.1, 0.2, 0.3, 0.4, and 0.6--was used after the max-pooling layers. Finally, for the loss function, we used categorical cross-entropy, as the emotion labels are categorical.
### _Results_
The transfer learning loss and accuracy curves are shown in Fig.5. Training loss decreased as the model fit improved over epochs. However, validation loss remained nearly constant before slightly increasing after 100 epochs, indicating potential overfitting. Accuracy followed comparable trends, with training accuracy quickly reaching 77.8% then plateauing as validation fluctuated without notable gains. The curves suggest modifications are needed to further enhance performance.
As shown in Fig.6, the CNN model exhibited decreasing training and validation loss over epochs, indicating successful learning from the data. The validation loss remained lower than training loss throughout training, suggesting effective generalization to new data. Both training and validation accuracy steadily increased over epochs. In later epochs, validation accuracy surpassed 80%, outperforming SVM and transfer learning models. These curves demonstrate our proposed architecture achieved higher precision and lower loss on training and validation sets compared to prior benchmarks. Moreover, superior validation over training performance highlights model generalizability.
We compare our results to prior work by Keesing et al. [42], which utilized wav2vec features on the ShEMO dataset. Our hypothesis is that expanding model capacity and depth can improve performance. As shown in TABLE III, our proposed model achieves superior accuracy on the ShEMO validation and test sets compared to [42]. Specifically, our test set recall improves by a clear absolute margin on ShEMO. Notably, Yazdani et al. [44] achieved state-of-the-art accuracy using a set of approximately 1300 hand-crafted features on ShEMO (the dataset itself is summarized in TABLE II). Our model surpasses their accuracy by 3.41%. Critically, unlike Yazdani et al.'s reliance on manually engineered features like MFCCs, our model utilizes an end-to-end architecture, operating directly on raw waveform inputs independent of temporal or spectral characteristics.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Emotion & Surprise & Happiness & Sadness & Anger & Neutral & Fear \\ \hline Utterances & 225 & 201 & 449 & 1059 & 1028 & 38 \\ \hline
\begin{tabular}{c} Avg Length \\ (Seconds) \\ \end{tabular} & 1.79 & 3.81 & 4.84 & 3.61 & 4.89 & 3.17 \\ \hline \end{tabular}
\end{table} TABLE II: Data description of ShEMO dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline layer name & output size & Proposed Model \\ \hline conv1 & 300 x 300 x 64 & 3 x 3, 64, stride 1 \\ \hline conv2\_x & 150 x 150 x 128 & 3 x 3 max pool; [3 x 3, 128] x 2 \\ \hline conv3\_x & 75 x 75 x 256 & [3 x 3, 256] x 2 \\ \hline conv4\_x & 37 x 37 x 512 & [3 x 3, 512] x 2 \\ \hline conv5\_x & 18 x 18 x 512 & [3 x 3, 512] x 2; 1 x 1, 1024 \\ \hline Fully Connected & 64 & 1024 x 64 \\ \hline softmax & 5 & \\ \hline \end{tabular}
\end{table} TABLE I: Proposed CNN model structure
Sharma et al. [44] present the most recent benchmark on the full ShEMO dataset, achieving state-of-the-art results. Their reported test set weighted average recall is 0.862, just 0.8% lower in absolute terms than our model utilizing wav2vec features. For completeness, their F1 score is 0.860 compared to 0.870 for our proposed method, a 0.01 absolute improvement. By incorporating wav2vec representations into a deeper CNN, our approach advances the state-of-the-art in this dimension.
## IV Conclusion
In this study, we introduced a novel approach to SER by combining self-supervised feature extraction with a CNN architecture. By harnessing wav2vec feature extraction and a custom-designed CNN-based model, we achieved substantial improvements over prior SER methods. Notably, our proposed approach outperformed the SVM-based model and transfer learning methods, showcasing superior generalization capabilities. Comparing our model to prior work, we observed substantial performance enhancements. In particular, we surpassed the accuracy achieved by models relying on handcrafted features, such as MFCCs, underscoring the potential of end-to-end architectures that operate directly on raw waveform inputs. Furthermore, our approach demonstrated competitive results when benchmarked against the most recent state-of-the-art methods. Our findings contribute to the evolving landscape of SER, emphasizing the importance of data-driven approaches in advancing the understanding of emotional cues in speech.
|
2310.00273 | Safe Stabilizing Control for Polygonal Robots in Dynamic Elliptical
Environments | This paper addresses the challenge of safe navigation for rigid-body mobile
robots in dynamic environments. We introduce an analytic approach to compute
the distance between a polygon and an ellipse, and employ it to construct a
control barrier function (CBF) for safe control synthesis. Existing CBF design
methods for mobile robot obstacle avoidance usually assume point or circular
robots, preventing their applicability to more realistic robot body geometries.
Our work enables CBF designs that capture complex robot and obstacle shapes. We
demonstrate the effectiveness of our approach in simulations highlighting
real-time obstacle avoidance in constrained and dynamic environments for both
mobile robots and multi-joint robot arms. | Kehan Long, Khoa Tran, Melvin Leok, Nikolay Atanasov | 2023-09-30T06:26:12Z | http://arxiv.org/abs/2310.00273v2 | # Safe Stabilizing Control for Polygonal Robots in Dynamic Elliptical Environments
###### Abstract
This paper addresses the challenge of safe navigation for rigid-body mobile robots in dynamic environments. We introduce an analytic approach to compute the distance between a polygon and an ellipse, and employ it to construct a control barrier function (CBF) for safe control synthesis. Existing CBF design methods for mobile robot obstacle avoidance usually assume point or circular robots, preventing their applicability to more realistic robot body geometries. Our work enables CBF designs that capture complex robot and obstacle shapes. We demonstrate the effectiveness of our approach in simulations highlighting real-time obstacle avoidance in constrained and dynamic environments for both mobile robots and multi-joint robot arms.
## I Introduction
Obstacle avoidance in static and dynamic environments is a central challenge for safe mobile robot autonomy.
At the planning level, several motion planning algorithms have been developed to provide a feasible path that ensures obstacle avoidance, including prominent approaches like A\({}^{*}\)[1], RRT\({}^{*}\)[2], and their variants [3, 4]. These algorithms typically assume that a low-level tracking controller can execute the planned path. However, in dynamic environments where obstacles and conditions change rapidly, reliance on such a controller can be limiting. A significant contribution to the field was made by Khatib [5], who introduced artificial potential fields to enable collision avoidance during not only the motion planning stage but also the real-time control of a mobile robot. Later, Rimon and Koditschek [6] developed navigation functions, a particular form of artificial potential functions that guarantees simultaneous collision avoidance and stabilization to a goal configuration. In recent years, research has delved into the domain of trajectory generation and optimization, with innovative algorithms proposed for quadrotor safe navigation [7, 8, 9]. In parallel, the rise of learning-based approaches [10, 11, 12] has added a new direction to the field, utilizing machine learning to facilitate both planning and real-time obstacle avoidance. Despite their promise, these methods often face challenges in dynamic environments and in providing safety guarantees.
In the field of safe control synthesis, integrating control Lyapunov functions (CLFs) and control barrier functions (CBFs) into a quadratic program (QP) has proven to be a reliable and efficient strategy for formulating safe stabilizing controls across a wide array of robotic tasks [13, 14, 15]. While CBF-based methodologies have been deployed for obstacle avoidance [16, 17, 18, 19, 20], such strategies typically simplify the robot as a point or circle and assume static environments when constructing CBFs for control synthesis. Some recent advances have also explored the use of time-varying CBFs to facilitate safe control in dynamic environments [21, 22, 23]. However, this concept has yet to be thoroughly investigated in the context of obstacle avoidance for rigid-body robots. For the safe autonomy of robot arms, Koptev _et al._[24] introduced a neural network approach to approximate the signed distance function of a robot arm and use it for safe reactive control in dynamic environments. In [25], a CBF construction formula is proposed for a robot arm with a static and circular obstacle. A configuration-aware control approach for the robot arm was proposed in [26] by integrating geometric restrictions with CBFs. Thirugnanam _et al._[27] introduced a discrete CBF constraint between polytopes and further incorporated the constraint in a model predictive control to enable safe navigation. The authors also extended the formulation for continuous-time systems in [28] but the CBF computation between polytopes is numerical, requiring a duality-based formulation with non-smooth CBFs.
_Notations:_ The sets of non-negative real and natural numbers are denoted \(\mathbb{R}_{\geq 0}\) and \(\mathbb{N}\). For \(N\in\mathbb{N}\), \([N]:=\{1,2,\ldots N\}\). The orientation of a 2D body is denoted by \(0\leq\theta<2\pi\) for counter-clockwise rotation. We denote the corresponding rotation matrix as \(\mathbf{R}(\theta)=\begin{bmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{bmatrix}\). The configuration of a 2D rigid-body is described by position and orientation, and the space of the positions and orientations in 2D is called the special Euclidean group, denoted as \(SE(2)\). Also, we use \(\|\mathbf{x}\|\) to denote the \(L_{2}\) norm for a vector \(\mathbf{x}\) and \(\otimes\) to denote the Kronecker product. The gradient of a differentiable function \(V\) is denoted by \(\nabla V\), and its Lie derivative along a vector field \(f\) by \(\mathcal{L}_{f}V=\nabla V\cdot f\). A continuous function \(\alpha:[0,a)\rightarrow[0,\infty)\) is of class \(\mathcal{K}\) if it is strictly increasing and \(\alpha(0)=0\). A continuous function \(\alpha:\mathbb{R}\rightarrow\mathbb{R}\) is of extended class \(\mathcal{K}_{\infty}\) if it is of class \(\mathcal{K}\) and \(\lim_{r\rightarrow\infty}\alpha(r)=\infty\). Lastly, consider the body-fixed frame of the ellipse \(\mathcal{E}^{\prime}\). The signed distance function (SDF) of the ellipse \(\psi_{\mathcal{E}}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) is defined as
\[\psi_{\mathcal{E}}(\mathbf{p}^{\prime})=\left\{\begin{array}{ll}d(\mathcal{ E}^{\prime},\mathbf{p}^{\prime}),&\text{if }\mathbf{p}^{\prime}\in\mathcal{E}^{c},\\ -d(\mathcal{E}^{\prime},\mathbf{p}^{\prime}),&\text{if }\mathbf{p}^{\prime}\in \mathcal{E},\end{array}\right.\]
where \(d\) is the Euclidean distance. In addition, \(\|\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime})\|=1\) for all \(\mathbf{p}^{\prime}\) except on the boundary of the ellipse and its center of mass, the origin.
**Contributions**: (i) We present an analytic distance formula in \(SE(2)\) for elliptical and polygonal objects, enabling closed-form calculations for distance and its gradient. (ii)
We introduce a novel time-varying control barrier function, specifically for rigid-body robots described by one or multiple \(SE(2)\) configurations. Its efficacy of ensuring safe autonomy is demonstrated in ground robot navigation and multi-link robot arm problems.
## II Problem Formulation
Consider a robot with dynamics governed by a non-linear control-affine system,
\[\dot{\mathbf{x}}=f(\mathbf{x})+g(\mathbf{x})\mathbf{u}, \tag{1}\]
where \(\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{n}\) is the robot state and \(\mathbf{u}\in\mathbb{R}^{m}\) is the control input. Assume that \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\mapsto\mathbb{R}^{n\times m}\) are continuously differentiable functions. We assume the robot operates in a 2D workspace with a state-dependent shape \(S(\mathbf{x})\subset\mathbb{R}^{2}\).
We assume the \(\mathbb{R}^{2}\) workspace is partitioned into a closed safe (free) region \(\mathcal{F}(t)\) and an open unsafe region \(\mathcal{O}(t)\) such that \(\mathcal{F}(t)\cap\mathcal{O}(t)=\emptyset\) and \(\mathbb{R}^{2}=\mathcal{F}(t)\cup\mathcal{O}(t)\). We assume the unsafe set \(\mathcal{O}(t)\) is characterized by a collection of dynamical elliptical obstacles with known rigid-body motions, denoted as \(\{\mathcal{E}(\mathbf{q}_{i}(t),\mathbf{R}(\theta_{i}(t)),a_{i},b_{i})\}_{i=1}^ {N}\). Here, \(\mathbf{q}_{i}\) denotes the center of mass and \(\mathbf{R}_{i}\) denotes the rotation matrix of the ellipse. In its body-fixed frame, \(a_{i}\) and \(b_{i}\) are the lengths of the semi-axes of the ellipse along the \(x\)-axis and \(y\)-axis, respectively.
**Problem**.: Given a robot with shape \(S(\mathbf{x})\) governed by dynamics (1) that can perfectly determine its state, the objective is to stabilize the robot safely within a goal region \(\mathcal{G}\subset\mathbb{R}^{2}\) such that \(S(\mathbf{x}(t))\cap\mathcal{O}(t)=\emptyset\) for all \(t\in\mathbb{R}_{\geq 0}\).
## III Preliminaries
In this section, we review preliminaries on control Lyapunov and barrier functions and discuss their use in synthesizing a safe stabilizing controller for dynamics in (1).
### _Control Lyapunov Function_
The notion of a control Lyapunov function (CLF) was introduced in [29, 30] to verify the stabilizability of control-affine systems (1). Specifically, an (exponentially stabilizing) CLF \(V:\mathcal{X}\mapsto\mathbb{R}\) is defined as follows.
**Definition III.1**.: A function \(V\in C^{1}(\mathcal{X},\mathbb{R})\) is a _control Lyapunov function (CLF)_ on \(\mathcal{X}\) for system (1) if \(V(\mathbf{x})>0,\forall\mathbf{x}\in\mathcal{X}\setminus\{\mathbf{0}\},V(\mathbf{0})=0\), and it satisfies:
\[\inf_{\mathbf{u}\in\mathbb{R}^{m}}\text{CLC}(\mathbf{x},\mathbf{u})\leq 0, \quad\forall\mathbf{x}\in\mathcal{X}, \tag{2}\]
where \(\text{CLC}(\mathbf{x},\mathbf{u}):=\mathcal{L}_{f}V(\mathbf{x})+\mathcal{L}_{g}V(\mathbf{x})\mathbf{u}+\alpha_{V}(V(\mathbf{x}))\) is the _control Lyapunov condition_ (CLC) defined for some class \(\mathcal{K}\) function \(\alpha_{V}\).
### _Control Barrier Function_
To facilitate safe control synthesis, we consider a time-varying set \(\mathcal{C}(t)\) defined as the super zero-level set of a continuously differentiable function \(h:\mathcal{X}\times\mathbb{R}\mapsto\mathbb{R}\):
\[\mathcal{C}(t):=\{\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{n}:h(\mathbf{x},t)\geq 0\}. \tag{3}\]
Safety of the system (1) can then be ensured by keeping the state \(\mathbf{x}\) within the safe set \(\mathcal{C}(t)\).
**Definition III.2**.: A function \(h:\mathbb{R}^{n}\times\mathbb{R}_{\geq 0}\mapsto\mathbb{R}\) is a valid time-varying _control barrier function (CBF)_ on \(\mathcal{X}\subseteq\mathbb{R}^{n}\) for (1) if there exists an extended class \(\mathcal{K}_{\infty}\) function \(\alpha_{h}\) with:
\[\sup_{\mathbf{u}\in\mathcal{U}}\text{CBC}(\mathbf{x},\mathbf{u},t)\geq 0, \quad\forall\ (\mathbf{x},t)\in\mathcal{X}\times\mathbb{R}_{\geq 0}, \tag{4}\]
where the _control barrier condition (CBC)_ is:
\[\text{CBC}(\mathbf{x},\mathbf{u},t):=\dot{h}(\mathbf{x},t)+\alpha_{h}(h( \mathbf{x},t)) \tag{5}\] \[=\mathcal{L}_{f}h(\mathbf{x},t)+\mathcal{L}_{g}h(\mathbf{x},t) \mathbf{u}+\frac{\partial h(\mathbf{x},t)}{\partial t}+\alpha_{h}(h(\mathbf{x},t)).\]
Definition III.2 allows us to consider the following set of control values that render the safe set \(\mathcal{C}\) forward invariant,
\[K_{\text{CBF}}(\mathbf{x},t):=\left\{\mathbf{u}\in\mathcal{U}:\text{CBC}( \mathbf{x},\mathbf{u},t)\geq 0\right\}. \tag{6}\]
Suppose we are given a baseline feedback controller \(\mathbf{u}=\mathbf{k}(\mathbf{x})\) for the control-affine systems (1), and we aim to ensure the safety and stability of the system. By observing that both the CLC and CBC constraints are affine in the control input \(\mathbf{u}\), a quadratic program (QP) can be formulated for online synthesis of a safe stabilizing controller for (1):
\[\mathbf{u}(\mathbf{x}) =\operatorname*{arg\,min}_{\mathbf{u}\in\mathbb{R}^{m},\delta\in \mathbb{R}}\|\mathbf{u}-\mathbf{k}(\mathbf{x})\|^{2}+\lambda\delta^{2}, \tag{7}\] \[\mathrm{s.t.}\ \text{CLC}(\mathbf{x},\mathbf{u})\leq\delta,\text{CBC}( \mathbf{x},\mathbf{u},t)\geq 0,\]
where \(\delta\geq 0\) denotes a slack variable that relaxes the CLF constraints to ensure the feasibility of the QP, controlled by the scaling factor \(\lambda>0\).
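Since both the CLC and CBC constraints are affine in \(\mathbf{u}\), (7) can be solved pointwise with any off-the-shelf QP solver. Below is a minimal sketch of one control step, assuming the cvxpy package is available; the function and argument names are illustrative, not from the paper.

```python
import cvxpy as cp
import numpy as np

def clf_cbf_qp(Lf_V, Lg_V, alpha_V, Lf_h, Lg_h, dh_dt, alpha_h, u_ref, lam=100.0):
    """Solve the CLF-CBF quadratic program (7) at the current state.

    Inputs are the Lie derivatives of V and h, the explicit time derivative
    of h, the already-evaluated class-K terms alpha_V(V(x)) and alpha_h(h(x,t)),
    and the baseline control u_ref = k(x). Lg_V and Lg_h are length-m arrays.
    """
    m = u_ref.shape[0]
    u = cp.Variable(m)
    delta = cp.Variable(nonneg=True)            # slack on the CLF constraint
    clc = Lf_V + Lg_V @ u + alpha_V             # control Lyapunov condition
    cbc = Lf_h + Lg_h @ u + dh_dt + alpha_h     # control barrier condition (5)
    problem = cp.Problem(
        cp.Minimize(cp.sum_squares(u - u_ref) + lam * delta**2),
        [clc <= delta, cbc >= 0],
    )
    problem.solve()
    return u.value
```

Relaxing only the CLC (never the CBC) mirrors (7): stability may be traded off through the slack, while safety is kept as a hard constraint.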
## IV Analytic distance between ellipse and polygon
In this section, we derive an analytic formula for computing the distance between a polygon and an ellipse. From this distance function, we also compute the associated partial derivatives, which enable the formulation of CBFs to ensure safe autonomy.
We consider the mobile robot's body \(S(\mathbf{x})\) to be described as a polygon, denoted by \(\mathcal{P}(\tilde{\mathbf{q}},\tilde{\mathbf{R}}(\tilde{\theta}),\{\tilde{ \mathbf{p}}_{i}\}_{i=0}^{M-1})\). Here, \(\tilde{\mathbf{q}}\) denotes the center of mass and \(\tilde{\mathbf{R}}\) denotes the orientation in the inertial frame. In its fixed-body frame, \(\{\tilde{\mathbf{p}}_{i}\}\) denotes the vertices of the polygonal robot with line segments \(\tilde{\mathbf{d}}_{i}=\tilde{\mathbf{p}}_{[i+1]_{M}}-\tilde{\mathbf{p}}_{i}\) for \(i=0,1,\dots,M-1\) where \([\cdot]_{M}\) is the \(M\)-modulus.
For convenience, denote \(\mathcal{E}\) and \(\mathcal{P}\) as the bodies in the inertial frame, and we assume their intersection is empty. Now, denote \(\mathcal{E}^{\prime}\) and \(\mathcal{P}^{\prime}\) as the respective bodies in the body-fixed frame of the elliptical obstacle. As a result,
\[d(\mathcal{E},\mathcal{P})=d(\mathcal{E}^{\prime},\mathcal{P}^{\prime}) \tag{8}\]
by isometric transformation. Furthermore, let \(\tilde{\mathbf{p}}_{i}\) be a vertex in the robot's frame. Then in the inertial frame, it becomes \(\mathbf{p}_{i}=\tilde{\mathbf{q}}+\tilde{\mathbf{R}}\tilde{\mathbf{p}}_{i}\). In the obstacle's frame, it is
\[\mathbf{p}_{i}^{\prime}=\mathbf{R}^{\top}(\mathbf{p}_{i}-\mathbf{q})=\mathbf{R}^{ \top}\tilde{\mathbf{R}}\tilde{\mathbf{p}}_{i}+\mathbf{R}^{\top}(\tilde{ \mathbf{q}}-\mathbf{q}), \tag{9}\]
In short, \(\{\tilde{\mathbf{p}}_{i}\}\) are vertices in the robot's frame, \(\{\mathbf{p}_{i}\}\) are vertices in the inertial frame, and \(\{\mathbf{p}^{\prime}_{i}\}\) are vertices in the obstacle's frame.
The distance function is
\[d(\mathcal{E}^{\prime},\mathcal{P}^{\prime}):=\min_{0\leq i<M}d(\mathcal{E}^{ \prime},\mathbf{d}^{\prime}_{i}), \tag{10}\]
which is the distance between the ellipse \(\mathcal{E}^{\prime}\) and each line segment \(\mathbf{d}^{\prime}_{i}\). We write each segment as
\[l^{\prime}_{i}(\tau)=(1-\tau)\mathbf{p}^{\prime}_{i}+\tau\mathbf{p}^{\prime}_{[ i+1]_{M}}, \tag{11}\]
for \(\tau\in[0,1]\). This further simplifies the function to
\[d(\mathcal{E}^{\prime},\mathbf{d}^{\prime}_{i})=\min_{\tau\in[0,1]}d(\mathcal{ E}^{\prime},l^{\prime}_{i}(\tau)). \tag{12}\]
Now, there are essentially two groups of computations for the distance in (12): one is the distances between the ellipse \(\mathcal{E}^{\prime}\) and the endpoints of \(\mathbf{d}^{\prime}_{i}\); the other is the distance between the ellipse \(\mathcal{E}^{\prime}\) and the infinite line \(l^{\prime}_{i}(\tau)\) for arbitrary \(\tau\), with the caveat that the minimizing argument occurs at \(\tau^{*}\in(0,1)\). The two computations are detailed in the procedures which follow our next proposition.
**Proposition IV.1**.: _Let \(\mathcal{E}^{\prime}\) be an ellipse and \(l^{\prime}_{i}\) be a line segment in the frame of the ellipse. Denote \(\tau^{*}\) as the argument of the minimum in (12). Then, the distance_
\[d(\mathcal{E}^{\prime},\mathbf{d}^{\prime}_{i})=\left\{\begin{array}{ll}\|\mathbf{p}^{\prime}_{i}-\underline{\mathbf{p}^{\prime}_{i}}\|,&\mbox{if $\tau^{*}=0$},\\ \|\mathbf{p}^{\prime}_{[i+1]_{M}}-\underline{\mathbf{p}^{\prime}_{[i+1]_{M}}}\|,&\mbox{if $\tau^{*}=1$},\\ \|l^{\prime}_{i}(\tau^{*})-\underline{l^{\prime}_{i}(\tau^{*})}\|,&\mbox{if $\tau^{*}\in(0,1)$}.\end{array}\right. \tag{13}\]
_The points \(\underline{\mathbf{p}^{\prime}_{i}}\) and \(\underline{\mathbf{p}^{\prime}_{[i+1]_{M}}}\) on the ellipse are determined using **Procedure 1**, and both \(l^{\prime}_{i}(\tau^{*})\) and \(\underline{l^{\prime}_{i}(\tau^{*})}\) (on the ellipse) are determined using **Procedure 2**._
**Procedure 1.** Let \(\mathbf{p}^{\prime}=(p^{\prime}_{x},p^{\prime}_{y})\) be one of the endpoints for the line segment \(\mathbf{d}^{\prime}_{i}\). Recall that the ellipse is defined by its semi-axes along \(x\)-axis and \(y\)-axis, denoted by \(a\) and \(b\), respectively. Then, the points on the ellipse are parameterized by
\[x(t)=a\cos(t),\quad y(t)=b\sin(t), \tag{14}\]
for \(0\leq t\leq 2\pi\). The goal is to determine the point \((x(t),y(t))\) on the ellipse that is closest to the point \(\mathbf{p}^{\prime}\), so it is a minimization problem of the squared Euclidean distance:
\[d^{2}(t)=(p^{\prime}_{x}-a\cos(t))^{2}+(p^{\prime}_{y}-b\sin(t))^{2}. \tag{15}\]
To find the minimum distance, we determine the critical point(s) by solving \(0=\frac{d}{dt}d^{2}(t)\), which simplifies to
\[0=(b^{2}-a^{2})\cos t\sin t+ap^{\prime}_{x}\sin t-bp^{\prime}_{y}\cos t. \tag{16}\]
Using single-variable optimization, we substitute
\[\cos t=\lambda,\quad\sin t=\sqrt{1-\lambda^{2}}, \tag{17}\]
and this yields \(bp^{\prime}_{y}\lambda=\sqrt{1-\lambda^{2}}((b^{2}-a^{2})\lambda+ap^{\prime}_{ x})\), which is a quartic equation in \(\lambda\). Furthermore, a monic quartic can be derived, which gives the following simplified coefficients:
\[0=\lambda^{4}+2m\lambda^{3}+(m^{2}+n^{2}-1)\lambda^{2}-2m\lambda-m^{2}, \tag{18}\]
where
\[m=p^{\prime}_{x}\frac{a}{b^{2}-a^{2}},\quad n=p^{\prime}_{y}\frac{b}{b^{2}-a^{ 2}}. \tag{19}\]
From this point, the real root(s) of the equation can be solved analytically following Cardano's and Ferrari's solution for the quartic equations [31]. Let \(\underline{t}\) be the solution so that \(\underline{\mathbf{p}^{\prime}}=(x(\underline{t}),y(\underline{t}))\) is a point on the ellipse and is closest to \(\mathbf{p}^{\prime}\). Hence,
\[d(\mathcal{E}^{\prime},\mathbf{d}^{\prime}_{i})=\|\mathbf{p}^{\prime}- \underline{\mathbf{p}^{\prime}}\| \tag{20}\]
where \(\mathbf{p}^{\prime}\) is either \(\mathbf{p}^{\prime}_{i}\) or \(\mathbf{p}^{\prime}_{[i+1]_{M}}\). This concludes **Procedure 1**.
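As a sanity check of **Procedure 1**, the following sketch computes the closest point on the ellipse to a given point. For brevity it finds the real roots of the quartic (18) numerically with numpy.roots instead of Ferrari's closed-form solution, and it checks both signs of \(\sin t\); it assumes \(a\neq b\) so that the coefficients in (19) are defined.

```python
import numpy as np

def closest_point_on_ellipse(p, a, b):
    """Closest point on the ellipse x(t) = a cos t, y(t) = b sin t to p,
    following Procedure 1 with a numerical quartic solve."""
    px, py = p
    m = px * a / (b**2 - a**2)                 # coefficients (19); needs a != b
    n = py * b / (b**2 - a**2)
    # Quartic (18): lam^4 + 2m lam^3 + (m^2 + n^2 - 1) lam^2 - 2m lam - m^2 = 0
    roots = np.roots([1.0, 2.0 * m, m**2 + n**2 - 1.0, -2.0 * m, -m**2])
    best, best_d = None, np.inf
    for lam in roots[np.abs(roots.imag) < 1e-9].real:
        lam = np.clip(lam, -1.0, 1.0)          # lam = cos t must lie in [-1, 1]
        for s in (1.0, -1.0):                  # sin t = +/- sqrt(1 - lam^2)
            q = np.array([a * lam, s * b * np.sqrt(1.0 - lam**2)])
            d = np.hypot(px - q[0], py - q[1])
            if d < best_d:
                best, best_d = q, d
    return best, best_d                        # closest ellipse point, distance
```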
**Procedure 2.** We compute the distance between the ellipse \(\mathcal{E}^{\prime}\) and the infinite line \(l^{\prime}_{i}(\tau)\) whose minimizing point occurs at \(\tau^{*}\in(0,1)\).
First define the unit normal of the infinite line as
\[\hat{\mathbf{n}}^{\prime}_{i}=\frac{1}{\|\mathbf{d}^{\prime}_{i}\|}(-d^{\prime }_{i,y},d^{\prime}_{i,x}). \tag{21}\]
Denote \(\underline{l^{\prime}_{i}(\tau^{*})}\) as the point on the ellipse that is closest to \(l^{\prime}_{i}(\tau^{*})\). In fact, this point \(\underline{l^{\prime}_{i}(\tau^{*})}\) must have a tangent line at the ellipse which is parallel to \(l^{\prime}_{i}\); this also means the normal at \(\underline{l^{\prime}_{i}(\tau^{*})}\) is \(\pm\hat{\mathbf{n}}^{\prime}_{i}\), so we can compute this point on the ellipse up to a sign:

\[\underline{l^{\prime}_{i}(\tau^{*})}=\pm\frac{I_{\epsilon}^{2}\hat{\mathbf{n}}^{\prime}_{i}}{\|I_{\epsilon}\hat{\mathbf{n}}^{\prime}_{i}\|}, \tag{22}\]

where \(I_{\epsilon}=\mbox{diag}(a,b)\). The correct sign is chosen by looking at the sign of the constant \(C\) in the line equation \(Ax+By+C=0\) of \(l^{\prime}_{i}\). In particular,

\[C=-\hat{\mathbf{n}}^{\prime\top}_{i}\mathbf{p}^{\prime}_{i}. \tag{23}\]

If \(C>0\), then \(\underline{l^{\prime}_{i}(\tau^{*})}=-\frac{I_{\epsilon}^{2}\hat{\mathbf{n}}^{\prime}_{i}}{\|I_{\epsilon}\hat{\mathbf{n}}^{\prime}_{i}\|}\); otherwise, if \(C<0\), then \(\underline{l^{\prime}_{i}(\tau^{*})}=\frac{I_{\epsilon}^{2}\hat{\mathbf{n}}^{\prime}_{i}}{\|I_{\epsilon}\hat{\mathbf{n}}^{\prime}_{i}\|}\).
Finally, we can determine \(l^{\prime}_{i}(\tau^{*})\) on the line segment \(\mathbf{d}^{\prime}_{i}\) using projection:
\[l^{\prime}_{i}(\tau^{*})=\mathbf{p}^{\prime}_{i}+\mbox{proj}_{\mathbf{d}^{ \prime}_{i}}(\underline{l^{\prime}_{i}(\tau^{*})}-\mathbf{p}^{\prime}_{i}). \tag{24}\]
Here, we are done with **Procedure 2**.
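A compact sketch of **Procedure 2** follows. It returns the point on the ellipse whose tangent is parallel to the given segment, with the sign resolved through the constant \(C\) of (23); the inputs are the segment endpoints expressed in the ellipse frame, and the function name is illustrative.

```python
import numpy as np

def ellipse_point_facing_segment(p_i, p_j, a, b):
    """Point on the ellipse with tangent parallel to segment p_i -> p_j,
    per Eqs. (21)-(23). Returns the underlined point on the ellipse."""
    d = np.asarray(p_j) - np.asarray(p_i)
    n_hat = np.array([-d[1], d[0]]) / np.linalg.norm(d)    # unit normal (21)
    I_eps = np.diag([a, b])
    candidate = (I_eps @ I_eps @ n_hat) / np.linalg.norm(I_eps @ n_hat)  # (22)
    C = -n_hat @ np.asarray(p_i)                           # line constant (23)
    return -candidate if C > 0 else candidate
```

The corresponding point on the segment is then recovered by the projection (24).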
We turn our attention to compute the partial derivatives of \(d(\mathcal{E}^{\prime},\mathcal{P}^{\prime})\) with respect to either \((\mathbf{q},\mathbf{R})\), the configuration of our obstacle, or \((\tilde{\mathbf{q}},\tilde{\mathbf{R}})\), the configuration of the polygonal robot.
Now, in general, both procedures above compute the distance using the Euclidean norm between two unique points: one point \(\mathbf{p}^{\prime}\) on a line segment of the robot, and the other \(\underline{\mathbf{p}^{\prime}}\) on the ellipse. This is, in fact, equivalent to the SDF of the ellipse evaluated at \(\mathbf{p}^{\prime}\) by the uniqueness of these two points. Therefore, let \(\mathbf{p}^{\prime}=l^{\prime}_{i}(\tau^{*})\) for some \(0\leq i<M\); then

\[d(\mathcal{E}^{\prime},\mathcal{P}^{\prime})=\psi_{\mathcal{E}}(\mathbf{p}^{\prime})=\psi_{\mathcal{E}}(l^{\prime}_{i}(\tau^{*})). \tag{25}\]
Then, its gradient with respect to \(\mathbf{p}^{\prime}\) is
\[\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime})=\frac{\mathbf{p}^{\prime}-\underline {\mathbf{p}^{\prime}}}{\|\mathbf{p}^{\prime}-\underline{\mathbf{p}^{\prime}}\|}. \tag{26}\]
However, notice that \(\mathbf{p}^{\prime}\) is a point transformed from the polygonal robot's frame using (9), which depends on both
the configurations of the elliptical obstacle and the polygonal robot. Hence the partial derivatives can be computed using the chain rule:
**Proposition IV.2**.: _Let \(\mathcal{E}^{\prime}\) and \(\mathcal{P}^{\prime}\) be the bodies of the elliptical obstacle and robot, respectively, in the obstacle's frame. Let \(\mathbf{p}^{\prime}\) and \(\underline{\mathbf{p}}^{\prime}\) be determined from Proposition IV.1, then_
\[\frac{\partial d}{\partial\mathbf{q}} =\left(\frac{\partial d}{\partial q_{x}},\frac{\partial d}{ \partial q_{y}}\right)=-\mathbf{R}\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime }), \tag{27}\] \[\frac{\partial d}{\partial\mathbf{R}} =\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime})\otimes(\tilde{ \mathbf{R}}\tilde{\mathbf{p}}+(\tilde{\mathbf{q}}-\mathbf{q})),\] (28) \[\frac{\partial d}{\partial\tilde{\mathbf{q}}} =\left(\frac{\partial d}{\partial\tilde{q}_{x}},\frac{\partial d }{\partial\tilde{q}_{y}}\right)=\mathbf{R}\nabla\psi_{\mathcal{E}}(\mathbf{p}^ {\prime}),\] (29) \[\frac{\partial d}{\partial\tilde{\mathbf{R}}} =\mathbf{R}(\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime})\otimes \tilde{\mathbf{p}}). \tag{30}\]
_Furthermore, Eqs. (28) and (30) are derivatives with respect to the rotation matrices; one may compute the derivatives with respect to the rotation angle as_
\[\frac{\partial d}{\partial\theta} =\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime})^{\top}\left[ \frac{\partial\mathbf{R}}{\partial\theta}^{\top}(\tilde{\mathbf{R}}\tilde{ \mathbf{p}}+(\tilde{\mathbf{q}}-\mathbf{q}))\right] \tag{31}\] \[=\text{tr}\left[\frac{\partial d}{\partial\mathbf{R}}\frac{ \partial\mathbf{R}}{\partial\theta}\right],\] \[\frac{\partial d}{\partial\tilde{\theta}} =\nabla\psi_{\mathcal{E}}(\mathbf{p}^{\prime})^{\top}\left[ \mathbf{R}^{\top}\frac{\partial\tilde{\mathbf{R}}}{\partial\tilde{\theta}} \tilde{\mathbf{p}}\right]=\text{tr}\left[\frac{\partial d}{\partial\tilde{ \mathbf{R}}}\frac{\partial\tilde{\mathbf{R}}}{\partial\tilde{\theta}}^{\top}\right]. \tag{32}\]
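For concreteness, here is a sketch of the chain-rule gradients (26), (29) and (32) with respect to the robot configuration, given one witness pair \((\mathbf{p}^{\prime},\underline{\mathbf{p}}^{\prime})\) from Proposition IV.1; `p_body` denotes the corresponding point \(\tilde{\mathbf{p}}\) in the robot frame, and all names are illustrative.

```python
import numpy as np

def robot_distance_gradients(p_prime, p_under, R, R_tilde, p_body):
    """Gradients of d = psi_E(p') w.r.t. the robot pose, per Prop. IV.2."""
    grad_psi = (p_prime - p_under) / np.linalg.norm(p_prime - p_under)  # (26)
    dd_dq_tilde = R @ grad_psi                                          # (29)
    # Derivative of the 2D rotation R~ w.r.t. its angle, written via the
    # entries of R_tilde: dR/dtheta = [[-sin, -cos], [cos, -sin]]
    dRt = np.array([[-R_tilde[1, 0], -R_tilde[0, 0]],
                    [ R_tilde[0, 0], -R_tilde[1, 0]]])
    dd_dtheta_tilde = grad_psi @ (R.T @ dRt @ p_body)                   # (32)
    return dd_dq_tilde, dd_dtheta_tilde
```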
Following both propositions above, we can finally implement CBF using the distance function
\[\Phi(\mathbf{q},\mathbf{R},\tilde{\mathbf{q}},\tilde{\mathbf{R}})=d(\mathcal{ E},\mathcal{P})=d(\mathcal{E}^{\prime},\mathcal{P}^{\prime}) \tag{33}\]
for the elliptical obstacle \(\mathcal{E}(\mathbf{q},\mathbf{R},a,b)\) and polygonal robot \(\mathcal{P}(\tilde{\mathbf{q}},\tilde{\mathbf{R}},\{\tilde{\mathbf{p}}_{i}\})\). Furthermore, \(\Phi\) is differentiable with respect to both \((\mathbf{q},\mathbf{R})\) and \((\tilde{\mathbf{q}},\tilde{\mathbf{R}})\).
## V Polygon-Shaped Robot Safe Navigation in Dynamic Ellipse Environments
In this section, using the distance formula in (33) and assuming known motion of the ellipse obstacles, we derive time-varying control barrier functions to ensure safety for polygon-shaped robots operating in dynamic elliptical environments.
### _Time-Varying Control Barrier Function Constraints_
We assume that there are a total of \(N\) elliptical obstacles in the environment, each having a rigid-body motion with linear velocity \(v_{i}\) and angular velocity \(\omega_{i}\) around its center of mass. We define a time-varying CBF
\[h(\mathbf{x},t):=\min_{i=1}^{N}\Phi(\mathbf{q}_{i}(t),\mathbf{R}_{i}(t), \tilde{\mathbf{q}},\tilde{\mathbf{R}}), \tag{34}\]
where \(\Phi\) is the collision function and \(\mathbf{q}_{i}(t)\) and \(\mathbf{R}_{i}(t)\) denotes the position and orientation of the \(i\)-th ellipse at time \(t\), respectively.
Based on the TV-CBF definition, we construct the safety constraint as follows. We first identify the index \(k\) corresponding to the minimum collision function:
\[k=\arg\min_{i=1}^{N}\Phi(\mathbf{q}_{i}(t),\mathbf{R}_{i}(t), \tilde{\mathbf{q}},\tilde{\mathbf{R}}), \tag{35}\]
and we can compute the time derivative of the CBF as:
\[\frac{\partial h(\mathbf{x},t)}{\partial t}=\frac{\partial\Phi(\mathbf{q}_{k },\mathbf{R}_{k})}{\partial\mathbf{q}_{k}}\frac{\partial\mathbf{q}_{k}}{ \partial t}+\frac{\partial\Phi(\mathbf{q}_{k},\mathbf{R}_{k})}{\partial \mathbf{R}_{k}}\frac{\partial\mathbf{R}_{k}}{\partial t}. \tag{36}\]
Now, by utilizing the known motion of the \(k\)-th ellipse with linear and angular velocity \(v_{k}\) and \(\omega_{k}\), we can express the CBC condition as:
\[\text{CBC}(\mathbf{x},\mathbf{u},t) :=\left[\frac{\partial\Phi(\mathbf{q}_{k},\mathbf{R}_{k},\mathbf{x})}{\partial\mathbf{x}}\right]^{\top}\left(f(\mathbf{x})+g(\mathbf{x})\mathbf{u}\right)+\frac{\partial\Phi(\mathbf{q}_{k},\mathbf{R}_{k})}{\partial\mathbf{q}_{k}}v_{k}\] \[+\frac{\partial\Phi(\mathbf{q}_{k},\mathbf{R}_{k})}{\partial\mathbf{R}_{k}}\frac{\partial\mathbf{R}_{k}}{\partial\theta_{k}}\omega_{k}+\alpha_{h}(\Phi(\mathbf{q}_{k},\mathbf{R}_{k},\mathbf{x}))\geq 0.\]
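A small sketch of how the obstacle-motion terms of this CBC can be assembled, assuming the minimum in (34) is attained at a single obstacle and taking a linear \(\alpha_{h}(h)=\alpha h\) as in Section VI; the array shapes and names are illustrative.

```python
import numpy as np

def tv_cbc_drift(Phi_vals, dPhi_dq, dPhi_dtheta, v, omega, alpha=3.0):
    """Obstacle-motion part of the time-varying CBC plus alpha_h(h).

    Phi_vals:    (N,) collision function values, one per ellipse
    dPhi_dq:     (N, 2) gradients w.r.t. each obstacle center
    dPhi_dtheta: (N,) derivatives w.r.t. each obstacle angle, per (31)
    v, omega:    (N, 2) linear and (N,) angular obstacle velocities
    """
    k = int(np.argmin(Phi_vals))                       # minimizing index (35)
    drift = dPhi_dq[k] @ v[k] + dPhi_dtheta[k] * omega[k]
    return k, drift + alpha * Phi_vals[k]
```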
### _Ground-Robot Navigation_
Suppose the robot has a polygonal shape with \(\{\tilde{\mathbf{p}}_{i}\}\) denoting the vertices, and the robot motion is governed by unicycle kinematics,
\[\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{\theta}\end{bmatrix}=\begin{bmatrix}\cos(\theta)&0\\ \sin(\theta)&0\\ 0&1\end{bmatrix}\begin{bmatrix}v\\ \omega\end{bmatrix}, \tag{37}\]
where \(v\), \(\omega\) represent the robot linear and angular velocity, respectively. The state and input are \(\mathbf{x}:=[x,y,\theta]^{\top}\in\mathbb{R}^{2}\times[-\pi,\pi)\), \(\mathbf{u}:=[v,\omega]^{\top}\in\mathbb{R}^{2}\). The CLF for the unicycle model is defined as a quadratic form \(V(\mathbf{x})=(\mathbf{x}-\mathbf{x}^{*})^{\top}\mathbf{Q}(\mathbf{x}-\mathbf{ x}^{*})\), where \(\mathbf{x}^{*}\) denotes the desired equilibrium and \(\mathbf{Q}\) is a positive-definite matrix [32]. We then define the goal region \(\mathcal{G}\) as a disk centered at the 2D position of the desired state \(\mathbf{x}^{*}\), with a radius \(r\).
By writing the robot's position as \(\tilde{\mathbf{q}}=[x,y]^{\top}\) and its orientation via the rotation matrix \(\tilde{\mathbf{R}}(\theta)\), we write the shape \(S(\mathbf{x})\) of the robot in terms of its state:
\[S(\mathbf{x}):=\text{conv}\{\tilde{\mathbf{q}}+\tilde{\mathbf{R}}(\theta) \tilde{\mathbf{p}}_{i}\} \tag{38}\]
where \(\tilde{\mathbf{p}}_{i}\) denotes the vertices of the polygon and \(\text{conv}\{\cdot\}\) denotes the convex hull of points. With this definition, we can derive the CBF for the polygon-shaped unicycle model, as in (34).
### _K-joint Robot Arm Safe Stabilizing Control_
In this section, we discuss methods for controlling a 2D K-joint robot arm in a dynamical ellipse environment by utilizing our proposed CBF construction approach. For such robots, the links are intrinsically interconnected due to kinematic chaining. This means that controlling any one link will influence the pose of all subsequent links.
The dynamics of the robot arm are captured by:
\[\dot{\boldsymbol{\theta}}=\boldsymbol{\omega}, \tag{39}\]
where \(\boldsymbol{\theta}=[\tilde{\theta}_{1},\tilde{\theta}_{2},\dots,\tilde{ \theta}_{K}]^{\top}\) and \(\boldsymbol{\omega}=[\omega_{1},\omega_{2},\dots,\omega_{K}]^{\top}\).
For the robot arm, each link has an associated 2D shape, denoted as \(S_{i}(\boldsymbol{\theta})\), which depends on the state of the arm. The overall shape of the robot arm is given by the union of these shapes, \(S(\boldsymbol{\theta})=\bigcup_{i=1}^{K}S_{i}(\boldsymbol{\theta})\). For simplicity, we assume each \(S_{i}\) is a line segment.
For each link \(i\), its state in \(SE(2)\) consists of a position \(\tilde{\mathbf{q}}_{i}=[x_{i},y_{i}]^{\top}\):
\[x_{i} =x_{i-1}+L_{i}\cos\left(\sum_{j=1}^{i}\tilde{\theta}_{j}\right), \tag{40}\] \[y_{i} =y_{i-1}+L_{i}\sin\left(\sum_{j=1}^{i}\tilde{\theta}_{j}\right), \tag{41}\]
and a rotation matrix \(\tilde{\mathbf{R}}_{i}\) corresponding to \(\underline{\theta}_{i}:=\sum_{j=1}^{i}\tilde{\theta}_{j}\). For simplicity, we suppose \(x_{1}=0\) and \(y_{1}=0\), and \(L_{i}\) represents the length of the \(i\)-th link. The robot state can also be represented as multiple \(SE(2)\) configurations corresponding to each link, from \((\tilde{\mathbf{q}}_{1},\tilde{\mathbf{R}}_{1})\) to \((\tilde{\mathbf{q}}_{K},\tilde{\mathbf{R}}_{K})\). Additionally, we denote \(\tilde{\mathbf{q}}_{K+1}\) as the end effector.
We define the CLF for the K-joint robot arm as \(V(\boldsymbol{\theta})=(\boldsymbol{\theta}-\boldsymbol{\theta}^{*})^{\top} \mathbf{Q}(\boldsymbol{\theta}-\boldsymbol{\theta}^{*})\), where \(\mathbf{Q}\) is a positive-definite matrix, and \(\boldsymbol{\theta}^{*}\) is the desired joint states. The goal region \(\mathcal{G}\) is specified as a disk centered at the position of the end effector corresponding to state \(\boldsymbol{\theta}^{*}\), with a defined radius \(r\).
For safety assurance, the CBF is constructed using the distance between the robot arm and nearby elliptical obstacles:
\[h(\boldsymbol{\theta})=\min_{i=1,\ldots,N,\,j=1,\ldots,M}\Phi(\mathbf{q}_{i}, \mathbf{R}_{i},\tilde{\mathbf{q}}_{j},\tilde{\mathbf{R}}_{j}). \tag{42}\]
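The CBF (42) only needs the \(SE(2)\) configuration of every link, which follows from the forward kinematics (40)-(41). Below is a minimal sketch; placing the arm base at the origin is an assumption about the indexing convention.

```python
import numpy as np

def arm_link_configurations(theta, lengths):
    """SE(2) configurations of each link from joint angles, per (40)-(41).

    Returns a list of (endpoint, cumulative angle) pairs; the segment from
    the previous endpoint to the current one is the link's shape S_i.
    """
    q = np.zeros(2)                       # arm base, assumed at the origin
    configs, cum = [], 0.0
    for th, L in zip(theta, lengths):
        cum += th                         # cumulative angle sum_{j<=i} theta_j
        q = q + L * np.array([np.cos(cum), np.sin(cum)])
        configs.append((q.copy(), cum))
    return configs                        # final endpoint is the end effector
```

Evaluating \(\Phi\) between every link segment and every ellipse and taking the minimum then yields (42).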
## VI Evaluation
In this section, we show the efficacy of our proposed CBF construction techniques using simulation examples, focusing on ground-robot navigation and 2-D robot arm control.
Fig. 1 contrasts the \(SE(2)\) distance function with the \(\mathbb{R}^{2}\) counterpart by visualizing their level sets. Our proposed
Fig. 3: Safe stabilization of a 3-joint robot arm. The green circle denotes the goal region, and the gray box denotes the base of the arm. The arm is shown in blue and the trajectory of its end-effector is shown in red. The trajectories of the moving elliptical obstacles are shown in purple.
Fig. 2: Safe navigation in a dynamical elliptical environment. (a) shows the initial pose of the triangular robot and the environment. (b) shows the triangular robot passing through the narrow space between two moving ellipses. (c) shows the robot adjusts its pose to avoid the moving obstacle. (d) shows the final pose of the robot that reaches the goal region and the current environment. In (e), we plot the trajectory of navigating a circular robot in the same environment.
Fig. 1: Comparative analysis of the \(SE(2)\) and \(\mathbb{R}^{2}\) signed distance functions for elliptical obstacles. The cyan triangle represents the rigid-body robot, with its orientation varying across the sequence. The importance of considering robot orientation in distance computations becomes evident: while the \(SE(2)\) function accounts for this orientation, the \(\mathbb{R}^{2}\) approximation treats the robot as an encapsulating circle with radius 1. Level sets at distances 0.2 and 2 are depicted for both functions.
\(SE(2)\) approach incorporates the orientation of the rigid-body robot, yielding notably improved results, particularly when the robot is close to obstacles.
To highlight the significance of accurate robot shape representation, we draw a comparison with a baseline circular robot CBF formulation. In Fig. 2, we compare safe navigation using our proposed \(SE(2)\) CBF approach with a regular \(\mathbb{R}^{2}\) CBF approach. For both methods, we set \(\mathbf{k}(\mathbf{x})=[v_{\max},0]^{\top}\) where \(v_{\max}=3.0\) is the maximum linear velocity. The remaining parameters were \(\lambda=100\), \(\alpha_{V}(V(\mathbf{x}))=2V(\mathbf{x})\), and \(\alpha_{h}(h(\mathbf{x},t))=3h(\mathbf{x},t)\).
We demonstrate safe navigation to a goal state. In Fig. 2(a), the triangular robot starts the navigation with its position centered at \((0,0)\) and orientation \(\theta=\pi/4\). In Fig. 2(b), the robot adeptly navigates the narrow passage between two dynamic obstacles. In Fig. 2(d), we see that the robot reaches the goal region without collision. In Fig. 2(e), when the robot is conservatively modeled as a circle navigating the identical environment, it is evident that the robot has to opt for a more circuitous route to circumvent obstacles; this is due to its inability to traverse certain constricted spaces, as illustrated in Fig. 2(b). These outcomes underscore the superior performance of our \(SE(2)\) CBF methodology. Another advantage of the \(SE(2)\) formulation lies in its assurance of a uniform relative degree of \(1\) for the constructed CBF, obviating the need to model a point off the wheel axis [33].
In the following set of experiments, we consider safe stabilization of a 3-joint robot arm in a dynamical elliptical environment. We set \(\mathbf{k}(\mathbf{x})=[0,0,0]^{\top}\) and restrict the joint control bounds with \(|\omega_{i}|\leq 3\). In Fig. 3, the robot arm is able to elude the mobile ellipses by nimbly adjusting its pose. In Fig. 4, we show the control inputs of each joint over time. We see that when the robot arm is close to the obstacles, it takes large control inputs to adjust its pose. In Fig. 5, we show the CLF and CBF values over time. A consistently positive CBF value throughout the trajectory signifies safety assurance, while the decreasing CLF value indicates convergence to the desired state. Moreover, the CLF value may increase when the arm is close to obstacles (i.e., when the CBF value is low); this comes from the relaxation of the CLF-CBF QP to ensure the feasibility of the program.
## VII Conclusion
We present an analytic distance formula between elliptical and polygonal objects. Leveraging this formula, we construct a time-varying control barrier function that ensures the safe autonomy of a polygon-shaped robot operating in dynamical elliptical environments. The efficacy of the proposed approach is demonstrated in rigid-body navigation and multi-link robot arm problems. Future work will consider extending the formulation to 3-D environments (e.g. drone navigation, robot arm manipulation) and estimating the geometry and dynamics of the environment with on-board sensing.
|
2309.03619 | Understanding Self-Supervised Learning of Speech Representation via
Invariance and Redundancy Reduction | Self-supervised learning (SSL) has emerged as a promising paradigm for
learning flexible speech representations from unlabeled data. By designing
pretext tasks that exploit statistical regularities, SSL models can capture
useful representations that are transferable to downstream tasks. This study
provides an empirical analysis of Barlow Twins (BT), an SSL technique inspired
by theories of redundancy reduction in human perception. On downstream tasks,
BT representations accelerated learning and transferred across domains.
However, limitations exist in disentangling key explanatory factors, with
redundancy reduction and invariance alone insufficient for factorization of
learned latents into modular, compact, and informative codes. Our ablations
study isolated gains from invariance constraints, but the gains were
context-dependent. Overall, this work substantiates the potential of Barlow
Twins for sample-efficient speech encoding. However, challenges remain in
achieving fully hierarchical representations. The analysis methodology and
insights pave a path for extensions incorporating further inductive priors and
perceptual principles to further enhance the BT self-supervision framework. | Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann | 2023-09-07T10:23:59Z | http://arxiv.org/abs/2309.03619v2 | Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction
###### Abstract
The choice of the objective function is crucial in emerging high-quality representations from self-supervised learning. This paper investigates how different formulations of the Barlow Twins (BT) objective impact downstream task performance for speech data. We propose Modified Barlow Twins (MBT) with normalized latents to enforce scale-invariance and evaluate on speaker identification, gender recognition and keyword spotting tasks. Our results show MBT improves representation generalization over original BT, especially when fine-tuning with limited target data. This highlights the importance of designing objectives that encourage invariant and transferable representations. Our analysis provides insights into how the BT learning objective can be tailored to produce speech representations that excel when adapted to new downstream tasks. This study is an important step towards developing reusable self-supervised speech representations.
Yusuf Brima\({}^{\star\dagger}\) Ulf Krumnack\({}^{\star}\) Simone Pika\({}^{\dagger}\) Gunther Heidemann\({}^{\star}\)\({}^{\star}\) Computer Vision, Institute of Cognitive Science, Osnabrueck University
\({}^{\dagger}\) Comparative BioCognition, Institute of Cognitive Science, Osnabrueck University
Acoustic Analysis, Barlow Twins, Self-supervised learning, Invariance, Redundancy Reduction, Speech representation learning.
+
Footnote †: This work was funded by the research training group (RTG) in “Computational Cognition” (GRK2340) provided by the Deutsche Forschungsgemeinschaft (DFG), Germany, and an EU-Consolidator grant (772000, TurnTaking).
To appear in _Proc. ICASSP 2024, Seoul, Korea, 14-19 April 2024_
## 1 Introduction
Speech processing applications like speaker identification, diarization, segmentation, and voice assistants rely heavily on capturing statistical regularities from speech data. However, supervised learning methods for speech face significant data labeling challenges [1, 2, 3]. We present a self-supervised learning approach for speech representation learning inspired by the Barlow Twins framework [4]. By harnessing redundancy reduction principles [5] and using differently augmented views of speech as input, our method can capture meaningful representations of speech without reliance on manually labeled data.
Our focus is assessing how the choice of objective function influences downstream task performance [6, 7]. Through comprehensive sets of experiments spanning speaker identification, gender recognition, and keyword spotting, we rigorously analyze how the emergent representations capture robust, invariant, and hierarchical speech traits [4]. Unlike existing supervised methods [1, 8, 9, 10, 11], our approach significantly reduces data dependency, offering new directions for representation learning. Comparisons against supervised techniques provide insights into representation quality using established evaluation protocols.
### Related Literature
Self-Supervised Learning (SSL) has emerged as a promising complement to supervised learning for representation learning [12, 13, 14, 15, 16, 17, 18, 19]. SSL exploits the inherent statistical regularities within unlabeled data, enabling models to learn meaningful underlying representations without relying on manual annotations. This approach has
demonstrated state-of-the-art (SOTA) performance in various domains, including computer vision and natural language processing, highlighting its efficacy in capturing underlying factors of variation in data.
In recent years, the application of SSL has extended to the field of speech processing [20, 1, 10, 11]. Notably, Schneider et al. [21] introduced wav2vec, an SSL method designed for learning speech representations from raw audio. By incorporating a quantization module and contrastive learning, wav2vec achieves competitive performance compared to supervised counterparts in tasks such as speaker verification and speech recognition. Similarly, Hsu et al. [9] proposed HuBERT, an SSL technique for speech representation learning based on masked language modeling. HuBERT utilizes a BERT-like model trained to predict masked speech tokens, enabling the capture of both semantic and phonetic information in the learned representations. Jing et al. [22] introduced a Redundancy Reduction Twins Network (RRTN) for emotional state prediction using a joint Barlow Twins and Concordance Correlation Coefficient (CCC) loss. Anton et al. [23] recently carried out an implementation of BT for speech representation learning, which was evaluated on the HEAR 2021 Challenge, outperforming SOTA or achieving comparable results. Unlike [23], the objective of this work is to investigate the question: how does the choice of the BT objective function impact downstream task performance?
## 2 Method
### Learning Framework
The Barlow Twins (BT) framework, as depicted in Fig. 1, employs a joint embedding architecture (JEA) comprising an encoder \(f_{\theta}\) to project augmented mini-batch speech views \(X^{A}\) and \(X^{B}\) into a latent space, yielding representations \(Z^{A}\) and \(Z^{B}\). The optimization objective, shown in Eqs. (2) and (1), is to learn invariance between the latents of the same sample and to reduce redundancy between different latent components across the mini-batch. However, the per-dimension normalization of the cross-correlation in the original BT, shown in Eq. (2), may influence the gradient scale without reflecting the relative relationships between the latent vectors.

To address this, we introduce the Modified Barlow Twins (MBT) objective: we \(L_{2}\)-normalize the latent vectors \(Z_{b}^{A}\) and \(Z_{b}^{B}\) of each sample, as shown in Eq. (3), which can then be used in Eq. (1). This provides balanced gradients, mitigates the effect of varying feature magnitudes, and bases alignment on directions rather than scale.
In Section 3, we evaluate the generalization performance of MBT.
\[\mathcal{L}(C;\lambda)\triangleq\underbrace{\sum_{i}{(1-C_{ii})^{2}}}_{\text{ invariance}}+\lambda\underbrace{\sum_{i}{\sum_{j\neq i}{(C_{ij})^{2}}}}_{\text{ redundancy reduction}}. \tag{1}\]
\[\mathcal{C}_{ij}^{\text{BT}}\triangleq\frac{\sum_{b}{z_{b,i}^{A}z_{b,j}^{B}}}{ \sqrt{\sum_{b}{\left({z_{b,i}^{A}}\right)^{2}}}\sqrt{\sum_{b}{\left({z_{b,j}^ {B}}\right)^{2}}}}, \tag{2}\]
\[C_{ij}^{\text{MBT}}=\sum_{b}\frac{Z_{b,i}^{A}\,Z_{b,j}^{B}}{\|Z_{b}^{A}\|_{2}\,\|Z_{b}^{B}\|_{2}}. \tag{3}\]
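A minimal numpy sketch of the MBT objective, combining Eqs. (1) and (3); the default trade-off \(\lambda\) is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def mbt_loss(ZA, ZB, lam=5e-3):
    """Modified Barlow Twins loss: Eq. (1) on the cross-correlation of Eq. (3).

    ZA, ZB: (n, m) latent mini-batches from the two augmented views.
    """
    ZA_n = ZA / np.linalg.norm(ZA, axis=1, keepdims=True)   # L2-normalize rows
    ZB_n = ZB / np.linalg.norm(ZB, axis=1, keepdims=True)
    C = ZA_n.T @ ZB_n                                       # (m, m), Eq. (3)
    invariance = np.sum((1.0 - np.diag(C)) ** 2)            # diagonal term
    redundancy = np.sum(C**2) - np.sum(np.diag(C) ** 2)     # off-diagonal term
    return invariance + lam * redundancy
```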
### Datasets
The datasets consist of upstream and downstream data, summarized in Table 1. The upstream data includes VoxCeleb-1, LibriSpeech-100, and LibriSpeech-360, providing diverse speech for training and baseline tasks. The Google Speech Commands and world leaders at the US Congress (WLUC) are used for downstream tasks of keyword spotting, speaker ID, and gender recognition.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & Source & Samples & Classes & Stage \\ \hline
1 & VoxCeleb-1 & 75,81 & 1,211 & Upstream \\
2 & LibriSpeech-100 & 14,385 & 128 & Upstream \\
3 & LibriSpeech-360 & 104,385 & 921 & Upstream \\
4 & Speech Commands & 7,985 & 2 & Downstream \\
5 & WLUC & 7,500 & 5 & Downstream \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of upstream and downstream datasets.
## 3 Experiment
We pre-trained on the large-scale audio datasets in Table 1 for 50 epochs for each upstream model with a mini-batch size of \(n=64\) and latent dimensionality of \(m=2048\).
### Audio preprocessing
All audio samples are converted to a 16 kHz sampling rate if needed and split into segments of 1 second. Log-scaled mel-spectrograms are generated using 64 ms windows with 32 ms hop size, capturing 513 mel-frequency bins from 0 to 8000 Hz. This process yields mel-spectrograms \(X\in\mathbb{R}^{513\times 32}\), with 513 frequency bins and 32 time frames, forming input tensors \(X_{b}\in\mathbb{R}^{n\times 1\times 513\times 32}\) for the encoder, where \(n\) is the mini-batch size.
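A sketch of this preprocessing, assuming the librosa package; the FFT size of 1024 samples (64 ms at 16 kHz) is an inference from the stated window length, since the text specifies only the bin count.

```python
import librosa

def log_mel_spectrogram(wav_path):
    """1-second log-mel spectrogram per Sec. 3.1: 16 kHz, 64 ms window,
    32 ms hop, 513 mel bins over 0-8000 Hz. Returns an array of shape
    (513, ~32) matching X in R^{513 x 32}."""
    y, sr = librosa.load(wav_path, sr=16000, duration=1.0)
    y = librosa.util.fix_length(y, size=16000)        # pad/trim to 1 second
    S = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=512,
        n_mels=513, fmin=0.0, fmax=8000.0)
    return librosa.power_to_db(S)                     # log scaling
```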
### Results and Analysis
In our analysis and interpretation of the results, we have adhered to a specific convention. Tables 2, 3, and 4 present the top-1 test accuracy for various downstream tasks and datasets. The fractions denote the proportion of the downstream dataset used during model fine-tuning, which corresponds to Stage 2 in Fig. 1. These downstream tasks are Speaker Recognition, Gender Recognition, and Keyword Spotting with the associated learning objectives being original Barlow Twins (BT) and Modified Barlow Twins (MBT). This is preceded by the evaluation of upstream datasets (LibriSpeech-100, LibriSpeech-360, and VoxCeleb1) in ascending order of sample heterogeneity and representativeness of data distribution in real-world scenarios.
Our findings suggest that the MBT objective function enhances performance compared to the original Barlow Twins (BT) formulation, especially when fine-tuning models with limited downstream data. This trend is consistent across various down
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Fraction (\%)} & \multirow{2}{*}{Baseline} & \multicolumn{2}{c}{LibriSpeech-100} & \multicolumn{2}{c}{LibriSpeech-360} & \multicolumn{2}{c}{VoxCeleb1} \\ \cline{3-8} & & BT & MBT & BT & MBT & BT & MBT \\ \hline
5 & 34.21 & 39.47 & **47.37** & 28.95 & 42.11 & 36.84 & 26.32 \\
10 & 54.67 & 64.00 & **78.67** & 54.67 & 52.00 & 48.00 & 48.00 \\
50 & 75.20 & **83.73** & 81.33 & 77.60 & 73.60 & 68.00 & 64.80 \\
100 & 84.53 & **84.93** & 84.53 & 81.20 & 78.67 & 75.07 & 69.20 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation on the WLUC dataset for Speaker Recognition where the fraction indicates the proportion of downstream dataset used to fine-tune the models.
Figure 1: Proposed speech representation learning.
stream tasks. For instance, in the Speaker Recognition task with only 5% of the downstream data used for fine-tuning, MBT outperforms BT by 7.9% and 10.0% when pre-trained on LibriSpeech-100 and LibriSpeech-360, respectively. Similarly, in the keyword spotting task, MBT exhibits a substantial improvement of 10.16% when fine-tuning on a small fraction of LibriSpeech-360 data, highlighting its effectiveness in low-resource scenarios.
However, as the proportion of downstream data increases, the performance gains with MBT tend to diminish, and in some cases, BT achieves similar or slightly better results. This suggests that the choice between MBT and BT may depend on the availability of labeled data for fine-tuning.
MBT exhibits promising capabilities in learning invariant and robust representations, particularly in low-resource downstream settings. The normalized feature correlations in MBT are likely serving as a regularization mechanism, preventing overfitting to speaker-specific cues during pre-training. This, in turn, facilitates better generalization when adapting to new tasks with limited labeled data.
## 4 Conclusion
This work provides key insights into designing self-supervised learning to produce reusable speech representations for limited-resource tasks. The proposed Modified Barlow Twins demonstrates improved generalization via normalized latent variables, enabling robust, invariant representation learning from unlabeled speech. Going forward, using information-theoretic measures for learning representations via invariance and redundancy reduction during pre-training is promising. Additionally, exploring alternate normalization techniques to encourage latent invariance and/or using sparse network architectures for pre-training are important directions. Overall, this work lays a strong foundation for developing self-supervised speech representations to unlock new possibilities for speech applications.
|
2309.04176 | Collapsing of Mean Curvature Flow of Hypersurfaces to Complex
Submanifolds | In this paper, we produce explicit examples of mean curvature flow of
(2m-1)-dimensional submanifolds which converge to (2m-2)-dimensional
submanifolds at a finite time. These examples are a special class of
hyperspheres in $\mathbb{C}^{m}$ with a $U(m)$-invariant K\"ahler metrics. We
first discuss the mean curvature flow problem and then investigate the type of
singularities for them. | Farnaz Ghanbari, Samreena | 2023-09-08T07:44:47Z | http://arxiv.org/abs/2309.04176v1 | # Collapsing of Mean Curvature Flow of Hypersurfaces
###### Abstract
In this paper, we produce explicit examples of mean curvature flow of \((2m-1)\)-dimensional submanifolds which converge to \((2m-2)\)-dimensional submanifolds at a finite time. These examples are a special class of hyperspheres in \(\mathbb{C}^{m}\) with a \(U(m)\)-invariant Kahler metrics. We first discuss the mean curvature flow problem and then investigate the type of singularities for them.
## 1 Introduction
Mean curvature flow is a well-known geometric evolution equation for hypersurfaces in which each point moves with a velocity given by the mean curvature vector. If the hypersurface is compact, the short-time existence and uniqueness of the mean curvature flow are well-known. In general, it is very hard to find an exact solution of the mean curvature flow problem; in fact, there are very few explicit examples. Round spheres in Euclidean space are non-trivial examples of hypersurfaces evolving under mean curvature flow: they shrink concentrically inward until they collapse to a single point in finite time. Another instance is the marriage ring, a thin torus which under mean curvature flow shrinks to a circle. A round cylinder also remains round and finally converges to a line. Mean curvature flow develops singularities if the second fundamental forms of the time-dependent immersions become unbounded. It is well-known that the mean curvature flow of any closed manifold in Euclidean space develops singularities in finite time.

The mean curvature flow was first investigated by Brakke in 1978 [1]. Later, Huisken [8] showed that any closed convex hypersurface in Euclidean space shrinks to a round point in finite time. He then proved [7] that the same holds for hypersurfaces in general Riemannian manifolds satisfying a strong convexity condition which takes into account the geometry of the ambient space. Brakke used geometric measure theory, while Huisken employed a more classical differential geometric approach. In order to describe singularities of the flow, Osher-Sethian introduced a level-set formulation for the mean curvature flow, which was later investigated in detail by Evans-Spruck ([6], [5], [4], [3]) and Chen-Giga-Goto [2]. Ilmanen revealed in [9] the relation between the level-set formulation and the geometric measure theory approach.
In this paper, we consider a class of canonical hyperspheres in \(\mathbb{C}^{m}\). We will make an important assumption about the symmetry group: namely, we will require that the Kahler metric on \(\mathbb{C}^{m}\setminus 0\) has \(U(m)\) as its group of isometries. We study the mean curvature flow problem for hyperspheres in \(Bl_{0}\mathbb{C}^{m}\), which reduces to an ordinary differential equation due to the invariance of the metric and mean curvature under isometries. In general, it is not easy to compute the second fundamental form in order to investigate the singularities and their types. We have computed all the principal curvatures and observed that near the exceptional divisor, all the principal curvatures vanish except in one direction, in which the curvature goes to infinity. By knowing the principal curvatures, we can compute the mean curvature and also the square of the norm of the second fundamental form. There is one known example, the Burns metric on \(Bl_{0}\mathbb{C}^{2}\), for which we will study the mean curvature flow problem in Section 5 and exhibit the exact time of singularity.
The rest of the paper is organized as follows. Section 2 is devoted to definitions, some well-known theorems and some results that will be used throughout the work. Section 3 focuses on the blow up of \(\mathbb{C}^{m}\) at the origin. We discuss the condition when a \(U(m)\)-invariant metric on \(\mathbb{C}^{m}\setminus 0\) can extend to the blow up of \(\mathbb{C}^{m}\) at the origin. In Section 4, we state and prove a proposition on computing principal curvatures on special cases that leads to the proof of our main theorem. Finally Section 5 is dedicated to the mean curvature flow for our setting and some examples.
## 2 Preliminaries
In this section, we recall some basic notions and fix some notations used throughout this paper.
**Definition 2.1**: _Let \(F_{0}:\Sigma^{m}\longrightarrow M^{m+1}\) be a smooth immersion of an \(m\)-dimensional manifold. The mean curvature flow of \(F_{0}\) is a family of smooth immersions \(F_{t}:\Sigma\longrightarrow M^{m+1}\) for \(t\in[0,T)\) such that setting \(F(p,t)=F_{t}(p)\) the map \(F:\Sigma\times[0,T):\Sigma^{m}\longrightarrow M^{m+1}\) is a smooth solution of the following system of PDE's_
\[\begin{cases}\frac{\partial}{\partial t}F(p,t)=H(p,t)n(p,t),\\ F(p,0)=F_{0}(p)\,,\end{cases}\]
_where \(H(p,t)\) and \(n(p,t)\) are respectively the mean curvature and the unit normal of the hypersurface \(F_{t}\) at the point \(p\in\Sigma\)._
Usually the Riemannian manifold \(M\) is called the ambient manifold and the parameter t is considered as time. Minimal submanifolds, i.e. submanifolds with zero mean curvature everywhere, are the stationary solutions of this flow.
There are two important propositions in the Euclidean case. We use them in the proof of our main theorem on mean curvature flow problem [10]. The propositions are as follows.
**Proposition 2.2**: _If the second fundamental form is bounded in the interval \([0,T)\) with \(T<+\infty\), then all its covariant derivatives are also bounded._
**Proposition 2.3**: _If the second fundamental form is bounded in the interval \([0,T)\) with \(T<+\infty\), then \(T\) cannot be a singular time for the mean curvature flow of a compact hypersurface \(F:\Sigma\times[0,T)\longrightarrow\mathbb{R}^{n+1}\)._
From these two propositions, we have the following Remark.
**Remark 2.4**: _The above estimates can be obtained independently of \(T\) and of the initial data._
One of the most important problems in studying the mean curvature flow is to understand the possible singularities the flow goes through. We introduce the notion of singularity in mean curvature flow and their types in the following.
**Definition 2.5**: _If the squared norm of the second fundamental form \(|A|^{2}\) blows up as \(t\longrightarrow T\), then we call \(T\) a singular time of the flow._
**Definition 2.6**: _We say that the flow is developing a type \(I\) singularity at time \(T\) if there exists a constant \(C>1\) such that we have the upper bound_
\[max_{p\in\Sigma}|A(p,t)|^{2}\leq\frac{C}{T-t}.\]
_Otherwise, we say it is a type \(II\) singularity._
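A standard example illustrating this definition is the round sphere: a sphere of radius \(R_{0}\) in \(\mathbb{R}^{n+1}\) evolves with radius \(R(t)=\sqrt{R_{0}^{2}-2nt}\), since \(H=n/R\), so \(|A|^{2}=n/R(t)^{2}=\frac{1}{2(T-t)}\) with \(T=R_{0}^{2}/2n\); the flow therefore develops a type \(I\) singularity at \(T\).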
## 3 Kahler Metrics on the Blow Up of \(\mathbb{C}^{m}\) at the Origin
We consider the blow up of \(\mathbb{C}^{m}\) at the origin and denote it by \(Bl_{0}\mathbb{C}^{m}.\) It is defined as following:
\[Bl_{0}\mathbb{C}^{m}=\{((z_{1},z_{2},\ldots,z_{m}),[t_{1},t_{2},\ldots,t_{m}]) \in\mathbb{C}^{m}\times\mathbb{C}P^{m-1}:z_{i}t_{j}-z_{j}t_{i}=0\}\subset \mathbb{C}^{m}\times\mathbb{C}P^{m-1}\,.\]
There is a natural projection map \(\pi_{1}:Bl_{0}\mathbb{C}^{m}\to\mathbb{C}^{m}\) defined by
\[\pi_{1}((z_{1},z_{2},\ldots,z_{m}),[t_{1},t_{2},\ldots,t_{m}])=(z_{1},z_{2},\ldots,z_{m})\,.\]
For \(p\in\mathbb{C}^{m}\setminus 0\), the inverse image \(\pi_{1}^{-1}(p)\) is the single point \((p,[p])\), where \([p]\in\mathbb{C}P^{m-1}\) is the line through \(p\) and the origin.
The **exceptional divisor**\(E\) is defined as the inverse image of the origin, i.e., \(E=\pi_{1}^{-1}(0)=\mathbb{C}P^{m-1}.\)
Moreover the map \(\pi_{1}\) can be restricted to a biholomorphism
\[\pi_{1}:Bl_{0}\mathbb{C}^{m}\setminus E\to\mathbb{C}^{m}\setminus 0.\]
A system of charts that covers the exceptional divisor is given as follows: for every \(i=1,2,\ldots,m,\)
\[U_{i}=\{((z_{1},z_{2},\ldots,z_{m}),[t_{1},t_{2},\ldots,t_{m}]):t_{i}\neq 0,z_{j}=z_{i}t_{j}/t_{i}\}\,.\]
The coordinate map \(\Phi_{i}:U_{i}\to\mathbb{C}^{m}\) is defined as
\[((z_{1},z_{2},\ldots,z_{m}),[t_{1},t_{2},\ldots,t_{m}])\to\left(z_{i},\frac{t _{1}}{t_{i}},\ldots,\frac{t_{i-1}}{t_{i}},\frac{t_{i+1}}{t_{i}},\ldots,\frac{t _{m}}{t_{i}}\right),\]
with inverse map \(\Phi_{i}^{-1}:\mathbb{C}^{m}\to U_{i}\)
\[(z_{1},z_{2},\ldots,z_{m})\to((z_{1}z_{i},z_{i}z_{2},\ldots,z_{i},\ldots,z_{i} z_{m}),[z_{1},\ldots,z_{i-1},1,z_{i+1},\ldots,z_{m}]). \tag{3.1}\]
For every \(i=1,2,\ldots,m,\) the chart \(U_{i}\) intersects the exceptional divisor \(E\):
\[E\cap U_{i}=\{z_{i}=0\}\,.\]
We now take the smooth \((1,1)\)-form on \(\mathbb{C}^{m}\setminus 0\) given by
\[\omega=\sqrt{-1}\partial\bar{\partial}\log(S),\]
where \(S=\sum_{i=1}^{m}|z_{i}|^{2}.\)
The pull back of the smooth form \(\omega=\sqrt{-1}\partial\bar{\partial}\log(S)\) on \(\mathbb{C}^{m}\setminus 0\) extends to the Fubini-Study metric on the exceptional divisor \(E=\mathbb{C}P^{m-1}.\)
The pull back \(\pi_{1}^{*}\omega\) is given in local coordinates (3.1) by,
\[\pi_{1}^{*}\omega =\sqrt{-1}\partial\bar{\partial}\log\big(|z_{i}|^{2}(|z_{1}|^{2}+|z_{2}|^{2}+\cdots+|z_{i-1}|^{2}+1+|z_{i+1}|^{2}+\cdots+|z_{m}|^{2})\big)\] \[=\sqrt{-1}\partial\bar{\partial}\log\big(|z_{1}|^{2}+|z_{2}|^{2}+\cdots+|z_{i-1}|^{2}+1+|z_{i+1}|^{2}+\cdots+|z_{m}|^{2}\big). \tag{3.2}\]
Clearly (3.2) is the Fubini-Study metric on the exceptional divisor \(E\) in the homogeneous coordinates \([z_{1},\ldots,z_{i-1},1,z_{i+1},\ldots,z_{m}].\)
Let \(g:\mathbb{C}^{m}\to\mathbb{R}\) be a smooth function depending only on \(S=\sum_{i=1}^{m}|z_{i}|^{2}.\) Then the smooth form
\[\omega=\sqrt{-1}\partial\bar{\partial}f(S)=\sqrt{-1}\partial\bar{\partial}( \log S+g(S)) \tag{3.3}\]
defines a Kahler metric on \(\mathbb{C}^{m}\setminus\{0\}\) if and only if \(\frac{1}{S}+g_{S}>0\) and \(g_{S}+Sg_{SS}>0.\) The next proposition explains when the Kahler form (3.3) on \(\mathbb{C}^{m}\setminus 0\) extends to \(Bl_{0}\mathbb{C}^{m}.\)
**Proposition 3.1**: _The smooth form \(\omega=\sqrt{-1}\partial\bar{\partial}(\log S+g(S))\) on \(\mathbb{C}^{m}\setminus\{0\}\) extends to Kahler metric on \(Bl_{0}\mathbb{C}^{m}\) if and only if \(g_{S}(0)>0,\)\(\frac{1}{S}+g_{S}>0\) and \(g_{S}+Sg_{SS}>0.\)_
_Proof._ For the sake of simplicity, we only prove the case when \(m=2.\) The general case follows from the same argument.
Given the projection map
\[\pi_{1}:Bl_{0}\mathbb{C}^{2}\rightarrow\mathbb{C}^{2},\]
on the chart \(U_{1}\) we have \(S=|z_{1}|^{2}(1+|z_{2}|^{2})\) and \(E\cap U_{1}=\left\{z_{1}=0\right\}.\) The pull back of the Kahler metric (3.3) to \(Bl_{0}\mathbb{C}^{2}\) is given in coordinates (3.1) by
\[\pi_{1}^{*}\omega=\begin{bmatrix}(1+|z_{2}|^{2})(g_{S}+Sg_{SS})&z_{1}\bar{z_{ 2}}(g_{S}+Sg_{SS})\\ z_{2}\bar{z_{1}}(g_{S}+Sg_{SS})&|z_{1}|^{2}(g_{S}+|z_{1}|^{2}|z_{2}|^{2}g_{SS} )+\frac{1}{1+|z_{2}|^{2}}\end{bmatrix}.\]
The restriction of \(\pi_{1}^{*}\omega\) to the exceptional divisor \(E\) is:
\[\pi_{1}^{*}\omega|_{E}=\begin{bmatrix}(1+|z_{2}|^{2})g_{S}(0)&0\\ 0&\frac{1}{1+|z_{2}|^{2}}\end{bmatrix}.\]
Clearly \(\pi_{1}^{*}\omega|_{E}\) is positive definite if and only if \(g_{S}(0)>0.\)
In the same way, on \(U_{2}\) the pull back \(\pi_{1}^{*}\omega\), given by
\[\pi_{1}^{*}\omega=\begin{bmatrix}\frac{1}{1+|z_{1}|^{2}}+|z_{2}|^{2}(g_{S}+|z_ {1}|^{2}|z_{2}|^{2}g_{SS})&z_{1}\bar{z_{2}}(g_{S}+Sg_{SS})\\ z_{2}\bar{z_{1}}(g_{S}+Sg_{SS})&(1+|z_{1}|^{2})(g_{S}+Sg_{SS})\end{bmatrix}\]
can be restricted to the exceptional divisor as follows:
\[\pi_{1}^{*}\omega|_{E}=\begin{bmatrix}\frac{1}{1+|z_{1}|^{2}}&0\\ 0&(1+|z_{1}|^{2})g_{S}(0)\end{bmatrix},\]
and again \(\pi_{1}^{*}\omega|_{E}\) is positive definite if and only if \(g_{S}(0)>0.\)
\(\sqcap\)\(\sqcup\)
**Remark 3.2**: _If \(g_{S}(0)=0\), then \(\pi_{1}^{*}\omega|_{E}\) defines a metric only along the exceptional divisor. Therefore the condition \(g_{S}(0)\neq 0\) guarantees the non-degeneracy of the metric orthogonal to the exceptional divisor. The other two conditions, \(\frac{1}{S}+g_{S}>0\) and \(g_{S}+Sg_{SS}>0\), are required so that \(\omega\) is a Kahler metric on \(\mathbb{C}^{m}\setminus 0.\)_
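As a quick sanity check of these conditions, the following sympy sketch (our own illustration, not part of the paper; the test potential \(g(S)=S\) is the Burns-type potential used later in Example 5.6) verifies the three inequalities of Proposition 3.1 symbolically:

```python
import sympy as sp

S = sp.symbols('S', positive=True)
g = S  # test potential; any smooth g(S) can be checked the same way

g_S = sp.diff(g, S)
g_SS = sp.diff(g, S, 2)

print(sp.limit(g_S, S, 0) > 0)                # g_S(0) > 0
print(sp.simplify(1/S + g_S).is_positive)     # 1/S + g_S > 0 for S > 0
print(sp.simplify(g_S + S*g_SS).is_positive)  # g_S + S*g_SS > 0 for S > 0
```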
## 4 Principal Curvatures of Hyperspheres
In this section, we compute the second fundamental form for hyperspheres under special conditions. In order to investigate the mean curvature flow for our examples, we need to know the principal curvatures, which are the eigenvalues of the second fundamental form.
Let \(\varSigma\) be a \(d\)-dimensional smooth submanifold in a \((d+1)\)-dimensional manifold \(M\) and \(g\) be the Riemannian metric on \(M\) with Levi Civita connection \(\nabla\).
**Definition 4.1**: _The second fundamental form of \(\varSigma\) is defined by_
\[\varPi_{n}(X,X)=g\left(\nabla_{X}(X),n\right)\,, \tag{4.1}\]
_where \(X\in T_{p}\varSigma\) and \(n\in(T_{p}\varSigma)^{\perp}.\)_
**Lemma 4.2**: _Suppose \(X\) and \(n\) are local vector fields on M such that_
1. \(||X||_{g}^{2}\) _and_ \(||n||_{g}^{2}\) _are constants,_
2. _for all_ \(p\in\Sigma\)_,_ \(X(p)\in T_{p}\Sigma\) _and_ \(n(p)\in(T_{p}\Sigma)^{\perp}\)_._ _Then_ \[\Pi_{n}(X,X)=-g([X,n],X).\]
Proof: \[\Pi_{n}(X,X) =g\left(\nabla_{X}(X),n\right)=-g(\nabla_{X}(n),X)=-g([X,n]+ \nabla_{n}(X),X)\] \[=-g([X,n],X)-\frac{1}{2}n(||X||_{g}^{2})=-g([X,n],X),\] where the last equality uses that \(||X||_{g}^{2}\) is constant.
In the next proposition, we state and prove the main result of this section and calculate the second fundamental form for hyperspheres with some particular assumptions.
**Proposition 4.3**: _Suppose that \(g_{0}\) and \(g\) are Euclidean and Riemannian metrics on \(M\) respectively. Let \(e_{1},...,e_{d+1}\) be orthonormal local vector fields for \(M\) with respect to \(g_{0}\), i.e., \(g_{0}(e_{i},e_{j})=\delta_{ij}\). Let \(\Sigma\subset M\) be a \(d\)-dimensional submanifold such that for each \(p\in\Sigma\) we have \(e_{d+1}(p)\perp T_{p}\Sigma\). Let \(n=e_{d+1}\) and let \(A,\eta,\mu\) be local functions on \(M\) whose restrictions to \(\Sigma\) are constants, satisfying the following conditions:_
1. \(g(e_{d+1},e_{d+1})=A^{2}\) _,_ \(g(e_{d},e_{d})=\mu^{2}\)_,_
2. \(g(e_{i},e_{i})=\eta^{2}\) _if_ \(1\leq i\leq d-1\)__
3. \(g(e_{i},e_{j})=0\)__\(\forall i\neq j\)__
4. \([e_{d},e_{d+1}]\in\mathbb{R}<e_{d},e_{d+1}>.\)__
_Now if \(\Pi_{\Sigma}(g_{0})=\tau g_{0}\) for some \(\tau\in\mathbb{R}\), then_
\[\Pi_{g}(e_{i},e_{j})=\begin{bmatrix}(\eta^{2}A^{-1}\tau+\eta A^{-1}\nabla_{n} \eta)I_{d-1}&0\\ 0&\mu^{2}A^{-1}\tau+\mu A^{-1}\nabla_{n}\mu\end{bmatrix}.\]
Proof: We fix some notation to be used in the proof.
Let \([n,e_{i}]=\sum_{j=1}^{d+1}a_{ij}e_{j}\) for \(1\leq i\leq d\). Then,
* \(a_{ii}=g_{0}([n,e_{i}],e_{i})=-g_{0}([e_{i},n],e_{i})=\Pi_{g_{0}}(e_{i},e_{i})=\tau\)
* \(0=2\Pi_{g_{0}}(e_{i},e_{j})=g_{0}([e_{i},n],e_{j})+g_{0}([e_{j},n],e_{i})=a_{ ij}+a_{ji}\) for \(i\neq j\).
Notice that \(\{\eta^{-1}e_{1},...,\eta^{-1}e_{d-1},\mu^{-1}e_{d},A^{-1}n\}\) is an orthonormal frame for the metric \(g\). We prove the proposition in the following steps.
**Step 1**: By Lemma 4.2 we have
\[\Pi(\eta^{-1}e_{1},\eta^{-1}e_{1}) =-g([\eta^{-1}e_{1},A^{-1}n],\eta^{-1}e_{1})\] \[=-g(\eta^{-1}A^{-1}[e_{1},n]-A^{-1}\nabla_{n}(\eta^{-1}e_{1}), \eta^{-1}e_{1})\] \[=-\eta^{-1}A^{-1}g([e_{1},n],\eta^{-1}e_{1})+\eta^{-1}A^{-1} \nabla_{n}(\eta^{-1})g(e_{1},e_{1})\] \[=-\eta^{-2}A^{-1}g([e_{1},n],e_{1})+A^{-1}\eta\nabla_{n}(\eta^{- 1})\] \[=-\eta^{-2}A^{-1}g([e_{1},n],e_{1})-A^{-1}\eta^{-1}\nabla_{n}(\eta)\] \[=-A^{-1}(\tau+\eta^{-1}\nabla_{n}\eta)\]
where we employed the properties of the Lie bracket and the fact that \(\nabla_{e_{1}}A^{-1}=0\) on \(\Sigma\). For the last step we use the relation
\[g([e_{1},n],e_{1})=a_{11}g_{11}=\tau\eta^{2}.\]
**Step 2**: The same calculation shows that
\[\Pi(\eta^{-1}e_{i},\eta^{-1}e_{i})=A^{-1}(\tau+\eta^{-1}\nabla_{n}\eta),\]
for \(1\leq i\leq d-1\). For the remaining tangent direction,
\[\Pi(\mu^{-1}e_{d},\mu^{-1}e_{d})=-g([\mu^{-1}e_{d},A^{-1}n],\mu^{-1}e_{d}).\]
Similarly to Step 1, we have:
\[\Pi(\mu^{-1}e_{d},\mu^{-1}e_{d})=A^{-1}(\tau+\mu^{-1}\nabla_{n}\mu).\]
By the same calculation we also get \(\Pi(e_{i},e_{j})=0\) for each \(1\leq i<j\leq d-1\).
In the next step we show that \(\Pi(\eta^{-1}e_{1},\mu^{-1}e_{d})=0\).
**Step 3**:
\[2\Pi(\eta^{-1}e_{1},\mu^{-1}e_{d})=g([\eta^{-1}e_{1},n],\mu^{-1}e_{d})+g([\mu^{ -1}e_{d},n],\eta^{-1}e_{1})\]
We have:
\[[\eta^{-1}e_{1},n]=\eta^{-1}[e_{1},n]-\nabla_{n}(\eta^{-1})e_{1}=\eta^{-1}\sum_{j}a_{1j}e_{j}-\nabla_{n}(\eta^{-1})e_{1},\]
and
\[[\mu^{-1}e_{d},n]=\mu^{-1}[e_{d},n]-\nabla_{n}(\mu^{-1})e_{d}=\mu^{-1}\sum_{j}a_{dj}e_{j}-\nabla_{n}(\mu^{-1})e_{d}.\]
\[2\Pi(\eta^{-1}e_{1},\mu^{-1}e_{d}) =\mu^{-1}\eta^{-1}g([e_{1},n],e_{d})-\mu^{-1}\nabla_{n}(\eta^{-1})g(e_{1},e_{d})+\mu^{-1}\eta^{-1}g([e_{d},n],e_{1})\] \[\quad-\eta^{-1}\nabla_{n}(\mu^{-1})g(e_{d},e_{1})\] \[=\mu^{-1}\eta^{-1}\big(g([e_{1},n],e_{d})+g([e_{d},n],e_{1})\big)\] \[=\mu^{-1}\eta^{-1}\big(g(\textstyle\sum_{j}a_{1j}e_{j},e_{d})+g(\textstyle\sum_{j}a_{dj}e_{j},e_{1})\big)\] \[=\mu^{-1}\eta^{-1}(a_{1d}g(e_{d},e_{d})+a_{d1}g(e_{1},e_{1}))\] \[=\mu^{-1}\eta^{-1}a_{d1}(-g(e_{d},e_{d})+g(e_{1},e_{1})).\]
Since \([e_{d},n]\in\mathrm{span}\{e_{d},n\}\), we have \(a_{d1}=\cdots=a_{d,d-1}=0\). We thus conclude that
\[\Pi(\eta^{-1}e_{1},\mu^{-1}e_{d})=0.\]
We now consider \(\mathbb{C}^{m}\setminus 0\,\) with the Kahler metric \(g=\partial\overline{\partial}f(S)=(f_{S}\delta_{ij}+f_{SS}\bar{z_{i}}z_{j})dz _{i}\wedge d\bar{z_{j}}\) and the hypersphere \(\Sigma=\{(z_{1},z_{2},\dots,z_{m})\in\mathbb{C}^{m}:S=R^{2}=|z_{1}|^{2}+|z_{2}| ^{2}+\dots+|z_{m}|^{2}\}\subset \mathbb{C}^{m}\), together with the normal vector \(n\), \(J(n)=in\), and an orthonormal basis \(e_{1},...,e_{2m-2}\) for \(<n,J(n)>^{\perp}\). Setting \(e_{2m-1}=J(n)\) and \(e_{2m}=n\), the metric \(g\) is represented by:
\[\begin{bmatrix}f_{S}I_{2m-2}&0\\ 0&(f_{S}+f_{SS}S)I_{2}\end{bmatrix}.\]
**Theorem 4.4**: _The principal curvatures of the family \(\Sigma_{S}^{2m-1}\subset\mathbb{C}^{m}\setminus 0\) with a \(U(m)\)-invariant Kahler metric \(\omega=\sqrt{-1}\partial\overline{\partial}f(S)\) are as follows:_
\[\lambda_{1}=\lambda_{2}=...=\lambda_{2m-2}=-\frac{\sqrt{f_{S}+f_{SS}S}}{f_{S} \sqrt{S}},\;\lambda_{2m-1}=-\frac{f_{S}+3Sf_{SS}+S^{2}f_{SSS}}{(f_{S}+f_{SS}S) ^{\frac{3}{2}}\sqrt{S}}.\]
_where \(S=\sum_{i=1}^{m}|z_{i}|^{2}\)._
_Proof._ In the setting of Proposition 4.3, we have \(\Sigma=S^{2m-1}(R)\) and \(M=\mathbb{C}^{m}\setminus 0\). Furthermore, we have \(A^{2}=\mu^{2}=f_{S}+f_{SS}S\) and \(\eta^{2}=f_{S}\). Additionally we get \(\eta^{-1}\nabla_{n}\eta=\frac{S}{f_{S}}f_{SS}\) and \(\mu^{-1}\nabla_{n}\mu=\frac{\sqrt{S}}{\mu^{2}}(2f_{SS}+Sf_{SSS})\).
Now by computing \(g^{-1}\Pi(g)\), we obtain the following principal curvatures:
\[\lambda_{1}=...=\lambda_{2m-2}=-\frac{\sqrt{f_{S}+Sf_{SS}}}{f_{S}\sqrt{S}},\; \lambda_{2m-1}=-\frac{(f_{S}+3Sf_{SS}+S^{2}f_{SSS})}{(f_{S}+f_{SS}S)^{\frac{3} {2}}\sqrt{S}}.\]
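As an independent cross-check of these formulas (ours, not part of the original argument), one can specialize them to the Burns potential \(f(S)=\log S+S\) and confirm that they reduce, with \(S=R^{2}\), to the curvatures quoted later in Example 5.6:

```python
import sympy as sp

S, R = sp.symbols('S R', positive=True)
f = sp.log(S) + S  # the Burns potential of Example 5.6
fS, fSS, fSSS = (sp.diff(f, S, k) for k in (1, 2, 3))

lam1 = -sp.sqrt(fS + S*fSS)/(fS*sp.sqrt(S))
lam_last = -(fS + 3*S*fSS + S**2*fSSS)/((fS + S*fSS)**sp.Rational(3, 2)*sp.sqrt(S))

print(sp.simplify(lam1.subs(S, R**2)))      # -R/(R**2 + 1)
print(sp.simplify(lam_last.subs(S, R**2)))  # -1/R
```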
## 5 Mean Curvature Flow
In this section, we prove our main theorem, describing the mean curvature flow with initial data given by a special class of hyperspheres in \(\mathbb{C}^{m}\) with a \(U(m)\)-invariant Kahler metric. To do so, we first compute the mean curvature, which is the average of the eigenvalues of the second fundamental form.
**Theorem 5.1**: _The mean curvature of the family \(\Sigma_{S}^{2m-1}\subset\mathbb{C}^{m}\setminus 0\) with \(U(m)\)-invariant Kahler metric \(\omega=\sqrt{-1}\partial\bar{\partial}f(S)\) is given as follows:_
\[H(S)=\frac{-1}{(2m-1)(f_{S}+Sf_{SS})^{\frac{3}{2}}\sqrt{S}f_{S}}((2m-2)(f_{S}+ Sf_{SS})^{2}+f_{S}(f_{SSS}S^{2}+3Sf_{SS}+f_{S})).\]
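Since \(H\) carries the prefactor \(\frac{1}{2m-1}\), Theorem 5.1 should be exactly the average of the \(2m-1\) principal curvatures of Theorem 4.4. The following sympy sketch (our own consistency check, with a test potential of our choosing) confirms that the two expressions agree:

```python
import sympy as sp

S, m = sp.symbols('S m', positive=True)
f = S**2 + sp.log(S)  # arbitrary test potential (our choice)
fS, fSS, fSSS = (sp.diff(f, S, k) for k in (1, 2, 3))

# principal curvatures of Theorem 4.4
lam1 = -sp.sqrt(fS + S*fSS)/(fS*sp.sqrt(S))
lam_last = -(fS + 3*S*fSS + S**2*fSSS)/((fS + S*fSS)**sp.Rational(3, 2)*sp.sqrt(S))
H_avg = ((2*m - 2)*lam1 + lam_last)/(2*m - 1)

# mean curvature as stated in Theorem 5.1
H_paper = -((2*m - 2)*(fS + S*fSS)**2 + fS*(fSSS*S**2 + 3*S*fSS + fS)) \
          / ((2*m - 1)*(fS + S*fSS)**sp.Rational(3, 2)*sp.sqrt(S)*fS)

print(sp.simplify(H_avg - H_paper))  # 0
```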
In the following two lemmas, we compute the square of the norm of the second fundamental form to determine whether the mean curvature flow develops a singularity.
**Lemma 5.2**: _Let \(A\) be the second fundamental form of the family of \(\Sigma_{S}^{2m-1}\subset\mathbb{C}^{m}\setminus 0\) with \(U(m)\)-invariant Kahler metric \(\omega=\sqrt{-1}\partial\bar{\partial}f(S)\). Then the square of its norm, \(|A|^{2}\) is as follows:_
\[\frac{(2m-2)(f_{S}+f_{SS}S)^{4}+f_{S}^{2}(f_{S}+3Sf_{SS}+S^{2}f_{SSS})^{2}}{f _{S}^{2}(f_{S}+f_{SS}S)^{3}S}\,.\]
_Proof._ The principal curvatures for the hyperspheres are:
\[\lambda_{1}=\lambda_{2}=...=\lambda_{2m-2}=-\frac{\sqrt{f_{S}+f_{SS}S}}{f_{S} \sqrt{S}},\;\lambda_{2m-1}=-\frac{f_{S}+3Sf_{SS}+S^{2}f_{SSS}}{(f_{S}+f_{SS}S )^{\frac{3}{2}}\sqrt{S}}.\]
Now we can compute \(|A|^{2}\) as follows:
\[|A|^{2} =\lambda_{1}^{2}+\lambda_{2}^{2}+...+\lambda_{2m-1}^{2}=(2m-2) \lambda_{1}^{2}+\lambda_{2m-1}^{2}\] \[=\frac{(2m-2)(f_{S}+f_{SS}S)^{4}+f_{S}^{2}(f_{S}+3Sf_{SS}+S^{2}f _{SSS})^{2}}{f_{S}^{2}(f_{S}+f_{SS}S)^{3}S}\,.\]
\(\sqcap\)\(\sqcup\)
**Lemma 5.3**: _For each \(g\) with the following conditions,_
\[g_{S}(0)>0,\;\frac{1}{S}+g_{S}>0,\;\mbox{and}\;g_{S}+Sg_{SS}>0,\]
\(|A|^{2}\) _blows up only at \(S=0\)._
_Proof._
\[|A|^{2}=\frac{(2m-2)(g_{S}+g_{SS}S)^{4}+(\frac{1}{S}+g_{S})^{2}(g_{S}+3Sg_{SS }+S^{2}g_{SSS})^{2}}{(\frac{1}{S}+g_{S})^{2}(g_{S}+g_{SS}S)^{3}S}\]
Since \(g\) is smooth, both the numerator and the denominator are finite for every \(S>0\), and the conditions on \(g\) guarantee that the denominator is strictly positive there, so \(|A|^{2}\) is bounded on any region \(S\geq S_{0}>0\). As \(S\to 0\), the numerator grows like \(g_{S}(0)^{2}/S^{2}\) while the denominator grows only like \(g_{S}(0)^{3}/S\), so \(|A|^{2}\) blows up. Thus the singularity only happens at \(S=0\). \(\sqcap\)\(\sqcup\)
We now prove the main result of our work in the next theorem, in which we investigate the mean curvature flow in our setting.
**Theorem 5.4**: _Consider \(\mathbb{C}^{m}\setminus 0\) with a \(U(m)\)-invariant Kahler metric \(\omega=\sqrt{-1}\partial\overline{\partial}f(S)\) where \(f(S)=\log S+g(S)\) and \(g\) is an analytic function with the following conditions:_
\[g_{S}(0)>0,\;\frac{1}{S}+g_{S}>0,\;\mbox{and}\;g_{S}+Sg_{SS}>0.\]
_There exists \(\epsilon>0\) such that for any initial radius \(R_{0}<\epsilon\), the mean curvature flow with initial condition \(\Sigma_{R(0)}=\Sigma_{R_{0}}\) converges to the exceptional divisor in finite time and develops a singularity of Type I._
_Proof._ The mean curvature flow problem for the hyperspheres \(\Sigma_{S}\) is the following ordinary differential equation (ODE):
\[\frac{dR(t)}{dt}=H(R(t)).\]
We can choose \(\epsilon>0\) such that if we start the flow with the initial data \(R(0)=R_{0}<\epsilon\), the mean curvature does not vanish and is negative. By the previous lemma, there is only one singularity, at \(R(t)=0\); therefore the time of singularity (\(T_{sing}\)) occurs exactly when \(R(t)=0\). This means that if the flow starts at \(t=0\), then \(|A|^{2}\) is bounded for all \(t\in[0,T_{sing})\). We can write the mean curvature flow problem as \(\frac{dR(t)}{dt}=\frac{1}{R^{\alpha}(t)}K(R(t))\) for some \(\alpha>0\), where \(K(R(t))\) is an analytic function without singularity whose Taylor series near \(R(t)=0\) is \(K(R(t))=\sum_{n=0}^{\infty}\frac{K^{n}(0)}{n!}R^{n}(t)\). We thus get \(R^{\alpha}(t)\frac{dR(t)}{dt}=\sum_{n=0}^{\infty}\frac{K^{n}(0)}{n!}R^{n}(t)\). Integrating both sides, we get \(\frac{1}{\alpha+1}R^{\alpha+1}(t)=K(0)t+\sum_{n=1}^{\infty}\frac{K^{n}(0)}{(n +1)!}R^{n+1}(t)+C\) for some constant \(C\). Moreover, with the initial condition \(R(0)=R_{0}\) we have \(C=\frac{R_{0}^{\alpha+1}}{\alpha+1}-\sum_{n=1}^{\infty}\frac{K^{n}(0)}{(n+1)!}R_{0 }^{n+1}\). Further, we have the singularity only at \(R(t)=0\). Hence
\[T_{sing}=\frac{1}{K(0)}(\sum_{n=1}^{\infty}\frac{K^{n}(0)}{(n+1)!}R_{0}^{n+1}- \frac{R_{0}^{\alpha+1}}{\alpha+1}).\]
We conclude that the time of singularity is finite, and we can employ Propositions 2.2 and 2.3. Since these propositions are well-known local theorems, we can apply them in the Riemannian case as well. Therefore the flow does not stop (i.e., it can be restarted as long as \(R(t)>0\)) and converges to \(R(t)=0\), which is the exceptional divisor in \(Bl_{0}\mathbb{C}^{m}\). Moreover, we can write the square of the second fundamental form as \(|A|^{2}=\frac{W(R(t))}{R^{2}(t)}\), where \(W(R(t))\) is an analytic function without singularity, with Taylor series near \(R(t)=0\) given by \(W(R(t))=\sum_{n=0}^{\infty}\frac{W^{n}(0)}{n!}R^{n}(t)\). Clearly we have
\[|A|^{2}=\frac{W^{0}(0)}{R^{2}(t)}+\frac{W^{1}(0)}{R(t)}+\frac{W^{2}(0)}{2}+ \sum_{n=3}^{\infty}\frac{W^{n}(0)}{n!}R^{n-2}(t).\]
Therefore we get
\[\lim_{t\to T_{sing}}(T_{sing}-t)|A|^{2}=\lim_{t\to T_{sing}}(T_{sing}-t)\frac{W^ {0}(0)}{R^{2}(t)}+\lim_{t\to T_{sing}}(T_{sing}-t)\frac{W^{1}(0)}{R(t)}\]
\[+\lim_{t\to T_{sing}}(T_{sing}-t)\frac{W^{2}(0)}{2}+\lim_{t\to T_{sing}}(T_{sing} -t)\sum_{n=3}^{\infty}\frac{W^{n}(0)}{n!}R^{n-2}(t).\]
\(R(t)\) goes to zero as \(t\) goes to \(T_{sing}\), so we can easily check that
\[\lim_{t\to T_{sing}}(T_{sing}-t)\frac{W^{2}(0)}{2}=\lim_{t\to T_{sing}}(T_{sing} -t)\sum_{n=3}^{\infty}\frac{W^{n}(0)}{n!}R^{n-2}(t)=0.\]
Since \(\frac{dR(t)}{dt}=H(R(t))\), we have \(R^{\prime}(t)=H(R(t))\to H(0)=-\infty\) as \(t\to T_{sing}\). By l'Hopital's rule we can conclude that
\[\lim_{t\to T_{sing}}(T_{sing}-t)\frac{W^{1}(0)}{R(t)}=\lim_{t\to T_{sing}}\frac{ -W^{1}(0)}{R^{\prime}(t)}<\infty.\]
We can also compute that \(\lim_{t\to T_{sing}}R(t)H(R(t))\neq 0\). Again by l'Hopital's rule we can easily see that
\[\lim_{t\to T_{sing}}(T_{sing}-t)\frac{W^{0}(0)}{R^{2}(t)}<\infty.\]
Consequently,
\[\lim_{t\to T_{sing}}(T_{sing}-t)\max|A|^{2}<\infty.\]
The singularity is thus Type I.
The assumption of analyticity in the above theorem is not restrictive; many interesting Kahler metrics are analytic. For example, as proved by Hopf and Morrey, constant scalar curvature Kahler metrics satisfy this hypothesis [11].
**Remark 5.5**: _We can observe that when \(S(t)\to 0\), then \(\lambda_{1}=...=\lambda_{2m-2}\to 0\) and \(\lambda_{2m-1}\to\infty\). This means that when \(S(t)\to 0\), one of the principal directions collapses and the hypersphere converges to the exceptional divisor, which is a holomorphic submanifold of \(Bl_{0}\mathbb{C}^{m}\). Since holomorphic submanifolds of complex manifolds are minimal, one would naturally expect the curvatures along the divisor directions to vanish there._
In some examples, including the Burns metric, we can take \(\epsilon=+\infty\). Example 5.6 provides an instance of the mean curvature flow problem for the Burns metric.
**Example 5.6**: _Consider \(Bl_{0}\mathbb{C}^{2}\) with the Burns metric given by \(\omega=\sqrt{-1}\partial\overline{\partial}(\log(S)+S)\). We can choose an arbitrary hypersphere \(\Sigma_{R_{0}}\) as initial condition for the mean curvature flow. The mean curvature flow of the hypersphere converges to \(S^{2}\) at a finite time and we have the singularity of Type I._
_Proof._ The mean curvature flow problem for the hyperspheres \(\Sigma_{S}\) is the following ODE:
\[\frac{dR(t)}{dt}=H(R(t)).\]
Now the principal curvatures of \(\Sigma_{S}\) with Burns metric are:
\[\lambda_{1}=\lambda_{2}=\frac{-R}{(R^{2}+1)},\ \ \lambda_{3}=\frac{-1}{R}.\]
Moreover, the mean curvature of these families and \(|A|^{2}\) are given by
\[H(R(t))=\frac{-1}{3}\frac{3R^{2}(t)+1}{R(t)(R^{2}(t)+1)},\ |A|^{2}=\frac{2R^{4}(t) +(R^{2}(t)+1)^{2}}{R^{2}(t)(R^{2}(t)+1)^{2}}.\]
Therefore the mean curvature problem is equivalent to:
\[\frac{dR(t)}{dt}=\frac{-1}{3}\frac{3R^{2}(t)+1}{R(t)(R^{2}(t)+1)}.\]
The solution of the equation is given implicitly by
\[\frac{R^{2}(t)}{2}+\frac{1}{3}\log(3R^{2}(t)+1)=-t+c.\]
\(|A|^{2}\) blows up only when \(R(t)=0\). With the initial condition \(R(0)=R_{0}\) we get the time of singularity as follows:
\[T_{sing}=\frac{R_{0}^{2}}{2}+\frac{1}{3}\log(3R_{0}^{2}+1).\]
The time of singularity is finite and the flow exists for all \(t\in[0,T_{sing})\). We can also check that there exists a positive constant \(C\) such that \(|A|^{2}<\frac{C}{|T_{sing}-t|}\). The singularity is thus Type I.
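As a numerical cross-check of this example (our own sketch, not part of the paper; the stopping threshold \(10^{-4}\) is our choice), one can integrate the flow ODE with scipy and compare the observed extinction time against the closed form above:

```python
import numpy as np
from scipy.integrate import solve_ivp

R0 = 1.0
# dR/dt = H(R) for the Burns metric
rhs = lambda t, R: [-(3*R[0]**2 + 1)/(3*R[0]*(R[0]**2 + 1))]

hit_zero = lambda t, R: R[0] - 1e-4  # stop just before R = 0
hit_zero.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), [R0], events=hit_zero, rtol=1e-10, atol=1e-12)
T_pred = R0**2/2 + np.log(3*R0**2 + 1)/3
print(sol.t_events[0][0], T_pred)  # both approximately 0.9621
```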
## Acknowledgement
We thank Professor Claudio Arezzo for many valuable discussions and comments about this work; his insightful feedback brought our work to a higher level. We are also greatly indebted to Professor Reza Seyyedali, who contributed to improving this paper by kindly providing several comments.
|
2309.09334 | Off the Beaten Track: Laterally Weighted Motion Planning for Local
Obstacle Avoidance | We extend the behaviour of generic sample-based motion planners to support
obstacle avoidance during long-range path following by introducing a new
edge-cost metric paired with a curvilinear planning space. The resulting
planner generates naturally smooth paths that avoid local obstacles while
minimizing lateral path deviation to best exploit prior terrain knowledge from
the reference path. In this adaptation, we explore the nuances of planning in
the curvilinear configuration space and describe a mechanism for natural
singularity handling to improve generality. We then shift our focus to the
trajectory generation problem, proposing a novel Model Predictive Control (MPC)
architecture to best exploit our path planner for improved obstacle avoidance.
Through rigorous field robotics trials over 5 km, we compare our approach to
the more common direct path-tracking MPC method and discuss the promise of
these techniques for reliable long-term autonomous operations. | Jordy Sehn, Jack Collier, Timothy D. Barfoot | 2023-09-17T17:48:27Z | http://arxiv.org/abs/2309.09334v1 | # Off the Beaten Track: Laterally Weighted Motion Planning for Local Obstacle Avoidance
###### Abstract
We extend the behaviour of generic sample-based motion planners to support obstacle avoidance during long-range path following by introducing a new edge-cost metric paired with a curvilinear planning space. The resulting planner generates naturally smooth paths that avoid local obstacles while minimizing lateral path deviation to best exploit prior terrain knowledge from the reference path. In this adaptation, we explore the nuances of planning in the curvilinear configuration space and describe a mechanism for natural singularity handling to improve generality. We then shift our focus to the trajectory-generation problem, proposing a novel Model Predictive Control (MPC) architecture to best exploit our path planner for improved obstacle avoidance. Through rigorous field robotics trials over 5 km, we compare our approach to the more common direct path-tracking MPC method and discuss the promise of these techniques for reliable long-term autonomous operations.
Motion Planning, Field Robotics, Obstacle Avoidance.
## I Introduction
Robot navigation in unstructured outdoor environments is a challenging-yet-critical task for many mobile robotics applications including transportation, mining, and forestry. In particular, robust localization in the presence of both short- and long-term scene variations without reliance on a Global Positioning System (GPS) becomes particularly difficult. Furthermore, the off-road terrain-assessment problem is non-trivial to generalize as the variety of potential obstacles increases, all of which require careful identification, planning, and control to prevent collisions.
Visual Teach & Repeat (VT&R) [1] tackles these problems by suggesting that often it is sufficient for a mobile robot to operate on a network of paths previously taught by a human operator. During a learning phase (the _teach pass_) the robot is manually piloted along a route whilst building a visual map of the environment using a rich sensor such as a stereo camera. In the autonomous traversal phase (the _repeat pass_), live stereo images are used to localize to the map with high precision and resiliency to lighting and seasonal changes [2, 3].
In practice, this architecture works well to address both the localization and terrain-assessment problems. By having a human operator drive the teach pass, we exploit the strong prior that the original path was traversable. It follows that, at least in the short term, it is likely the taught path remains collision free and by following the path closely in the repeat, minimal terrain assessment is required.
With the advent of Light Detection and Ranging (LiDAR) implementations of teach and repeat [4, 5, 6] whose localization system is less sensitive to viewpoint changes when repeating a path than stereo, we explore the possibility of temporarily deviating from the teach path to avoid new obstacles and increase the practicality of VT&R. In this application, we highlight the reliability of change detection for obstacle perception in difficult environments. Additionally, we emphasize the importance of path planning that minimizes deviations from the taught path to take advantage of the human terrain assessment prior whenever possible. This allows us to navigate in the safest possible manner in the event of terrain-assessment errors.
The primary contribution of our system is in the planning domain where we present a novel edge-cost metric that, when combined with a curvilinear planning space, extends the capabilities of a generic sample-based optimal motion planner to generate paths that naturally avoid local obstacles along the teach pass. Notably, in obstacle-free environments, our planner ensures that the solution remains on the original teach path without cutting corners. The key feature of our edge-cost metric is its ability to encourage path solutions that strike a delicate balance between minimizing path length and lateral path deviation, all achieved without relying on waypoints. Additionally, we demonstrate that our new metric maintains the fundamental properties of the underlying planning algorithm by utilizing an admissible sampling heuristic.
After generating suitable obstacle-avoidance plans, we propose two distinct control paradigms for trajectory generation using MPC. In the first case, we directly track the planner output paths while utilizing MPC to enforce basic kinematic and acceleration constraints. This approach generates smooth trajectories that effectively avoid local obstacles. In the second implementation, we decouple the MPC from the planner by explicitly following the taught reference path instead of the planner output. To avoid obstacles, the homotopy class of the current path solution is used to enforce a set of corridor state constraints, ensuring a collision-free and robust MPC trajectory.
Fig. 1: We present an architecture for locally avoiding unmapped obstacles along a reference path by extending sample-based motion planners to encourage trajectories with characteristics that best exploit prior path knowledge. Our system is validated on an ARGO Atlas J8 robot in real-world scenarios.
In this evolved work, we expand and improve upon the motion-planning methodology and the results of our previous publication [7]. Specifically, we introduce a mechanism for rotation singularity handling in the path planner to provide a more complete planning framework. We then propose and evaluate a novel homotopy-class-guided MPC, designed to best exploit the strengths of our path planner and compare to our previous direct-tracking MPC implementation over a series of autonomous navigation experiments in unstructured environments using a new robotic platform.
## II Related Work
### _Path Planning in Teach and Repeat_
Path planning for local obstacle avoidance is a rich and well-studied field with a wide range of approaches. In this work, we are particularly concerned with local obstacle avoidance for unmanned ground vehicles with fixed underlying reference plans, and as such limit our review of the literature to this subsection of obstacle planning. Given the current state of the robot and known positions of obstacles, we explore methods for generating trajectories that avoid the obstacles and return to the reference plan.
The emergence of 3D LiDAR-based VT&R architectures [4, 5, 8, 9] with improved localization robustness up to several meters away from the path has made local obstacle avoidance more feasible. Krusi et al. [5] utilized the additional mobility freedom to navigate reliably in dynamic environments. They actively detected potential obstacles by analyzing irregularities in the distance between concentric LiDAR rings, assuming local planarity of the ground. Once detected, an online motion planner generated a tree of many potential trajectories around the obstacles, selecting the one that best satisfied system constraints. This approach improved the long-term autonomy of the teach-and-repeat architecture but was limited to mostly structured environments with simple obstacle geometries. In 2016, the same team attempted to improve upon this approach by incorporating a traditional local planner based on Rapidly Exploring Random Trees (RRT) [10] with dynamic waypoint generation. However, for path-following applications, the selection of waypoints can be cumbersome.
Mattamala et al. [11] explore the use of a locally reactive controller for completing visual teach-and-repeat missions in the presence of obstacles. Their research employs local elevation maps to compute vector representations of the environment and directly generate control twist commands using a Riemannian Motion Policies controller. The approach demonstrates effectiveness in handling numerous small obstacles, benefiting from its efficient computational processing. However, it has shown a tendency to encounter challenges when dealing with more complex obstacle formations, as it relies on maneuvering through Signed Distance Fields and can fall into local minima.
More recently, there has been considerable effort in exploring alternative navigation strategies beyond classical approaches. For example, Meng et al. [12] demonstrate the use of a deep network to learn high-level navigation behaviours from a dataset of example demonstrations. Their network takes in a sequence of images from a 'teach' dataset and outputs a short-distance waypoint ahead of the robot based on the current observation. While not explicitly designed for obstacle avoidance, this approach naturally exhibits avoidance behaviour in addition to path following. Although promising, the path-following errors in these techniques [13, 14, 15] are currently larger than what is acceptable for precise teach-and-repeat applications.
None of the aforementioned works explicitly preserve consistent viewpoints or limit lateral path deviations during obstacle avoidance. These factors are crucial for mitigating localization failures during repeats.
### _Local Trajectory Planning_
Another approach to path following with local obstacle avoidance is presented by Liniger et al. [16]. They propose a decoupled path planner and MPC architecture, applied in the domain of autonomous racing. Their method discretizes the track into a grid of cells and employs dynamic programming [17] to identify the shortest path around competing vehicles, leading back to the optimal racing line. From this initial solution, they define lateral corridor constraints that guide an MPC controller to find an optimal trajectory within the corridor.
As they can define this corridor as convex, this ensures that the MPC does not get stuck in local minima and can maintain high control rates. While this approach is powerful and serves as a significant source of motivation for our method, it also has its limitations, particularly in the scalability of the planning approach. The brute-force dynamic programming planner is restricted to structured scenes with a few obstacles of known fixed size.
The guided MPC method of motion planning used by Liniger et al. [16] is based on the work of Park et al. [18] and has roots in the concept of path homotopies. A path-homotopy class is a mathematical concept used in topology to classify paths based on their topological equivalence [19]. Two paths belong to the same homotopy class if they can be transformed into each other by smoothly deforming them while keeping their endpoints fixed. The notion of path homotopy is based on the idea that the shape or topology of a path is more important than its precise geometric details.
The popular Timed Elastic Band (TEB) approach [20] similarly leverages path-homotopy classes to find robust planning solutions. In their implementation, TEB utilizes Voronoi diagrams to span the environment and discover as many homotopy classes as possible. They then use analysis based on the related concept of path homology to refine and filter these sets of paths, followed by parallel optimization to select the optimal solution. This approach offers improved scalability for
real-time applications. However, it does not inherently support smooth path following without relying on waypoints and is computationally expensive due to the exhaustive nature of Voronoi diagram generation. Bhattacharya et al. [21] perform a similar exhaustive homotopy class search, instead using the graph-based search algorithm A* [22] to improve the scalability.
Unlike the previous architectures, we elect to use a sample-based planner to identify a strong candidate path that avoids obstacles and combine this result with MPC in two ways. The first is a direct tracking MPC approach where the output of the planner is used as a reference trajectory for the cost function. The second method is to implement a homotopy-guided MPC with corridor constraints defined by the current planner solution's homotopy class. Using sample-based planners such as Rapidly Exploring Random Trees (RRT*) [23], Fast Marching Trees (FMT) [24], Probabilistic Roadmaps (PRM) [25], or D* [26], potentially allows solutions to be generated more tractably in large unstructured environments. Unlike deterministic search algorithms that exhaustively explore the configuration space, sample-based planners take a probabilistic approach by sampling random points in the space. This sampling strategy allows them to efficiently explore the configuration space without being constrained by its size or complexity [27] and is better suited for the applications of teach and repeat.
### _Model Predictive Control_
We are not the first to try to combine sample-based planners with MPC. Zhou et al. [28] run a variation of the sample-based planner Informed RRT* [29] to generate shortest-distance paths avoiding obstacles, and opt to track this path directly with MPC under the assumption that it will be collision free. Similar to this work, Al-Moadhen et al. [30] use Batch Informed Trees (BIT*) as the sample-based motion planner with the addition of a B-Spline smoothing post-processing step to generate kinematically feasible paths to be tracked with MPC.
A fundamental difference between previous works and this research lies in the design of the sample-based path planner. By introducing specific modifications to the planner's fundamental structure, including the incorporation of a specialized configuration space and a novel edge-cost metric that prioritizes minimizing lateral deviation from a reference path, we demonstrate the ability to customize this architecture to best align with the path-following problem structure.
Our approach utilizes a sample-based planner to address the challenge of solving tractable MPC problems in obstacle-dense environments. In our initial approach, [7], we prioritize tracking the reference path generated by the planner while enforcing kinematic constraints to ensure a smooth trajectory, similar to other existing methods [31, 32]. This implementation is straightforward and effective in avoiding obstacles when the reference path is collision free. However, a drawback of this method is that if the reference path is not explicitly kinematically feasible, it can lead to small tracking errors that may result in collisions. Unlike the comparable work done by Xu et al. [32] that requires a post-processing step to smooth plans, our planner naturally generates smooth trajectories, mitigating some of these limitations. Nevertheless, we propose a second alternative MPC architecture that addresses this issue by utilizing the planner to generate dynamic lateral corridor state constraints. The MPC then uses these constraints to generate trajectories within the corridor homotopy class that guarantee obstacle avoidance, as in [16]. We then compare this approach with the more commonly used direct-tracking method.
## III Path Planning
This section outlines our approach to autonomously following a previously taught route. To provide context, we present an overview of our complete architecture in Fig. 2, and we will now delve into a detailed description of our local planner.
### _Obstacle Detection_
As the obstacle-detection system is not the focus of evaluation for this work, we direct the reader to Wu [9] for implementation details. At a high level, we treat obstacle detection as a change-detection problem between the environment's long-term static structures and the current LiDAR observations. If a point from the live scan fails to coincide with a mapped structure, it is likely from a new obstacle, or is representative of a significant change to the environment that should be avoided.
We then adopt classical methods for classifying points in the scan based on a thresholding method similar to [33], while modelling surface roughness with a Gaussian in our classification metric as in [34, 35]. Going forward we take for granted that all obstacles are reliably detected and projected onto local 2D occupancy grids for collision checking.
### _Sample-Based Planning Preliminaries and Notation_
While the selection of path planner is arbitrary for the application of our extensions, we employ Batch Informed Trees (BIT*) [36] as our baseline planner. BIT* is probabilistically complete, asymptotically optimal, and can be adapted to re-plan or _rewire_ itself in the presence of moving obstacles, making it an ideal candidate for our current and future applications.
Fig. 2: An overview of the proposed obstacle-avoidance system. A change-detection LiDAR perception module identifies previously unmapped structures and updates a locally planar 2D occupancy grid map. The planner finds paths that avoid obstacles using our laterally weighted edge-cost metric and a tracking MPC enforces kinematic constraints on the final trajectory.
BIT* performs as follows. A Random Geometric Graph (RGG) with _implicit_ edges is defined by uniformly sampling the free space around a start and goal position. An _explicit_ tree is constructed from the starting point to the goal using a heuristically guided search through the set of samples. Given a starting state \(\mathbf{x}_{\mathrm{start}}\) and goal state \(\mathbf{x}_{\mathrm{goal}}\), the function \(\hat{f}(\mathbf{x})\) represents an admissible estimate (i.e., a lower bound) for the cost of the path from \(\mathbf{x}_{\mathrm{start}}\) to \(\mathbf{x}_{\mathrm{goal}}\), constrained through \(\mathbf{x}\in\mathrm{X}\).
Admissible estimates of the cost-to-come and cost-to-go to a state \(\mathbf{x}\in\mathrm{X}\) are given by \(\hat{g}(\mathbf{x})\) and \(\hat{h}(\mathbf{x})\), respectively, such that \(\hat{f}(\mathbf{x})=\hat{g}(\mathbf{x})+\hat{h}(\mathbf{x})\). Similarly, an admissible estimate for the cost of creating an edge between states \(\mathbf{x},\mathbf{y}\in\mathrm{X}\) is given by \(\hat{c}(\mathbf{x},\mathbf{y})\). Together, BIT* uses these heuristics to process and filter the samples in the queue based on their ability to improve the current path solution. The tree only stores collision-free edges and continues to expand until either a solution is found or the samples are depleted.
A new batch begins by adding more samples to construct a denser RGG. Had a valid solution been found in the previous batch, the samples added are limited to the subproblem that could contain a better solution. Given an initial solution cost, \(c_{\mathrm{best}}\), we can define a subset of states, \(X_{\hat{f}}:=\{\mathbf{x}\in\mathrm{X}\Big{|}\hat{f}(\mathbf{x})\leq c_{ \mathrm{best}}\}\), that have the possibility of improving the solution. When the metric for the edge-cost computation is Euclidean distance, the region defined by \(\hat{f}(\mathbf{x})\leq c_{\mathrm{best}}\) is that of a prolate hyperspheroid with transverse diameter \(c_{\mathrm{best}}\), conjugate diameter \(\sqrt{c_{\mathrm{best}}^{2}-c_{\mathrm{min}}^{2}}\) (where \(c_{\mathrm{min}}\) is the theoretical minimum cost between the start and the goal), and foci at \(\mathbf{x}_{\mathrm{start}}\) and \(\mathbf{x}_{\mathrm{goal}}\)[29].
The original implementation of BIT* is designed for shortest-distance point-to-point planning and is not customized for path following. In the remaining sections, we describe two modifications to adapt a generic sample-based planner for the problem structure outlined in Section I.
### _Curvilinear Coordinates_
Our first extension adds natural path following by using an orthogonal curvilinear planning domain [37]. A reference path is composed of a set of discrete three Degree of Freedom (DOF) poses \(P=\{\mathbf{x}_{\mathrm{start}},\,\mathbf{x}_{1},\,\mathbf{x}_{2},\ldots,\, \mathbf{x}_{\mathrm{goal}}\}\) with \(\mathbf{x}=(x,y,\psi)\) describing the Euclidean position and yaw. We define a curvilinear coordinate, \((p,q)\), representation of the path such that the \(p\)-axis, \(p\in[0,p_{\mathrm{len}}]\) describes the longitudinal distance along the path, and the \(q\)-axis, \(q\in[q_{\mathrm{min}},q_{\mathrm{max}}]\), is the lateral distance perpendicular to each point \(p\) on the path. \(q_{\mathrm{min}}\) and \(q_{\mathrm{max}}\) describe the lateral place-dependant bounds of the curvilinear space at each segment of the path.
A change in distance between subsequent poses, \(\Delta p\), is computed as
\[\Delta p=\sqrt{\Delta x^{2}+\Delta y^{2}+a\Delta\psi^{2}}. \tag{1}\]
An aggregated \(p\) value is stored for each discrete pose in a preprocessing step up to the total length of the path, \(p_{\mathrm{len}}\). It is important to note that as part of (1), we incorporate a small term for changes in yaw along the repeat path tuned by a constant parameter \(a\). This allows us to avoid singularities in the curvilinear coordinate space in the event of rotations on the spot by distinguishing between poses with identical positions but changing orientations.
A key observation from this definition is that all paths in Euclidean space become straight lines in \((p,q)\) space, automatically handling paths that self-intersect without requiring waypoints. By storing the \(p\) values associated with each Euclidean pose from the reference path, we can uniquely map an arbitrary curvilinear point \((p_{i},q_{i})\) to its corresponding Euclidean point \((x_{i},y_{i})\) by interpolating to find a pose on the reference path closest to the target point and applying some basic trigonometry.
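A minimal sketch of this machinery is given below (our own illustration; the function names, the yaw-weight value \(a\), and the left-pointing normal convention are our choices). It precomputes the cumulative \(p\) values of Eq. (1) for a discrete pose path and maps a query \((p,q)\) back to Euclidean coordinates by interpolation:

```python
import numpy as np

def cumulative_p(poses, a=0.1):
    """poses: (N, 3) array of (x, y, yaw). Returns the p value of each pose."""
    d = np.diff(poses, axis=0)
    dpsi = (d[:, 2] + np.pi) % (2*np.pi) - np.pi       # wrap yaw increments
    dp = np.sqrt(d[:, 0]**2 + d[:, 1]**2 + a*dpsi**2)  # Eq. (1)
    return np.concatenate(([0.0], np.cumsum(dp)))

def curvilinear_to_euclidean(p, q, poses, p_vals):
    """Unique map (p, q) -> (x, y): interpolate the reference pose at p, then
    step a distance q along the left-pointing path normal."""
    x = np.interp(p, p_vals, poses[:, 0])
    y = np.interp(p, p_vals, poses[:, 1])
    psi = np.interp(p, p_vals, poses[:, 2])  # naive for large yaw steps
    return x - q*np.sin(psi), y + q*np.cos(psi)

poses = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.5, np.pi/8]])
p_vals = cumulative_p(poses)
print(curvilinear_to_euclidean(1.5, 0.3, poses, p_vals))
```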
While a unique map always exists from \((p,q)\) to Euclidean space, generally the reverse cannot be guaranteed due to singularities. This proves to be problematic when considering the collision checking of obstacles. Our solution is intuitive: we run BIT* in \((p,q)\) space as normal, and perform all collision checks in Euclidean space by discretizing edges and mapping the individual points back to Euclidean space for query with the obstacle costmaps.
After planning a successful path in \((p,q)\) space, we use this same unique map to convert the plan back to Euclidean space for tracking with the controller. It is worth noting that this process improves the _smoothness_ of our solutions compared to Euclidean planners, as straight-line edges in the curvilinear planner are inherently mapped back to curves in Euclidean space.
### _Weighted Euclidean Edge Metric_
Consider the 2D Euclidean planning problem where the teach path connects \(\mathbf{x}_{\mathrm{start}}\) to \(\mathbf{x}_{\mathrm{goal}}\). The usual cost of an edge connecting two arbitrary points in space \((x_{1},y_{1})\) to \((x_{2},y_{2})\) can be expressed generally as the length \(c_{21}\):
\[c_{21}=\int_{x_{1}}^{x_{2}}\sqrt{1+\Big{(}\frac{dy}{dx}\Big{)}^{2}}dx. \tag{2}\]
For our method, we incorporate an additional coefficient such that the cost of an edge increases as the lateral \(y\) deviation over the length of the edge grows, scaled by a tuning parameter \(\alpha\):
\[c_{21}=\int_{x_{1}}^{x_{2}}(1+\alpha y^{2})\sqrt{1+\Big{(}\frac{dy}{dx}\Big{)}^ {2}}dx. \tag{3}\]
Fig. 3: Left: A reference path in Euclidean coordinates shown in green, with the longitudinal and lateral components extended in a grid. Right: The corresponding representation of the path in curvilinear coordinates.
For a straight line, this integral becomes
\[c_{21}=\Bigg{(}1+\frac{\alpha(y_{2}^{3}-y_{1}^{3})}{3(y_{2}-y_{1})}\Bigg{)}\sqrt{( x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}. \tag{4}\]
As \(\Delta y\) approaches zero (horizontal edges) we have
\[\begin{split}\lim_{\Delta y\to 0}\Bigg{(}1+\frac{\alpha(y_{2}^{3}-y_{1} ^{3})}{3(y_{2}-y_{1})}\Bigg{)}\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}\\ =(1+\alpha y^{2})|x_{2}-x_{1}|,\end{split} \tag{5}\]
where \(y_{1}=y_{2}=y\).
In (4) we obtain the Euclidean distance metric scaled by a coefficient to apply a penalty for lateral path deviation.
While this edge metric works in this simple example of a straight-line path, the result is difficult to generalize when considering arbitrarily complex reference paths in Euclidean space. In curvilinear space, however, all reference paths become horizontal lines on the \(p\)-axis, allowing us to directly apply this idea for the edge-cost metric in BIT* by letting \(y_{i}\to q_{i}\) and \(x_{i}\to p_{i}\), respectively, in (4).
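For concreteness, the edge cost can be sketched as follows (our own illustration; the function name, default \(\alpha\), and tolerance are assumptions). It applies Eq. (4) and falls back to the horizontal-edge limit of Eq. (5):

```python
import math

def lateral_edge_cost(p1, q1, p2, q2, alpha=0.5, eps=1e-9):
    length = math.hypot(p2 - p1, q2 - q1)  # plain Euclidean edge length
    if abs(q2 - q1) < eps:                 # horizontal edge: limit of Eq. (5)
        scale = 1.0 + alpha*q1**2
    else:                                  # general edge: Eq. (4)
        scale = 1.0 + alpha*(q2**3 - q1**3)/(3.0*(q2 - q1))
    return scale*length

print(lateral_edge_cost(0.0, 0.0, 1.0, 0.5))  # ~1.165 vs. plain length ~1.118
```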
Before using this new metric in BIT*, we must first evaluate the influence of the edge cost on the informed sampling region that constrains the RGG sub-problem following an initial solution. Consider the estimated total cost, \(\hat{f}(\mathbf{x})=\hat{g}(\mathbf{x})+\hat{h}(\mathbf{x})\), to incorporate an arbitrary sample, \(\mathbf{x}=(p,q)\), into the path solution in curvilinear coordinates as in Fig. 4.
The cost is
\[\begin{split}\hat{f}(\mathbf{x})=&\Big{(}1+\frac{ \alpha}{3}q^{2}\Big{)}\Big{(}\sqrt{(p-p_{\mathrm{start}})^{2}+q^{2}}+\sqrt{(p- p_{\mathrm{goal}})^{2}+q^{2}}\Big{)}.\end{split} \tag{6}\]
If \(\mathbf{x}\) is to improve the quality of a current solution cost, \(c_{\mathrm{best}}\), we require that \(\hat{f}(\mathbf{x})\leq c_{\mathrm{best}}\). Rearranging the inequality, we have
\[\begin{split}\Big{(}\sqrt{(p-p_{\mathrm{start}})^{2}+q^{2}}+ \sqrt{(p-p_{\mathrm{goal}})^{2}+q^{2}}\,\Big{)}\\ \leq\frac{c_{\mathrm{best}}}{\Big{(}1+\frac{\alpha}{3}q^{2}\Big{)} }\leq c_{\mathrm{best}}.\end{split} \tag{7}\]
We note that the lateral scaling factor is always \(\geq 1\) for all \(q\) and that once again the left-hand term is simply the Euclidean-distance edge metric. This result implies that, conservatively, we could sample from within the informed ellipse defined by the usual Euclidean edge cost (no lateral penalty) as in [29], and the probabilistic path convergence guarantees will remain satisfied.
On the other hand, (7) indicates that the lateral penalty causes our true informed sampling region to become a denser subset within the original ellipse. While difficult to describe geometrically, we can visualize this region by randomly sampling from within the outer ellipse and rejecting points with heuristic costs, \(\hat{f}(\mathbf{x})\), larger than the bounds. As shown in red in Fig. 4, we see the laterally weighted informed sampling region takes the shape of an 'eye-ball' and for this problem has significantly smaller volume than the Euclidean distance ellipse. In practice, direct sampling from the eye-ball region is difficult and rejection sampling can be inefficient. However, it is possible to calculate the height of a conservative rectangle to bound the true informed sampling region and perform direct sampling from within the bounding box.
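The rejection-sampling procedure used to visualize this region can be sketched as follows (our own illustration; we sample uniformly in the outer Euclidean informed ellipse and keep only samples with \(\hat{f}(\mathbf{x})\leq c_{\mathrm{best}}\) per Eqs. (6) and (7)):

```python
import numpy as np

def f_hat(p, q, p_start, p_goal, alpha):
    d = np.sqrt((p - p_start)**2 + q**2) + np.sqrt((p - p_goal)**2 + q**2)
    return (1.0 + alpha*q**2/3.0)*d  # heuristic cost of Eq. (6)

def sample_informed(p_start, p_goal, c_best, alpha, n, seed=0):
    rng = np.random.default_rng(seed)
    c_min = abs(p_goal - p_start)
    a, b = c_best/2.0, np.sqrt(c_best**2 - c_min**2)/2.0  # ellipse semi-axes
    centre, out = (p_start + p_goal)/2.0, []
    while len(out) < n:
        r, th = np.sqrt(rng.uniform()), rng.uniform(0.0, 2*np.pi)
        p, q = centre + a*r*np.cos(th), b*r*np.sin(th)  # uniform in ellipse
        if f_hat(p, q, p_start, p_goal, alpha) <= c_best:  # reject otherwise
            out.append((p, q))
    return np.array(out)

print(sample_informed(0.0, 10.0, 12.0, 0.5, 5))
```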
## IV Curvilinear Singularities and Corner Cases
Throughout this work, we have been operating under the assumption that converting continuous paths in curvilinear coordinates to Euclidean space is a seamless process, free from singularities and discontinuities. In the vast majority of scenarios, this assumption holds true. However, there are situations, particularly during sharp inside turns, where certain regions in the curvilinear configuration space can introduce problematic singularities, challenging the validity of our assumption. In this section, we delve into a detailed exploration of how we can adapt our planning algorithm to address this issue effectively.
### _Singularity Regions_
While uncommon, one such scenario when curvilinear singularity regions become problematic is shown in Fig. 5. In this instance, a sharp turn in the reference path combined with an obstacle located very close to this curve results in the irregular 'three-point-turn' when the curvilinear path solution is converted back to Euclidean space. Although this path is technically valid, such behaviour is generally not desirable. The occurrence of this behaviour arises when the planned path in BIT* traverses the curvilinear space with a nonzero lateral \(q\) component over regions where the reference Euclidean path exhibits excessive inside curvature.
Although the occurrence of excessive curvature is typically manually avoided by the operator during the teach phase
Fig. 4: The informed sampling domain, \(X_{\hat{f}}\), for the Euclidean distance edge metric (blue ellipse), and the conservatively bounded laterally weighted edge metric (black box) for \(\alpha=0.5\). Samples, shown in red, were populated using rejection sampling and illustrate the true ‘eye-ball’ distribution of the informed sampling region. As \(\alpha\) tends to zero, the domains coincide.
Fig. 5: In this scenario, we have another sharply turning reference path. The presence of the obstacle near the corner necessitates that the planner cross a singularity region, resulting in the odd Euclidean path shown in black. We would prefer to take the path shown in blue.
of Visual Teach & Repeat 3 (VT&R3) due to the wear imposed on a large differential-drive robot and limitations with localization accuracy, it is still important to address this issue comprehensively. Our proposed solution involves identifying these problematic regions during an offline precomputation step. Then, during runtime, we modify our planning algorithm to naturally avoid these regions.
To define the singularity regions, we consider an arbitrary point in curvilinear space, denoted as \((p,q)\), with the corresponding projected Euclidean space point as \((x_{p},y_{p})\). The singularity regions can be described by the inequality
\[q_{\max}(p)\geq q>q_{\min}, \tag{8}\]
where \(q_{\max}(p)\) represents the maximum lateral bounds of the corridor defined by the reference path at each point \(p\), and \(q_{\min}\) is defined as the distance from the Euclidean point \((x_{p},y_{p})\) to the nearest point on the reference path. In other words, a curvilinear point falls within a singularity region if, when converted to Euclidean space, the \(q\) value does not match the distance to the closest Euclidean point on the reference path.
While defining the singularity regions is a straightforward task, discretizing the configuration space to compute the interior boundary (where \(q=q_{\min}\)) is neither practical nor elegant. Instead, we can rely on another observation: these regions are closely linked to the curvature of the reference path, specifically the instantaneous Radius of Curvature (ROC). Our key insight is that all points where \(q>\mathrm{ROC}\) also satisfy condition (8). Although the ROC does not fully describe the singularity regions, it provides a promising starting point for our search. By precomputing the ROC at each point \(p\) along the reference path, we can expand along lines of constant lateral distance and test for the condition \(q=q_{\min}\) to be satisfied.
Fig. 6 demonstrates how to generate the corresponding singularity region. To ensure tractability, we discretize the lateral and longitudinal directions with a resolution of 10 cm and conservatively approximate the singularity regions using a series of adjoining rectangles. We do not bother calculating the singularity regions outside of the maximum lateral boundary for our reference path as we never intend to plan in these regions.
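A compact sketch of the membership test behind condition (8) (our own; it reuses the curvilinear-to-Euclidean map from the earlier sketch, passed in as a callable, and approximates \(q_{\min}\) with the discrete reference poses):

```python
import numpy as np

def in_singularity_region(p, q, poses, p_vals, to_euclidean, tol=1e-3):
    """True when the projected Euclidean point of (p, q) lies strictly closer
    to some part of the reference path than |q|, i.e. condition (8) holds."""
    x, y = to_euclidean(p, q, poses, p_vals)  # unique (p, q) -> (x, y) map
    q_min = np.min(np.hypot(poses[:, 0] - x, poses[:, 1] - y))
    return q_min < abs(q) - tol
```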
### _Wormhole Generation_
Now that we have a clear understanding of the singularity regions and how they appear in the planner, it is crucial to discuss how we can effectively avoid them. One possible approach would be to treat these regions as obstacles and prevent the planner from generating paths that pass through them. While this technique does work, it would eliminate the option of taking the inside corner to circumvent an obstacle on a sharp turn, as shown in Figure 5. However, it is clear in this example that taking the inside corner could still be the most efficient choice. Therefore, solely relying on this approach should be avoided if possible.
An important insight we can exploit is that the boundary points of the singularity regions, denoted as \((p_{l},q)\) and \((p_{r},q)\) for the left and right boundary points of \(q\), respectively, correspond to the same Euclidean position following conversion. The distinction is in their headings, which can vary significantly.
In other words, by traversing from the leftmost boundary point to the rightmost boundary point, we effectively have an edge that, when followed, executes a turn on the spot in Euclidean space. This realization provided the intuition that instead of planning through the singularity regions we can try to leap past them. To achieve this, we introduce the concept of _wormholes_. In this context, a wormhole refers to pairs of vertices in the curvilinear planning domain that, when connected by the planning tree, enable _teleportation_ across the singularity regions. We can prepopulate several wormhole edges in the planning tree that serve as passageways, opening up the possibility of traversing inside corners during planning. An illustration of the preseeded wormholes for the reference path problem in Fig. 5 is depicted in Fig. 6 and we demonstrate a planning example using wormholes in Fig. 7. In this example, our planner is able to find elegant inside corner solutions in Euclidean space avoiding the two obstacles shown in grey by using wormholes to bypass the singularities.
Fig. 6: In this figure we illustrate the generation of the curvilinear singularity regions. Using the ROC plot as a starting point, we discretize the space laterally and expand from the ROC plot to the left and right along the \(p\)-axis, testing until condition (8) is invalidated. Eventually, we are left with the grey singularity region shown in (b). We then preseed a number of wormhole edges, shown in green, that BIT\({}^{*}\) can use to find valid path solutions that cross the singularities.
## V Model Predictive Control
### _Tracking Model Predictive Control_
In this section, we will delve into the mathematical formulation of our Model Predictive Control (MPC) schemes. To ensure generality and versatility across various robotic platforms, we adopt a representation that leverages matrix Lie group theory, drawing upon established notation from Barfoot [38, 39]. Similar to the implementation by Teng et al. [40] and Chang et al. [41], we represent the robot's state using the Special Euclidean Group \(\mathrm{SE}(3)\)1, which allows us to encapsulate both the position and orientation of the robot in a single mathematical object. This choice allows us to create an MPC framework that can seamlessly accommodate a wide range of vehicles, from 6 Degrees of Freedom (DOF) drones to differential-drive unmanned ground vehicles (UGVs) and beyond.
Footnote 1: We formulate our problem generally in \(\mathrm{SE}(3)\). However, for planar robots, \(\mathrm{SE}(2)\) can be used to save a marginal amount of computing effort.
Given a robot in a moving coordinate frame \(\underline{\mathcal{F}}_{v}\) with respect to a global frame \(\underline{\mathcal{F}}_{i}\), we define the generalized 6 DOF velocity vector \(\varpi\) in the moving frame as
\[\varpi=\begin{bmatrix}\mathbf{v}_{v}^{vi}\\ \mathbf{\omega}_{v}^{vi}\end{bmatrix}, \tag{9}\]
composed of the linear and angular velocity vectors, \(\mathbf{v}\) and \(\mathbf{\omega}\), respectively. The pose of the vehicle in the global frame is subsequently defined by the transformation matrix
\[\mathbf{T}\triangleq\mathbf{T}_{vi}=\begin{bmatrix}\mathbf{C}_{vi}&\mathbf{ r}_{v}^{iv}\\ \mathbf{0}^{T}&1\end{bmatrix}. \tag{10}\]
By constraining the generalized velocity in (9), we can enforce different robot kinematic models using a projection matrix, \(\mathbf{P}\), to isolate the nonzero components of \(\varpi\). In our specific application, we will be working with a unicycle model with only forward linear and angular yaw velocities, \(v\) and \(\omega\), respectively. However, it is clear to see that by specifying a different projection matrix, we can enforce alternative kinematic constraints. We make the following substitution in terms of a lower-dimensional velocity vector \(\mathbf{u}\), such that
\[\varpi=\begin{bmatrix}v&0&0&0&0&\omega\end{bmatrix}^{T}=\mathbf{P}^{T}\mathbf{ u}, \tag{11}\]
where we have:
\[\mathbf{P}=\begin{bmatrix}1&0&0&0&0&0\\ 0&0&0&0&0&1\end{bmatrix},\quad\mathbf{u}=\begin{bmatrix}v\\ \omega\end{bmatrix}. \tag{12}\]
We wish to derive a model-predictive controller to compute the optimal sequence of \(k\) control inputs, out to a horizon of \(K\) time steps with a period of \(h\) seconds, to minimize the error between some reference path, \(\mathbf{T}_{\mathrm{ref},k}\), and the predicted trajectory of the robot. This path-tracking task can be realized by solving the least-squares optimization problem:
\[\operatorname*{argmin}_{\mathbf{T},\,\mathbf{u}}\;J(\mathbf{T},\mathbf{u})=\sum_{k=1}^{K}\ln( \mathbf{T}_{\mathrm{ref},k}\mathbf{T}_{k}^{-1})^{\vee^{T}}\mathbf{Q}_{k}\ln( \mathbf{T}_{\mathrm{ref},k}\mathbf{T}_{k}^{-1})^{\vee}+\mathbf{u}_{k}^{T}\mathbf{R}_{k}\mathbf{u}_{k} \tag{13a}\]
\[\mathrm{s.t.}\quad\mathbf{T}_{k+1}=\exp\Big{(}(\mathbf{P}^{T}\mathbf{u}_{k})^{ \wedge}h\Big{)}\mathbf{T}_{k},\quad k=1,2,\ldots,K, \tag{13b}\]
\[\mathbf{u}_{\min,k}\leq\mathbf{u}_{k}\leq\mathbf{u}_{\max,k}, \quad k=1,2,\ldots,K. \tag{13c}\]
Using Lie groups and notation adopted from Barfoot [38], our objective function (13a) aims to minimize the pose error between the trajectory we intend to generate and a reference trajectory, while simultaneously minimizing control effort. (13b) and (13c) are our generalized kinematic constraint and actuation limit constraints, respectively.
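To make the discrete kinematic constraint (13b) concrete, the short sketch below rolls a pose forward under a unicycle command using the \(\mathrm{se}(3)\) wedge operator and the matrix exponential. It is a minimal illustration in Python with NumPy/SciPy, not the solver used in our system, and the function names are ours; the twist ordering (linear components first, angular second) follows (9).

```python
import numpy as np
from scipy.linalg import expm

def hat(varpi):
    """se(3) wedge operator: maps a 6-vector [v; w] to a 4x4 twist matrix."""
    v, w = varpi[:3], varpi[3:]
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    X = np.zeros((4, 4))
    X[:3, :3] = W
    X[:3, 3] = v
    return X

# Unicycle projection matrix (12): only forward velocity v and yaw rate omega.
P = np.zeros((2, 6))
P[0, 0] = 1.0   # v maps to the first linear component
P[1, 5] = 1.0   # omega maps to the last angular component

def step(T, u, h):
    """Discrete kinematics (13b): T_{k+1} = exp((P^T u)^wedge h) T_k."""
    return expm(hat(P.T @ u) * h) @ T

T = np.eye(4)                    # start at the identity pose
u = np.array([1.0, 0.2])         # 1 m/s forward, 0.2 rad/s yaw
for _ in range(10):
    T = step(T, u, h=0.1)        # ten 0.1 s steps along an arc
```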
Fig. 7: In this figure, we show a representative planning scenario using wormholes to find better path solutions. We find that the use of wormholes allows us to generate paths that tightly hug obstacles (grey), opting for turns on the spot to further reduce lateral path deviation. These turns on the spot allow us to generate safe-corridor constraints (green) that the controller can use to smooth out the trajectory while maintaining collision-avoidance properties.

Fig. 8: The definition of our robot state variables for a differential-drive robot.

For the direct-tracking control implementation, we obtain the reference poses at each time step \(\mathbf{T}_{\mathrm{ref},k}\) directly from the current densely discretized Euclidean BIT* solution that guides the robot from its current state to the path's endpoint. To determine the reference poses along the trajectory for each time step, we interpolate poses from the current planner solution with a separation distance denoted as \(p_{\mathrm{ref},k}\). This allows us to approximately maintain a nominal desired path-tracking velocity \(v_{\mathrm{ref}}\) such that
\[p_{\mathrm{ref},k}=v_{\mathrm{ref}}hk. \tag{14}\]
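As a sketch of this lookup (the helper name and path representation are ours, and orientations, which would come from the path tangents, are omitted), the reference positions for the horizon can be obtained by interpolating the discretized planner path at the arc lengths given by (14):

```python
import numpy as np

def reference_positions(path_xy, v_ref, h, K):
    """Interpolate K reference points along a discretized 2D path at the
    arc-length spacing of (14): p_ref,k = v_ref * h * k."""
    seg = np.linalg.norm(np.diff(path_xy, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])    # cumulative arc length
    p_ref = np.clip(v_ref * h * np.arange(1, K + 1), 0.0, s[-1])
    x = np.interp(p_ref, s, path_xy[:, 0])
    y = np.interp(p_ref, s, path_xy[:, 1])
    return np.stack([x, y], axis=1)
```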
The weights \(\mathbf{Q}_{k}\) and \(\mathbf{R}_{k}\) can be tailored to the specific application, allowing adjustment of the relative importance of different tracking degrees of freedom. Likewise, the actuation limits \(\mathbf{u}_{\min,k}\), \(\mathbf{u}_{\max,k}\), and \(v_{\mathrm{ref}}\) are set based on the robot parameters. To generate the trajectory with the lowest cost, we solve this MPC problem over the horizon \(K\). Following the standard MPC approach, we apply the first velocity command to the robot and subsequently reinitialize the problem at each control loop request based on the latest available information.
### _Homotopy-Guided Corridor Model-Predictive Control_
To address the limitations of the direct-tracking MPC approach, we propose an alternative architecture that provides a concrete guarantee on collision avoidance. Instead of blindly following the output of the path planner, which may not always be kinematically feasible, our approach involves directly tracking the driven reference path that is guaranteed to be kinematically feasible in the VT&R paradigm. We then incorporate the planner solution into a hard spatial constraint within the MPC problem.
To understand our approach, it is crucial to discuss the concept of path homotopies. In an environment with obstacles between a robot and its goal, the presence of obstacles introduces different potential path-homotopy classes. The emergence of multiple homotopy classes creates local minima in the optimization problem, making it challenging to find the globally optimal solution. To tackle this challenge, we draw inspiration from the work of Liniger et al. [16]. We utilize the planner's solution to identify the most promising homotopy class among the set of possible paths. By considering the characteristics of the selected homotopy class, we define a series of lateral corridor path constraints that shape the MPC trajectory planning within a convex planning space. These lateral corridors serve as spatial constraints that guide the MPC toward a globally optimal solution for the trajectory-planning problem.
The new MPC problem is initialized using the current path solution and optimized within the constrained convex space to gain three key advantages. Firstly, utilizing the collision-free BIT* solution for corridor constraints decouples the MPC solution from the planner, providing a guarantee on collision avoidance. Secondly, this lets us work with approximate planner solutions to define corridors, enabling safe operation at higher velocities compared to direct-tracking. Lastly, reference poses are consistently chosen from the original taught path instead of the planner's solution, potentially leading to slight improvements in path-following error.
The revised MPC optimization problem is similar to (13), but now includes an additional lateral corridor constraint:
\[-\mathbf{d}_{k,l}\leq\mathbf{1}_{2}^{T}\mathbf{T}_{\mathrm{ref},k}\mathbf{T}_ {k}^{-1}\mathbf{1}_{4}\leq\mathbf{d}_{k,r},\quad k=1,2,\ldots,K \tag{15}\]
Here, \(\mathbf{1}_{i}\) represents the \(i\)-th column of the identity matrix, with \(\mathbf{d}_{k,l}\) and \(\mathbf{d}_{k,r}\) denoting the maximum allowable lateral deviation at each point on the left and right sides of the path that define the path-homotopy corridor. The values of \(\mathbf{d}_{k,l}\) and \(\mathbf{d}_{k,r}\) are computed on-demand by the controller and the process is outlined as follows:
Upon obtaining the curvilinear space representation of the desired reference pose from the latest BIT* solution, \((p_{\mathrm{ref},k},q_{\mathrm{ref},k})\), in curvilinear coordinates, we construct two test edges extending from \((p_{\mathrm{ref},k},q_{\mathrm{ref},k})\) to the maximum safe-corridor boundaries \((p_{\mathrm{ref},k},q_{\mathrm{max}})\) and \((p_{\mathrm{ref},k},-q_{\mathrm{max}})\), where \(q_{\mathrm{max}}\) is the place-dependent maximum lateral boundary set by the user at each point along the reference path. We then perform collision checks on these edges using our standard procedure and set \(\mathbf{d}_{k,l}\) and \(\mathbf{d}_{k,r}\) equal to the lateral component of the last collision-free vertex along the discretized test edges. This process is illustrated in Fig. 9.
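The sketch below illustrates this corridor-growing step at a single reference point; `collision_free` is a stand-in for our usual collision checker, and the returned values are the lateral coordinates bounding the corridor, from which \(\mathbf{d}_{k,l}\) and \(\mathbf{d}_{k,r}\) follow.

```python
import numpy as np

def corridor_bounds(p_ref, q_ref, q_max, collision_free, n_steps=20):
    """Grow the lateral corridor of Fig. 9 by collision-checking discretized
    test edges from (p_ref, q_ref) toward +q_max and -q_max."""
    def last_free(q_target):
        q_ok = q_ref
        for q in np.linspace(q_ref, q_target, n_steps + 1)[1:]:
            if not collision_free(p_ref, q):
                break            # keep the vertex preceding the collision
            q_ok = q
        return q_ok
    return last_free(-q_max), last_free(+q_max)   # [q_left, q_right]
```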
Just as in the direct-tracking implementation, we adaptively select the reference poses in accordance with a desired nominal repeat velocity; however, the reference poses, \(\mathbf{T}_{\mathrm{ref},k}\), are now selected directly from the teach path instead of the BIT* solution.
### _Solution Method_
Given the optimization problem defined in (13), our next step is to devise a method for effectively enforcing the series of box constraints related to the actuator limits and lateral corridor. To address this, we employ a technique known as the Barrier Method [42].
Fig. 9: We generate a dynamic safe lateral corridor constraint (purple) using the current planner solution (blue). Given reference poses along the path, we collision check laterally in both directions, starting from the planner solution to the maximum place-dependent corridor bounds. If there are no collisions, the corridor is set to the maximum bounds. If there is a collision, the corridor is set to the vertex immediately preceding the collision point and is used as a state constraint in the homotopy-guided MPC.
Using the Barrier Method, we can capture the effects of constraints of the general form \(\mathbf{x}\leq\mathbf{a}\) and \(\mathbf{x}\geq\mathbf{a}\) as a series of logarithmic penalty terms. This transformation allows us to convert the constrained optimization problem into an unconstrained problem that can be solved efficiently with a Gauss-Newton method by linearizing the problem at an operating point and applying small perturbations to the Lie Algebra to simplify terms. In practice, we solve the MPC problem directly using the Simultaneous Trajectory Estimation and Mapping (STEAM) engine [39], an iterative Gauss-Newton-style optimization library aimed at solving batch nonlinear optimization problems involving both Lie group and continuous-time components.
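As a minimal illustration of the idea (a sketch, not the STEAM implementation), a box constraint \(\mathbf{a}\leq\mathbf{x}\leq\mathbf{b}\) can be folded into the objective as logarithmic penalty terms whose weight \(\mu\) controls the sharpness of the barrier:

```python
import numpy as np

def barrier_cost(x, lo, hi, mu=1e-2):
    """Log-barrier penalty for the box constraint lo <= x <= hi: finite only
    in the interior, growing without bound as x approaches either limit."""
    x = np.asarray(x, dtype=float)
    if np.any(x <= lo) or np.any(x >= hi):
        return np.inf
    return -mu * (np.sum(np.log(x - lo)) + np.sum(np.log(hi - x)))

# Example: penalize a velocity command [v, omega] near its actuation limits.
cost = barrier_cost([0.9, 0.1], lo=np.array([0.0, -0.5]), hi=np.array([1.0, 0.5]))
```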
## VI Offline Experiments
### _Performance Metrics_
In teach-and-repeat applications, it is critical that when avoiding obstacles we limit lateral path error to best exploit the prior knowledge on terrain assessment. It is also important to maintain a similar sensor viewing angle throughout the trajectory for localization purposes. With these characteristics in mind, we propose two metrics to compare the relative quality of the path solutions produced over the course of our experiments. We compute the Root Mean Square Error (RMSE) for both the translation and rotation (heading) components relative to the reference path over 15 m segments of obstacle interactions and average the result across all trials. In the context of this study, obstacle interactions refer to any event where a newly appeared obstacle (as labelled by terrain-change detection) causes the robot to deviate from the original reference path to avoid a collision.
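A simple sketch of this computation (with hypothetical argument names, and errors assumed to be sampled and indexed by arc length along the reference path) is:

```python
import numpy as np

def interaction_rmse(path_s, lat_err, head_err, obstacle_s, half_window=7.5):
    """RMSE of lateral and heading errors over a 15 m segment centered on an
    obstacle; path_s holds the arc length of each error sample."""
    mask = np.abs(np.asarray(path_s) - obstacle_s) <= half_window
    rmse = lambda e: float(np.sqrt(np.mean(np.asarray(e)[mask] ** 2)))
    return rmse(lat_err), rmse(head_err)
```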
### _Unit Testing of New Edge Cost_
We demonstrate the benefits of the lateral edge metric by testing BIT* on several representative obstacle-avoidance problems, both with and without the extensions proposed in Section III. For fair comparison, we implemented our own C++ version of BIT* in accordance with [36], using parameters of 150 samples per batch and an RGG constant of 1.1. Our extended implementation, Lateral BIT*, uses the laterally weighted edge-cost metric (4) with \(\alpha=0.5\) and the rectangular approximation of the informed sampling region. All experiments were conducted on a Lenovo Thinkpad laptop with 11th Gen Intel(R) Core(TM) i7-11800H @ 2.3GHz. For the standard BIT* implementation, we can force the use of a pure Euclidean distance edge metric by simply adjusting \(\alpha=0\), and switch to using the ellipsoidal sampling region described in [29].
To study the influence of the new edge metric in isolation, we posed a series of ten simulated planning problems where the reference path is composed of a horizontal line 15 m in length on the \(x\)-axis, making the curvilinear and Euclidean path representations identical. We allow both versions of the algorithm to find converged path solutions connecting the starting pose to the goal and evaluate the solution both quantitatively and qualitatively. A representative example of the output paths and associated BIT* trees on one test problem is shown in Fig. 10.
In analyzing Fig. 10, we see some important differences in the exploration strategies of the two methods. Using our lateral edge-cost metric, BIT* tends to naturally generate smoothly curving solutions, spending the most time exploring paths near to the reference while balancing forward progress with reducing lateral error. In contrast, the standard BIT* algorithm settles on the direct shortest-distance solution. While efficient, in practice this path could be a higher-risk manoeuvre due to the additional localization and terrain-assessment uncertainty incurred when away from the reference path. We calculate the RMSE metrics for both implementations and summarize the results in Table I.
As we would expect, the use of our laterally weighted edge metric considerably reduces the average lateral error with respect to the reference path from 0.254 m to 0.098 m. We also see a small improvement on the heading error, likely due to the fact the lateral planner tends to spend more time exactly following the reference path.
Fig. 10: A comparison of the planning trees generated running BIT* with the Euclidean edge metric (top) and with the laterally weighted metric (bottom) in a representative test environment. Our edge metric encourages the final plan, shown in black, to avoid obstacles while remaining close to the red reference path on the \(p\)-axis.

Using straight-line reference paths, it is easy to see that the lateral edge cost encourages solutions with desirable path properties. However, we can further exploit the curvilinear coordinate space to produce similarly smooth plans on more intricate reference paths. In Fig. 11, we initialize the planner on a complex path taken from real teach data that includes a variety of sharp curves, a path crossing, and several difficult obstacles. Despite the challenges, our planner is able to converge to a desirable solution using only a single goal and no intermediate waypoints. As Euclidean BIT* tends to generate solutions that traverse singularity regions in curvilinear space (this subsequently produces discontinuous Euclidean paths), we are not able to directly provide a comparison to this result.
### _Runtime Analysis_
Regardless of path-following capability, real-time performance is crucial for the path-planning module. In Fig. 12, we analyze the runtime performance and observe that our extended version of BIT* generally generates initial solutions and converges to an optimal result at a slower pace compared to the Euclidean version of BIT*. This outcome was anticipated based on the qualitative path observations in Fig. 10, where a higher number of samples per batch is required to generate the observed 'weaving' plans. Intuitively, a high sample density is necessary to refine the smooth curves that are characteristic of our extensions, which leads to slower convergence.
However, it is important to highlight that the differences in compute times are relatively small. Despite this, our Lateral BIT* implementation still manages to find initial solutions to challenging planning problems in just 17 ms. Furthermore, it achieves convergence to within 3% of the optimal solution cost in just 233 ms on average, ensuring the generation of smooth output paths in a timely manner.
The true strength of our planner lies in its average time for correct homotopy-class identification. This metric effectively measures the time taken by the planner to generate a solution that avoids obstacles and can be continuously deformed into the optimal solution without colliding with obstacles. On average, our planner is able to identify these correct homotopy solutions in just 23 ms, which is nearly three times faster than the Euclidean planner can approach a converged solution suitable for tracking.
## VII Online Experiments
### _Introduction_
Our final set of experiments aims to assess the effectiveness of our entire system in real-world obstacle-avoidance scenarios. We integrated our obstacle-avoidance architecture into the VT&R3 codebase and conducted extensive autonomy tests in two diverse environments as shown in Fig. 14. These environments presented contrasting challenges: the first involved navigating a relatively flat prairie terrain with a simple single loop, while the second featured a more complex path in a valley with varying elevations.
In each case, the robot was initially manually driven to teach an obstacle-free network of paths. Subsequently, various obstacles were introduced throughout the path to obstruct the predefined routes. Through a series of experiments, we repeated the taught paths and analyzed the resulting trajectories produced by our two motion-planning architectures. The experiments were performed in collaboration with Defence Research and Development Canada (DRDC), utilizing an ARGO Atlas J8 electric differential-drive robot, Fig. 13, under identical configurations to those used for the offline experiments.
### _Evaluation Strategy and Metrics_
In our evaluations, we emphasize the importance of obstacle-avoidance trajectories that strike a balance between minimizing lateral deviations from the taught path and ensuring smooth forward progress. This balance is crucial due to the prior knowledge we have about the taught path's collision-free nature, as verified by a human operator. By closely following the reference path, we maximize our chances of avoiding unforeseen obstacles and improve localization performance in VT&R3 despite perception uncertainties.
Fig. 11: Lateral BIT* planning in a curvilinear representation of the space (top) to find a global obstacle-avoidance path solution in Euclidean space (bottom) along a complex reference path with many obstacles.

Fig. 12: In this figure we compare the compute characteristics of the Euclidean BIT* planner to the Lateral BIT* planner. We find that generally the Euclidean edge-cost metric converges faster with marginally higher early solution quality. However, it is important to recognize that for the key compute metrics that most heavily influence the performance of our architecture, the initial solution time and the homotopy-class identification time, Lateral BIT* performs comparably.

To assess the performance, we report the Root Mean Squared (RMS) lateral and heading errors obtained from both the differential GPS and the VT&R3 state estimation system during the repeat experiments. However, it is important to note that these values become subjective, as they are influenced by the size and density of obstacles encountered during each repeat. To address this, we propose a new metric called the 'obstacle interaction' RMS lateral and heading error that focuses on a specific obstacle's influence on path deviation. We define an obstacle interaction as the length of the path extending 5 m on either side of the obstacle. By comparing the obstacle interaction RMSE with the error in obstacle-free sections, we gain insights into the relative amount of path deviation the planner uses to successfully avoid the obstacle.
Additionally, we consider the average maximum lateral deviation per obstacle interaction. This measure, when evaluated in the context of the lateral extent of the obstacle obstructing the reference path, provides an indication of how effectively the planner avoids obstacles while staying close to the reference path. We are particularly interested in observing the consistency of the maximum lateral deviation across obstacles with varying geometries.
When comparing the two motion planners, the direct-tracking MPC and the homotopy-guided MPC, we propose using the average robot path curvature as a metric to evaluate the relative smoothness of these approaches. Our rationale is that if both motion planners can maintain similar desirable lateral error characteristics during obstacle avoidance, the path with the lowest average curvature is likely to be preferable in terms of energy efficiency.
Finally, one of our most critical and intuitive metrics of interest is the obstacle-avoidance rate. This rate is calculated by dividing the number of successful obstacle interactions (those that are collision-free) by the total number of obstacles encountered on the route. It serves as a reliable indicator of the improvement in the long-term autonomy of the VT&R3 system, as our goal is to minimize the need for operator interventions during autonomous navigation.
### _Results (Easy Scenario)_
Upon repeating the loop both without obstacles and with obstacles, we present the actual robot path relative to the taught reference path (as reported by GPS) across the loop in Fig. 15.
The locations and sizes of the obstacles have been marked for reference, and some examples of obstacle interactions are provided in Fig. 16. In both the direct-tracking MPC and the homotopy-guided MPC implementations, the robot successfully completed the loop 5 times at a target reference speed of 1.25 m/s, achieving a 100% obstacle-avoidance rate. Throughout this experiment, the robot covered a cumulative distance of 1.5 km, navigating through 50 obstacles without any collisions or operator interventions. While both controllers accomplished our primary goal in this less-challenging scenario, we will now delve into the relative differences and trajectory characteristics of the two methods.
Fig. 13: The ARGO Atlas J8 unmanned ground vehicle used to conduct all field evaluations: (1) Ouster OS1 LiDAR, 1024x64 @ 10 Hz, (2) Hemisphere dual differential GPS, (3) internal Lenovo Thinkpad laptop with 11th Gen Intel(R) Core(TM) i7-11800H @ 2.3GHz.

Fig. 14: We evaluate our local obstacle-avoidance system at the Experimental Proving Ground on Canadian Forces Base Suffield. In the first scenario, (a), we taught a short loop on reasonably flat and open terrain. In the second scenario, (b), we taught a long and intricate route in the middle of a valley consisting of many elevation changes and diverse terrain. We then introduced a series of obstacles (yellow crosses) to the reference paths, forcing the robot to deviate from the teach path to complete subsequent repeat objectives.

To assess the performance of the planners during obstacle interactions, we calculated the RMS lateral and heading errors for obstacle-free repeats of the environment, serving as a baseline for path-tracking performance. For this particular path network, the direct-tracking MPC achieved a lateral RMS error of 2.50 cm, while the homotopy-guided MPC yielded a slightly lower error of 2.21 cm. Similarly, the baseline heading RMS errors were 4.5 deg and 4.0 deg, respectively.
Next, we recalculated the RMS error metrics specifically for the obstacle-avoidance interactions, focusing on the 10 m segments of the path centered around each obstacle. The results of the obstacle interaction analysis for one loop repetition of each controller are presented in Table II. We provide the extent of the obstacle on the path for reference as this directly influences how far the robot must deviate from the path during an obstacle interaction and depends on both the size of the robot and obstacle. Our findings indicate that both controllers exhibit comparable performance in this aspect, with the homotopy-guided MPC demonstrating a reduction in obstacle interaction errors compared to the direct-tracking MPC, as anticipated.
The obtained data aligns with our qualitative observations, affirming that the planner effectively avoids significant translation and rotation errors relative to the extent of the encountered obstacles, while avoiding any adverse effects on localization. Additionally, we consider the maximum lateral deviation per obstacle interaction as an informative metric. By examining the peaks of the lateral error magnitude plot along the path for both planners in Fig. 18, we find that, on average, the direct-tracking MPC deviates laterally from the reference path by a maximum of \(r\) + 0.675 m with a standard deviation of 0.254 m, while the homotopy-guided MPC results in \(r\) + 0.335 m with standard deviation 0.048 m. In this context, \(r\) represents the lateral extent of the obstacle obstructing the reference path. Theoretically, the lower bound of this value should be exactly \(r\) for collision-free avoidance, but knowing that our perception is not infallible and to ensure a small safety buffer, we introduce an obstacle-inflation tuning parameter of 0.3 m, resulting in a slight expected offset. Notably, the homotopy-guided motion planner is able to achieve maximum lateral deviation characteristics close to this bound, and the small uncertainty in the measure is evidence of consistent obstacle-avoidance behaviour across obstacles of varying sizes and geometries. In contrast, the direct-tracking MPC tends to take a more conservative approach to avoiding obstacles, with less repeatability.
Another aspect of comparison between the two motion planners is the average curvature of the robot trajectory across the repeats. Intuitively, the presence of obstacles leads to an increase in average path curvature, as the robot maneuvers to avoid collisions. As a baseline, we see that the obstacle-free average curvature of 0.093 m\({}^{-1}\) increases to 0.112 m\({}^{-1}\) for the direct-tracking MPC and 0.105 m\({}^{-1}\) for the homotopy-guided MPC, representing a 20.4% and 12.9% increase, respectively.
Fig. 16: Some typical obstacle interactions encountered along the reference path in the Easy Scenario.
Fig. 17: In this figure, we compare the solutions of the two controller architectures over a representative obstacle-avoidance scenario with a barrier. The current planner solution (blue path) provides an initial path leading the robot around the obstacle and back to the reference path (red path). In (b), the direct-tracking MPC produces a smooth trajectory (green path), to track the BIT* solution, however, due to infeasible kinematics in the planner solution, there is some tracking error that takes the robot dangerously close to the obstacle. In contrast, the homotopy-guided MPC generates a tight and safe trajectory around the obstacle using a similar planner solution.
Fig. 15: A plot of the repeat trajectories overlaid with the teach path as reported by GPS ground truth. We see that the homotopy-guided MPC solution (green) generally avoids obstacles with more consistent obstacle interaction profiles and with less overall lateral path deviation than the direct-tracking approach (blue).
### _Results (Hard Scenario)_
In this experiment, a similar set of obstacles as in the easier-terrain scenario were used, but with more challenging placement, necessitating complex trajectories within a more heavily constrained safe corridor. Additionally, the presence of elevation changes obstructed the view of some obstacles along the repeat path until the vehicle approached close enough to clear occlusions, requiring faster replanning to successfully avoid obstacles. The total path length was 550 m, repeated 5 times for each motion planner, resulting in the robot encountering a total of 55 obstacle interactions over a cumulative navigation distance of 2.75 km.
In this more difficult test case, both motion planners successfully completed the repeats without explicit operator intervention. However, the direct-tracking MPC exhibited minor glancing collisions with some obstacles due to kinematic tracking errors, resulting in an obstacle-avoidance rate of 87.27%. In contrast, the homotopy-guided MPC maintained a perfect 100% obstacle-avoidance rate.
While the more successful obstacle-avoidance rates achieved by the homotopy-guided MPC provide telling conclusions, the RMS lateral and heading errors for the obstacle interactions further corroborate the earlier observations made during the easier obstacle course. The baseline obstacle-free path-tracking errors were measured as 6.12 cm for lateral errors and 6.1 deg for heading errors in the case of the direct-tracking MPC. Similarly, the homotopy-guided MPC exhibited obstacle-free lateral and heading errors of 5.94 cm and 6.0 deg, respectively. It is worth noting that the tracking errors increased in the baseline obstacle-free repeats compared to those of the easier environment, likely due to the more challenging reference path trajectories and inconsistent terrain composition. Following obstacle-free repeats, obstacles were introduced on the path and Table III reports the obstacle-interaction RMS error metrics for the repeats.
In addition to achieving more reliable and safe navigation, we consistently observe that the homotopy-guided MPC maintains or surpasses the quality of local obstacle-avoidance trajectories compared to the direct-tracking MPC, particularly in minimizing lateral and rotational path errors. On average, the direct-tracking MPC deviates laterally from the reference path by a maximum of \(r\) + 0.484 m, with a standard deviation of 0.158 m, while the homotopy-guided MPC results in \(r\) + 0.309 m, with a standard deviation of 0.052 m. It is noteworthy that the trajectory characteristics of the repeated obstacle-avoidance paths maintain the trends observed throughout the easy scenario, where the homotopy-guided MPC exhibits slight advantages in terms of the consistency of lateral deviations, despite the significantly different operating environment. Both motion-planning solutions, however, generate qualitatively desirable paths that avoid obstacles while closely adhering to the reference path to complete the objective and exhibit consistent characteristics across a range of operating conditions.
Fig. 18: A comparison of the lateral path errors across the obstacle-avoidance repeats for the two proposed control architectures on the easy scenario.

Fig. 19: The maximum tracking error characteristics on an individual obstacle-interaction basis on the easy scenario.

Fig. 20: The taught reference path showcasing the large elevation changes across the loop, as reported by GPS.

When considering the average path curvature, we find further evidence supporting the superiority of the homotopy-guided MPC over the direct-tracking approach. With an obstacle-free average path curvature of 0.106 m\({}^{-1}\), the direct-tracking method results in a robot trajectory with a curvature of 0.123 m\({}^{-1}\), while the homotopy-guided method achieves 0.114 m\({}^{-1}\). This corresponds to a change of 16.0% and 7.5%, respectively.
## VIII Conclusions
In this work, we presented a local obstacle-avoidance architecture designed for path-following applications such as teach and repeat. By modifying a sample-based motion planner to use a laterally weighted edge-cost metric combined with a curvilinear planning space, we show how natural path following can be achieved that exploits prior knowledge of the terrain to avoid obstacles. Our method emphasizes reducing lateral deviations from a previously taught reference path, which is expected to offer increased safety during traversal. Critically, we demonstrate that through a novel preprocessing step, we can condition our path planner to elegantly handle infrequently occurring but troublesome rotation singularities in the reference paths without invalidating the underlying planner properties.
We then explored two MPC architectures that leverage the planner solution to generate smooth robot trajectories and avoid local obstacles along the reference path. In the first method we elected to directly track the output path of the planner to produce a kinematically feasible robot trajectory. While straightforward and reasonably effective for simple problems, we find that due to kinematic tracking errors it is possible for collisions to occur unless this error is explicitly accounted for through obstacle inflation. By decoupling the MPC from the planner using the proposed homotopy-guided dynamic lateral corridor state constraints, we show how we are able to find better obstacle-avoidance trajectories with significantly faster path solution times and a stronger guarantee on collision avoidance.
Since our initial manuscript [7], our motion planner has proven its versatility by being successfully deployed on various other autonomous platforms, ranging from indoor navigation robots [43], to autonomous boats [44], with minimal tuning required between applications.
## Acknowledgements
This research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
2309.16214 | Canary: Congestion-Aware In-Network Allreduce Using Dynamic Trees | Daniele De Sensi, Edgar Costa Molero, Salvatore Di Girolamo, Laurent Vanbever, Torsten Hoefler | 2023-09-28T07:39:25Z | http://arxiv.org/abs/2309.16214v1

# Canary: Congestion-Aware In-Network Allreduce Using Dynamic Trees
###### Abstract
The allreduce operation is an essential building block for many distributed applications, ranging from the training of deep learning models to scientific computing. In an allreduce operation, data from multiple hosts is aggregated together and then broadcasted to each host participating in the operation. Allreduce performance can be improved by a factor of two by aggregating the data directly in the network. Switches aggregate data coming from multiple ports before forwarding the partially aggregated result to the next hop. In all existing solutions, each switch needs to know the ports from which it will receive the data to aggregate. However, this forces packets to traverse a predefined set of switches, making these solutions prone to congestion. For this reason, we design Canary, the first congestion-aware in-network allreduce algorithm. Canary uses load balancing algorithms to forward packets on the least congested paths. Because switches do not know from which ports they will receive the data to aggregate, they use timeouts to aggregate the data in a best-effort way. We develop a P4 Canary prototype and evaluate it on a Tofino switch. We then validate Canary through simulations on large networks, showing performance improvements up to 40% compared to the state-of-the-art.
keywords: in-network compute, allreduce, load balancing
## 1 Introduction
As the parallelism in computing systems steadily increases, the performance scalability of applications running on data centers becomes more dependent on communication performance. The allreduce operation is a widely-used communication primitive, both for the training of machine learning models [1], but also in scientific computing in general [2; 3]. In an allreduce, each host has a vector of data elements. All the vectors must be _reduced_ (i.e., aggregated) together element-wise using a commutative and associative operator. Then, after aggregation, data is distributed back to the hosts.
Allreduce accounts for a significant fraction of the training time of deep learning models, with estimates ranging from 50% for 10 Gbps networks [4], to 20-30% for 100 Gbps networks [5]. Moreover, improvements in computation speed significantly outpace network bandwidth improvements. For example, we observed a 10x increase in GPU floating-point performance in 2.5 years [6; 4]. In contrast, it took ten years for network bandwidth to increase by 10x [7; 4]. Thus, we can expect application performance to be even more dependent on communication performance in the future.
For these reasons, several allreduce optimization techniques have been proposed [1], including (but not limited to) data quantization [8], sparsification [9], non-blocking collectives [10; 11], solo allreduce [12], hierarchical synchronization [13; 14], and in-network reductions [4; 15; 16; 5]. This work focuses on in-network reductions, i.e., solutions where the network switches aggregate data. Several works showed that in-network allreduce transmits half of the data volume transmitted by the host-based bandwidth-optimal allreduce algorithm [17] (e.g., _ring_ allreduce) [15; 4; 5]. Thus, if the network aggregates the data at line rate, this potentially halves the time required to complete the reduction.
All existing in-network allreduce algorithms adopt a similar approach [4; 15; 18; 16; 19; 5], which we describe through an example. Let us consider the network depicted in Figure 1a, where the hosts _H0_, _H1_, and _H3-H6_ want to perform an in-network allreduce. First, they set up a _reduction tree_, where the leaves are the hosts participating in the reduction, and the intermediate nodes are a subset of the switches in the network, as shown in Figure 1b. This setup step mostly involves installing forwarding rules in the switches. By doing so, for example, _S4_ knows that it must aggregate the data coming from _H0_ and _H1_, and forward the aggregated result to _S2_. Similarly, _S5_ does not wait for data coming from _H2_, and forwards the data coming from _H3_ to _S2_ immediately after receiving it.
Although the described algorithm is simple and effective, it is also prone to congestion, because each switch in the reduction tree needs to know a priori its children and its parent in the reduction tree. This forces packets to be always routed on the same paths, regardless of their congestion. Network congestion can significantly slow down applications [20; 21; 22; 23; 24], and this is particularly relevant for in-network reductions. For example, let us assume that the link between _S7_ and _S0_ in Figure 1 is congested. Even if _S0_ already received the data from all its other children, it still needs to wait for the data coming from _S7_ before starting the _broadcast_ phase.
Thus, **it is enough to have congestion on just one of the links composing the reduction tree to slow down the entire operation**. A straightforward solution would consist in running the allreduce traffic in a separate traffic class. However, as we also show in Section 5.2.4, concurrent allreduces issued by different applications (e.g., different training jobs) would still interfere unless they are mapped to different classes. Because the number of concurrent allreduces can be higher than the available traffic classes [15; 22; 25; 26], this is not a viable solution.
For these reasons, in this work **we design and evaluate Canary (_Congestion-Aware In-Network Allreduce Using Dynamic Trees_), the first congestion-aware in-network allreduce algorithm**. Canary relies on traffic load balancing algorithms to send packets on the least congested paths, dynamically building the reduction tree and adapting it throughout the execution based on congestion.
To illustrate the impact of congestion on in-network allreduce, we simulate a 2-level fat tree network [27] connecting 1024 hosts (we provide more details on the simulation infrastructure in Section 5.2). We execute an allreduce first on 1% and then on 75% of the hosts in the network. We observe in Figure 2 that, when there is no congestion, both state-of-the-art in-network allreduce algorithms (using a static reduction tree) and Canary provide a 2x bandwidth improvement over the bandwidth-optimal host-based allreduce.
Then, we introduce congestion by concurrently executing a random uniform injection traffic pattern on the remaining hosts (99% and 25%, respectively). We observe that congestion causes a drop in the performance of the state-of-the-art in-network allreduce, which can even perform worse than the host-based allreduce.
Instead, Canary is less affected by congestion and provides a performance advantage over both the bandwidth-optimal host-based allreduce and state-of-the-art allreduces. As we describe in detail in Section 3, this is possible because Canary dynamically builds and adapts the reduction tree to the network conditions by relying on existing congestion-aware traffic load balancing techniques.
In this work, we introduce the following contributions:
* We identify the impact of congestion on in-network allreduce, and we design an algorithm that relies on dynamic in-network reduction trees (Section 3).
* We improve the management of the switch resources and the fault tolerance compared to the state-of-the-art because the switches only store a soft state (Section 3).
* We implement a P4 Canary prototype on a Tofino switch [28], to assess the feasibility and limitations of our algorithm (Section 4 and Section 6).
* We perform a detailed analysis through large-scale simulations, calibrated on our P4 implementation (Section 5) showing that, on congested networks, Canary is up to 40% faster than state-of-the-art in-network allreduce algorithms.
## 2 State of the Art
We now discuss Canary's fundamental design principles (summarized in Table 1), that distinguish it from most existing in-network allreduce algorithms.
### Congestion Awareness
Existing interconnection networks have a large path diversity [35; 22] and, to avoid congestion, load balance the traffic using algorithms like ECMP [36], flowlet switching [37; 38], Valiant routing [39], and others. Some of these algorithms, like ECMP, distribute packets over the available paths by selecting the destination port based on the result of a hash function computed on some packet header fields. However, although many networks use ECMP [40], it has been shown that traffic often experiences congestion, even in the presence of alternative non-congested paths [37; 41; 42]. For this reason, some load balancing algorithms try to select the least congested path among those available. Such algorithms are also offered by some of the largest cloud providers when deploying high-performance virtualized clusters [43; 44].

Table 1: Design principles of in-network allreduce algorithms, by year: congestion awareness (CA), dynamic resource management (DRM), and dynamic fault tolerance (DFT).
As discussed, however, all state-of-the-art in-network reduction algorithms always send the packets on the same paths regardless of congestion (Table 1). Moreover, to the best of our knowledge, only one in-network allreduce algorithm distributes the traffic over multiple reduction trees [18], showing advantages compared to a single reduction tree (Table 1). Nevertheless, because the algorithm statically selects trees in a round-robin way, it is still congestion oblivious, even if it balances traffic over multiple paths.
Differently from all existing solutions, Canary dynamically builds, packet by packet, the optimal reduction tree based on the current network status, by routing packets on the least congested paths, and by aggregating the packets in a best-effort fashion (Table 1; Section 3.1). It is worth remarking that simply adding load balancing capabilities to existing in-network allreduce algorithms would not work and that we need to design new ways to aggregate the data, as we discuss in Section 3.
### Dynamic Resource Management
Each switch aggregates in a memory buffer, packet by packet, the data it receives from its children. Switches, however, have limited memory, and existing in-network allreduce algorithms adopt different approaches to deal with this. Most algorithms reserve some buffer space before starting an allreduce and, if there is no buffer space available, they fall back to a host-based algorithm [16; 19; 31; 45]. Because this reservation process requires interacting with the control plane and might introduce some latency (up to _"a few seconds"_[18]), resources are usually reserved when the application starts and deallocated when the application terminates [16; 19; 4].
However, by doing so, long-lived applications would reserve resources for their entire execution, even if they only sporadically use in-network reductions, potentially excluding other applications from using them. This decreases the number of concurrent in-network reductions that can be executed, which might be a relevant problem on multi-tenant datacenters [15] (Table 1). Other approaches, instead, dynamically partition the available resources across the currently active reductions [18; 5; 15], as we also do in Canary (Table 1; Section 3.2). Specifically, in Canary the switches allocate and deallocate the memory in an on-demand fashion and strictly for the time required to complete the reduction.
### Dynamic Fault Tolerance
Another critical point to discuss is how to deal with link or switch failures [46; 47]. Because existing algorithms statically determine the reduction trees, if a link or a switch on the tree fails, the in-network reduction cannot progress. In most cases, the network controller detects switch failures [16; 18] and builds another reduction tree. However, hosts might be in an inconsistent state after a failure (e.g., some hosts might have successfully received all the reduced data, whereas others might just have received part of the data). Recovering from such a state might imply re-issuing the entire reduction operation [16; 19] from scratch on a different reduction tree or falling back to host-based reductions (Table 1). Other solutions [4] delegate the task of detecting and recovering from failure to the upper layer.
Canary, on the other hand, is self-contained and can autonomously detect and recover from switch failures without restarting the entire reduction from scratch. Indeed, Canary builds reduction trees dynamically, keeps only a soft state in the switches, and treats switch and link failures in the same way as packet losses. In both cases, Canary requires only the retransmission of the small fraction of data that was stored in the switch when it failed (Table 1; Section 3.3). Because managing both packet losses and switch faults adds complexity to the algorithm, Canary partitions its functionalities between switches and a _leader host_ (Section 3.1.4).
## 3 Canary Design
In general, we can consider in-network allreduce algorithms as composed of two phases: a _reduce_ phase (where data flows from the hosts to the root of the reduction tree) and a _broadcast_ phase (where aggregated data flows from the root to the hosts). In Canary, hosts send packets to the same root switch (predetermined before starting the application), but packets are forwarded on the least congested paths towards the root to bypass congestion. Canary is orthogonal to the load balancing algorithm, and switches can use any existing algorithm to select the next hop (either on a per-packet [41] or a per-flowlet [37] granularity).
Canary aggregates packets that traverse the same switch in the same time window. Each switch allocates memory on-demand when receiving packets in the _reduce_ phase and deallocates it in the _broadcast_ phase, after sending the aggregated data to its children. To simplify the description of the algorithm, we first describe in Section 3.1 a scenario where switches have infinite memory, where a fully reliable network is used (i.e., packets are never dropped and switches/links never fail), and where there is at most one application at a time using Canary. We then remove these assumptions in Section 3.2, Section 3.3, and Section 3.4 respectively.
### General Design
Before describing the details of the algorithm, we analyze the challenges of using non-predetermined paths in in-network reductions. For the moment, we assume that each host can fit all the data it needs to reduce in a single network packet. We then discuss in Section 3.1.3 how to deal with larger data.
If we consider the network shown in Figure 3, we can see that if we select _S0_ as a root and we let packets follow different paths from the hosts to the root, _S1_ and _S2_ would not know how many packets they will receive from their children. According to the network conditions, sometimes both _S3_ and _S4_
might decide to forward their packets to _S1_, sometimes they might decide to forward their packets to _S2_, and sometimes _S3_ might forward packets to _S1_ and _S4_ to _S2_ (or vice-versa). As a consequence, _S1_ might receive 0, 1 or 2 packets to aggregate (and the same for _S2_). This ambiguity is not present in existing in-network reduction approaches because packets always follow predetermined paths, and each switch knows exactly how many packets to wait for and aggregate. For this reason, existing in-network allreduce algorithms cannot simply be extended by using congestion-aware traffic load balancing, and a different approach must be adopted.
#### 3.1.1 Reduce
Because the switch does not know how many packets to wait for, in the _reduce_ phase Canary aggregates all packets received in a given time window. The first time a switch receives a reduction packet, it creates a _descriptor_ and stores it in its memory. The descriptor contains a data accumulator (where the switch stores the data carried by the packet) and the root address (also carried by the packet). The descriptor also contains a timer that the switch starts when the first reduction packet is received. After storing the information carried by the packet in the descriptor, the switch drops the packet. The switch also stores in the descriptor the list of ports from which it received the allreduce packets (it will use this list to reach its children in the _broadcast_ phase). For all subsequent packets, the switch aggregates the data carried in the packets with that stored in the accumulator and then drops the packets. When the timeout expires, the switch retrieves the accumulated data from the descriptor, stores it in a new packet, and sends it to the next hop towards the root. The switch selects the next hop using any available congestion-aware load balancing algorithm, thus dynamically building the reduction tree depending on the current network conditions.
Eventually, all packets reach the root and the _reduce_ phase is concluded. Each packet sent by the hosts carries a counter indicating the number of already reduced packets, together with the number of hosts participating in that reduction (Figure 3). Counters coming from multiple packets are summed by the switches and stored in the descriptor. For example, if _S3_ reduces the data coming from _H0_ and _H1_, it will send a packet to _S1_ with a counter equal to 2. At some point the accumulated counter will be equal to the number of hosts, meaning that all data coming from the hosts has been reduced and that the root can start the _broadcast_ phase.
Intermediate switches might receive some packets after the timeout expiration if the timeout is too short. In that case, the packet is identified as a straggler and immediately forwarded to the next hop. In turn, the following switch considers that packet as a straggler or not, depending on its own timeout. The switch can determine whether a packet is a straggler because it does not deallocate the descriptor until the reduction is completed (which cannot happen unless all the packets are received and aggregated by the root). On the other hand, a timeout that is too large might increase the latency of the packets and the completion time of the allreduce. However, this is noticeable only for small allreduces, as we show in Section 5.2.3.
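The following event-driven sketch summarizes the reduce-phase logic at a switch. It is written in Python rather than P4; the packet fields (`block_id`, `root`, `data`, `count`, `num_hosts`) and the `send_toward` primitive are illustrative assumptions, and timeout expiry, which the data plane would drive with a hardware timer, is checked here only on packet arrival.

```python
import time

def now():
    return time.monotonic()          # stand-in for the switch clock

class Descriptor:
    """Soft per-block state, allocated on demand by the first reduce packet."""
    def __init__(self, root, timeout):
        self.acc = None              # element-wise accumulator
        self.count = 0               # hosts aggregated so far (Figure 3)
        self.children = set()        # ingress ports, replayed in the broadcast
        self.root = root
        self.deadline = now() + timeout
        self.flushed = False

def on_reduce_packet(pkt, in_port, table, timeout, send_toward):
    d = table.get(pkt.block_id)
    if d is None:
        d = table[pkt.block_id] = Descriptor(pkt.root, timeout)
    if d.flushed:                    # timeout already expired: straggler
        send_toward(pkt, d.root)     # forward immediately, unaggregated
        return
    d.acc = pkt.data if d.acc is None else [a + b for a, b in zip(d.acc, pkt.data)]
    d.count += pkt.count
    d.children.add(in_port)          # the incoming packet is then dropped
    if d.count == pkt.num_hosts or now() >= d.deadline:
        d.flushed = True
        pkt.data, pkt.count = d.acc, d.count
        send_toward(pkt, d.root)     # next hop chosen by the load balancer
```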
#### 3.1.2 Broadcast
When the _broadcast_ starts, the root retrieves from the descriptor the list of the ports from which it received the data (i.e., the list of its children), forwards the aggregated data on those ports, and then deallocates the descriptor. After receiving a reduced packet from its parent, a switch forwards the packet to its children and deallocates the descriptor. In a nutshell, Canary reserves resources in the switches dynamically and strictly for the time required to complete the _reduce_ and _broadcast_ phases. A switch allocates a descriptor when it receives the first packet going to the root and deallocates it when it receives the packet coming down from the root. Eventually, the reduced data reaches the hosts that started the reduction, and the allreduce operation terminates.
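The corresponding broadcast-phase handler is simpler. The sketch below (under the same assumptions) replays the children recorded during the reduce phase and then frees the soft state:

```python
def on_broadcast_packet(pkt, table, send):
    """Broadcast phase: replay the tree built in the reduce phase, then
    deallocate the descriptor so the switch keeps no lingering state."""
    d = table.pop(pkt.block_id, None)
    if d is None:
        return                       # no state for this block at this switch
    for port in d.children:
        send(pkt, port)
```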
We can notice that, whereas Canary dynamically routes packets in the _reduce_ phase, in the _broadcast_ phase, it forwards them on the same paths used in the _reduce_ phase. A fully dynamic multicast would require explicit deallocation of resources because packets might cross different switches than those used in the _reduce_ phase. This would add unnecessary complexity to the design of the algorithm because, as we show in Section 5, Canary can still bypass most of the congestion in the network. Also, as we discuss in Section 3.1.3, the reduction tree is dynamically rebuilt on a packet-by-packet basis, thus reducing the probability of finding persistent congestion in the _reduce_ phase.
#### 3.1.3 Packetization
Fig. 3: Aggregation counter update. _C_: aggregation counter, _H_: number of hosts.

Fig. 4: Packets and reduction blocks.

We assumed so far that all the data to be reduced by a host would fit in a single network packet. However, this is usually not the case, and data is often larger than the MTU (_Maximum Transmission Unit_). Thus, each host assigns a unique identifier (_id_) to each packet it sends, as shown in Figure 4. Packets with the same _id_ belong to the same _reduction block_ and must be aggregated together by the switch. The switches can now process multiple blocks concurrently and store a separate descriptor for each block in a table indexed by _id_. Alongside the data accumulator, the list of children, and the timer, the descriptor also contains the block _id_.
For the moment, we assume that there is always space available to store the descriptor. We then describe in Section 3.2 how Canary works when the descriptor cannot be stored. To further improve load balancing, Canary aggregates each block in a different root, determined before starting the application (e.g., the hosts could select the roots in a round-robin way).
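A minimal sketch of this packetization and of the per-block root selection (helper names are ours) is:

```python
def make_blocks(vector, elems_per_packet):
    """Split a host's vector into reduction blocks; packets sharing an id are
    aggregated together by the switches (Figure 4)."""
    return [(i // elems_per_packet, vector[i:i + elems_per_packet])
            for i in range(0, len(vector), elems_per_packet)]

def block_root(block_id, roots):
    """Map each block to its own root (e.g., round-robin) to spread the load."""
    return roots[block_id % len(roots)]
```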
#### 3.1.4 Leader Host
Programmable switches have limited resources and can only perform simple operations. As a consequence, it is not possible to fully implement complex tasks such as tracking and retransmission of lost packets or recovery from other switches failures, and Canary delegates these tasks to a _leader host_, similarly to what happens in other algorithms [15]. Reduction packets are still sent towards the root and aggregated on its path, as we described above. However, the root switch sends the aggregated data to the leader host as soon as the timeout expires (or when all the expected data is received).
If we consider the example in Figure 3, we could use switch _S4_ as root switch, and either _H2_ or _H3_ as leader host. The leader does not send its data on the network. Instead, it waits for data to arrive from _S4_, after which it aggregates the received data with its own, and then starts the _broadcast_ phase. It is worth remarking that Canary still relies on a root switch that should aggregate as much data as possible. Ideally, the root should send to the leader host only one fully aggregated packet, thus avoiding having the leader receive multiple packets per block, which would reduce the operation's bandwidth.
Although this solution increases the latency because packets need to cross the network stack of the leader [48], it allows us to partition tasks between hosts and switches. The switches only perform simple tasks, like aggregating data packets in a best-effort fashion. On the other hand, the leader host handles more complex operations like the retransmission of lost packets (Section 3.3). To reduce the latency required to cross the network stack, leader functionalities could be offloaded to programmable NICs [49; 50; 51; 52] or implemented on a high-performing network stack such as DPDK [53]. Moreover, because Canary sends each reduction block to a different leader, the pressure on individual hosts is reduced. Namely, suppose \(N\) hosts participate in the reduction. In that case, each host will be the leader only for 1 block out of \(N\), thus receiving data at a much slower rate than line rate, increasing the time budget available for performing leader tasks.
### Resource Management
Switches have limited memory resources and cannot allocate memory for all the reductions concurrently executed over the network. Because Canary relies on adaptive routing, it cannot reserve any resource in advance: the paths that packets will take are not known a priori, and reserving resources on all the network switches would unnecessarily increase resource occupancy. Instead, Canary stores block descriptors in a static array and, when a packet is received, the switch maps the _id_ to a specific array location (e.g., by using a hash function). If the location is empty, the switch stores data in the accumulator and initializes the descriptor. Otherwise, if the stored accumulator belongs to the same _id_, the switch aggregates the data carried by the packet with that in the accumulator. If the stored accumulator belongs to a different _id_, then we have a _collision_. Collisions might happen because the hash function might map reduction blocks with different _id_s (which do not need to be aggregated together) to the same location.
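To make the lookup flow concrete, the following Python sketch mirrors the descriptor-table logic just described. The table size, field names, and the use of Python's built-in `hash` are illustrative assumptions, not the actual P4 data-plane implementation.

```python
from dataclasses import dataclass, field

TABLE_SIZE = 32 * 1024  # descriptor slots; matches the 32K figure in Section 5.1

@dataclass
class Descriptor:
    block_id: int
    accumulator: list                           # element-wise partial aggregate
    children: set = field(default_factory=set)  # input ports seen so far
    timestamp: float = 0.0                      # allocation time, for the timeout

table = [None] * TABLE_SIZE

def on_reduce_packet(block_id, data, in_port, now):
    slot = hash(block_id) % TABLE_SIZE          # map id to an array location
    desc = table[slot]
    if desc is None:                            # empty slot: allocate descriptor
        table[slot] = Descriptor(block_id, list(data), {in_port}, now)
        return "stored"
    if desc.block_id == block_id:               # same block: aggregate in place
        desc.accumulator = [a + b for a, b in zip(desc.accumulator, data)]
        desc.children.add(in_port)
        return "aggregated"
    return "collision"                          # slot busy with a different id
```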
#### 3.2.1 Collisions Management
If there is a collision, the switch cannot store the descriptor of the new block. In principle, the switch could forward the packet to the next hop, delegating the aggregation to the following switches towards the root. However, in this case, the switch would not be able to participate in the subsequent _broadcast_ phase as it did not store the descriptor (containing the list of children) for that _id_. For example, in Figure 5, hosts _H0_ (leader), _H2_, and _H5_ want to reduce their data. _S1_ receives data from _H2_ and experiences no collision. _S2_ receives data from _H5_ but detects a collision. Let us assume that _S2_ just forwards the data to the next hop and the reduction progresses as normal from that point on.
During the _broadcast_ phase, _S2_ will not be able to forward data to its children because, due to the collision during the reduce phase, it was not able to store the descriptor that would contain, among others, the child port identifier. Ultimately, _H5_ will not receive the reduced data. In general, if a switch cannot store the block descriptor because of a collision, the entire subtree rooted at that switch will not be reachable during the _broadcast_ phase. It is worth remarking that each switch needs to store the ports connecting it to its children. Indeed, the leader host cannot simply insert the addresses of the children of all the switches in the packet because this would require space linear in the number of hosts participating in the reduction.
To solve this problem, we adopt a simple but effective solution called _tree restoration_. After a conflict, the switch inserts its address in the packet alongside the identifier of the port from which it received it. Then, it forwards the packet directly
Figure 5: Collision and tree restoration.
to the leader host (the packet is marked and ignored by all the other switches on the path). After the reduction, the leader host knows the unreachable subtrees: i.e., it has a list of switches and respective ports from which they received packets generating collisions. During the _broadcast_ phase, the leader host uses this information to send an additional packet to these switches, allowing them to bootstrap a local broadcast, restoring the subtree. Other than the reduced data, these packets also carry the list of ports on which the switch must forward the data (e.g., encoded as a bitmap).
In our example, when the collision is detected, _S2_ forwards the packet to the leader host, together with its address and the port number (_R_) from which the packet has been received. After the reduction phase, the leader host _H0_ starts the _broadcast_ as usual. The broadcast packet eventually reaches _S2_, but it is not able to progress since _S2_ is missing information about its subtree. The leader host also sends a _"restoration"_ packet to _S2_, making it able to forward the data on port \(R\) towards _H5_.
This approach works even if, after some collisions, the entry becomes available and subsequent packets are successfully stored. In that case, the switch stores the identifiers of some children in the block descriptor, and hence they will be reached by the normal _broadcast_ phase. The others, i.e., the ones that generated collisions, will be reached through the tree restoration process.
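As a rough illustration of the leader-side bookkeeping, the sketch below records the (switch, port) pairs carried by collision-marked packets and later emits one restoration packet per orphaned switch. All names and the injected `send` function are hypothetical placeholders, not the actual implementation.

```python
from collections import defaultdict

def port_bitmap(ports):
    """Encode a set of port numbers as the bitmap carried by restoration packets."""
    return sum(1 << p for p in ports)

class LeaderRestoration:
    def __init__(self, send):
        self.send = send                      # injected packet-transmit function
        self.unreachable = defaultdict(set)   # switch address -> orphaned ports

    def on_collision_packet(self, switch_addr, in_port):
        # Recorded during the reduce phase from collision-marked packets.
        self.unreachable[switch_addr].add(in_port)

    def restore(self, reduced_data):
        # Called after the normal broadcast: one restoration packet per switch.
        for addr, ports in self.unreachable.items():
            self.send(addr, reduced_data, port_bitmap(ports))

# Usage: LeaderRestoration(send=print) would record ("S2", port R) after the
# collision in Figure 5 and later emit one restoration packet for S2.
```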
Because of the extra network traffic generated after a collision (e.g., restoration packets), this approach can lower the throughput (i.e., the leader host receives more packets due to missed aggregations). However, collisions only happen if a switch receives multiple packets with different _id_s mapping to the same descriptor entry in the same time window. If this performance penalty is not acceptable, Canary can avoid collisions entirely by setting a limit on the number of concurrent allreduces, and by statically mapping _id_s to descriptor array entries.
#### 3.2.2 Switch Memory Occupancy Modelling
We now analyze how much switch memory an allreduce can occupy. A block descriptor is allocated in a switch when the first packet of that block is received and deallocated when the fully aggregated data is received. For this reason, switches at the bottom level of the tree keep descriptors allocated for the longest time and, to estimate the maximum memory occupancy, we model the memory occupancy of those switches.
We denote the network bandwidth with \(b\), the network diameter with \(d\), the 1-hop delay with \(l\), the timeout with \(t\) (i.e., how much time a switch waits before sending the partially aggregated data to the next hop), and the time required by the leader to aggregate its data with \(r\). Then, the time between the allocation and the deallocation of a descriptor can be estimated as \(2d(l+t)+r\). Each descriptor contains the aggregated data, plus a few more bytes for storing the root address and other information. Thus, we can approximate the size of a descriptor with the MTU \(m\). By using Little's Law, if we assume to send MTU-sized packets, we can estimate the number of bytes occupied by descriptors as:
\[\frac{b}{m}(2d(l+t)+r)m=b(2d(l+t)+r)\]
Recent networks have a diameter of up to five and a per-hop latency of around 300 nanoseconds [22], and programmable NICs can perform tasks similar to those performed by the leader host in around one microsecond [50]. Thus, on a 100Gbps network with a one-microsecond allreduce timeout, each allreduce might store up to 175KiB in each switch crossed by its packets.
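This estimate can be reproduced in a few lines of Python using the parameter values quoted above; strictly, \(b(2d(l+t)+r)=175{,}000\) bytes, i.e., about 171 KiB, consistent with the ~175KiB figure.

```python
b = 100e9 / 8   # network bandwidth: 100 Gbps in bytes/s
d = 5           # network diameter (hops)
l = 300e-9      # 1-hop delay, in seconds
t = 1e-6        # allreduce timeout, in seconds
r = 1e-6        # leader-side aggregation time, in seconds

occupancy = b * (2 * d * (l + t) + r)   # bytes of descriptors per switch
print(f"{occupancy:.0f} B = {occupancy / 1024:.0f} KiB")  # 175000 B ~ 171 KiB
```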
It is worth remarking that the memory occupancy is independent of the actual size of the data to be reduced because Canary aggregates data block-by-block, and the bandwidth-delay product bounds the number of in-flight blocks. Also, the occupancy does not depend on the number of hosts participating in the reduction. Indeed, each switch stores only one descriptor for each block, independently of how many packets (one from each child) are aggregated in that block.
### Packet Loss and Fault Tolerance
Canary treats packet losses and switch failures in the same way. Indeed, in both cases, the leader does not receive some packets (if the loss/failure occurs in the reduce phase), or the hosts do not receive the reduced data (if the loss/failure occurs in the broadcast phase). Without loss of generality, we describe how to manage packet losses, since the same approach is also used for managing switch failures.
To detect a loss, all the hosts (excluding the leader) set a timeout for each packet right before transmitting it. When the reduced data arrives, the host deletes the timer. If the timeout expires, a retransmission request is issued. If the leader receives a retransmission request, two situations might occur. If the leader entirely reduced the data, the packet was lost during the broadcast phase, and the reduced data is re-transmitted to the host that issued the request. Otherwise, if the leader only partially reduced the data, some packet was lost in the reduce phase.
In this case, the leader does not know which packet was lost. Indeed, to know which packets contributed to the partially reduced data, each switch would need to keep a bitmap of all the hosts that contributed to the data reduced so far, which would be linear in the number of hosts participating in the reduction. However, having this bitmap in the packet is infeasible because allreduces might span thousands of hosts [1; 54], and existing programmable switches can only process a few hundred bytes per packet (Section 6). Accordingly, because the leader cannot determine which packet should be re-transmitted, it broadcasts a failure message. Upon the reception of this message, the hosts re-issue the reduction of that packet with a different _id_ (or they can reduce that packet only by using a host-based reduction algorithm). To avoid overloading the network with reduction packets, the hosts fall back to a host-based reduction after a given number of failed retransmissions.
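The leader's decision logic can be summarized by the following sketch; `block`, `send`, and `broadcast` are hypothetical placeholders for the leader's per-block state and transport functions, not the actual implementation.

```python
def on_retransmission_request(block, requester, send, broadcast):
    if block.fully_reduced:
        send(requester, block.reduced_data)    # loss was in the broadcast phase
    else:
        broadcast(("failed", block.block_id))  # loss was in the reduce phase:
                                               # hosts re-issue it with a new id
```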
It is worth remarking that when a host terminates the reduction, it cannot simply modify or deallocate the reduced data because the other hosts might not have successfully terminated
the reduction yet. Accordingly, it must preserve the part of the data for which it was the leader, so that it can re-transmit the packets in case any retransmission request arrives. For small-size reductions, the leader can store a copy of the data and deallocate it when it receives a fully reduced packet for a subsequent reduction. Indeed, the hosts can start the subsequent reduction only if they have already completed the previous one. If there are no subsequent reductions, an explicit completion notification is required (for example, by issuing a barrier). Because preserving the data between subsequent reductions could potentially double the memory consumption, for large reductions the hosts always explicitly notify the completion. The explicit notification introduces a marginal latency overhead compared to the allreduce and allows the data to be deallocated or modified immediately after the notification, not requiring any additional copy.
Although Canary can autonomously manage switch failures without re-issuing the entire allreduce operation, host failures must be managed at a higher layer (e.g., with checkpoint/re-start solutions).
### Multitenancy
The switch does not have any knowledge of the different applications or users running on the system and simply aggregates together packets with the same _id_. To support multiple applications, each of them must generate unique _id_s. Thus, Canary _id_s are built by concatenating an identifier of the application (e.g., generated by the workload manager) and an identifier that each application increases for every subsequent packet.
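For illustration, one possible encoding of such ids is shown below; the 12/20-bit split between application identifier and per-application counter is an assumption that merely fits the 4-byte _Id_ field described in Section 4.1, not the actual Canary encoding.

```python
APP_BITS, SEQ_BITS = 12, 20  # assumed split of the 32-bit id field

def make_id(app_id: int, seq: int) -> int:
    assert app_id < (1 << APP_BITS)
    # Application id in the high bits, wrapping per-app counter in the low bits.
    return (app_id << SEQ_BITS) | (seq & ((1 << SEQ_BITS) - 1))
```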
It is worth remarking that having multiple concurrent allreduces does not necessarily increase the amount of data that a switch needs to store. Indeed, a descriptor is allocated only on the switches traversed by the packets belonging to the corresponding block and strictly for the time needed to reduce that block. In a nutshell, running multiple concurrent allreduces, each on a few hosts (thus connected through a few switches), might consume the same amount of resources as a single allreduce on a higher number of hosts (thus spanning more switches).
### Summary
We now wrap up the Canary design. First, each host splits the data into multiple packets, each marked with a unique _id_. When a switch receives a data packet, it maps the _id_ to a specific entry of the array containing the descriptor for that _id_. If the entry is available, the switch stores in the table the data carried by the packet, updates the list of children, starts a timer, and drops the packet. If a descriptor with the same _id_ is already present in the entry, the switch accumulates the data, updates the list of children, and drops the packet. If a descriptor with a different _id_ already occupies the entry, the switch inserts in the packet its address and the identifier of the port from which it received the packet, before forwarding it to the leader.
When the timer of a descriptor expires, the switch sends to the next hop the data contained in the descriptor. The port is selected based on the root address stored in the switch using any available load balancing algorithm. If a packet arrives and the timeout for that _id_ already expired, the switch updates the list of children and forwards the packet to the next hop. Eventually, when the leader starts the _broadcast_ phase, the switch receives the fully reduced data.
If there is a descriptor for that _id_, the switch forwards the packet to all its children and removes the descriptor from the table. If the switch does not have a descriptor for that _id_ (because it could not store it due to a collision), it drops the packet. Later, it receives the data from the leader specifying the children to which the packet should be forwarded.
At some point, all the data will reach the tree's leaves. If a packet is lost or a switch fails, some leaves send a retransmission request to the leader. When the leader receives a retransmission request, it can either re-transmit the fully reduced data or require that block to be reduced from scratch (if it did not already entirely reduce the data).
## 4 Implementation
We implemented Canary using a state-of-the-art Intel Tofino programmable switch. Although existing programmable switches can process terabits of data per second (1.2 billion packets per second for each pipeline), they limit the type of computation they can perform on packets. These limitations drove some of the main choices in our design. For example, having one of the hosts acting as the reduction leader pushes some complexity to the host and keeps the code executed by the switch as simple as possible.
Programmable switches process packets through multiple pipelines, each composed of multiple stages. In our implementation, we use the first stages to determine if the packet is a reduce or a broadcast packet and check if the packet generates any collision when accessing the descriptors table. We then use subsequent stages to read/write data. Because the goal of Canary is to leverage the speed of such programmable devices, all the data is managed in the _data plane_ and stored into registers. Content-addressable memory (CAM) [55] is usually available but can only be updated from the _control plane_[56; 28]. We do not use CAM for storing the data because interactions with the control plane would significantly increase the latency [57].
### Packet Format
Because we are using programmable switches, we define a custom packet format for Canary packets. To reduce the packet overhead, Canary sends packets directly on top of Ethernet. However, any other encapsulation could be used, and Canary packets could be sent on top of IP or UDP. A Canary packet is composed of the following fields (a host-side encoding sketch is given after the list):
* **Destination (4 bytes)** IP address of the leader host. Canary uses the same routing tables used for IP routing to determine how to reach the destination.
* **Id (4 bytes)** Unique identifier of the packet.
* **Counter (2 bytes)** Number of reduced packets (Fig. 3).
* **Hosts (2 bytes)** Number of hosts participating in the reduction (Fig. 3).
* **Children (4 bytes)** When a switch cannot store a packet because of a collision, this field carries the identifier of the port from which the packet was received (Section 3.2).
* **Switch Address (2 bytes)** When a switch cannot store a packet because of a collision, this field carries the switch address (Section 3.2).
* **Bypass (1 bit)** If set, the switch should not further process the packet but only forward it to the next hop.
* **Multicast (1 bit)** If set, the packet must be multicast to the children of the switch.
* **Padding (6 bits)** Used to pad the packet size to a multiple of 8 bits.
* **Data (128 bytes)** Data to be reduced.
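The following host-side sketch packs this layout with Python's `struct` module. The exact bit positions of the _Bypass_ and _Multicast_ flags inside the final byte are assumptions; the 19-byte header size matches the figure quoted in Section 5.1.

```python
import struct

HEADER_FMT = "!IIHHIHB"   # 4+4+2+2+4+2+1 = 19 bytes, network byte order
DATA_LEN = 128

def pack_packet(dest, pkt_id, counter, hosts, children, switch_addr,
                bypass=False, multicast=False, data=b"\x00" * DATA_LEN):
    # Flags byte: Bypass and Multicast bit positions are assumed;
    # the 6 remaining bits are padding.
    flags = (bypass << 7) | (multicast << 6)
    return struct.pack(HEADER_FMT, dest, pkt_id, counter, hosts,
                       children, switch_addr, flags) + data
```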
### Multicasting
Multicasts could, in principle, have an impact on the capacity to run our allreduce algorithm at line rate. Indeed, if the switch generates \(m\) packets for each packet it receives, it might decrease the achievable bandwidth by a factor of \(m\). However, we observe that if a switch multicasts a packet on \(m\) ports (its children), this is because it previously aggregated the data coming from \(m\) ports. Accordingly, the switch forwards, on average, one packet for each packet it receives.
The switch keeps a table associating to each port the corresponding one-hot encoding. Every time a packet is received, the one-hot encoding of the input port is retrieved through a TCAM (pre-configured in the control plane) and added to the bitmap storing the children. Thus, when a packet needs to be multicasted, the switch knows that it must send it to all the ports set in the children bitmap.
Programmable switches require multicast groups to be pre-configured by specifying the association between a group identifier and a list of ports on which the switch will send packets directed to that group. In our case, the group identifier could simply be the bitmap associated with the specific list of ports. For example, let us suppose we have a switch with eight ports and that we want to multicast on the ports [0,2,3,5]. The binary representation of this list is 00101101. So we would have to set up a rule such as 00101101 -> [0,2,3,5]. However, Canary uses adaptive routing, and the switches multicast the packets to the same ports from which they have been received, which are not known a priori. Accordingly, we should store all the possible combinations of ports, which is exponential in the number of ports.
Because this requires too many resources, to reduce the storage requirements, we divide the children bitmap into _shards_. For example, the children bitmap 00101101 can be divided into two shards of 4 bits each. We prepend to each shard its index so that the two shards become 1 0010 and 0 1101. We then store the association between all the possible shard values (that, in this case, would be \(2\cdot 2^{4}\)) and the corresponding list of ports. In our example, we would have the rules 10010 -> [5] and 01101 -> [0,2,3]. This technique reduces the number of multicast groups to store in the switch tables from \(2^{p}\) to \(2^{\frac{p}{s}}\cdot s\), where \(p\) is the number of ports, and \(s\) is the number of shards. For example, on a 64 port switch with four shards, this requires using 256 thousand entries, which is far within reach of current programmable switches [58], as we also show in Section 5.1.
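A small Python sketch of this sharding scheme, reproducing the example above, could look as follows (the port count and shard width are parameters, not fixed by the design):

```python
def shard_rules(num_ports=8, shard_bits=4):
    """Enumerate (index || shard-value) -> port-list rules."""
    rules = {}
    num_shards = num_ports // shard_bits
    for idx in range(num_shards):
        for value in range(1 << shard_bits):
            key = (idx << shard_bits) | value      # prepend the shard index
            rules[key] = [idx * shard_bits + b
                          for b in range(shard_bits) if value >> b & 1]
    return rules

# For the bitmap 00101101 split into shards '1 0010' and '0 1101':
rules = shard_rules()
assert rules[0b10010] == [5] and rules[0b01101] == [0, 2, 3]
```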
### Timeouts
As described in Section 3.1.1, Canary relies on timeouts to avoid statically setting up the reduction tree. In our Canary implementation, every time the switch receives a reduction \(id\) for the first time, it stores in the descriptor the current timestamp (alongside the data to reduce and the other information). Some programmable switches [28] provide one or multiple _packet generators_ that can generate packets at a predefined rate and with predefined content. For example, we can set up the packet generators so to generate _clock_ packets. Every time one of these packets is received, the switch can check an entry of the table and, if expired, send the content in that entry to its parent. Alternatively, Canary could be implemented on programmable switches that support timer-based events [59].
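Continuing the descriptor-table sketch of Section 3.2, the expiry check triggered by a clock packet could look as follows; checking one slot per clock packet and the injected `forward` callback are illustrative simplifications.

```python
def on_clock_packet(slot, now, timeout, forward):
    desc = table[slot]                  # 'table' from the sketch in Section 3.2
    if desc is not None and now - desc.timestamp > timeout:
        forward(desc)                   # partial aggregate goes to the next hop
        desc.timestamp = float("inf")   # avoid re-forwarding (illustrative)
```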
## 5 Evaluation
In this section, we evaluate the performance of Canary, first by analyzing its performance on a Tofino switch (Section 5.1), and then by simulating it on a large network (Section 5.2) connecting 1024 hosts through 64 switches.
### Single switch implementation
To verify the feasibility of our design, we implemented and validated our P4 Canary prototype on a Tofino Wedge 100BF-32X switch [28] with 32 100Gbps ports. This allowed us to understand existing switches' limitations and drive our design choices. We were able to allocate enough registers to store up to 32K descriptors, allowing us to run at least 25 concurrent allreduces of different tenants or applications (Section 3.2.2). Our P4 Canary implementation only uses 14.38% of the switch SRAM. However, Canary uses most of the Arithmetic Logic Units (ALUs), up to 81.25% of those available on the switch, to aggregate the elements carried in the packets. Canary leaves enough resources available for running it alongside traffic load balancing algorithms such as flowlet switching [37] that, on the same switch, uses 2.26% of the available SRAM and 0.04% of the ALUs [60].
Figure 6: Goodput (Gbps) of our P4 Canary prototype and of our implementation in the SST simulator, when sending packets with 128 bytes of useful payload.
To measure the goodput of our P4 prototype implementation, we connected two hosts equipped with 100 Gbps Mellanox ConnectX-5 NICs to the Tofino switch. We emulate a leaf switch that receives the data to be reduced from the two hosts, aggregates it, and forwards the aggregated result to the next switch in the reduction tree. The two hosts inject the data using MoonGen [61], a DPDK [53] wrapper.
We benchmark a 4MiB allreduce and report the goodput in Figure 6. We also report the goodput achieved by our simulation infrastructure (which we describe in Section 5.2) in the same setup. It is worth remarking that, due to the limited number of _match-action_ tables available in existing programmable switches, we can store up to 32 4-byte elements. Accordingly, each packet contains 128 bytes of useful payload and 57 bytes of headers (19 bytes of Canary header, 14 bytes of Ethernet header, and 24 bytes of framing overhead).
Programmable switches are composed of multiple processing pipelines, and some existing P4 prototypes [15; 4] partially overcome this limitation by striping the packet across pipelines. For example, the first pipeline would store the first 32 elements, then recirculate the packet to the second pipeline, which would store the following 32 elements, and so on. In this way it is possible to have up to 128 [15] or 256 elements [4] per packet. However, to have enough recirculation bandwidth and avoid packet drops, some switch ports must be dedicated to packets recirculations only.
Additionally, it is reasonable to assume that the number of match-action tables in each pipeline stage will increase with the next generations of programmable switches, thus allowing processing more elements per packet. For example, some programmable switches already provide the possibility to process up to 48 elements per pipeline [60]. For these reasons, in the following, we run simulations with 256 elements per packet for all the in-network algorithms.
### Large network simulations
To evaluate Canary performance at scale, we modified the SST simulator [62; 63] so that switches can modify the packets they receive before forwarding them. We built in the simulator a two-level fat-tree network [27]. The network comprises 32 switches at the bottom level, each with 64 ports (32 connected to the hosts and 32 to the switches at the upper level). The top level of the fat-tree comprises 32 switches, each with 32 ports (one port connected to each of the switches at the bottom level). Both the hosts and the switches have 100 Gbps network interfaces.
We calibrated the simulator so that hosts can inject packets into the network at line rate and so that the switches can aggregate the data at the same speed as our Tofino prototype (as we show in Figure 6). The simulated network uses up/down routing. When packets flow from hosts to upper levels of the fat-tree, each switch can select one among multiple _up_ ports. By default, each switch sends packets on a default _up_ port (selected depending on the packet destination). If the output port buffer has an occupancy higher than 50% of its capacity, the switch forwards the packet on the _up_ port with the smallest number of enqueued bytes.
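The simulated up-port selection rule can be stated compactly; the sketch below assumes `queued_bytes` maps each candidate _up_ port to its current buffer occupancy in bytes.

```python
def select_up_port(default_port, queued_bytes, buffer_capacity):
    # Keep the destination-based default port unless its buffer is more
    # than 50% full; otherwise pick the least-loaded up port.
    if queued_bytes[default_port] <= 0.5 * buffer_capacity:
        return default_port
    return min(queued_bytes, key=queued_bytes.get)
```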
To compare our solution with the state-of-the-art, we implemented in the simulator the following allreduce algorithms:
* **Ring** The bandwidth-optimal host-based _ring allreduce_ algorithm [17]. This solution does not rely on any in-network compute capability.
* **In-Network, \(N\) static trees** An in-network algorithm using static reduction trees. We consider both the case where a single tree is used, similar to what is done by SHARP [16; 19], SwitchML [4], and ATP [15], and the case where \(N\) trees are used and each block is sent on a different tree in a round-robin way, similar to what is done by PANAMA [18].
* **Canary** The in-network algorithm we propose in this work, which dynamically builds reduction trees.
To analyze the impact of congestion on the allreduce performance, we split the hosts into two sets. While some hosts run the allreduce, the remaining hosts generate network congestion by executing a random uniform injection traffic pattern. In this pattern, each host sends a message to a randomly selected host and receives a message from another randomly selected host. Each host changes its random peer throughout the execution to assess the ability of Canary to react to dynamically changing congestion patterns. We execute each test five times, each time randomly allocating the hosts executing the allreduce and those generating the congestion. When using static reduction trees, we also randomly pick the roots of those trees.
#### 5.2.1 Comparison with the static trees approach
First, we report in Figure 7a the comparison between Canary and the in-network algorithm using one or multiple static trees. We allocated 512 hosts to the allreduce and used the remaining 512 hosts to generate congestion. We report the bandwidth with and without congestion for a 4MiB allreduce. Whereas in the absence of congestion, the performance of all the approaches is comparable, when introducing congestion, Canary performs significantly better than the solutions using statically configured reduction trees. Indeed, whereas solutions using static trees are severely affected by congestion, Canary does not experience any performance degradation. For this reason, we observe performance improvements up to 2x compared to solutions using a single reduction tree and up to 40% compared
Figure 7: Goodput and links utilization when 512 hosts execute an allreduce and 512 hosts generate congestion. _ST_: Static Tree(s).
to those using multiple trees. Moreover, we also observe that using more than four trees for solutions relying on static trees leads only to a marginal performance improvement. For these reasons, in the subsequent analysis, we consider a solution using four static trees, as in the original PANAMA paper [18].
We also report in Figure 7b the distribution of links utilization (each sample is a network link). For the sake of readability, we only report those of Canary, one static tree, and four static trees. We observe that when there is no congestion, there are no significant differences between the three approaches, and each link is either idle (0% utilization) or fully utilized (around 90% utilization). However, when we introduce congestion, we observe that Canary is characterized by fewer idle links and better distributes the traffic over the available links.
At first sight, it might seem like the in-network solution with one static tree does not fully utilize any network link because there are no humps around 80-100% utilization. However, by analyzing the data in more detail, we found two links with utilization greater than 80%. Because these two links are shared between the in-network allreduce and the application that generates congestion, this is enough to slow down the in-network allreduce by more than 50% (Figure 7a) and to reduce the overall network utilization. Indeed, in the presence of congestion, we observed an average network utilization (computed as the average of all the links utilization) of 40.2% for Canary, 29.5% for the in-network allreduce with four trees, and 20.9% for the one with one tree.
#### 5.2.2 Goodput for different congestion intensity
We now analyze the performance of Canary when changing the number of hosts generating congestion, comparing it to the host-based bandwidth-optimal ring allreduce and the in-network allreduce using static trees. We report the results of this analysis in Figure 8. We run a 4MiB allreduce and increase the number of hosts executing the allreduce from 5% to 75% of the 1024 hosts available in the system. The hosts not executing the allreduce generate congestion through a random uniform communication pattern. First, we observe that Canary consistently improves performance compared to other solutions.
When using only 5% of the hosts for the allreduce (thus using 95% of the hosts to generate congestion), Canary performance only decreases by 20%. In contrast, the performance of the in-network static solutions decreases by 66% when using a single tree and by 47% when using four trees. When increasing the number of hosts executing the allreduce (thus decreasing congestion), the performance gap shrinks, but Canary still provides 2x improvement compared to the single static tree solution, and 23% improvement compared to the solution using four reduction trees. Eventually, we observe that in some cases, the congestion decreases the performance of the single tree solution so much that it does not provide any performance advantage compared to the host-based ring allreduce, as also outlined in other recent works [18].
#### 5.2.3 Runtime for different data sizes
We now analyze the allreduce runtime for different data sizes. We allocate 20% of the hosts to the allreduce, whereas the remaining 80% generates congestion. We report in Figure 9 the runtime (in microseconds) with and without congestion. We observe that for small allreduces, Canary is characterized by a higher runtime because the switch only forwards (aggregated) packets after the timeout period expires. When increasing the size of the data exchanged by the allreduce, the performance advantage of Canary increases because the runtime of large allreduces is dominated by the bandwidth, and the extra latency introduced by the timeout mechanism becomes negligible. We also observe that 1KiB and 256KiB ring allreduces have the same runtime. Indeed, the ring allreduce is the host-based bandwidth-optimal allreduce algorithm, but, for small messages, its runtime is dominated by latency and setup of communication phases [17; 5].
#### 5.2.4 Multiple concurrent allreduces
Using statically configured trees also significantly decreases the aggregation bandwidth when multiple tenants (or multiple applications) concurrently run allreduce operations (e.g., multiple training jobs). Therefore, we equally partitioned the system between multiple co-running allreduce operations to analyze this effect. Furthermore, because most existing in-network allreduce algorithms statically partition the switch resources across the tenants [4; 34; 16], to have a fair comparison, we adopt a similar approach also in Canary.
We report in Figure 10a the average goodput across all the concurrent allreduce operations. First, we observe that when
Figure 8: 4MiB allreduce goodput (the higher the better) for different hosts count. The hosts not performing allreduce generate random uniform traffic to introduce congestion.
increasing the number of concurrent allreduces (thus decreasing the number of hosts allocated to each allreduce), the average goodput of the ring allreduce increases. This is a known effect [17] because the performance of the ring allreduce increases when decreasing the number of hosts. However, the performance drops when running more than ten concurrent allreduces due to the increased congestion. We also observe that the performance of in-network static allreduce algorithms drops by 40% when increasing the number of concurrent allreduce operations due to congestion. On the contrary, Canary is almost unaffected and allows up to 32 concurrent allreduces at 80 Gbps each.
By analyzing the distribution of the links utilization in Figure 10b, we observe that Canary is characterized by the lowest number of idle links. We also observe that using four static trees improves the links utilization compared to using only one static tree. However, as shown in Figure 10a, this is still not sufficient to avoid congestion because multiple allreduces concurrently use some links.
We observed an average network utilization of 67.2% for Canary, 62.9% for the static in-network allreduce with four trees, and 21.8% for the one with one tree. Although Canary and the solution using four trees lead to a similar average network utilization, Canary performs better because it distributes the traffic more evenly across the network (e.g., it does not have any link in the 70-90% range of utilization).
#### 5.2.5 Impact of Timeout and Noise
One of the key points of Canary is the use of timeouts to decide when a reduction block has been fully aggregated and can be sent to the next hop in the reduction tree. As described in Section 3.1.1, if the timeout is too short, or if a packet is delayed for any reason (e.g., OS noise [64]), the straggler packet is sent to the next hop right after it is received. Although this guarantees that all the packets are eventually successfully aggregated, it might introduce some performance penalty, because a switch now sends more packets than optimal.
To analyze this scenario, we execute a 4MiB allreduce on 512 hosts, with and without congestion, by comparing it with the in-network allreduce using four static trees and by analyzing the performance for different values of the timeout. Also, every time a host sends a packet, it has a given probability (_noise probability_) of delaying the transmission by 1 microsecond. We report the results of this analysis in Figure 11, showing the goodput when changing the noise probability from 0.01% to 10% (i.e., each host on average delays 10% of the packets it sends by 1 microsecond).
When there is no congestion on the network, we observe that Canary is characterized by a lower goodput, as also observed in the previous experiments. Because in this experiment we randomly delay packets by one microsecond, the scenario with a one-microsecond timeout generates several stragglers, decreasing the Canary goodput. This effect is less visible for larger timeouts, which can absorb differences in packet delays.
We also highlight how the performance does not increase nor decrease monotonically with the timeout value. Indeed, a long timeout unnecessarily increases packet latency, whereas a
Figure 11: Goodput of a 4MiB allreduce executed on 512 hosts, in a scenario where before sending a packet a host might add a delay of 1us with a given probability.
Figure 10: Average goodput of multiple concurrent 4MiB allreduces (left), and link utilization when running 20 concurrent allreduces (right). _ST_: Static Tree(s).
Figure 9: Allreduce runtime (the lower the better) for different message size, when 20% of the hosts are allocated to the allreduce, and 80% to an application generating random uniform traffic.
short timeout generates stragglers. However, even if we have a \(3x\) difference between the timeout values, we observe at most a \(30\%\) difference in the performance. To further mitigate the timeout impact, a possible future extension would be to dynamically select the timeout based on the current network conditions.
Last, when introducing congestion, Canary instead outperforms the static in-network allreduce regardless of the noise probability and timeout values. Indeed, even if stragglers are generated, their impact on the performance is compensated by the fact that packets are forwarded on less-congested paths.
## 6 Discussion
This section discusses some of the limitations of existing programmable switches and their impact on Canary.
**Collisions.** If two packets with two different _id_s map to the same table entry, Canary forwards the second packet directly to the leader host, generating extra network traffic and potentially reducing the performance (Section 3.2). For this reason, collisions should happen as rarely as possible. To reduce the number of collisions, Canary could in principle use slightly more sophisticated schemes like Cuckoo hashing [65] or double hashing [66]. However, due to the lack of iterative constructs and limited resources, this is not possible on existing programmable switches. As an alternative, the administrator can limit the number of concurrent allreduces and statically partition the table, as done in most in-network allreduce algorithms.
**Packet size.** Most existing programmable switches can only process a limited number of data elements per packet, based on the number of physical resources available. Although, as discussed in Section 5.1, this number can be increased by exploiting recirculations, it also requires dedicating most of the switch ports to packet recirculations [4; 15]. As an alternative, Canary could be implemented on different programmable switch architectures that do not have limitations on the number of elements that can be processed per packet [34].
**Floating-point arithmetic.** Most programmable switches do not provide floating-point units [67] and, for this reason, most in-network reduction solutions targeting programmable switches assume that the values to be reduced are converted to fixed-point arithmetic before being transmitted over the network [15; 4; 32]. It has been shown that such techniques do not significantly impact the convergence of deep learning training, and thus they could seamlessly be used with Canary.
**Support for other collectives.** Although we focused on the allreduce, a similar approach could be used for other collective operations. For example, a _reduce_ can be easily implemented by selecting as leader node the destination of the _reduce_, and by skipping the broadcast phase. Similarly, a _barrier_ can be implemented by having a 0-bytes allreduce, and a _broadcast_ by having the node acting as the source of the broadcast sending data to the leader host, thus skipping the data aggregation phase.
**Leader Failure.** Failures of the hosts acting as leaders can be managed with checkpoint/re-start solutions. Indeed, the leader is one of the hosts participating in the allreduce and, if it fails, its data is lost and cannot be recovered, as happens in the case of a switch/link failure. This, however, is also true for any allreduce algorithm (both host-based and in-network) in case of the failure of one of the hosts involved in the reduction. Alternatively, the leader could run on a server not used by any host (e.g., on the SDN controller(s)). However, this would pose scalability challenges in case of multiple in-network allreduces issued by different jobs, and would not allow load balancing between different leaders (see Section 3.1.4).
**Fragmentation.** We assume that application-level messages are split into IP packets (see Section 3.1.3), each with its own Canary header. We enforce packets (including the Canary header) to be no larger than the MTU to avoid fragmentation, which would significantly complicate the design.
**Other Topologies.** For the sake of simplicity, we described and evaluated our algorithm on fat tree topologies. However, a similar approach could be used on other topologies, since an aggregation tree is naturally formed when sending packets from the hosts to the root.
**Retransmission Delays.** If we assume a retransmission timeout of \(2\cdot RTT\) (where \(RTT\) is the _Round Trip Time_ between a host and the leader), in the worst case a new reduction for a given block will be re-issued after \(3\cdot RTT\). Indeed, a host needs \(2\cdot RTT\) before issuing a retransmission request. The retransmission request arrives at the leader after \(RTT/2\), and the leader then broadcasts a failure message for that block to all the hosts, which they receive after \(RTT/2\).
## 7 Conclusion
In this work, we designed, implemented, and evaluated Canary, the first congestion-aware in-network allreduce algorithm. We first showed the impact that network congestion can have on the performance of in-network allreduce algorithms, up to the point where they exhibit lower performance than host-based allreduces. For this reason, by relying on timeouts, Canary can dynamically route packets to avoid congested links, aggregating them in a best-effort fashion.
We carefully partitioned Canary functionalities between hosts and switches, and we described a prototype P4 implementation, which we evaluated on a Tofino switch. We then simulated our solution on a 1024-node network with 64 switches, showing improvements up to 2x compared to in-network solutions using a single reduction tree and up to 47% compared to solutions using multiple trees. Furthermore, we have shown that these results are consistent across different congestion intensities, proving that Canary is an effective solution for avoiding congestion when running in-network allreduces.
## Acknowledgements
We would like to thank Vladimir Gurevich for the helpful comments and feedback. This work has been partially funded by the European Union's Horizon Europe programme project RED-SEA (grant no. 955776). Daniele De Sensi was supported by an ETH Postdoctoral Fellowship (19-2 FEL-50), and by Sapienza University under the SEED-2022 funding scheme.
learning-based depth estimation. | Marvin Chancán, Alex Wong, Ian Abraham | 2023-09-16T23:33:15Z | http://arxiv.org/abs/2310.06164v1 | # DEUX: Active Exploration for Learning Unsupervised Depth Perception
###### Abstract
Depth perception models are typically trained on non-interactive datasets with predefined camera trajectories. However, this often introduces systematic biases into the learning process correlated to specific camera paths chosen during data acquisition. In this paper, we investigate the role of how data is collected for learning depth completion, from a robot navigation perspective, by leveraging 3D interactive environments. First, we evaluate four depth completion models trained on data collected using conventional navigation techniques. Our key insight is that existing exploration paradigms do not necessarily provide task-specific data points to achieve competent unsupervised depth completion learning. We then find that data collected with respect to photometric reconstruction has a direct positive influence on model performance. As a result, we develop an active, task-informed, depth uncertainty-based motion planning approach for learning depth completion, which we call DEpth Uncertainty-guided eXploration (DEUX). Training with data collected by our approach improves depth completion by an average greater than 18% across four depth completion models compared to existing exploration methods on the MP3D test set. We show that our approach further improves zero-shot generalization, while offering new insights into integrating robot learning-based depth estimation.
## I Introduction
Depth estimation supports a wide range of robotic and computer vision applications including autonomous navigation, augmented reality, and three-dimensional (3D) mapping, planning and reconstruction. Recent advances in unsupervised learning for depth estimation from a single RGB camera and depth sensor (_e.g._, 3D LiDAR) have enabled the supervision signal to scale w.r.t. data size, _i.e._, autonomous data collection. Despite these rapid advances, existing depth perception models have largely been trained on non-interactive datasets, _e.g._, collected from non-autonomous, user-driven camera trajectories with predetermined routes, such as KITTI [1, 2] for outdoor scenarios, or VOID [3] and NYUv2 [4] for indoors. While their purpose was to facilitate learning for a variety of vision tasks, they introduce systematic errors on account of the lack of task-specific diversity that is often seen during real-world deployment (Fig. 1-top). Even with the option of revisiting the data collection sites, determining what more training data to collect based on some test set remains in question. While one can attempt to densely/uniformly collect data from the environment, as opposed to actively sampling key data points w.r.t. utility metrics, this is not scalable in real scenarios.
Motivated by these shortcomings, we investigate the influence of robotic exploration on the performance of unsupervised depth completion, the task of inferring dense depth from an image and a synchronized sparse point cloud, by (i) using interactive environments for task-specific data collection, and (ii) proposing an active exploration approach driven by model error modes, instead of relying on user-based or conventional navigation paradigms (Fig. 1-bottom).
Robotic navigation, on the other hand, has a long history in both classic and learning-based techniques designed to enable high-level motion planning and localization tasks. Although these two lines of research have been largely disconnected, recent works have attempted to create a standard framework for benchmarking classical and end-to-end learning-based exploration methods by using complex interactive environments [5, 6] or proposing active viewpoint sampling [7, 8]. In contrast to these works, which mainly study navigation tasks, here we focus on exploration for learning depth completion specifically in the context of robot navigation and task-driven data collection. We propose an active, depth uncertainty-guided exploration approach, which integrates 3D reconstruction errors, to allow task-driven exploratory planning and further improve the overall depth estimation performance. Extensive experimental results show that our approach can provide rich and diverse data that improves the state-of-the-art of unsupervised depth completion models. The main contributions of this paper are:
* A benchmark on the use of classic and learning-based navigation methods for training supervised and unsupervised depth completion models.
* An active, task-informed exploration approach to guide robot motion planning, via photometric reprojection errors, for data collection to improve depth estimation.
Fig. 1: **The depth perception problem**. Existing approaches typically use publicly available video-based datasets for training depth perception models (top). Our active exploratory approach for task-informed data collection (bottom) improves depth completion results yield by existing exploration paradigms, and zero-shot generalization to other datasets.
## II Related Work
### _Autonomous Robotic Exploration_
**Conventional Exploration**. The goal of autonomous exploration algorithms is to allow robotic platforms to navigate across regions of interest for specific downstream tasks within an environment (_i.e._, single or multi-goal localization [9], and area coverage for 3D mapping/reconstruction [10]). The task of area coverage, where a robot visits every space of an environment, requires certain exploratory capabilities such as directed lawn-mowers [11, 12, 13, 14], random walks [15, 16, 17], information-based methods [18, 19, 20, 21, 22, 23, 24], path-planning optimization via viewpoint sampling [25, 8], or semantic-based exploration [26, 27, 28]. Most of these exploration methods, however, are typically designed to ensure space coverage, semantic mapping, or point/object-goal search, rather than to support specific downstream computer vision tasks such as depth perception. This limits autonomous exploration of important areas in proportion to their expected utility for the downstream task. To this end, here we focus on studying the influence of robotic motion and exploratory approaches in the context of depth perception learning.
**Interactive Environments**. Recent work uses interactive environments for (i) benchmarking classic and learned navigation methods in single-goal localization tasks [5, 6], or (ii) learning active camera exploration for multi-goal localization tasks [7]. In [5], the authors demonstrate that classical and end-to-end learning-based exploration approaches are still far from human-level performance in single-goal navigation tasks (_e.g._, in terms of Success weighted by Path Length (SPL), success rate, and pace). Also for single-goal navigation tasks, [6] provides extensive classic and learned navigation results while introducing new exploration paradigms based on reinforcement learning (_e.g._, area coverage, novelty, curiosity, and semantic-based reconstruction). Furthermore, [7] proposed active learning for multi-goal navigation; however, this approach exploits human experience to guide the active learning process.
For depth completion tasks, there is existing work that also uses 3D simulation environments [29, 30, 31, 32, 33], but these do not explore the influence of robot motion and exploration on the learning process. A recent dataset for robot perception and navigation was proposed in [34], which is collected from 3D environments. However, this work uses pre-defined camera trajectories, obtained by incremental mapping and trajectory sampling via RRT\({}^{*}\)[35], specifically from collision-free navigation tasks only. Our focus in this work, instead, is exploring the influence of diverse navigation methods for data collection on depth completion performance. Our proposed framework also aims to support new research on the influence of robot exploration on the performance of any other perception modality algorithm.
### _Depth Estimation_
Depth can be inferred from various sources, _i.e._, stereo [36, 37, 38, 39], multi-view [40, 41, 42, 43, 44], and monocular [45, 46, 47, 48, 49, 50] images, which can also be used in combination with sparse range [51, 52, 53, 54, 55, 56, 57, 58, 59]. We focus on monocular depth completion, which aims to infer a 2.5D dense depth map from a single image and a synchronized sparse point cloud--supporting dense mapping in the exploration task. With the need to localize the agent, _e.g._, a robot during exploration, one typically employs a simultaneous localization and mapping (SLAM) [60, 61, 62] or a visual inertial odometry (VIO) [63, 64, 65, 66] system, which tracks a sparse point cloud. While the point cloud is sufficient for localization, it is far too sparse for representing the structure of 3D environments. Hence, we choose depth completion to densify or complete the point cloud with guidance from a single image, which naturally integrates well with SLAM or VIO systems [67, 3]. This form of depth estimation supports inputs from any of the aforementioned streams without concerns for accumulating sufficient parallax (_i.e._, multi-view), obtaining scale (_i.e._, monocular), or requiring an additional camera (_i.e._, stereo) at test time. Our choice in depth completion methods belongs to the unsupervised learning paradigm, so that we do not assume access to ground truth depth for training, but only calibrated images and their associated point clouds that are measured by a minimal (optical, inertial) sensor setup.
**Unsupervised depth completion** training typically relies on supervision based on structure-from-motion, whether from stereo or monocular video. Methods trained with stereo [59, 68] require rectified stereo pairs and predict the disparity between the two frames. The supervision signal comes from reconstructing each frame from the other and ensuring left-right consistency between the reconstructions; depth is inversely proportional to disparity and can be obtained in closed form using the focal length and the baseline between the stereo cameras. Similarly, methods that leverage monocular video [52, 56, 57, 59] minimize the forward-backward reconstruction error from a subset of frames in a video to a given temporally nearby reference frame of the same video. To this end, methods typically jointly optimize the predicted depth and the relative pose between the video frames. Both stereo and monocular video training modes reconstruct the sparse point cloud as an additional loss term to ground estimates to metric scale. As 3D reconstruction from 2D image and sparse range measurements is an ill-posed problem, the training objective also includes a local smoothness regularizer. In this work, all of the depth completion models chosen rely on video-based training, which requires a single camera and, if available, an inertial measurement unit (IMU); both are ubiquitous in most devices and suitable for deployment with SLAM and VIO systems.
## III The DEUX Approach
### _Learning Unsupervised Depth Completion_
Given an RGB image \(I:\Omega\subset\mathbb{R}^{2}\mapsto\mathbb{R}^{3}_{+}\) and its sparse point cloud (projected onto the image plane) \(z:\Omega_{z}\subset\Omega\mapsto\mathbb{R}_{+}\), we learn a function \(h_{\theta}(I,z):\Omega\mapsto\mathbb{R}_{+}\), parameterized by \(\theta\), that maps the image and sparse depth into a dense depth map. To train \(h_{\theta}(I,z)\), we assume access to temporally consecutive frames, _i.e._, a monocular video, at times \(t\), \(t-1\), and \(t-2\), and the camera intrinsic calibration matrix
\(K\in\mathbb{R}^{3\times 3}\). We minimize the photometric reprojection error (Eqn. 4) between \(I_{t}\) and its reconstructions \(\hat{I}_{t,t-1}\) and \(\hat{I}_{t,t-2}\) from \(I_{t-1}\) and \(I_{t-2}\), where each reconstruction \(\hat{I}_{t,\tau}\) is obtained via
\[\hat{I}_{t,\tau}(q,\hat{d}(q),p_{\tau,t})=I_{\tau}\big{(}\pi\ p_{\tau,t}\ K^{-1}\ \hat{q}\ \hat{d}(q)\big{)}, \tag{1}\]
where \(\hat{d}:=h_{\theta}(I,z)\) for ease of notation, \(\tau\in\{t-1,t-2\}\) is the time step, \(\hat{q}=[q^{\top}1]^{\top}\) is the pixel location as a homogeneous coordinate, \(p_{\tau,t}\in SE(3)\) is the relative camera motion from time \(t\) to \(\tau\), and \(\pi\) is the canonical perspective projection. Specifically, the photometric reprojection loss is comprised of color consistency and structural consistency terms. Color consistency penalizes the \(L1\) difference between \(I_{t}\) and \(\hat{I}_{t,\tau}\):
\[\ell_{co}(I_{t},\hat{I}_{t,\tau})=\frac{1}{|\Omega|}\sum_{q\in\Omega}|\hat{I}_{t,\tau}(q)-I_{t}(q)|. \tag{2}\]
Structural consistency measures the structural similarity between \(I_{t}\) and \(\hat{I}_{t,\tau}\) using SSIM [70]. We subtract the SSIM score from 1 to penalize \(\hat{I}_{t,\tau}\) for deviations from \(I_{t}\):
\[\ell_{st}(I_{t},\hat{I}_{t,\tau})=\frac{1}{|\Omega|}\sum_{q\in\Omega}\big{(}1-\texttt{SSIM}(\hat{I}_{t,\tau}(q),I_{t}(q))\big{)}. \tag{3}\]
The photometric reprojection loss is their linear combination weighted by their respective \(\lambda\) summed over all frames:
\[\ell_{ph}=\sum_{\tau\in\{t-1,t-2\}}\lambda_{co}\ell_{co}(I_{t},\hat{I}_{t,\tau} )+\lambda_{st}\ell_{st}(I_{t},\hat{I}_{t,\tau}). \tag{4}\]
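To make Eqs. 1-4 concrete, the sketch below (ours, not the authors' released code) implements the inverse warping of Eq. 1 and the photometric loss of Eq. 4 in PyTorch; the 3x3 average-pooled SSIM and the \(\lambda\) values are our own simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    # minimal 3x3 average-pooled SSIM map (a simplification of SSIM [70])
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return (num / den).clamp(0, 1)

def inverse_warp(I_src, d_hat, pose, K):
    # Eq. 1: back-project each pixel with its predicted depth, transform by
    # the relative pose p_{tau,t}, project into the source view, and sample.
    B, _, H, W = I_src.shape
    y, x = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    q = torch.stack([x, y, torch.ones_like(x)]).reshape(1, 3, -1)  # homogeneous pixels
    pts = K.inverse() @ q * d_hat.reshape(B, 1, -1)                # 3D points, ref frame
    pts = torch.cat([pts, torch.ones(B, 1, H * W)], dim=1)         # to homogeneous
    proj = K @ (pose @ pts)[:, :3]                                 # into source view
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,                # normalize for
                        2 * uv[:, 1] / (H - 1) - 1], -1)           # grid_sample
    return F.grid_sample(I_src, grid.reshape(B, H, W, 2), align_corners=True)

def photometric_loss(I_t, I_warp, lam_co=0.15, lam_st=0.85):
    # Eq. 4 = lam_co * Eq. 2 + lam_st * Eq. 3 (lambda values are placeholders)
    l_co = (I_warp - I_t).abs().mean()
    l_st = (1.0 - ssim(I_warp, I_t)).mean()
    return lam_co * l_co + lam_st * l_st
```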
Because 3D reconstruction from 2D images is an ill-posed problem, the use of a regularizer to enforce local smoothness and connectivity over \(\hat{d}\) is needed, following [57, 58]:
\[\ell_{sm}=\frac{1}{|\Omega|}\sum_{q\in\Omega}e^{-|\nabla\hat{I}_{t}(q)|}| \nabla\hat{d}(q)|, \tag{5}\]
where the gradient of \(\hat{d}\) is weighted by the image gradient to allow for depth discontinuities across object boundaries.
Minimizing the reprojection error will reconstruct the scene structure up to an unknown scale [58]. Predictions are grounded to metric scale by minimizing the \(L1\) difference between \(\hat{d}\) and \(z\) (over its domain \(\Omega_{z}\)), as follows:
\[\ell_{sz}=\frac{1}{|\Omega_{z}|}\sum_{q\in\Omega_{z}}|\hat{d}(q)-z(q)|. \tag{6}\]
The unsupervised depth estimation loss, therefore, reads:
\[\ell_{d}=\ell_{ph}+\lambda_{sz}\ell_{sz}+\lambda_{sm}\ell_{sm}. \tag{7}\]
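A minimal sketch of the remaining loss terms and their combination into Eq. 7 follows; the masking convention (zeros outside \(\Omega_{z}\)) and the \(\lambda\) weights are our assumptions, not the paper's settings.

```python
import torch

def smoothness_loss(d_hat, I_t):
    # Eq. 5: image gradients down-weight the depth-gradient penalty so that
    # depth discontinuities are allowed across object boundaries
    wx = torch.exp(-(I_t[..., :, 1:] - I_t[..., :, :-1]).abs().mean(1, keepdim=True))
    wy = torch.exp(-(I_t[..., 1:, :] - I_t[..., :-1, :]).abs().mean(1, keepdim=True))
    dx = (d_hat[..., :, 1:] - d_hat[..., :, :-1]).abs()
    dy = (d_hat[..., 1:, :] - d_hat[..., :-1, :]).abs()
    return (wx * dx).mean() + (wy * dy).mean()

def sparse_depth_loss(d_hat, z):
    # Eq. 6: L1 to the sparse points; z == 0 marks pixels outside Omega_z
    mask = (z > 0).float()
    return ((d_hat - z).abs() * mask).sum() / mask.sum().clamp(min=1.0)

def depth_loss(l_ph, d_hat, z, I_t, lam_sz=1.0, lam_sm=0.1):
    # Eq. 7 (the lambda weights here are illustrative placeholders)
    return l_ph + lam_sz * sparse_depth_loss(d_hat, z) + lam_sm * smoothness_loss(d_hat, I_t)
```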
### _Uncertainty Measures in Depth Estimation_
Measuring uncertainty in depth estimation tasks is typically achieved via image reconstruction [57, 71, 72]. But rather than using the matching cost as a proxy for an uncertainty measure [71] or to guide depth completion learning [57], we use it to provide us with specific scene locations that are likely to contain high depth uncertainty. For instance, due to inconsistent robot motions or inherently challenging scenes, more informative data can be collected from robot exploration around these areas to address potential failure modes of the model. We assume a set of pretrained weights obtained from minimizing Eqn. 7 in Sect. III. To determine the error modes of a model, we compute the photometric reprojection errors between \(I_{t}\) and its reconstructions \(\hat{I}_{t,t-1}\) and \(\hat{I}_{t,t-2}\) from \(I_{t-1}\) and \(I_{t-2}\) using Eq. 1. We then compute the mean of these errors over all pixels in the image to obtain a scalar residual value \(\delta_{\tau}\) for each source frame \(I_{\tau}\), as follows:
\[\delta_{\tau}=1-e^{-\frac{1}{|\Omega|}\sum_{q\in\Omega}|\hat{I}_{t,\tau}(q)-I_{t}(q)|} \tag{8}\]
We use \(\delta_{\tau}\in\mathbb{R}_{+}\) with an \(L1\) penalty as a proof-of-concept, but it can be replaced by other metrics, _e.g._, SSIM [57].
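In code, the residual of Eq. 8 reduces to a few lines; the per-pose bookkeeping (storing \(\delta_{\tau}\) alongside the camera pose it came from, for later use in Alg. 1) is our reading of the pipeline rather than a documented interface.

```python
import torch

def frame_residual(I_warp, I_t):
    # Eq. 8: mean L1 reprojection error mapped to (0, 1); values near 1
    # flag views whose depth predictions are likely unreliable
    return 1.0 - torch.exp(-(I_warp - I_t).abs().mean())

# during exploration, residuals are recorded with the poses they came from,
# e.g., residual_log.append((pose_tau, frame_residual(I_warp, I_t).item()))
```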
### _Robot Exploration_
Our goal is to obtain an exploration policy to perform robot navigation tasks. We use a Markov Decision Process \(\mathcal{M}\) with discrete state \(\mathbf{s}_{t}\in\mathcal{S}\) and action \(\mathbf{a}_{t}\in\mathcal{A}\) spaces, with a transition operator \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\), to model our navigation task as a finite-horizon \(T\) problem. Our navigation policy will maximize the objective function given by
\[J=\mathbb{E}_{\tau\sim\pi}\left[\sum_{t=1}^{T}r(\mathbf{s}_{t},\mathbf{a}_{t})\right] \tag{9}\]
where \(\pi:\mathcal{S}\rightarrow\mathcal{P}(\mathcal{A})\) is the navigation policy we want to design (reported in Alg. 1), and \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function we want to maximize (given by the negative of Eq. 8). We define \(\pi\) as an algorithm based on our residual values \(\delta_{\tau}\), as described in the next section.
```
Data: map \(\mathfrak{M}\), time budget \(T_{exp}\), residual pose \(p_{\delta}\)
while \(T_{exp}\) not reached do
  \(p_{f}\) = SampleFrontiers(\(\mathfrak{M}\));
  \(p^{tgt}\) = SampleDepthGuidedTarget(\(p_{f}\), \(p_{\delta}\));
  while not reached \(p^{tgt}\) do
    \(\hat{\mathfrak{M}}\) = ProcessMap(\(\mathfrak{M}\));
    \(Path_{tgt}\), \(\Delta_{next}\) = AStarPlanner(\(\hat{\mathfrak{M}}\), \(p^{tgt}\));
    if Reached \(p^{tgt}\) then
      break;
    else if \(Path_{tgt}\) is not None then
      \(p^{next}\) = \(Path_{tgt}[\Delta_{next}]\);
      action = get_action(\(p^{next}\));
    else
      action = random(); /* unstuck */
    end if
  end while
end while
```
**Algorithm 1**Depth Uncertainty-guided Nav. Policy
We consider a robot equipped with a depth estimation model, which is instantiated to navigate an environment while computing \(\delta_{\tau}\) for every time step. We note that this exploration stage is limited by a maximum number of time steps \(T_{exp}\), and that the agent is spawned at an
initial pose \(p_{0}\in\mathit{SE3}\), pre-defined by the 3D simulator configuration. Given the dense depth map outputs of the perception model \(\hat{d}_{\tau}\in\mathbb{R}_{+}^{h\times w}\), we follow [6, 73] to build a 2D top-down occupancy map of the environment \(\mathfrak{M}\in\mathbb{R}^{h\times w}\), which indicates whether a certain \((x,y)\) location is navigable or occupied. This egocentric occupancy map \(\mathfrak{M}\) is used to compute the frontiers between free and occupied space, which, along with \(\delta_{\tau}\), are used to generate the next depth-informed target locations. Targets are chosen one at a time by prioritizing regions with high uncertainty. When a target is selected, the A-Star planning algorithm [74] is used to process the map \(\mathfrak{M}\) and generate the shortest path from the current position of the robot. Once the robot reaches the desired location, the next target is sampled along with the path to navigate there, following Alg. 1.
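As an illustration of how SampleDepthGuidedTarget in Alg. 1 could prioritize uncertain regions, the sketch below scores each frontier by the residual of the nearest recorded pose and samples a target with softmax weighting. The specific scoring and sampling rules are our assumptions, not the exact implementation.

```python
import numpy as np

def sample_depth_guided_target(frontiers, residual_poses, residuals, temp=0.1):
    """frontiers: (F, 2) candidate cells from SampleFrontiers;
    residual_poses: (P, 2) locations where Eq. 8 residuals were logged;
    residuals: (P,) residual values. Returns one (x, y) target."""
    dists = np.linalg.norm(frontiers[:, None] - residual_poses[None], axis=-1)
    scores = residuals[dists.argmin(axis=1)]   # residual of nearest logged pose
    probs = np.exp(scores / temp)
    probs /= probs.sum()                       # prefer high-uncertainty frontiers
    return frontiers[np.random.choice(len(frontiers), p=probs)]
```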
## IV Experiments
### _Experimental Setup_
**Interactive Environments**. We leverage the Habitat-Sim simulator [75] and its Habitat-Lab API as our experimental platform for embodied robot exploration and data collection. Within this platform we use two _interactive datasets_, which allow autonomous robot navigation, built from 3D scans of real-world environments: Matterport3D (MP3D) [76], and Habitat-Matterport 3D Research Dataset (HM3D) [77]. These datasets contain 30.22k and 112.50k \(m^{2}\) of overall navigable space, respectively. MP3D provides 90 different building-scale scenes. HM3D is much larger, with 1,000 building-scale residential, commercial, and civic spaces. In our experiments we use the default train/validation/test split sets provided by the Habitat-Sim setup. This results in 61/11/18 scenes for MP3D, and 800/10/100 for HM3D.
**Data Collection Pipeline**. We follow standard practice for data collection [6, 75] and deploy a robot, equipped with a navigation algorithm (Fig. 2-a), for a maximum of 500 time steps per scene. Our data pipeline comprises a four-stage process: robot exploration, sparse depth sampling, data verification, and data preprocessing. The action space of the robot consists of four discrete actions: go forward, turn left, turn right, or stop. We instantiate a robotic agent that has access to a continuous sensory input stream of rendered RGB-D frames and 6D poses of its camera. RGB images and ground-truth depth maps are rendered at a resolution of 400\(\times\)400 pixels. We highlight that we do not use these ground-truth dense depth maps for training, and instead obtain extremely sparse depth maps by sampling \(\sim\)1500 sparse depth points via the Harris corner detector [78] (covering \(\sim\)1.0% of the full depth map). This allows us to mimic sparse depth maps produced by SLAM/VIO systems [58].
**Evaluation Metrics**. We address depth completion tasks in a robotic vision scenario, where an embodied agent can explore a particular scene using autonomous navigation algorithms. We allow the robot to use several navigation policies to explore a 3D scene and collect data. This is in contrast to existing evaluations where the model is trained and tested using the same exploration policy (_e.g._, a human-defined path); here our benchmark evaluates models on data collected from unseen policies. This allows us to better evaluate the generalization capabilities of depth completion algorithms to novel viewpoints and scenes. For evaluating depth completion performance, we use four standard metrics: MAE, RMSE, iMAE, and iRMSE (we refer the reader to [58] for their definitions).
**Conventional Exploration**. We use conventional _classic_ and _learned_ navigation paradigms, typically designed for ensuring full area coverage or point-goal localization tasks, for data collection and benchmarking. We use 8 different navigation algorithms: 3 heuristic-based (Random, Frontier, and Oracle) [6, 73] and 5 reinforcement learning-based (Curiosity, Semantic Reconstruction, Coverage, Smooth Coverage, and Novelty) methods [6]. The Random agent is the simplest baseline and uniformly samples random navigation actions. This agent was found to get stuck when the robot bumped into certain obstacles/objects within a scene; thus, we added heuristics to un-stick the agent when this happens. The Frontier agent is based on classic frontier-based exploration [6, 73] that uses the A-Star planning algorithm [74] to visit the frontiers (_i.e._, edges between unexplored and free spaces). The Oracle agent is designed to provide an upper-bound exploration performance by having access to pre-sampled, scene-specific target locations to navigate [6].
The remaining five learned exploration agents are based on RL training. We use the pretrained RL navigation policies provided by [6], where each policy is trained with a specific reward function using the PPO algorithm [79]. A brief description of each policy is provided as follows. Curiosity: encourages visiting states that are poorly predicted by a forward-dynamics model. Object-based semantic reconstruction: rewards visiting states that allow better semantic reconstruction. Coverage: maximizes the overall area visited. Novelty: rewards visiting less frequently visited states. Smooth Coverage: bridges the gap between Novelty and Coverage by rewarding based on the number of times a region was observed. This allows the discovery of unexplored parts by navigating across less frequently visited locations. We refer the reader to [6] for additional details.
**Proposed Exploration Approach**. In addition to these conventional navigation paradigms, we collect training data with our proposed active depth-guided exploration approach. Similar to conventional learning-based paradigms, which require training an _RL policy_ (Fig. 2-b), our depth-guided navigation approach trains a depth _seed model_ (Fig. 2-b)
Fig. 2: **Overview of our approach**. Existing work on navigation trains an RL-based policy to guide exploration [6]; we instead train a seed model that yields residual poses (\(p_{\delta}\)) to guide exploration toward locations with high depth uncertainty.
which enables visiting locations with high depth uncertainty using our navigation policy (Alg. 1). Thus, we first use a Random agent to collect data and train our seed model (Fig. 2-a,b). We then train our depth completion model (Fig. 2-right) from scratch using only the data collected in our depth-guided exploration stage (Fig. 2-middle). We highlight that although both the seed model and the depth completion model share the same architecture, they have different functionalities and are not used interchangeably. We also note that the seed model can be replaced by any other pretrained depth model that can be taken off-the-shelf, _e.g._, not necessarily requiring weights trained on a random agent. Furthermore, our approach is agnostic to the depth completion model chosen, supervised or unsupervised, as we report in Section IV-B and Tables I, II.
**Depth Completion Training Details**. We evaluate the depth completion performance yielded by conventional navigation methods and our proposed depth-guided exploration approach using three unsupervised (KBNet [58], VOICED [3], FusionNet [56]) and one supervised (ScaffNet [56]) depth completion models. The predicted and evaluated depth values were both set within the range of [0.1, 10.0] meters. We train these models, using 384\(\times\)384 crops, for 25 epochs with the Adam optimizer [80] and a learning rate of \(1e^{-4}\) for the initial 10 epochs and \(5e^{-5}\) thereafter, as we found through cross-validation experiments that these settings worked well for each particular model. The remaining hyperparameters are set following their respective papers. For KBNet, VOICED and FusionNet, training on MP3D takes \(\sim\)2h and on HM3D \(\sim\)24h, while for ScaffNet it takes \(\sim\)0.5h and \(\sim\)6h, respectively. All training uses a single 20GB NVIDIA RTX A4500 GPU with a 24-Core AMD Ryzen 3960X CPU.
### _Results_
We compare the performance of depth completion models trained on data collected by several types of navigation paradigms, along with our proposed depth-guided exploration approach. Table I shows the results of _unsupervised_ (KBNet, VOICED, FusionNet) and _supervised_ (ScaffNet) methods on the MP3D test set.
**Classic Exploration**. In Table I, for the KBNet model for instance, the Frontier and Oracle methods outperform the Random agent. This is expected, as these agents are designed to visit specific locations and are thus able to collect more informative data. The Oracle agent, which has access to specific goal references per scene (pre-set by a human expert), is significantly better (in all metrics) than the Frontier agent, which only performs heuristic-based search, as described in the previous subsection. A similar trend is also observed for VOICED, FusionNet and ScaffNet, although not across all metrics, as KBNet is a more recent work.
**Learned Exploration**. The use of learning-based exploration policies substantially improves depth completion performance over the classic agents. For instance, in Table I for KBNet, we note that the Smooth Coverage agent outperforms the Oracle and Random agents by 7% and 22% in terms of MAE, respectively. The fact that the Smooth Coverage (Sm. Cov.) approach improves over the Novelty and Coverage agents is not surprising, as it combines the best of both policies in terms of reward functions. As for the Semantic Reconstruction (Sem. Rec.) and Curiosity agents, their lower performance among the learned agents is likely due to the inherently sparser nature of their reward functions, which are defined by predicted semantic-based object locations and novel predicted robot states, respectively.
**Proposed Depth-guided Exploration**. With the insights provided by benchmarking existing navigation algorithms, we now evaluate the performance of our proposed depth-guided exploration agent. As shown in Table I, our proposed agent outperforms all the exploration methods by significant margins using either unsupervised or supervised depth completion models. Overall, our exploration approach outperforms existing learning-based (semantic reconstruction), heuristic (oracle), and random exploration techniques by 19%, 18% and 48%, respectively, across all metrics and depth models. This demonstrates that our agent visits locations with high depth uncertainty and collects more informative data specifically for depth perception tasks.
Fig. 3-left shows qualitative depth completion results from Table I for KBNet, while Fig. 3-right shows the robot exploration trajectories obtained by the respective navigation paradigms during data acquisition, also for KBNet.
Here we highlight that existing exploration methods such as smooth coverage (best of learning-based methods) or oracle (best of heuristic techniques), which are particularly designed to maximize specific reward functions or cover larger areas, respectively, do not necessarily achieve good depth estimation. In contrast, our task-driven exploration approach shows that visiting locations with high depth uncertainty yields significantly better depth completion results.
Table II reports the results on HM3D. We note that for this particular dataset we do not compare against the remaining navigation paradigms (as in Table I), given that those exploration agents are not provided/trained on HM3D in [6]. However, in Table III, we provide zero-shot generalization results of all the exploration methods (obtained by training all depth models on MP3D; from Table I). Additionally, Table II reports the zero-shot generalization results (_i.e._, depth models trained on MP3D with DEUX*), which yield better results compared to a random exploration approach trained on HM3D. Across all metrics, we observe trends for our approach (DEUX) similar to those obtained in Table III, outperforming a heuristic-based paradigm by over 40% on average across all metrics and depth models.
**Zero-Shot Generalization**. In Table III we show cross-dataset generalization results, _i.e._, train on MP3D and evaluate on HM3D. We evaluate each depth model, trained on MP3D from Table I, on the HM3D test set, and report the average for all models (KBNet, VOICED, FusionNet, and ScaffNet) across each metric. Overall, our approach outperforms all existing exploration methods including smooth coverage (best of RL-based exploration), oracle and random agents by 5%, 17% and 24%, respectively. Note that these results are for zero-shot transfer learning alone.
**Influence of Exploration Paradigms**. Now we compare the influence of existing paradigms for handling the exploration and data collection phases, see Fig. 2, introduced in [6] (area coverage, novelty, curiosity, and semantic reconstruction) against our approach. From Table I, for the best performing model (KBNet), we highlight that our exploratory approach yields better results than conventional paradigms as follows: DEUX \(>\) Smooth Coverage \(>\) Novelty \(>\) Sem. Rec. \(>\) Curiosity. This provides a key insight on the utility of navigation paradigms for learning specific downstream tasks such as depth completion. Competent depth completion learning was achieved when the robot/camera visited locations with high depth uncertainty, which are not necessarily those with high novelty or low coverage within an environment.
## V Conclusions
We presented a novel exploration paradigm for unsupervised depth perception tasks in the context of robotic navigation. Our proposed task-informed exploratory approach is based on photometric reprojection residuals, which are used for robot path planning across locations with high depth uncertainty. Extensive experimental results using four depth completion models on two interactive datasets have shown that our approach outperforms existing (classic and learning-based) exploration techniques by significant margins. Our key insight is that existing navigation paradigms, typically used for data collection and training of a wide range of robotic vision applications, do not necessarily provide task-specific data points to achieve competent learning for a particular downstream application. Future work includes extending this to other robot perception tasks, both in simulated environments and in real-world deployment.
|
2301.00254 | Depression Diagnosis and Analysis via Multimodal Multi-order Factor
Fusion | Depression is a leading cause of death worldwide, and the diagnosis of
depression is nontrivial. Multimodal learning is a popular solution for
automatic diagnosis of depression, and the existing works suffer two main
drawbacks: 1) the high-order interactions between different modalities can not
be well exploited; and 2) the interpretability of the models is weak. To remedy
these drawbacks, we propose a multimodal multi-order factor fusion (MMFF)
method. Our method can well exploit the high-order interactions between
different modalities by extracting and assembling modality factors under the
guidance of a shared latent proxy. We conduct extensive experiments on two recent
and popular datasets, E-DAIC-WOZ and CMDC, and the results show that our method
achieves significantly better performance compared with other existing
approaches. Besides, by analyzing the process of factor assembly, our model can
intuitively show the contribution of each factor. This helps us understand the
fusion mechanism. | Chengbo Yuan, Qianhui Xu, Yong Luo | 2022-12-31T17:13:06Z | http://arxiv.org/abs/2301.00254v1 | # Depression Diagnosis and Analysis via Multimodal Multi-Order Factor Fusion
###### Abstract
Depression is a leading cause of death worldwide, and the diagnosis of depression is nontrivial. Multimodal learning is a popular solution for automatic diagnosis of depression, and the existing works suffer two main drawbacks: 1) the high-order interactions between different modalities cannot be well exploited; and 2) the interpretability of the models is weak. To remedy these drawbacks, we propose a multimodal multi-order factor fusion (MMFF) method. Our method can well exploit the high-order interactions between different modalities by extracting and assembling multi-order factors across modalities under the guidance of a shared latent proxy. We conduct extensive experiments on two recent and popular datasets, E-DAIC-WOZ and CMDC, and the results show that our method achieves significantly better performance compared with other existing approaches. Besides, by analyzing the process of factor assembly, our model can intuitively show the contribution of each factor. This helps us understand the fusion mechanism.
Chengbo Yuan\({}^{\dagger}\), Qianhui Xu\({}^{\dagger}\) and Yong Luo\({}^{\dagger}\)

\({}^{\dagger}\)Wuhan University, {michael.yuan.cb,xuqianhui,luoyong}@whu.edu.cn

_Index terms_: Depression, multimodal learning, factor fusion, multi-order.
## 1 Introduction
Depression is a common and serious mental disorder around the world. It is estimated that about \(3.8\%\) of the population is affected by the disease, and more than \(700,000\) patients die by suicide every year due to depression, according to statistics of the World Health Organization (WHO). Due to the exhaustive consumption of human resources and the heavy dependence on subjective judgement in traditional diagnosis approaches, there is increasing interest in utilizing machine learning for automatic diagnosis of depression, and multimodal learning is one of the most popular solutions.
The existing multimodal learning works either focus on improving the feature extraction for different modalities, or on designing better fusion strategies. To extract features for long-time sequential data, some classical models such as LSTM and CNN are adopted in [1, 2, 3], while more recent approaches employ transformers to alleviate the forgetting problem caused by the long duration of patient interviews [4]. In regard to modality fusion, in addition to simple concatenation, attention mechanisms are applied to adjust the modality contributions [5]. Some other approaches, such as CubeMLP [6] and Multi-Agent [7], can exploit more complex interactive relationships between different modalities. Although these approaches have achieved certain success, they suffer from two main drawbacks: 1) lack of exploitation of different-order interactions across modalities; and 2) relatively low interpretability, which leads to confusion about the fusion mechanism.
To remedy these drawbacks, we propose a Multimodal Multi-order Factor Fusion method (MMFF) based on modality multi-order factor extraction and assembly. The idea of MMFF is to first extract multi-order factors across different modalities and then integrate the factors at different orders for the final prediction. The whole process is under the guidance of a shared-subspace proxy, which is utilized to generate adaptive weights for modality factor fusion at different orders. The learned weights can intuitively show the contribution of each factor in the fusion process. Besides, we extract features for long-time sequential data (such as videos and audio) from a new perspective based on trajectory preservation, where a novel frame selection pipeline based on principal component analysis (PCA) and the Midimax algorithm is developed. This helps us to effectively select the key frames and compress the time series.
To summarize, our main contributions are as follows:
* We propose a novel multimodal multi-order factor fusion (MMFF) method for the diagnosis of depression. The proposed method is able to exploit the modality interactions at different levels of orders.
* We design a latent proxy to generate adaptive weights for different modality factors (and also different orders). This improves the interpretability of our model since the contribution of each factor can be identified at different levels of orders. This enables the fusion mechanism analysis.
* We develop a novel method to select key frames for long time sequential data by integrating the PCA and Midimax algorithm.
The experiments are conducted on two recent datasets: the Extended Distress Analysis Interview Corpus (E-DAIC
WOZ) [1, 8] and the Chinese Multimodal Depression Corpus (CMDC) [9].
## 2 The Proposed Method
Figure 1 is an illustration of the overall structure of the proposed Multimodal Multi-order Factor Fusion (MMFF) method. In this section, we first introduce the feature extractors for different modalities, where a novel frame selection pipeline is developed. Then an encoder-decoder module is utilized to construct a proxy to generate adaptive weights for combination. Finally, the factors extracted from different modalities are integrated at different levels of orders, and the integrated representations are further adaptively fused for final prediction.
### Feature Extraction
In this paper, we use a classic sentence to vector (sent2vec) encoder, the Universal Sentence Encoder [10] together with BiLSTM at the sentence level [11] to extract the textual features for the patient's interview. We use the pretrained USE to extract the sentence embedding, and then apply a two-layer BiLSTM and three-layer MLP to extract the features.
In regard to the audio and video inputs, we utilize the low-variance filtering, PCA and Midimax algorithms together with a two-layer BiLSTM to extract the features. The process is illustrated in Figure 2, where the original features for each video frame are extracted following [1, 8] for the E-DAIC-WOZ dataset. For the CMDC dataset, the original feature is a stacking of the \(12\) feature vectors, each corresponds to a certain question in the interview. Since the number of frames in each video or audio sequence may be very large (such as in the E-DAIC-WOZ dataset), the computational cost may be very high and there may be much redundant information. Therefore, we process these original features using the following strategy.
Firstly, low-variance filtering is performed on the original features. Suppose that the original feature of the \(i\)-th sample after min-max normalization is \(X_{i}\) (\(i=\)
Figure 1: Illustration of the proposed Multimodal Multi-order Factor Fusion (MMFF) method. Specifically, our MMFF contains two main parts: 1) a latent space construction module based on the encoder-decoder mechanism to extract a proxy, which is utilized to generate adaptive weights to fuse different modality factors; 2) a multi-order combination part that applies multiple encoders to generate factors, and integrate them under the guidance of the shared-subspace proxy at different levels of orders.
Figure 2: The proposed frame feature extractor. A low-variance filter is first used to remove insignificant features. Then, the Midimax algorithm is applied to the first principal component of the original feature to acquire the key frames.
\(1,2...N\)), where \(N\) is the size of the training set, \(X_{i}^{j}\) is the \(j\)-th frame of the \(i\)-th sample, \(K\) is the feature dimension of a frame, and \(M_{i}\) is the number of frames in \(X_{i}\). Let \(X_{i}^{j}=[x_{i}^{j1},x_{i}^{j2},\cdots,x_{i}^{jK}]^{T}\); then the \(k\)-th dimension of features that satisfies the following condition is filtered:
\[\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M_{i}}\sum_{j=1}^{M_{i}}(x_{i}^{jk}-E_{j}[x_ {i}^{jk}])^{2}\leq\beta, \tag{1}\]
where \(\beta\) is a hyperparameter (we set \(\beta=0.01\) in this paper). Specifically, \(\frac{1}{M_{i}}\sum_{j=1}^{M_{i}}\left(x_{i}^{jk}-E_{j}[x_{i}^{jk}]\right)^{2}\) is the variance of \(X_{i}\) at the \(k\)-th dimension. After the low-variance filtering, some redundant features are removed.
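A numpy sketch of this filter follows (ours; the function name and return convention are assumptions). Variable shapes follow the notation above.

```python
import numpy as np

def low_variance_filter(X_list, beta=0.01):
    """Drop feature dimensions whose per-sample temporal variance, averaged
    over the training set, is <= beta (the condition of Eq. 1). X_list holds
    the min-max normalized arrays X_i, each of shape (M_i, K)."""
    mean_var = np.mean([X.var(axis=0) for X in X_list], axis=0)  # length-K vector
    keep = mean_var > beta
    return [X[:, keep] for X in X_list], keep
```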
Then we compress the sequence based on variance extraction and the Midimax algorithm. The main idea of the Midimax algorithm is to preserve the trajectory of the sequence as much as possible under the premise of satisfying a certain compression ratio. Considering that Midimax is mainly applicable to one-dimensional data, we first perform a maximum-variance projection of the features. That is, we use the PCA algorithm to extract one principal component to transform the sequence to one dimension, and then apply Midimax to the projected sequence.
The following is a brief description of the Midimax algorithm we apply: given the compression ratio \(\delta\), which is an integer, we first divide the sequence into \([s_{1},s_{2}...s_{Ns}]\), where \(Ns\) is the number of time slices, \(s_{i}\) is the \(i\)-th slice, and the length of each time slice is \(\delta\). Let \(s_{i}=[s_{i}^{1},s_{i}^{2}...s_{i}^{\delta}]\), and suppose that the indices of the maximum, minimum and median in the time slice are \(p_{1}\), \(p_{2}\), \(p_{3}\), respectively. Then we sort \([p_{1},p_{2},p_{3}]\) as \([q_{1},q_{2},q_{3}]\) (\(q_{1}\leq q_{2}\leq q_{3}\)) and finally compress \(s_{i}\) into \([s_{i}^{q_{1}},s_{i}^{q_{2}},s_{i}^{q_{3}}]\). The scheme is shown in Figure 3. Finally, a BiLSTM is applied to extract the features.
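The sketch below (ours, not the released code) combines the one-component PCA projection with the Midimax selection; for simplicity it drops any trailing partial slice and returns the kept frames in temporal order.

```python
import numpy as np
from sklearn.decomposition import PCA

def midimax_compress(X, delta):
    """X: (M, K) frame features; delta: integer compression ratio.
    Keeps the frames at the min, median and max of the first principal
    component within each length-delta slice."""
    pc = PCA(n_components=1).fit_transform(X).ravel()  # maximum-variance projection
    kept = []
    for start in range(0, len(pc) - delta + 1, delta):
        seg = pc[start:start + delta]
        order = np.argsort(seg)
        picks = {order[0], order[len(seg) // 2], order[-1]}  # min, median, max
        kept.extend(sorted(start + p for p in picks))
    return X[np.array(kept, dtype=int)]
```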
### Proxy Construction
In factor fusion, we need to calculate the factor contributions for each modality. Considering that the calculation of factor contributions should be consistent, we first map the three modalities into a common low-dimensional subspace, which is learned by reconstructing the inputs with minimum error. That is, in order to make the new subspace effectively represent the original space, a reverse projection is conducted to map the subspace back to the original space, and the mean square error (MSE) between the recovered space and the original space is employed as the loss function, as illustrated in Figure 4.
Formally, let \(x_{t}\), \(x_{a}\), \(x_{v}\) be the extracted features for the textual, audio and visual modalities respectively, we learn a mapping function \(F(\cdot)\), and the subspace mapping can be expressed as:
\[\begin{split}& y_{mod}=F_{mod}(x_{mod}),\,\,\,mod=\{t,a,v\}, \\ & z=\frac{1}{3}(y_{t}+y_{a}+y_{v}),\end{split} \tag{2}\]
where the dimensions of \(y_{mod}\) and the latent representation \(z\) are smaller than that of \(x_{mod}\). The latent subspace is learned by using the following loss function:
\[\begin{split}&\mathcal{L}_{latent}=\frac{1}{3}\sum_{mod\in\{t,a,v\} }(\hat{x}_{mod}-x_{mod})^{2},\\ &\hat{x}_{mod}=F_{mod}^{\prime}(z).\end{split} \tag{3}\]
The learned \(z\) is utilized as a proxy to guide the subsequent factor fusion.
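A compact PyTorch rendering of Eqs. 2-3 follows; the two-layer Tanh encoders match the setup described in Sec. 3.2, while the hidden and latent sizes are our assumptions.

```python
import torch
import torch.nn as nn

class LatentProxy(nn.Module):
    """Eqs. 2-3: encoders F_mod map each modality into a shared subspace,
    decoders F'_mod reconstruct the inputs; the mean of the three
    embeddings is the proxy z. Layer sizes are illustrative."""
    def __init__(self, dims, d_latent=32, d_hidden=64):
        super().__init__()
        self.enc = nn.ModuleDict({m: nn.Sequential(
            nn.Linear(d, d_hidden), nn.Tanh(), nn.Linear(d_hidden, d_latent))
            for m, d in dims.items()})
        self.dec = nn.ModuleDict({m: nn.Linear(d_latent, d) for m, d in dims.items()})

    def forward(self, x):  # x = {'t': x_t, 'a': x_a, 'v': x_v}
        y = {m: self.enc[m](x[m]) for m in x}
        z = sum(y.values()) / len(y)                                     # Eq. 2
        loss = sum(((self.dec[m](z) - x[m]) ** 2).mean() for m in x) / len(x)  # Eq. 3
        return z, loss

# usage (feature dimensions are hypothetical):
# proxy = LatentProxy({'t': 512, 'a': 128, 'v': 128}); z, l_latent = proxy(x)
```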
### Factor Extraction & Assembly
There are many approaches for multimodal fusion, but most of them have two main issues: the incomplete exploitation of the interactive information across modalities and low interpretability. Our MMFF model alleviates these issues by exploiting multi-order interactions between different modalities in the combination, which also provides a certain degree of interpretability for fusion mechanism analysis.
The factor fusion process at different levels of orders is illustrated in Figure 5. First, the features of each modality are decomposed into factors of different orders, where a \(p\)-order factor indicates that the factor comes from \(p\) modalities. Then, for the \(p\)-order factor, there will be \(C_{3}^{p}\) subfactors, where \(C_{3}^{p}\) is the combination number. How each factor is extracted is described as follows.
Let \(mod\) denote a single modality, \(com\) signify a modality combination, and \(H(\cdot)\) be a certain mapping function; we first
Figure 4: Illustration of the latent space module for the proxy extractor. The encoder-decoder form is adopted to maximize the sharing between modality subspaces while preserving the modality information.
Figure 3: The Midimax Algorithm (\(\delta\)=6). To preserve the original trajectory, the maximum, minimum and median are picked up for each time slice under the given compression ratio.
calculate the contribution of each modality for the factors of different levels of orders. This is achieved by using the constructed proxy to generate the adaptive weights. Specifically:
\[\gamma^{k}=H_{k}(z),\;\;k=1,2,3, \tag{4}\]
where \(\gamma^{k}=[\gamma^{k}_{t},\gamma^{k}_{a},\gamma^{k}_{v}]^{T}\), which satisfies \(\gamma^{k}_{t}+\gamma^{k}_{a}+\gamma^{k}_{v}=1\). The 1st-order factor \(v^{1}_{mod},\;mod\in\{t,a,v\}\), is calculated as follows:
\[v^{1}_{mod}=\gamma^{1}_{mod}P^{T}_{mod}G^{mod}_{1}(x_{mod}),\;\;mod=\{t,a,v\}, \tag{5}\]
where \(P^{T}_{mod}\) is a projection matrix, \(G(\cdot)\) is an encoder and \(\gamma^{1}_{mod}\) is the contribution of a certain modality at the 1st order. Given the contribution \(\gamma^{2}_{mod}\) of each modality at the 2nd order, we use the multimodal low-rank bilinear (MLB) approach [12] to construct the 2nd-order factors, i.e.,
\[v^{2}_{ta} =P^{T}_{ta}\sigma(\gamma^{2}_{t}G^{t}_{2}(x_{t})\circ\gamma^{2}_ {a}G^{a}_{2}(x_{a})), \tag{6}\] \[v^{2}_{av} =P^{T}_{av}\sigma(\gamma^{2}_{a}G^{a}_{2}(x_{a})\circ\gamma^{2}_ {v}G^{v}_{2}(x_{v})),\] \[v^{2}_{tv} =P^{T}_{tv}\sigma(\gamma^{2}_{t}G^{t}_{2}(x_{t})\circ\gamma^{2}_ {v}G^{v}_{2}(x_{v}))\]
where \(\sigma(\cdot)\) is a certain activation function and \(\circ\) is the Hadamard product. Then the MLB approach is extended to induce the 3rd-order factor:
\[v^{3}_{tav}=P^{T}_{tav}\sigma(\gamma^{3}_{t}G^{t}_{3}(x_{t})\circ\gamma^{3}_{ a}G^{a}_{3}(x_{a})\circ\gamma^{3}_{v}G^{v}_{3}(x_{v})). \tag{7}\]
We then combine the obtained subfactors at different levels of orders. The integrated 1st-order factor \(v^{1}\), 2nd-order factor \(v^{2}\) and 3rd-order factor \(v^{3}\) are obtained by simply adding the subfactors and then performing a projection, i.e.,
\[v^{1} =P^{T}_{1}(v^{1}_{t}+v^{1}_{a}+v^{1}_{v}), \tag{8}\] \[v^{2} =P^{T}_{2}(v^{2}_{ta}+v^{2}_{av}+v^{2}_{tv}),\] \[v^{3} =P^{T}_{3}v^{3}_{tav}.\]
The final fusion feature is composed of all the 1st, 2nd and 3rd-order factors. Similarly, we use the proxy to calculate the contribution of each order, and then weightedly fuse the integrated factors to obtain the final feature \(v\):
\[v=\gamma_{1}v^{1}+\gamma_{2}v^{2}+\gamma_{3}v^{3}. \tag{9}\]
Here,
\[\gamma=[\gamma_{1},\gamma_{2},\gamma_{3}]^{T}=H_{com}(z),\;\;s.t.\;\;\sum_{i= 1}^{3}\gamma_{i}=1. \tag{10}\]
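To summarize Eqs. 4-10 operationally, the sketch below assembles the weighted factors. The softmax heads enforcing the sum-to-one constraints, the dictionary layout of the encoders \(G\), projections \(P\) and weight heads \(H\), and the Hardtanh choice for \(\sigma\) are our assumptions, not released code.

```python
import torch

def fuse_factors(x, z, G, P, H, sigma=torch.nn.functional.hardtanh):
    """x: dict of modality features; z: proxy; G[k][m]: order-k encoder of
    modality m; P[...]: linear projections; H[k], H['com']: weight heads."""
    mods = ('t', 'a', 'v')
    g = {k: torch.softmax(H[k](z), -1) for k in (1, 2, 3)}          # Eq. 4
    e = {(k, m): g[k][..., i:i + 1] * G[k][m](x[m])
         for k in (1, 2, 3) for i, m in enumerate(mods)}
    v1 = P[1](sum(P[m](e[1, m]) for m in mods))                     # Eqs. 5 and 8
    v2 = P[2](sum(P[a + b](sigma(e[2, a] * e[2, b]))
                  for a, b in (('t', 'a'), ('a', 'v'), ('t', 'v'))))  # Eq. 6
    v3 = P[3](P['tav'](sigma(e[3, 't'] * e[3, 'a'] * e[3, 'v'])))   # Eq. 7
    gam = torch.softmax(H['com'](z), -1)                            # Eq. 10
    return gam[..., 0:1] * v1 + gam[..., 1:2] * v2 + gam[..., 2:3] * v3  # Eq. 9
```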
For our MMFF, a hierarchical training strategy is adopted. That is, we first train the latent subspace solver, then freeze its parameters, and finally train the backbone of our MMFF. After obtaining the final fusion feature \(v\), we use a three-layer MLP to predict the patient's PHQ score. Let \(\hat{y}\) be the predicted score and \(y\) be the real label; the loss function used in our training process is the MSE loss, i.e.,
\[\mathcal{L}_{MSE}=\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_{i}-y_{i})^{2}. \tag{11}\]
Through the adaptive training of the contributions \(\gamma^{1}\), \(\gamma^{2}\), \(\gamma^{3}\) and \(\gamma\), we can intuitively acquire the importance of each modality factor in the combination, so as to better understand the fusion mechanism.
## 3 Experiments
### Dataset
Our experiments are mainly conducted on two recent depression diagnosis datasets, namely, the Extended Distress Analysis Interview Corpus (E-DAIC-WOZ) [1, 8] and Chinese Multimodal Compression Corpus (CMDC) [9].
E-DAIC-WOZ is an extended version of the DAIC-WOZ dataset [8], and it is also the dataset for the AVEC2019 DDS Challenge [1]. E-DAIC-WOZ provides video, audio and text modality data. For the text modality, the interview transcripts are provided. For the video modality, only the extracted features are provided to protect the privacy of patients. For the audio modality, in addition to the wav files, some features are also provided. The dataset contains \(163\) training samples, \(56\) validation samples and \(56\) test samples.
CMDC is a newly proposed dataset collected by [9]. The dataset was obtained from the audio-visual records of patients' clinical interviews in a Chinese environment. CMDC predefines 12 questions for the interview and stores the answer data for these 12 questions. Similarly, in order to ensure the privacy of patients, the dataset only provides extracted features for the audio and video modalities. The dataset consists of \(45\) samples that have complete modalities: \(19\) samples for patients with depression and \(26\) samples for the control group. Considering the relatively small amount of data, the authors provide a stratified 5-fold cross validation to obtain the final performance of the model.
Figure 5: The proposed contribution decomposition methodology. The core idea is to decompose the interaction gain to all modality combinations explicitly, so that full coverage of interaction extraction is assured.
### Experiment Setup
Our experimental settings are as follows: the activation function for both the audio and video modalities is ELU(\(\cdot\)), and the activation function for the text modality is ReLU(\(\cdot\)). In the factor fusion part, \(F(\cdot)\) is a two-layer MLP with Tanh activation, \(G(\cdot)\) is a single-layer MLP with Hardtanh activation, and \(H(\cdot)\) is a three-layer MLP with ReLU activation. In order to alleviate the over-fitting problem, we use the AdamW optimizer for training, and add Dropout layers to the network backbone. The dropout rate is set within \([0.4,0.6]\).
For the E-DAIC-WOZ dataset [1, 8], we follow the data partition of AVEC2019 DDS. We use Midimax to compress the lengths of the video and audio features to \(1200\) and \(600\) respectively, and unit normalization is conducted on the compressed results. For the CMDC dataset [9], we first remove the data with incomplete modalities as described in Zou et al. [9], and then divide the remaining data into five folds according to the stratified partition provided in the CMDC attached file. Then, for the non-test data in each fold, we randomly take \(80\%\) of the data as the training set, and set the remaining data as the validation set. The model is evaluated based on the combination of prediction scores from the five folds.
In the AVEC2019 DDS [1], CCC (concordance correlation coefficient) and RMSE (root mean square error) are the main evaluation metrics. To test the effectiveness of our model, we add the MAE (mean absolute error) as an extra criterion. For the CMDC dataset, we follow [9] to adopt the RMSE, MAE and Pearson correlation coefficient as the evaluation criteria.
### Experimental Results
The results on the E-DAIC-WOZ [1, 8] dataset are reported in Table 1. From the results, we observe that: 1) after low-variance filtering and Midimax processing, our model obtains CCC=\(0.434\) and RMSE=\(5.44\) using only the audio modality. This is a competitive result, which verifies the effectiveness of our Midimax sequence compression; 2) the proposed MMFF achieves CCC=\(0.676\), RMSE=\(4.91\), and MAE=\(3.98\). It can be seen that our model achieves the best performance on the most popular evaluation metric, CCC, and ranks first and third in terms of MAE and RMSE, respectively. This demonstrates the superiority of our method.
The results on the CMDC dataset are reported in Table 2. In the experiments of Zou et al. [9], combinations including the audio modality achieve better performance compared with others. This is in line with the results of our experiments. The performance of the audio modality is surprisingly good: by applying low-variance filtering, normalization and a 2-layer BiLSTM, utilizing only the audio modality can outperform [9]. By combining the different modalities using our MMFF method, the performance can be further improved.
### Fusion Mechanism Analysis
As mentioned above, the contribution rates (\(\gamma^{1}\), \(\gamma^{2}\), \(\gamma^{3}\), \(\gamma\)) obtained from the proxy can provide us with a mechanistic explanation of factor fusion to some extent. In this section, we visualize the modality contributions on the E-DAIC-WOZ dataset and analyze the fusion mechanism.
The weights of each modality on each order and the whole process are shown in Table 4. It can be seen that the audio (42.5%) is the most important modality, followed by the text (30.4%) and video (27.1%). We also observe that the audio, text, and video modalities have the largest contribution for the 1st-order, 2nd-order, and 3rd-order factors, respectively. This indicates that the contributions of different modalities vary when exploiting the interactions in terms of different orders.
Besides, the contribution of the audio modality decreases from the 1st to the 3rd order (\(56.6\%\) to \(33.0\%\)), and the contribution of text decreases from the 2nd to the 3rd order (\(40.8\%\), \(27.7\%\)), while the contribution of video increases from the 1st to the 3rd order (\(18.9\%\) to \(39.3\%\)). Therefore, we speculate that there is a layer-by-layer extraction mechanism in our MMFF. That is, the information extracted at order \(t\) is mainly the information that has not been fully extracted at orders smaller than \(t\), thus realizing a hierarchical and full extraction of modality information.
## 4 Conclusion
In this paper, we propose a Multimodal Multi-order Factor Fusion (MMFF) method for depression diagnosis. Compared with existing approaches, our method can ensure the full coverage of interactive information at different orders, while providing stronger interpretability. Experiments conducted on two recent depression datasets show that our MMFF is significantly superior to the state-of-the-art approaches. We also conduct an analysis of the learned contribution of each factor and give a conjecture about the fusion mechanism (namely, order-hierarchical information extraction for fusion). In the future, we intend to apply the proposed method to more applications in addition to depression diagnosis.
|
2309.12072 | On the origin of accretion bursts in FUORs | Accretion luminosity of young star FU Ori increased from undetectable levels
to hundreds of Solar luminosities in 1937 and remains nearly as high at the
present time. In a recent paper we showed how Extreme Evaporation (EE) of a
young gas giant planet that migrated to a 10 day orbit around the star may
power FU Ori. However, our model assumed a power-law mass-radius relation for
the evaporating planet. Here we employ a stellar evolution code to model mass
losing planets. We find that adiabatic planets expand rapidly, which results in
runaway FUOR bursts. Super-adiabatic planets contract while losing mass; their
outbursts are dimming with time. Long steadily declining bursts such as FU Ori
require relatively fine tuned internal planetary structure, which may be rare.
More commonly we find that super-adiabatic planets contract too rapidly and
their EE falters, leading to FUOR burst stutter. This stutter allows a single
planet to produce many short repeating bursts, which may be relevant to bursts
observed in V346 Nor, V899, V1647. We compute broad band spectra of our best
fitting scenario for FU Ori. Since the outburst is triggered behind the planet
location, the mid-IR emission rises many months before the optical, similar to
bursts in Gaia-17bpi and Gaia-18dvy. We show that in outbursts powered by the
classic thermal instability, mid-IR lags the optical, whereas the dead zone
activation models predict that the mid-IR light precedes the optical burst by many years
to decades. We comment on the stellar flyby scenario for FU Ori. | Sergei Nayakshin, Vardan Elbakyan | 2023-09-21T13:40:31Z | http://arxiv.org/abs/2309.12072v1 | # On the origin of accretion bursts in FUORs
###### Abstract
Accretion luminosity of the young star FU Ori increased from undetectable levels to hundreds of L\({}_{\odot}\) in 1937 and remains nearly as high at the present time. In a recent paper we showed how Extreme Evaporation (EE) of a young gas giant planet that migrated to a \(\sim 10\) day orbit around the star may power FU Ori. However, our model assumed a power-law mass-radius relation for the evaporating planet. Here we employ a stellar evolution code to model mass-losing planets. We find that adiabatic planets expand rapidly, which results in runaway FUOR bursts. Super-adiabatic planets contract while losing mass; their outbursts dim with time. Long steadily declining bursts such as FU Ori require relatively fine-tuned internal planetary structure, which may be rare. More commonly we find that super-adiabatic planets contract too rapidly and their EE falters, leading to FUOR burst stutter. This stutter allows a single planet to produce many short repeating bursts, which may be relevant to bursts observed in V346 Nor, V899, V1647. We compute broad-band spectra of our best-fitting scenario for FU Ori. Since the outburst is triggered behind the planet location, the mid-IR emission rises many months before the optical, similar to bursts in _Gaia_-17bpi and _Gaia_-18dvy. We show that in outbursts powered by the classic thermal instability, the mid-IR lags the optical, whereas the dead zone activation models predict that the mid-IR light precedes the optical burst by many years to decades. We comment on the stellar flyby scenario for FU Ori.
keywords: accretion, accretion discs - planet-disc interactions - protoplanetary discs - planets and satellites: formation
## 1 Introduction
During FU Ori outbursts (Hartmann and Kenyon, 1996; Fischer et al., 2022; FUOR hereafter), protostars accrete gas at astonishingly high rates of \(\dot{M}\sim(10^{-6}-10^{-4})\,\rm M_{\odot}\) year\({}^{-1}\) (Hartmann and Kenyon, 1985; Zhu et al., 2009). There is a number of models for triggering these events (Audard et al., 2014). In the well-known Hydrogen ionisation instability scenario (Bell and Lin, 1994), also known as Thermal Instability (TI), the inner \(\sim 0.1\) AU of the disc keeps switching between the stable low \(\dot{M}\) and the high \(\dot{M}\) solution branches. On the former, the disc midplane temperature is \(T_{\rm d}\lesssim 3,000\) K and Hydrogen is neutral, whereas on the high branch \(T_{\rm d}\gtrsim 30,000\) K, and Hydrogen is ionised. Disc viscosity is low on the former and high on the latter branch, leading to quiescent \(\dot{M}_{\rm low}\lesssim(10^{-8}-10^{-7})\)\(\rm M_{\odot}\) yr\({}^{-1}\) and outburst \(\dot{M}_{\rm high}\gtrsim 10^{-5}\)\(\rm M_{\odot}\) yr\({}^{-1}\). However, to match outburst duration and the accreted mass budget, Bell and Lin (1994) required disc viscosity to be two orders of magnitude lower than generally accepted based on both observations (e.g., Lasota, 2001; Hameury, 2020) and numerical simulations (e.g., Hirose, 2015; Scepi et al., 2018).
Lodato and Clarke (2004) presented a TI-planet scenario for FUORs, in which a massive planet embedded in the disc opens a deep gap in the disc and leads to banking up of significant excess material behind its orbit. In this case the outbursts may start in this excess material rather than at \(\sim 2-3\) stellar radii (as happens in the absence of a planet; Bell and Lin, 1994). However, there is a serious planet budget problem with this scenario. FUOR events are believed to occur roughly \(\sim 10\) times per star (Hartmann and Kenyon, 1996), whereas only \(\sim 1\%\) of FGK stars host hot jupiters (cf. Fig. 9 in Santerne et al., 2016).
Nayakshin and Lodato (2012) argued that massive young planets can be fluffy enough to be tidally disrupted at separations \(\sim 0.1\) AU. The material released by the planet is deposited in the protoplanetary disc and accretes onto the star quickly, powering an accretion burst. In this picture most of very young hot jupiters are destroyed in FUOR bursts, so very few of them survive to be observed around mature stars. This model is very closely related to the influential scenario for episodic accretion proposed by Vorobyov and Basu (2006, 2010, 2015). Large scale 2D simulations by these authors show that planets born by gravitational instability in massive young discs at \(\sim 100\) AU migrate into the inner \(\sim 10\) AU of the disc very rapidly. If accreted by the star these planets account for both the frequency and the mass budget of FUORs qualitatively well. The Nayakshin and Lodato (2012) calculations describe how this planet accretion process works if the planets are tidally disrupted on scales unresolved in the large scale 2D simulations. Unfortunately, this scenario fails decisively in terms of outburst light curves. Planet tidal disruptions result in outbursts that are too short and too bright
by \(\sim\) two orders of magnitude. Armitage (2015), his §6, also notes that it is hard to see how gas giant planets can be as extended as \(\sim 40\,\mathrm{R_{J}}\) to be tidally disrupted at \(\sim 0.1\) AU.
Nayakshin et al. (2023) (hereafter paper I) pointed out a number of reasons why a massive planet may be key to understanding FU Ori: (i) Absorption line profiles and quasi-periodic photometric variability of FU Ori indicate the presence of a hot spot in the disc at the distance of \(\sim 0.08\) AU from the star (e.g., Powell et al., 2012; Siwak et al., 2018), with the period steady for two decades now (see also Herbig et al., 2003). (ii) Interferometric observations of FU Ori by Lykou et al. (2022) showed that the active disc feeding the star for nearly a century at a rate well above \(10^{-5}\) M\({}_{\odot}\) yr\({}^{-1}\) extends to a radius of only \(\sim 0.3\) AU\({}^{1}\). This is paradoxical unless there is a source of mass hidden inside of this tiny region. The outward viscous spreading of material from \(a=0.08\) AU naturally results in an outer bright disc edge at \(\sim 0.3\) AU. (iii) The slow monotonic decline of FU Ori brightness from 1937 till now is paradoxical since the disc is expected to exhibit TI. However, TI was found to disappear if the source of matter feeding FU Ori is not the outer disc but a planet at 0.08 AU.
Footnote 1: In retrospect, this value is probably very much consistent with earlier findings from SED disc modelling by Zhu et al. (2007, 2008). At the time their modelling was done, FU Ori distance, mass, and disc inclinations were all assumed to be significantly different from the better constrained modern values given in Lykou et al. (2022). Using the latter, and requiring the active disc to stop at exactly the same effective temperature as in Zhu et al. (2008) we rescale their value of \(R_{\rm act}=0.58\) AU to 0.296 AU.
We then proceeded to describe and characterise a previously missed planet-disc interaction process, named Extreme Evaporation (EE). It was shown that dusty GI planets have radii as large as \(R_{\rm p}\sim(10-20)R_{\rm J}\) at the age of class I YSOs. When such a planet migrates into the inner 0.1 AU of the disc, it is exposed to midplane disc temperatures \(T_{\rm d}\gtrsim 30,000\) K during TI outburst episodes. As the planet outer layers heat up to these extreme temperatures, they become unbound from the planet. As in Nayakshin & Lodato (2012) model, mass lost by the planet is deposited in the protoplanetary disc; the difference is in how the planet mass is lost. While tidal disruption of a gas giant planet is usually a runaway (almost annihilation-like) event, EE is a self-limiting quasi-steady process. Using a time-dependent disc with an embedded planet model we have shown in paper I that this TI-EE version of the model accounts for a number of observed features of FU Ori and interferometry constraints.
In this paper we address two important outstanding questions. First, planet radius evolution during mass loss via EE was found (see fig. 15 in paper I) to be crucial in governing outburst properties, yet a simple power-law mass-radius (\(M-R\)) relation was assumed. Second, in paper I we focused on the time evolution of the accretion rate onto the star as a proxy for disc brightness; however, this leaves open the more observationally pertinent question of how the spectrum of the system evolves in various filters.
The paper is structured as follows. A brief account of computational methods is given in §2. In §3.1 we explain in simple terms why the internal structure of the planet is key to the planet-sourced TI-EE scenario of FUORs. In §3.2 the stellar evolution code MESA is used to explore the \(M-R\) relation for a standard convective versus a linearly super-adiabatic planet model; the differences in the resulting accretion bursts are presented in §3.3. In §4 we experiment with exponential super-adiabatic planets which have inert solid cores. In §5 we compute the Spectral Energy Distribution (SED) of time-dependent discs in model outbursts for three competing models: TI (classical planet-free thermal instability), TI-EE, and the MRI activation in the dead zone (DZ-MRI hereafter; Armitage et al., 2001). We also discuss the stellar flyby scenario for FU Ori in §5.4. An appendix investigates the claim made in paper I that TI cannot be naturally suppressed in the classical DZ-MRI model of FU Ori.
## 2 Methods
### Disc-planet evolutionary code DEO
DEO (Disc with an Embedded Object) is a time-dependent 1D code for evolving a Shakura & Sunyaev (1973) vertically averaged disc model with an (optionally) embedded planet in it (Nayakshin & Lodato, 2012; Nayakshin et al., 2022). We only give a brief account of it here (cf. paper I for more detail). The disc surface density \(\Sigma\) is evolved according to
\[\begin{split}\frac{\partial\Sigma}{\partial t}=\frac{3}{R} \frac{\partial}{\partial R}\left[R^{1/2}\frac{\partial}{\partial R}\left(R^{ 1/2}\nu\Sigma\right)\right]-\\ -\frac{1}{R}\frac{\partial}{\partial R}\left(2\Omega^{-1} \lambda\Sigma\right)+\frac{\dot{M}_{\rm p}}{2\pi R}D(R-a)\;.\end{split} \tag{1}\]
In this equation \(\Omega\) is the Keplerian angular velocity, \(\nu=\alpha c_{\rm s}H\) is the kinematic viscosity, \(H\) is the disc vertical scale height, and \(c_{\rm s}\) is the disc sound speed. The exchange of angular momentum between the planet and the disc is described by the second term on the right hand side of the equation (expressions for \(\lambda\) are given in Nayakshin et al., 2022), whereas the last term describes the injection of mass into the disc if/when the planet loses mass at the rate \(\dot{M}_{\rm p}\). The function \(D(R-a)\) is a narrow Gaussian normalised to yield \(\int_{0}^{\infty}2\pi RD(R)dR=1\). The planet-star separation, \(a\), evolves in a way explicitly conserving angular momentum of the disc-planet system. Mass is deposited into our disc at the feeding rate, \(\dot{M}_{\rm feed}\), close to the outer disc boundary.
Planet evaporation is driven by exchange of heat between the disc and the planet. The disc midplane temperature, \(T_{\rm d}\), and the planet radius, \(R_{\rm p}\), are the primary parameters determining the rate of planet mass loss, \(\dot{M}_{\rm p}\). When \(R_{\rm p}\) is less than the Bondi radius,
\[R_{\rm B}=\frac{GM_{\rm p}}{2c_{\rm s}^{2}}\approx 12\,R_{\rm J}\,\frac{M_{\rm p}}{5 \,\mathrm{M_{J}}}\,\left(\frac{T_{\rm d}}{3\times 10^{4}}\right)^{-1}\;, \tag{2}\]
the outer layers of the planet remain marginally bound to it, and so mass loss proceeds via the relatively weak Bondi wind ("boil off"; Owen & Wu, 2016) solution. However, when \(R_{\rm p}>R_{\rm B}\), the outer heated layers of the planet are unbound and are removed rapidly at the EE rate that is well approximated by
\[\dot{M}_{\rm EE}=1.6\times 10^{-5}\frac{\mathrm{M_{\odot}}}{\mathrm{year}}\left( \frac{T_{\rm d}}{3\times 10^{4}\mathrm{K}}\right)^{2.2}\,\left(\frac{R_{\rm p}}{10R_{ \rm J}}\right)^{3/2}\;. \tag{3}\]
Note that disc TI is needed to trigger EE in this picture, and only if \(\dot{M}_{\rm EE}\) is larger than \(\dot{M}\) that the disc would have in the absence of the planet does the planet-disc system enter the self-sustained "planet-sourced" mode. In this case the planet
plays the role of a mass-losing secondary star in a binary system; this can last for tens to hundreds of years.
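For quick estimates, Eqs. 2-3 reduce to the following scalings (a convenience transcription by us, not part of the DEO code):

```python
def bondi_radius_rj(m_p_mj, t_d):
    """Eq. 2: Bondi radius in Jupiter radii; planet mass in M_J,
    disc midplane temperature t_d in Kelvin."""
    return 12.0 * (m_p_mj / 5.0) * (t_d / 3e4) ** -1.0

def ee_rate_msun_yr(t_d, r_p_rj):
    """Eq. 3: Extreme Evaporation rate in M_sun/yr, applicable once
    the planet radius exceeds the Bondi radius."""
    return 1.6e-5 * (t_d / 3e4) ** 2.2 * (r_p_rj / 10.0) ** 1.5

# e.g., for a 6 M_J planet with R_p = 14 R_J: at T_d = 4e4 K,
# bondi_radius_rj(6, 4e4) ~ 10.8 R_J < R_p, so EE operates at
# ee_rate_msun_yr(4e4, 14) ~ 5e-5 M_sun/yr
```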
### Planet evolution and mass-radius relation
Here we use the MESA stellar evolution code (Paxton et al., 2011, 2013) to model planet evolution. All of our calculations are performed for uniform-composition planets with metallicity \(Z=0.02\). We use the standard MESA opacity for brown dwarfs and planets (Freedman et al., 2008) supplemented by a silicate dust opacity. The dust opacity is given by
\[\kappa_{\rm d}=3\ {\rm cm}^{2}\ {\rm g}^{-1}\ \left(\frac{T}{10^{3}\ {\rm K}} \right)\ f_{\rm melt}(T,\rho)\, \tag{4}\]
where the function \(f_{\rm melt}\) approximates the effect of grain melting (following Kuiper et al., 2010) via
\[f_{\rm melt}(T,\rho)=\frac{1}{2}-\frac{1}{\pi}\arctan\left(\frac{T-T_{\rm melt }}{100\ {\rm K}}\right) \tag{5}\]
with melting temperature depending on gas density as
\[T_{\rm melt}=2000\ {\rm K}\ \rho^{0.0195}. \tag{6}\]
Note that this assumes Solar metallicity of grains in the atmosphere of the planet, and that \(f_{\rm melt}=1\) for \(T\ll T_{\rm melt}\).
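The melt prescription of eqs. (4)-(6) is easy to transcribe; the sketch below does so directly (the test temperature and density are arbitrary).

```python
import numpy as np

# Silicate dust opacity with grain melting, eqs. (4)-(6).
def kappa_dust(T, rho):
    """Dust opacity [cm^2/g] for temperature T [K] and gas density rho [g/cm^3]."""
    T_melt = 2000.0 * rho ** 0.0195                            # eq. (6)
    f_melt = 0.5 - np.arctan((T - T_melt) / 100.0) / np.pi     # eq. (5)
    return 3.0 * (T / 1e3) * f_melt                            # eq. (4)

# Far below T_melt, f_melt -> 1, so kappa -> 3 (T / 1000 K) cm^2/g:
print(kappa_dust(500.0, 1e-9))   # ~1.4 cm^2/g for these arbitrary test values
```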
In paper I (SS7) we constructed models in which a planet started far from the inner disc region with a large initial radius, \(R_{\rm p}\). We then computed the \(R_{\rm p}\) evolution together with planet migration in the protoplanetary disc. We neglected planet mass loss in the Bondi wind regime, and turned on EE (eq. 3) once \(R_{\rm B}\) fell below \(R_{\rm p}\). From that point our planet evolution simply assumed that \(R_{\rm p}\) remained constant or depended on the changing \(M_{\rm p}\) as a power law. The simulation labelled M1 in paper I reproduced many traits of the observed FU Ori outburst. In this simulation, the planet had initial mass \(M_{\rm p0}=6\,{\rm M}_{\rm J}\) and radius \(\approx 14R_{\rm J}\) at the commencement of EE. The resulting mass loss rate varied during the burst from the peak of \(4\times 10^{-5}{\rm M}_{\odot}\) year\({}^{-1}\) to \(\sim 3\times 10^{-5}{\rm M}_{\odot}\) year\({}^{-1}\), _under the assumption_ \(R_{\rm p}=\) const.
Here we relax this assumption. To limit the parameter space that we need to investigate, we continue with the planet of \(M_{\rm p0}=6\,{\rm M}_{\rm J}\) and radius \(\approx 14R_{J}\) at the commencement of EE, but here we use MESA to study planet radius evolution when the planet loses mass vigorously. We do not model the outflow region directly as this requires explicit hydrodynamics with very short time steps. Instead we prescribe a mass loss rate \(\dot{M}_{\rm p}\) and follow the standard approach to mass loss in MESA (mass is removed from the outer regions of the planet as described in the MESA instrument papers).
One fortunate result simplifies the otherwise non-trivial (and not yet achieved) on-the-fly coupling of our disc code DEO with MESA. Fig. 1 shows the mass-radius relation for several MESA runs that start with an identical planet internal structure (model \(S_{\rm min}=12\) described in SS3.2) but different \(\dot{M}_{\rm p}\). We see that the mass-radius relation \(R(M)\) is independent of the exact value of \(\dot{M}_{\rm p}\) as long as it exceeds \(\sim 3\times 10^{-6}{\rm M}_{\odot}\) year\({}^{-1}\). This is to be expected\({}^{2}\). Only for \(\dot{M}_{\rm p}=10^{-6}{\rm M}_{\odot}\) year\({}^{-1}\) is there a \(\sim 10\%\) difference in \(R_{\rm p}\) at a given \(M_{\rm p}\). Such low \(\dot{M}_{\rm p}\) are of limited interest for us here, so we fix \(\dot{M}_{\rm p}=3\times 10^{-5}{\rm M}_{\odot}\) year\({}^{-1}\) for all our MESA planet mass loss calculations below. We emphasise that this approach is accurate to within a few per cent as per Fig. 1.
Footnote 2: At very high \(\dot{M}_{\rm p}\) the planet has no time to shuffle energy between different layers by either convection or radiation. Qualitatively, radiative losses are not important during mass loss if the mass loss time scale \(M_{\rm p}/\dot{M}_{\rm p}\ll t_{\rm KH}\), the Kelvin-Helmholtz timescale.
## 3 Coreless planets with a linear entropy function
### Why planet internal structure matters, and why it is uncertain
In this paper we assume that FU Ori accretion rate varied with time approximately as
\[\dot{M}\approx 5\times 10^{-5}\ \frac{{\rm M}_{\odot}}{{\rm yr}}\ e^{-t/t_{\rm e}}\, \tag{7}\]
where \(t_{\rm e}=73\) years and \(t=0\) corresponds to the beginning of the burst in year 1937 (Herbig, 1966). This equation is consistent with the current \(\dot{M}\) in FU Ori (the "ALMA" model in table 1 of Lykou et al., 2022) and the dimming rate of 0.015 magnitude per year (Kenyon et al., 2000). If \(\dot{M}\) in FU Ori is approximately equal to \(\dot{M}_{\rm EE}\), then eq. 3 (in which \(\dot{M}_{\rm EE}\propto R_{\rm p}^{3/2}\) at fixed \(T_{\rm d}\)) suggests that the planet radius decreased with time approximately exponentially with an e-folding time of \(\sim 120\) years.
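A short numerical check of this statement, under our simplifying assumption of a fixed \(T_{\rm d}\): since eq. (3) gives \(\dot{M}_{\rm EE}\propto R_{\rm p}^{3/2}\), eq. (7) implies \(R_{\rm p}\propto e^{-2t/(3t_{\rm e})}\), i.e. an e-folding time of \((3/2)t_{\rm e}\approx 110\) years, in line with the \(\sim 120\) years quoted above.

```python
import numpy as np

# Inverting eq. (3) for R_p(t) under the assumptions Mdot = Mdot_EE and
# fixed T_d (in the full model T_d evolves too, so this is approximate).
t_e = 73.0                                  # e-folding time of eq. (7) [yr]
t = np.linspace(0.0, 200.0, 5)
mdot = 5e-5 * np.exp(-t / t_e)              # eq. (7) [Msun/yr]

# Eq. (3): Mdot_EE ~ R_p^(3/2)  =>  R_p ~ Mdot^(2/3)
r_p_rel = (mdot / mdot[0]) ** (2.0 / 3.0)   # R_p(t) / R_p(0)
print(r_p_rel)
print(1.5 * t_e)                            # R_p e-folding time, ~110 yr
```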
Figure 1: Planet radii versus planet mass for the same initial planet structure but different planet mass loss rates, \(\dot{M}_{\rm p}\). This shows that the \(M-R\) relation is nearly independent of \(\dot{M}_{\rm p}\) for mass loss rates exceeding a few \(10^{-6}{\rm M}_{\odot}\) year\({}^{-1}\).

We start with qualitative ideas. Hydrogen in very young post-collapse planets is nearly completely ionised, so we may hope that a toy "ideal polytrope" planet with a constant mean molecular weight, \(\mu\), a constant specific entropy, \(S\), an adiabatic equation of state (EOS), pressure \(P=K\rho^{\gamma}\), and \(\gamma=5/3\), is appropriate. For such a planet (SS19 in Kippenhahn et al., 2013)
\[R_{\rm p}\propto KM_{\rm p}^{-\xi_{\rm p}}\;, \tag{8}\]
where \(\xi_{\rm p}=1/3\), so \(R_{\rm p}\) increases as \(M_{\rm p}\) drops. Such a planet expands as it loses mass, so its FU Ori outburst would brighten with time in contrast to eq. 7.
In a more general case the entropy \(S\) varies as a function of enclosed mass \(M\) within the planet. Fig. 2 is a sketch of the mass-radius relation for such planets (note that \(M_{\rm p}\) decreases along the horizontal axis). Consider three planets with the same initial mass, \(M_{0}\), and radius, \(R_{0}\), but with different internal entropy profiles. Let the planets lose mass so rapidly that neither radiation nor convection is able to transfer energy between different layers in the planet. The planets in Fig. 2 start at the blue dot but evolve differently (a toy numerical sketch follows the list):
1. In the "blue" planet the entropy is constant with \(M\). As it loses mass it moves along the blue line towards the top right corner of the figure, following eq. 8 with \(\xi_{\rm p}=1/3\).
2. In the "red" planet the entropy is higher in the centre. For the sake of simplicity let it be constant within enclosed mass \(M_{1}\). When the planet mass becomes equal to \(M_{1}\) its radius then is equal to that at the red point on the red curve. This planet mass-radius relation is steeper, \(\xi_{\rm p}>1/3\).
3. Finally, in the "green" planet the entropy is lower in the centre. When this planet mass decreases to \(M_{1}\) it arrives at the green point. This planet mass-radius relation is shallower, so \(\xi_{\rm p}<1/3\), and may be negative.
The internal structure of GI planets with age \(\sim O(10^{4})\) years is currently uncertain. The planets are born at separations of \(\sim O(100)\) AU as pre-collapse molecular Hydrogen dominated clumps. They contract until central temperature \(T_{c}\sim 2000\) K is reached, at which point H\({}_{2}\) dissociates and the planet collapses into the post-collapse configuration (Bodenheimer, 1974). This is similar to the collapse of first cores into the second cores in star formation (Larson, 1969; Masunaga and Inutsuka, 2000) but occurs in a disc. No detailed calculation of such a collapse has been performed to date, to the best of our knowledge. However, Bate et al. (2014) perform 3D radiation transfer MHD simulations of a rotating 1M\({}_{\odot}\) cloud collapse down to stellar core densities. While not performed for protoplanetary collapse, their simulations address the properties of the second cores formed early on in the collapse. They find that the sizes of cores with masses up to 20 M\({}_{\rm J}\) are \(\sim 30\) R\({}_{\rm J}\) and that they have entropy profiles increasing outward, with the minimum entropy of \(S\approx 14k_{b}/m_{p}\). Similar entropy values and even larger core sizes are found in 2D radiative hydrodynamics calculations of Bhandare et al. (2020), e.g., see their Figs. 3 and 8.
This large initial entropy of a protoplanet will be reduced by radiative cooling; however, the entropy of the outer planetary layers can also be increased by energy deposition due to tides, irradiation, or other effects (e.g., Bodenheimer et al., 2003; Jackson et al., 2008; Ginzburg and Sari, 2015) when the planet migrates into the innermost disc. Additionally, gas accreted by the planet would initially inherit the disc gas entropy, which is significantly higher than that of post-collapse planets. Simulations show that GI planets can be born in metal-enriched regions of the disc (e.g., Boley and Durisen, 2010) and that they accrete pebbles rapidly (Humphries and Nayakshin, 2018; Vorobyov and Elbakyan, 2019; Baehr and Klahr, 2019). This deposition of heavy elements into the planet is certain to create non-uniform and non-adiabatic planets (e.g., Valletta and Helled, 2020; Ormel et al., 2021). This motivates us to explore different assumptions about planet internal structure in SS3.2 and SS4.
In the interest of connecting to previous literature, we point out that the uncertainties in the initial structure of GI planets are probably less important after the disc is dissipated (after \(\sim 3\) Myr). Once planet "harassment" via disc interactions stops, the planet will cool and should become nearly fully convective and therefore very close to being adiabatic (Graboske et al., 1975; Bodenheimer et al., 1980; Wuchterl et al., 2000; Helled et al., 2008). This is why in stellar evolution codes the planets are usually initialised as constant entropy spheres (e.g., Burrows et al., 1997; Spiegel and Burrows, 2012; Paxton et al., 2013). The justification for this approach is that the initial contraction is "rapid" (e.g., cf. fig. 16 in paper I) and in \(\sim 10^{6}\) years the exact initial conditions are forgotten.
### Planet response to mass loss
Here we continue to focus on planets with initial mass and radius \(M_{\rm p0}=6\) M\({}_{\rm J}\) and \(R_{\rm p0}=14R_{\rm J}\), respectively, as per model M1 from paper I. We want to know how such planets respond to vigorous mass loss for different internal structures. Fig. 3 shows the results of our MESA calculations with solid curves. We explain these curves below. The bottom and top horizontal axes show time, counted from the commencement of mass loss, and planet mass, respectively.
Figure 2: Planet radius versus mass for a toy planet in which entropy varies with enclosed mass. Super-adiabatic planets (positive \(dS/dM\)) can contract while losing mass. See SS3.1 for detail.

The ideal polytrope \(M-R\) relation (eq. 8) is depicted in Fig. 3 with the violet dash-dotted curve. The shaded region between the dashed cyan and the dotted green curves is important for the interpretation of the resulting FU Ori model outbursts in SS3.3. The upper, dotted green curve is the planet Hill radius at a separation of 0.08 AU. If the planet swells up and crosses this curve, it is destroyed tidally (as in Nayakshin and Lodato, 2012) very rapidly, resulting in a short and very powerful burst quite unlike FU Ori. The cyan dashed curve in Fig. 3 is the Bondi radius (eq. 2 for fixed \(T_{\rm d}=3.75\times 10^{4}\) K), which drops together with \(M_{\rm p}\). If \(R_{\rm p}\) drops below \(R_{\rm B}\) then the EE process stops. To remain in the EE regime the planet must stay in the shaded area.
#### 3.2.1 Standard planet
The black curve is the \(M-R\) relation of the actual MESA model we used for simulation M1 from paper I. We label this model "standard" since, apart from the addition of dust opacity in the planet envelope, it is the one obtained with the default initial conditions for young post-collapse planets in MESA (procedure _create_initial_model_ in Paxton et al., 2013). The planet was initialised as a constant entropy sphere with radius \(\approx 30\,{\rm R_{J}}\). It then cooled and contracted for about 20 thousand years while migrating from the outer disc to \(R\approx 0.08\) AU, where it entered the EE regime (cf. figs. 16, 22 and 23 from paper I).
Fig. 3 shows that the standard planet expands very rapidly, roughly following the \(\xi_{\rm p}\sim 2\) track, much steeper than the ideal polytrope (violet dash-dotted curve). This may appear surprising since the entropy profile \(S(M)\) within the standard planet is very close to constant, decreasing linearly with \(M\) from the maximum of \(S(0)\approx 14.25\) in units of \(k_{\rm b}/m_{p}\) in the centre to the minimum of \(S(M=M_{\rm p})\sim 13.9\) (there is an upturn in \(S\) close to the planet surface, but this involves a very small fraction of the mass and cannot affect the evolution of \(R_{\rm p}\) strongly). An entropy profile decreasing with \(M\) is a natural outcome of radiative cooling for a planet that is initialised adiabatic. The planet cools and contracts by losing energy from its surface. The outer layers thus lose entropy first; however, convection maintains a _nearly_ constant \(S(M)\).
The main reason for the rapid expansion of the standard planet is recombination of Hydrogen which releases a significant amount of energy not taken into account in the toy model planet in eq. 8. At \(t=0\), the mean H ionisation fraction in the "standard" planet is about 50 %, but this falls very rapidly as the planet expands and its mean temperature drops. The line \(R_{\rm p}=R_{\rm H}\) is crossed due to this rapid expansion at \(t=40\) years, when the planet mass is \(\sim 4.8\,{\rm M_{J}}\).
#### 3.2.2 Super-adiabatic planets
The rest of the solid curves in Fig. 3 are for _super-adiabatic_ planets, that is, those with entropy \(S(M)\) increasing outwards in an arbitrarily postulated linear fashion:
\[S(M)=S_{\rm min}+(S_{\rm max}-S_{\rm min})\frac{M}{M_{\rm p0}}\;, \tag{9}\]
where \(S_{\rm min}\) is the entropy in the planet centre, whereas \(S_{\rm max}\) is the entropy at its outer edge. To initialise such calculations with MESA we take the initial state of the standard planet and cool its centre while heating its outer layers until the entropy profile given by eq. 9 is established. Since we aim for a planet initially having exactly the same \(R_{\rm p}\), only one of the parameters \(S_{\rm min}\) and \(S_{\rm max}\) is independent; we choose \(S_{\rm min}\) and adjust \(S_{\rm max}\) to yield \(R_{\rm p}=14R_{\rm J}\).
The legend in Fig. 3 lists \(S_{\rm min}\) for the respective curves. We can see that super-adiabatic planets contract initially, as expected. However, at some point a minimum in \(R_{\rm p}\) is reached, and contraction turns into expansion. The cooler the planet is in the centre (the smaller is \(S_{\rm min}\)), the later that minimum is reached. This eventual rapid expansion of the planet is once again strongly abetted by Hydrogen recombination.
### Resulting FUOR bursts
Here we study how the properties of EE bursts depend on the planet mass-radius relations computed in SS3.2. In this section we use the same disc model as for model M1 in paper I and simply inject the planet into the disc on a circular orbit at \(a=0.08\) AU, holding its orbit fixed. Our main focus here is on how the disc-planet system responds to the evolution of the planet radius as \(M_{\rm p}\) decreases during the EE burst. Planet migration will be included self-consistently in the calculations in SS4.2.
In Fig. 4 we show the stellar accretion rate versus time after the planet starts to lose mass (the time axis is shifted to the beginning of the burst) for the four MESA model planets from Fig. 3. We do not show planet mass loss rates for brevity and clarity of the figure. The green thick line is the exponential fit to the \(\dot{M}\) evolution of FU Ori given by eq. 7.
The "standard planet" model (solid curve) results in an accretion burst that declines slightly during the first \(\sim 15\) years but then runs away to an immense peak at \(t\approx 36\) years. The initial decline is not due to planet radius evolution, rather this is simply a drop from the initial peak powered by both the planet and the disc TI burst. Since the standard planet expands with time, \(\dot{M}_{\rm EE}\) eventually increases, powering the corresponding increase in stellar \(\dot{M}\). As foreseen in SS3.2, this is a runaway process. Eventually the planet fills its Roche lobe (\(R_{\rm p}\) exceeds \(R_{\rm H}\)) and the planet is disrupted tidally.
As expected based on the arguments of SS3.1, planets with \(S(M)\) increasing outward produce accretion bursts with \(\dot{M}\) decreasing more rapidly with time than for the standard planet. None of the three models, however, match FU Ori observations.

Figure 3: Planet radii versus time for a selection of MESA model planets losing mass at the rate \(\dot{M}_{\rm p}=3\times 10^{-5}{\rm M_{\odot}}\) year\({}^{-1}\) (solid curves). Depending on the internal structure, the planets either expand or contract. To be in the EE regime continuously, \(R_{\rm p}\) must be in the shaded area. Planets leaving the area through the top are tidally destroyed; planets contracting too rapidly and falling below the cyan dashed curve stop losing mass, and so FU Ori outbursts may switch off. See text in SS3.2 for detail.

For \(S_{\rm min}=13\), \(\dot{M}\) decreases for about 45 years but then increases towards the eventual tidal disruption. The \(S_{\rm min}=12\) model predicts a more rapid fall from the maximum brightness in FU Ori, and that the burst would continue for almost 400 years in total before the planet is eventually disrupted. It is also somewhat inconsistent with the observations because FU Ori is currently continuing its declining trend (Lykou et al., 2022).
Although this is not obvious from the figure, we note that the earlier the planet is disrupted tidally, the more powerful is the accretion outburst onto the star. This is because the planet mass at the time of disruption tends to be larger.
The most centrally condensed planet, \(S_{\rm min}=10\) in Figs. 3 and 4, contracts far too rapidly and is able to sustain the primary EE-dominated burst for only \(\sim 30\) years. This is because the planet contracts and becomes smaller than the Bondi radius at that time, as expected based on the cyan dashed curve in Fig. 3\({}^{3}\).
Footnote 3: Note that \(R_{\rm B}\) in Fig. 3 assumes a fixed disc temperature around the planet and sets the mean molecular weight to \(0.63m_{\rm p}\), whereas DEO calculations shown in Fig. 4 make no such approximations.
Interestingly, multiple planet-powered bursts occur in the \(S_{\rm min}=10\) model. Thus, a planet that falls through the lower boundary of the shaded region in Fig. 3 is not lost to the FUOR phenomenon, only to the current burst. When the next TI outburst occurs, the disc heats up again, and to a temperature higher than the one at the end of the primary EE burst. This means that \(R_{\rm B}\) is smaller at the beginning of the next burst (\(t\approx 80\) years) than it was at the end of the primary one. The \(R_{\rm B}<R_{\rm p}\) criterion for the EE process is satisfied again, and hence a second episode of sustained planet mass loss at an FU Ori-like rate results. The second episode is however shorter (only about 10 years long) than the previous one because the planet is now smaller. The third and subsequent episodes are shorter still, and so the planet may actually undergo very many smaller mass loss episodes. However, if planet migration were included in this calculation then such a planet would continue to migrate closer to the host star and could eventually be destroyed via either EE or tidal disruption, but this would typically occur \(\sim O(10^{3})\) years later.
## 4 Planets with exponential entropy function and solid cores
### Mass radius relations
As another class of planet internal structure models we consider the entropy function of the form
\[S(M)=S_{\rm max}+(S_{\rm min}-S_{\rm max})e^{-M/M_{\rm ent}}\, \tag{10}\]
where \(M_{\rm ent}\) sets the mass scale of a "low entropy core" of the planet and is a free parameter. By trial and error we found that there is a certain degeneracy of the model results between the values of \(S_{\rm min}\) and \(M_{\rm ent}\), and so we only present here the results for \(M_{\rm ent}=2\,{\rm M_{J}}\).
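For concreteness, the two postulated entropy profiles (the linear eq. 9 and the exponential eq. 10) are transcribed below; the particular \(S_{\rm min}\), \(S_{\rm max}\) and \(M_{\rm ent}\) values are illustrative rather than the fitted ones.

```python
import numpy as np

# The two postulated entropy profiles, in k_b/m_p, vs enclosed mass M [MJ].
def S_linear(M, M_p0, S_min, S_max):
    """Eq. (9): entropy rising linearly from centre to surface."""
    return S_min + (S_max - S_min) * M / M_p0

def S_exponential(M, M_ent, S_min, S_max):
    """Eq. (10): a low-entropy core of mass scale M_ent; S -> S_max outside."""
    return S_max + (S_min - S_max) * np.exp(-M / M_ent)

M = np.linspace(0.0, 6.0, 7)
print(S_linear(M, 6.0, 12.0, 15.0))          # illustrative S_min, S_max
print(S_exponential(M, 2.0, 10.3, 15.0))     # M_ent = 2 MJ as adopted in the text
```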
We also add an inert solid core with mass \(M_{\rm c}\) to the planet. A solid core in GI planets may form through grain growth and sedimentation (e.g., McCrea & Williams, 1965; Boss, 1998; Helled et al., 2008; Helled & Schubert, 2008; Boley et al., 2010; Nayakshin, 2010, 2011), although such a core may be expected to be luminous (e.g., Humphries & Nayakshin, 2019) given its rather short growth time. We consider two contrasting values for \(M_{\rm c}\) here, one with a low mass core of \(M_{\rm c}=5\,{\rm M_{\oplus}}\) and the other with a very massive one\({}^{4}\), \(M_{\rm c}=65\,{\rm M_{\oplus}}\). We assume that the core has a constant density of \(\rho_{\rm core}=7.8\) g/cm\({}^{3}\).
Figure 4: Stellar mass accretion rates for EE bursts for planets with the same initial radius but different internal structure (see Fig. 3). The standard MESA planet is nearly adiabatic and expands rapidly as it loses mass. \(\dot{M}_{\rm EE}\) runs away and the planet is eventually tidally disrupted. Planets with smaller \(S_{\rm min}\) are denser in the centre; they contract while losing mass and are more resilient to mass loss. The pink shaded area is the approximate observed FU Ori \(\dot{M}\) evolution. See SS3.3 for detail.
To refer to these exponential entropy profile models we shall use "Exp-A-B" format where "A" is the central entropy and "B" is the core mass.
As in SS3.2, before we begin our MESA mass loss calculations, we adjust planet internal structure to the desired form (eq. 10). We start with an initially adiabatic planet with a given central core mass and the value of \(S_{\rm min}\) so that the planet is smaller than the desired radius \(R_{\rm p}\). We then increase \(S_{\rm max}\) sufficiently rapidly to preclude radiative cooling of the outer layers yet sufficiently slowly to allow the planet structure to adjust at sub-sonic speeds and follow eq. 10 with an instantaneous value of \(S_{\rm max}\). We keep increasing \(S_{\rm max}\) until the planet expands to the desired value of \(R_{\rm p}=14\,{\rm R_{J}}\).
We computed the mass-radius relation for mass-losing planets with \(S_{\rm min}\) from 7.5 to 12, and plot the results in Fig. 5. The figure also shows a simple power-law mass-radius relation with \(\xi_{\rm p}=0.5\), shown with the open red circles for comparison.
Focusing first on the top panels, \(M_{\rm c}=5\,{\rm M_{\oplus}}\), we note that the mass-radius relation is flatter at early times than it was for the linear \(S(M)\) function (Fig. 3). Planet contraction however turns to expansion when the planet mass decreases sufficiently. An exception to that is the lowest entropy value, \(S_{\rm min}=7.5\), the pink dashed curve, for which the planet radius always shrinks and eventually becomes equal to the core radius of about \(2R_{\oplus}\). This is simply the case of a planet that lost all of its gaseous atmosphere and just the core survives. Note that all of the other curves would also arrive at the same final radius but MESA iterations do not always converge when planets expand rapidly.
In the case of a very massive solid core, shown in the bottom panel in Fig. 5, we observe that the core has a profound effect on the radius of the planet when \(M_{\rm p}\) drops below \(\sim(2-3)\,{\rm M_{J}}\), which is an order of magnitude higher than \(M_{\rm c}\). The planet radius is a monotonically decreasing function of time (decreasing as the mass \(M_{\rm p}\) decreases) for all but the two highest values of \(S_{\rm min}\).
### A small parameter space study
Here we perform a small two-parameter study of our model for the planets with the internal entropy profile given by eq. 10. The first parameter is \(S_{\rm min}\). The second is the disc feeding rate \(\dot{M}_{\rm feed}\), which is varied from the minimum of \(\dot{M}_{\rm feed}=5\times 10^{-7}\,{\rm M_{\odot}}\) yr\({}^{-1}\) to the maximum of \(4\times 10^{-6}\,{\rm M_{\odot}}\) yr\({}^{-1}\) in seven logarithmically uniform steps. Unlike SS3.3, where the planet was injected in the disc at \(a=0.08\) AU, here we inject the planet at \(a=0.5\) AU. This is sufficiently far from the TI-unstable region for the disc and the planet to adjust to one another's presence before the planet migrates into the inner disc and EE bursts commence.
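As a small configuration note, the feed-rate grid described above can be generated as follows (whether both endpoints are included among the seven values is our assumption).

```python
import numpy as np

# Seven logarithmically uniform disc feeding rates between the stated
# minimum and maximum (endpoint handling is our assumption).
mdot_feed = np.geomspace(5e-7, 4e-6, 7)   # Msun / yr
print(mdot_feed)
```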
#### 4.2.1 The role of \(\dot{M}_{\rm feed}\).
Fig. 6 shows stellar mass accretion rates during the 400 years after the beginning of an EE burst for the same planet, \(M_{\rm c}=5\,{\rm M_{\oplus}}\), \(S_{\rm min}=10.3\), but for three different values of \(\dot{M}_{\rm feed}\) as shown in the legend. We also give the location where the planet is disrupted, \(a_{\rm f}\), in the legend. We shifted the time axis to have \(t=0\) at the beginning of the outburst. The pink shaded area is the "desired" FU Ori \(\dot{M}\) evolution given by eq. 7.
As expected based on paper I, at a fixed value of \(R_{\rm p0}\), the higher is \(\dot{M}_{\rm feed}\), the larger is \(a_{\rm f}\), because the disc is hotter and hence the EE condition \(R_{\rm p}>R_{\rm B}\) is encountered by the migrating planet earlier. At the same time, planets evaporating closer in to the star produce more powerful outbursts. This is logical since, after the onset of the outburst, the disc around them heats up to higher temperatures than it does for planets evaporated at larger distances. This results in EE bursts that are shorter and brighter at smaller \(\dot{M}_{\rm feed}\) for the same initial \(R_{\rm p}\). Similar effects were also seen for planets with \(R_{\rm p}\) contracting due to radiative cooling (SS7.3 in paper I, and fig. 21 in particular).
We note however that this prediction - brighter bursts in lower \(\dot{M}_{\rm feed}\) discs - holds here but may not hold in a fully self-consistent calculation. We consider planets of the same \(R_{\rm p0}\) and internal structure. In a fully self-consistent calculation where the planets evolve, they contract with time, and are likely to be smaller in discs with smaller \(\dot{M}_{\rm feed}\). TI-EE bursts may disappear completely in older discs with too small \(\dot{M}_{\rm feed}\) (cf. fig. 17 in paper I).
Fig. 6 also shows that the character of EE bursts may change significantly as \(\dot{M}_{\rm feed}\) varies. There is one continuous EE burst for the two lower values of \(\dot{M}_{\rm feed}\) in the figure, but at \(\dot{M}_{\rm feed}=4\times 10^{-6}\,{\rm M_{\odot}}\) yr\({}^{-1}\) the initial burst stutters after only about 20 years because the planet contracts and becomes smaller than \(R_{\rm B}\), so the EE process terminates. The burst however restarts at \(t\approx 70\) years, this time burning the planet to the end.
Looking through the results of our models for different \(\dot{M}_{\rm feed}\) we found that \(\dot{M}_{\rm feed}=2.2\times 10^{-6}\,{\rm M_{\odot}}\) yr\({}^{-1}\) comes closest to matching both the observed stellar accretion rate of FU Ori and the location \(a\approx(0.07-0.08)\) AU of the suspected planet responsible for the disc hotspot in the model of QPOs observed from FU Ori (Powell et al., 2012; Siwak et al., 2018). In the next section we explore how planet internal structure affects EE bursts for this particular value of \(\dot{M}_{\rm feed}\).
#### 4.2.2 The role of planet internal structure
Fig. 7 shows the resulting stellar accretion rates for \(\dot{M}_{\rm feed}=2.2\times 10^{-6}\,{\rm M_{\odot}}\) yr\({}^{-1}\) and various central (minimum) entropy values as given in the legends. The top and bottom panels show the cases \(M_{\rm c}=5\,{\rm M_{\oplus}}\) and \(M_{\rm c}=65\,{\rm M_{\oplus}}\), respectively. The planet-star separation at the beginning of the burst is the same, \(a_{\rm f}\approx 0.078\) AU, for all of the curves in Fig. 7 because the planets have the same initial radius.
Focusing first on the case of a low mass core, \(M_{\rm c}=5\,{\rm M_{\oplus}}\), we observe once again that expanding planets (the black curve) lose their mass the quickest, and end up tidally disrupted too soon. The \(S_{\rm min}=10.8\) and 10.3 models produce stellar accretion rates within a factor of two of that needed to explain FU Ori observations, although their \(\dot{M}\) is too flat after \(t\sim 70\) years. The smallest value of \(S_{\rm min}\) in the top panel of Fig. 7 (blue curve) is strongly inconsistent with the observed burst. This planet contracts too fast, so its first EE burst terminates quickly. The next TI burst however reheats the disc sufficiently to restart EE of the planet, albeit for a shorter time. This results in repeated planet-assisted outbursts with mixed characteristics, their nature shifting from EE-dominated in the beginning to practically pure TI later on.
The outbursts for the planet with a more massive core, \(M_{\rm c}=65\,{\rm M}_{\oplus}\), bottom panel of Fig. 7, are qualitatively similar but there are quantitative differences. Fig. 5 showed that planets tend to have smaller radii if they have a more massive core\({}^{5}\). This implies that planets with a more massive core are less likely to be tidally disrupted. This is why the outbursts with the highest values of \(S_{\rm min}\) in Fig. 7 last longer in the bottom panel than they do in the top one. For the same reason the higher \(M_{\rm c}\) planets are more likely to fall below the \(R_{\rm p}=R_{\rm B}\) line and hence their EE bursts tend to stutter more. For example, the \(S_{\rm min}=10.3\) planet (green curve in the top panel) with a low mass core produces one very long continuous outburst, whereas the same value of \(S_{\rm min}\) in the bottom panel results in multiple shorter bursts. The simulation Exp-10.3-Mc5 (the acronym means "exponential entropy profile with central entropy 10.3 and core mass \(5\,{\rm M}_{\oplus}\)") produces a close although not entirely perfect fit to the FU Ori accretion rate. In SS5 we consider this model in greater detail.
Footnote 5: After they have lost some mass. Recall that by design our planets have the same initial radius \(R_{\rm p}\) at \(t<0\).
## 5 Time dependent spectra of unstable discs
In this section we study how the accretion disc structure and the corresponding Spectral Energy Distribution (SED) vary through the model outbursts for the classical TI, TI-EE, and DZ-MRI scenarios. The focus is on the early SED evolution, as we find that the three models make diverging predictions that can facilitate their observational differentiation.
### Classical TI bursts: months-long mid-IR delays
Here we analyse the standard planet-free TI disc behaviour through one outburst cycle. The disc feeding rate \(\dot{M}_{\rm feed}\) is the same as in Fig. 7, \(2.2\times 10^{-6}\,{\rm M}_{\odot}\) yr\({}^{-1}\). The resulting stellar accretion rate during the cycle is shown in the centre top panel of Fig. 9. The time axis in this section is shifted so that \(t=0\) is the moment of the initial very brief stellar accretion rate spike in the outburst (seen in the top left panel of the figure, which zooms in on the first year of the burst).

Figure 5: _Left panels:_ Planet radius vs time for planets with exponential entropy function (SS4) losing mass at the rate \(\dot{M}=3\times 10^{-5}\,{\rm M}_{\odot}\) yr\({}^{-1}\). _Right panels:_ Same results but plotted versus planet mass.
Fig. 8 shows the disc evolution from \(t=-0.1\) to \(t=0.2\) yrs in the top 4 panels, whereas the bottom 4 panels show the disc evolution from \(t=0.5\) until the disc falls deep into quiescence, \(t=25\) yrs. Three of the panels show familiar disc characteristics: the disc surface density \(\Sigma\), and the midplane and effective temperatures, \(T_{\rm d}\) and \(T_{\rm eff}\), respectively. The relative flux quantity, \(\mathcal{F}\), is the ratio of the instantaneous emergent disc radiation flux, \(F\), to the radiation flux of a steady-state Shakura & Sunyaev (1973) disc, \(F_{\rm ss}\), for a fixed \(\dot{M}_{\rm ref}=2\times 10^{-5}\) M\({}_{\odot}\) yr\({}^{-1}\). For a steady-state disc with accretion rate \(\dot{M}\), \(\mathcal{F}=\dot{M}/\dot{M}_{\rm ref}=\) const everywhere; deviations from this are tell-tale signs of time-dependence.
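A sketch of the reference flux entering \(\mathcal{F}\) is given below. We assume the standard Shakura & Sunyaev (1973) dissipation profile with a zero-torque inner boundary; whether DEO retains the \((1-\sqrt{R_{\rm in}/R})\) factor, and the stellar mass and inner radius used, are our assumptions.

```python
import numpy as np

G = 6.674e-8                                   # cgs
Msun, yr, AU = 1.989e33, 3.156e7, 1.496e13

# Steady-state Shakura-Sunyaev flux used as the normalisation F_ss in
# the relative flux F = F / F_ss. Stellar mass, inner radius and the
# zero-torque factor are our assumptions, not necessarily the DEO choices.
def F_ss(R, M_star=1.0 * Msun, mdot=2e-5 * Msun / yr, R_in=0.02 * AU):
    return 3.0 * G * M_star * mdot / (8.0 * np.pi * R ** 3) \
        * (1.0 - np.sqrt(R_in / R))

R = np.geomspace(0.03, 0.5, 4) * AU
print(F_ss(R))                                 # erg / s / cm^2
```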
The early disc evolution shown in the top 4 panels of Fig. 8 reproduces the well-known result (e.g., Bell & Lin, 1994; Bell et al., 1995) that classical TI outbursts begin in the inner disc, at a distance of \(\sim 2-3\) stellar radii. Time \(t=0\) coincides with the emergence of an ionisation peak at \(R\approx 0.023\) AU. The ionisation fronts propagate inward and outward, reaching the star first. Within just a few days, the accretion rate onto the star rises by two orders of magnitude.
The two bottom panels in Fig. 9 show model magnitudes in the B, V, J and L bands (\(\lambda\approx 0.545\), 0.641, 1.25, 3.5 \(\mu m\), respectively). We compute those by assuming that the disc emits local blackbody emission, and integrating this emission over the disc annuli. We assume a disc inclination of \(i=37^{\circ}\) and use a constant-in-time \(A_{\rm V}=1.7\) for visual extinction (cf. Lykou et al., 2022). The right panel of Fig. 9 presents the integrated disc SED evolution during the TI burst rise. We see that the optical (B & V) and the near-IR J bands rise to the maximum brightness in a matter of weeks whereas the mid-IR L band lags. Quantitatively, a rise of 2 magnitudes in the L band occurs \(\sim 2\) months later than the burst onset in the B/V bands. The peak in the L band is offset even more, by \(\sim 1\) year, with respect to the peaks in the optical bands. This delay is comparable to the ionisation front propagation time through the unstable region. The front propagates outward at the speed \(\sim\alpha c_{\rm s}(H/R)=\alpha(H/R)^{2}v_{K}\sim 0.004v_{K}\) (Bell & Lin, 1994) for \(\alpha=0.1\) and \(H/R\sim 0.2\).
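A minimal sketch of the magnitude computation just described is given below: local blackbody annuli, integrated over the disc, inclined at \(i=37^{\circ}\), with constant extinction. The distance, the toy \(T_{\rm eff}(R)\) profile, and the Vega-like zero-point flux are illustrative assumptions, not the values used in our models.

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs
pc, AU = 3.086e18, 1.496e13

def B_nu(T, nu):
    """Planck function [erg/s/cm^2/Hz/sr]."""
    return 2.0 * h * nu ** 3 / c ** 2 / np.expm1(h * nu / (k_B * T))

def band_mag(R, T_eff, lam_um, F0_jy, d=400.0 * pc, inc_deg=37.0, A_lam=0.0):
    """Apparent magnitude of a local-blackbody disc; F0_jy and d are assumptions."""
    nu = c / (lam_um * 1e-4)
    # flux at Earth from an inclined, optically thick disc
    F_nu = np.cos(np.radians(inc_deg)) / d ** 2 \
        * np.sum(B_nu(T_eff, nu) * 2.0 * np.pi * R * np.gradient(R))
    return -2.5 * np.log10(F_nu / (F0_jy * 1e-23)) + A_lam

R = np.geomspace(0.01, 0.3, 300) * AU          # toy T_eff(R), for testing only
T_eff = 7000.0 * (R / (0.01 * AU)) ** -0.75
print(band_mag(R, T_eff, 0.55, 3640.0, A_lam=1.7))   # V-like band, toy numbers
```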
The middle panels of Fig. 9 show that the near-IR and mid-IR emission of the disc outlasts the optical burst by a few years. The red dotted curve (\(t=25\)) shows that in quiescence the inner disc essentially disappears. Also note that the disc beyond \(R=0.3\) AU varies very little throughout the full outburst cycle; this region of the disc sits stably on the low \(\dot{M}\) neutral Hydrogen solution branch.
### TI-EE bursts: months-to-year long optical delays
Figs. 10 & 11 analyse the EE burst of model Exp-10.3-Mc5. The vertical violet-shaded band shows the location of the planet (whose orbit barely evolves during the burst). Consider first the early times of the burst, shown in the top 4 panels of Fig. 10. Prior to the beginning of the outburst, the gap around the planetary orbit is open. As in the scenario of Lodato & Clarke (2004), the outburst starts behind the planet, at \(R\approx 0.1\) AU. As the disc heats up, \(T_{\rm d}\) surges by an order of magnitude. Factoring in the additional change in \(\mu\) by about a factor of 4, the disc \(H/R\) at the planet location increases by at least a factor of 5 (cf. fig. 3 in paper I). The value of the Crida parameter \(C_{\rm p}\) varies suddenly from sub-unity (gap opened) to a few (see fig. 20 in paper I). The gap is hence filled by the hot gas on the local dynamical time.
Consider the bottom four panels in Fig. 10. The outburst propagates outward, bringing into the inner disc more matter that was piled up behind the planet, and setting off further heating of the inner disc. This eventually tips the planet over the \(R_{\rm p}>R_{\rm B}\) barrier to EE mass loss. As the planet takes over the role of the main mass supplier to the inner disc, a self-sustained FU Ori outburst begins.
Figure 6: FUOR bursts from a planet with identical properties but immersed in discs with different feeding rates at large radius, as marked in the legend. The pink shaded area is eq. 7. The smaller \(\dot{M}_{\rm feed}\) is, the closer to the star the planet needs to be to experience EE-causing conditions, so the bursts are brighter, and their character may change. See SS4.2.1.
The two left panels in Fig. 11 show the mass fluxes and the photometric evolution of the outburst. During the first \(\sim 1.5\) years the outburst is powered by the TI-planet (Lodato & Clarke, 2004) mechanism, with the EE process becoming important only later. Once the planet EE process turns on, the outburst is in the slowly evolving planet-sourced mode for the next \(\sim 400\) years (cf. the \(S_{\rm min}=10.3\) curve in Fig. 7). From the bottom left panel and the SED evolution (the right panel of Fig. 11), we see that the outbursts in the NIR band J and the mid-infrared band L start some \(\sim 0.3\) years earlier than in the V or B bands. This is quite different from the classical TI burst behaviour. The outburst rise time to maximum light is \(\sim 2\) years in the B and V bands, which is comparable to but a little longer than observed (e.g., Herbig, 1966). Early brightening in the mid-IR was observed in several recently discovered FUORs, see SS6.
### DZ-MRI bursts: decade-long optical delays
Recently, Bourdarot et al. (2023) (B23 hereafter) presented near-IR interferometric observations of FU Ori and 1D time-dependent disc modelling of the dead zone MRI activation scenario (DZ-MRI), and concluded that the model accounts well for the _"outburst region"_ size. In Figure 12 we plot the lightcurves for the B23 model with \(\dot{M}_{\rm feed}=8\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) in various photometric bands\({}^{6}\). We note that the B and V band lightcurves in this model are several magnitudes fainter than the observed ones. This is because \(\dot{M}\) is a factor of two larger and \(R_{*}\) is about a factor of two smaller in Lykou et al. (2022) than in this model. By adjusting the parameters of the model it is possible to match the observed B and V data more closely, as shown by B23; however, our focus here is on the time lags between the four photometric bands, and these are weakly dependent on \(R_{*}\) or \(\dot{M}\).
Figure 7: Same as Fig. 4 but for planets with exponential entropy profile (§4) and solid cores with mass \(M_{\rm c}=5\) M\({}_{\oplus}\) (top panel) and \(M_{\rm c}=65\) M\({}_{\oplus}\) (bottom panel).
Figure 8: Standard planet-free disc structure at early (_the top 4 panels_) and late times (_the bottom 4 panels_) in the TI outburst shown in Fig. 9.
Figure 9: Standard planet-free TI outburst observational appearance. _Left and Centre panels. Top:_ Stellar accretion rate vs time, _bottom:_ Magnitude evolution for FU Ori in four selected filters. Note that in the L band the outburst begins weeks to months later than in the other bands, and that the burst lasts a few years longer in the J and L filters. _Right panel:_ SED evolution of the disc during the rise to maximum light. See SS5.1 for detail.
Figure 11: **Top:** Stellar accretion rate and planet mass loss rate vs time at early times in model Exp-10.3-Mc5. **Bottom:** Magnitude evolution for FU Ori in four selected filters. Note that in the J and L bands the outburst begins a few months prior to that in the optical.
Figure 10: Model Exp-10.3-Mc5 disc structure at early (_the top 4 panels_) and late (_the bottom 4 panels_) times in the outburst.
Figure 12 shows that the outbursts in the J and L bands start approximately two and five years earlier than they do in B and V. This is opposite to the behaviour seen in classic TI bursts. For the TI-EE scenario the outbursts also start in the mid-IR first, but the delay in the optical emission is months to a year, versus at least several years for the DZ-MRI scenario.
These results are best understood by considering the disc evolution during the MRI activation burst ignition, shown in Figure 13 for the same (\(\dot{M}_{\rm feed}=8\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) B23) DZ-MRI model. The inner edge of the MRI inactive zone (the DZ) is at \(R\approx 0.4\) AU before the outburst. The burst is thermally (rather than GI) triggered when the midplane temperature exceeds the critical temperature of 800 K there (interestingly, this is similar to the suggestion made by Cleaver et al., 2023). As the disc at this location gets ionised, a heating wave propagates both inward and outward. It takes \(\sim 5\) years for the ionisation front to reach the star, and this delay is the reason why the mid-IR brightens much earlier than the B and V bands do.
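An order-of-magnitude check of this delay can be made by integrating \(dt=dR/v_{\rm front}\) with \(v_{\rm front}\sim\alpha(H/R)^{2}v_{K}\) (see SS5.1) from the DZ inner edge to the star. The round values of \(\alpha\), \(H/R\) and \(M_{*}\) below are our assumptions, not the B23 parameters.

```python
import numpy as np

G, Msun, AU, yr = 6.674e-8, 1.989e33, 1.496e13, 3.156e7

# Travel time of the ionisation front from R0 to the star for
# v_front ~ alpha * (H/R)^2 * v_K (see SS5.1). Round-number assumptions.
def front_time_yr(R0, M_star=0.5 * Msun, alpha=0.1, h_over_r=0.25):
    R = np.linspace(R0, 0.01 * AU, 2000)             # inward-marching grid
    v_front = alpha * h_over_r ** 2 * np.sqrt(G * M_star / R)
    return -np.sum(np.gradient(R) / v_front) / yr    # dR < 0, hence the sign

print(front_time_yr(0.4 * AU))   # ~6 yr: same order as the ~5 yr quoted above
```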
Differences in model parameters, such as the critical temperature \(T_{\rm crit}\), the critical disc surface density \(\Sigma_{\rm crit}\), the viscosity parameter values in the DZ and during the burst, and \(\dot{M}_{\rm feed}\), will all lead to different outburst \(\dot{M}\) curves. However, such outbursts always start at a significant fraction of an AU from the star (up to 2-3 AU; see Armitage et al., 2001; Zhu et al., 2010). Therefore, the key result of this section, that the mid-IR emission precedes the optical in MRI activation bursts by many years, is robust to parameter choices.
When this manuscript was in its final stages, Cleaver et al. (2023) presented a detailed study of accretion bursts in the DZ-MRI scenario. They assume a constant viscosity parameter, i.e., \(\alpha_{\rm hot}=\alpha_{\rm cold}=0.1\), and they also reduce the dust opacity by an order of magnitude to account for dust growth in the disc. Both of these assumptions modify the location of the critical point \(\Sigma_{\rm A}\) where the instability usually sets in on the lower stable branch (see Fig. 2 and Fig. 1 in Bell and Lin, 1994; Lodato and Clarke, 2004, respectively). We believe that these choices straighten the S-curve out and so largely prevent TI from operating, except for relatively small dips in the lightcurves (see figures in Cleaver et al., 2023). The authors also investigate a range of pre-burst initial \(\Sigma(R)\) profiles and the location of the seed perturbation that sets the MRI instability off, rather than letting the cycles repeat on their own, as we have done here. Despite all these differences, Cleaver et al. (2023) also find that for powerful FU Ori type outbursts the location of the starting perturbation is \(\sim 1\) AU in the DZ-MRI scenario, and they find that the optical burst lags the IR emission by as much as decades. Similar conclusions are obtained in the simpler analytical modelling by Liu et al. (2022), see their fig. 21. Thus, irrespective of modelling detail, the optical emission in DZ-MRI bursts lags the IR rise by years to several decades.
### External perturbers
FU Ori is a binary system (Perez et al., 2020), with the outbursting star FU Ori North referred to simply as FU Ori here. In this paper we completely ignored FU Ori South, even though binary interactions (Bonnell and Bastien, 1992) and stellar flybys (e.g., Vorobyov et al., 2021; Borchert et al., 2022) were shown to be able to produce powerful accretion outbursts in simulations. However, in application specifically to FU Ori, this scenario is probably not very likely for several reasons: (i) we now know (Lykou et al., 2022) that the active disc radius is very small, \(\lesssim 0.3\) AU, which is much smaller than predicted by the simulations cited above; (ii) the observed separation of the components in FU Ori exceeds 200 AU. If the two FU Ori components form a bound pair then the orbit must be extremely eccentric for a close passage to affect the inner disc of FU Ori. But this passage would have happened no less than 300 years ago rather than in 1937. This scenario is also challenged by point (iv) below; (iii) if the pair is unbound instead, then the stars must be travelling with an uncommonly high relative velocity for a typical star cluster environment to yield a close passage in 1937. The probability of a passage of two stars within a few AU of each other in this case is \(\sim 10^{-6}\) (private communication, Andrew Winter); (iv) accretion outbursts resulting from stellar flybys are strongly peaked events (e.g., Borchert et al., 2022) that do not resemble the surprisingly slow decline of FU Ori's brightness; (v) the two discs around the components of FU Ori are quite smooth (Perez et al., 2020), not showing any evidence for a recent violent interaction.
The binary nature of FU Ori is however important in constraining the large-scale (tens of AU) disc structure and evolution. _If_ our model for FU Ori is correct then this implies that discs in binary systems can hatch planets via GI, and that these planets are able to migrate into the inner 0.1 AU by the time the discs look relaxed and GI-spiral free. This will be addressed in future work.
## 6 Discussion
### Internal structure of GI planets
Figure 12: Lightcurves in various photometric bands for the \(\dot{M}_{\rm feed}=8\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) B23 model shown with the red curve in Fig. 11. Notice that, contrary to the TI-EE scenario (lower left panel in Fig. 11), the outbursts in the J and L bands precede the optical burst by 2 and 5 years, respectively.

In paper I, a power-law mass-radius relation for the model planet was assumed. Here, in SS3 and SS4, we computed the mass-radius relation of mass-losing planets with a stellar evolution code. In such a calculation one specifies an initial condition for the internal structure of the planet, which unfortunately is not well constrained for the youngest GI planets (see SS3.1). In realistic planets the composition \(Z\) and specific entropy \(S\) are functions of \(M\), the enclosed mass inside the planet. Here we investigated a simple scenario of a uniform composition gas envelope (metallicity \(Z=0.02\)), with the possible presence of an inert solid (\(Z=1\)) core in the centre, and several forms for \(S(M)\). \(S(M)\) cannot be a strongly decreasing function of \(M\), as vigorous convection would ensue, so we have two cases to consider. "Standard planets" are those initialised with constant entropy as per the default MESA approach (Paxton et al., 2013). Super-adiabatic planets are those with a positive gradient in \(S(M)\). In SS3 coreless (core mass \(M_{\rm c}=0\)) planets were studied, whereas in SS4 we allowed for the presence of an inert solid core with \(M_{\rm c}>0\). TI-EE outbursts depend strongly on the planet internal structure:
1. Standard coreless planets expand rapidly while losing mass. Such planets lead to runaway FUOR outbursts whereby the planet mass loss rate keeps increasing (and so the stellar \(\dot{M}\)) until the planet fills its Hill radius, \(R_{\rm H}\), at which point it is disrupted in a powerful and short burst.
2. Super-adiabatic coreless planets with a linear entropy function (\(dS/dM={\rm const}>0\)) may first contract and then eventually expand (Fig. 3). For small values of \(dS/dM\), the outbursts are similar to those of the standard planets, but tidal disruption of the planet is delayed. For large values of \(dS/dM\), the planet contracts so rapidly that the Extreme Evaporation process terminates while most of the planet is still intact. This leads to rapidly declining short outbursts.
3. Century-long continuous outbursts such as FU Ori's must be somewhat rare, because the planet radius must remain in a relatively narrow range, the shaded area in Fig. 3, between the Bondi radius, \(R_{\rm B}\), and \(R_{\rm H}\).
4. Bursts powered by EE of coreless planets, \(M_{\rm c}=0\), always end in powerful and short tidal disruption bursts.
5. In SS4 we tested an exponential form of the entropy function (eq. 10). The resulting planet \(M-R\) relation (Fig. 5) in general also shows contraction followed by expansion; however, the presence of a solid core leads to a rapid contraction when the envelope mass \(M_{\rm p}-M_{\rm c}\sim M_{\rm c}\). Extreme Evaporation of such planets does not necessarily end in a runaway disruption burst, and there is a planetary remnant with the core of mass \(M_{\rm c}\) and some atmosphere.
6. Planets that contract while losing mass yield repetitive shorter bursts, such as the green and blue curves in the bottom panel of Fig. 4. This may explain why the FUOR phenomenon is so widespread. Hartmann and Kenyon (1996) estimated that FUOR bursts happen a dozen times in the life of each young star while it is growing. Recent observations suggest that these outbursts recur most frequently in the class 0 phase (Hsieh et al., 2019; Zaki et al., 2022). For Class II sources, the recurrence time is \(\sim 10^{5}\) years, an order of magnitude longer than in Class I (Contreras Pena et al., 2019). However, since the duration of the class II phase is longer than class I, this may represent a non-negligible contribution to the FUOR event rate. If each accretion burst required one massive planet then this would require an extraordinarily high rate of planet formation by disc fragmentation, i.e., \(\sim\) dozens of GI planets per star. The repetitive bursts we see in Fig. 4 however reduce this requirement significantly, probably making it consistent with the few to ten GI clumps per star formed in disc fragmentation simulations (e.g., Vorobyov and Basu, 2006, 2010; Cha and Nayakshin, 2011).
7. Repetitive EE bursts may also be the norm, consistent with the fact that many recently discovered outbursts are not century-long events but last for only \(\sim\) a dozen years and may repeat (e.g., Fischer et al., 2022). Indeed, a number of FUORs have now been seen to undergo switching from the burst to near quiescence and back up, e.g., V346 Nor (Kospal et al., 2017, 2020), V899 (Ninan et al., 2015, 2023), V1647 (Ninan et al., 2013). Such behaviour is not expected for the DZ-MRI scenario, where the periods between bursts are expected to be \(\sim(10^{3}-10^{4})\) years. For the model parameters explored here, the recurrence time of the repetitive EE bursts is much shorter, e.g., tens of years (Fig. 7), closer to those observed.
8. Our best-fitting model for FU Ori (the green solid curve in the top panel of Fig. 7) suggests that the planet radius shrank by only \(\sim 30\%\) from the beginning of the burst until now, and that its mass is \(\sim\) half of its initial mass of \(6\,\mathrm{M}_{\mathrm{J}}\).
### Multi-wavelength tests of FUOR outbursts
Observations of episodic accretion bursts in multiple bands, especially during rapid light curve evolution, are a key tool to test outburst models (as previously suggested by, e.g., Clarke et al., 1990; Bell et al., 1995; Bourdarot et al., 2023; Cleaver et al., 2023). In SS5 we compared the light curve evolution in four bands (optical B and V, near-IR J and mid-IR L) for three models of FUOR bursts: the classical TI, the TI-EE, and the DZ-MRI.
1. As previously found by Bell et al. (1995), classical TI bursts start very close to the star and propagate outward (Fig. 8). Spectroscopically, such bursts start in the optical, with mid-IR emission rising some months later (Fig. 9).
2. In our best TI-EE scenario for FU Ori, the burst starts behind the planet at \(R\sim 0.1\) AU, as suggested by Lodato & Clarke (2004), rather than in the inner disc (Fig. 10). The burst first becomes apparent in the mid-IR, with the optical emission delayed by a few months to a year (Fig. 11).
3. In the DZ-MRI scenario, the outburst starts at \(R\sim(0.5-2)\) AU. Such outbursts rise in the IR years to decades before the optical emission does; these delays are much longer than in the TI-EE model (note that our results reproduce the simpler modelling by Liu et al., 2022, cf. their fig. 21). Further, Cleaver et al. (2023) have recently found optical delays of several decades for bright FUORs.
There are only two sources for which this phase of the burst was well observed in the modern era of multiwavelength observations (see SS4.1.2 and fig. 6 in Fischer et al., 2022), and in both cases the optical burst started after the IR rise in the lightcurve. In _Gaia_-17bpi (Hillenbrand et al., 2018) the optical delay is at least a year, whereas in _Gaia_-18dvy (Szegedi-Elek et al., 2020) the delay is probably somewhat shorter. Classic TI predicts that the optical precedes the IR, so this scenario is ruled out for both sources. The DZ-MRI and TI-EE scenarios both predict the right sign for the delay, i.e., the optical coming after the IR. Interestingly, however, Cleaver et al. (2023) show that the delays correlate with the outburst \(\dot{M}\) in the DZ-MRI picture. In particular, for the weak burst in _Gaia_-17bpi, where the peak accretion rate is estimated to be \(\sim(1-6)\times 10^{-7}\)\(\mathrm{M}_{\odot}\) yr\({}^{-1}\) (Rodriguez & Hillenbrand, 2022), the delay is predicted to be \(\sim 1\) year. For FU Ori-like outbursts the delay is several decades. The _Gaia_-18dvy peak accretion rate is estimated to be smaller than but comparable to FU Ori's, i.e., \(\sim 6\times 10^{-6}\)\(\mathrm{M}_{\odot}\) yr\({}^{-1}\), thus one to two orders of magnitude larger than in _Gaia_-17bpi. The DZ-MRI scenario hence predicts a much longer delay in _Gaia_-18dvy compared with _Gaia_-17bpi, but this is not observed. While here we studied FU Ori-like outbursts and plan to address weaker outbursts in the near future, we note that the optical delays cannot be very different from a year in our scenario: TI-EE outbursts always start at a relatively small radius, \(\sim 0.1\) AU. Thus, currently available multiband photometry observations of early lightcurve rises are best consistent with our scenario. We also note that in _Gaia_-18dvy the outer radius of the bright (active) disc is surprisingly small, \(\sim 0.1\) AU (Section 4.3 in Szegedi-Elek et al., 2020). As we showed in paper I, \(R_{\mathrm{act}}\) values much smaller than \(\sim 1\) AU challenge the DZ-MRI scenario strongly but are consistent with the TI-EE scenario, especially early on in the outburst.
## 7 Acknowledgements
James Owen is warmly thanked for providing his MESA setup for calculations of planet contraction and evaporation, which we modified for our purposes here, and for useful comments on the draft. The authors are grateful to Agnes Kospal for illuminating discussions of recent FUOR observations. Allona Vazan is thanked for discussions and comments on a very early draft. Andrew Winter is thanked for discussion of the binary flyby scenario for FU Ori. The authors acknowledge funding from the UK Science and Technologies Facilities Council, grant No. ST/S000453/1. This research used the ALICE High Performance Computing Facility at the University of Leicester, and the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising.
## 8 Data Availability
The data obtained in our simulations can be made available on reasonable request to the corresponding author.
## Appendix A DZ-MRI disc modelling
It has been argued in paper I that the MRI activation model for FU Ori (Armitage et al., 2001, A01 hereafter) is inconsistent with observations of the source for two main reasons: (1) the _active disc region_ is too large compared with the mid-IR interferometric observations of Lykou et al. (2022), and (2) the inner disc must go through TI cycles, which contradicts the surprising long-term stability of the FU Ori lightcurve.
Concerning point (1), after paper I was accepted, Bourdarot et al. (2023) (B23 hereafter) showed that their model with an MRI activation outburst is in good agreement with the observations. This appears to contradict the results of paper I; however, we note the differences in definitions here. B23 define the _outburst_ region through its observed brightness; therefore, it is really an emitting region bright in the H and K photometric bands. In contrast, in Lykou et al. (2022) and in paper I, an _active disc_ region is defined as the disc region where \(\dot{M}\approx\dot{M}_{*}\). Outside of that region the local energy dissipation rate is set to zero in Lykou et al. (2022); in paper I it is not zero but drops significantly with distance, in qualitative similarity to their model. This is best seen in the bottom left panel of Fig. 10: the effective energy dissipation rate is a factor of \(\sim 5\) smaller in the disc beyond 0.4 AU compared to that in the inner 0.1 AU.
Our preliminary investigation shows that the size of the _emitting region_ in the optical or near-IR bands could be independent of what happens in the disc beyond \(\sim 0.2\) AU for accretion rates of a few \(\times 10^{-5}\)\(\mathrm{M}_{\odot}\) yr\({}^{-1}\): the disc in these regions, whether active or inactive, emits little. In contrast, mid-IR bands are more sensitive to the disc emission at \(\gtrsim 0.1\) AU, where our model and the MRI activation models may be more divergent. A detailed model comparison must however include irradiation of the outer passive disc by the radiation emitted from the inner disc (with a radiative transfer calculation similar to those performed by, e.g., Zhu et al., 2008; Lykou et al., 2022) and we leave this to future work.
Point (2) appears to be robust, and we see no physically motivated way to turn TI off in the MRI activation scenario. This statement does not contradict previous literature. A01 use a computational grid of 120 radial mesh points that are uniformly distributed in \(R^{1/2}\) from \(R\approx 0.023\) AU (\(5R_{\odot}\)) to 40 AU. Such a grid covers the \(R<0.1\) AU disc with only 4 grid points and is far coarser than our grid, which is logarithmic in \(R\). We found that when we degrade our grid resolution to match the A01 inner grid, our disc models also show no TI (whereas TI is present for the same model parameters at our default, higher resolution). Similarly, Zhu et al. (2009c) find no TI bursts in their 2D simulations of the MRI activation scenario when the inner boundary of their computational domain exceeds 0.1 AU, but the instability does appear when this region is resolved in their simulations. For the purposes of this section we cannot follow these approaches, since the inner 0.1 AU disc emits almost all of the disc emission in the optical.
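The resolution statement above is easy to verify; a short check, assuming both grid endpoints are mesh points:

```python
import numpy as np

# 120 mesh points uniform in R^(1/2) between 0.023 and 40 AU (as in A01):
x = np.linspace(np.sqrt(0.023), np.sqrt(40.0), 120)
print(np.sum(x ** 2 < 0.1))   # -> 4 points inside 0.1 AU, as stated above
```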
B23 explore the MRI activation scenario with and without TI included in their 1D time-dependent disc calculation. The no-TI scenario is found to fit FU Ori lightcurves better. To exclude TI, the authors set the gas opacity to \(\kappa=0.02T^{0.8}\) cm\({}^{2}\)g\({}^{-1}\). With this opacity choice, we also find no thermal instability7 cycles in our disc models (here and in paper I we use the more complete dust and gas opacity from Zhu et al., 2009a). Although there does not seem to be a clear justification for this simplified opacity form, its use is illuminating as it allows one to contrast the outburst spectral evolution of the MRI activation scenario with that of the classical TI and TIP-EE scenarios studied earlier. In the rest of this section we therefore use the "no-TI" opacity from B23.
Footnote 7: This is expected. For thermal instability to take place in a vertically integrated disc model, the condition \(d\ln Q^{-}/d\ln T<1\) needs to be satisfied for a constant \(\alpha\) disc, where \(Q^{-}\) is the vertically integrated disc cooling rate. This requires (Frank et al., 2002) disc opacity \(\kappa\) to increase with \(T\) faster than \(T^{4}\), which is not the case for this shallow power law opacity.
In Figure 11 we show the time evolution of mass accretion rate onto the star for five models that follow disc viscosity choices as in B23 (coloured lines) for various values of \(\dot{M}_{\rm feed}\). We also present three models with parameter choices exactly as in A01 (black lines)8. All the curves were shifted in time to display them on the same scale.
Footnote 8: The two models are very similar, but have some differences in parameter values. The outer radius of the disc and the stellar mass are 40 AU and 1M\({}_{\odot}\) in the model from A01, while in the model from B23 they are 30 AU and 0.5M\({}_{\odot}\), respectively. Furthermore, in B23 a modified prescription for the \(\alpha\) parameter in the MRI active zone is used (see their Eq. (4)). Also, B23 use \(\Sigma_{\rm crit}=10\) g cm\({}^{-2}\), which is an order of magnitude lower than in A01.
In the B23 models, the initial sharp rise in mass accretion rate reaches \(\dot{M}_{\rm feed}\approx 5\times 10^{-5}\) M\({}_{\odot}\) yr\({}^{-1}\) during about 1 year. Depending on the value of \(\dot{M}_{\rm feed}\), the stellar \(\dot{M}\) shows either a second rise followed by a gradual decrease, or a gradual decrease alone. We believe that the two-step rise to maximum in some of our models is due to our more complete equation of state, which includes the change in the mean molecular weight of the gas as Hydrogen is ionised, while this was fixed at 2.3\(m_{p}\) by B23 and A01. Such a two-step rise in brightness was not observed in FU Ori, but for what follows it will not be important anyway.
As in Fig. 7, the pink shaded area in Fig. 11 is the approximate time evolution of the mass accretion rate for FU Ori according to Lykou et al. (2022). The accretion rates in B23 models with \(\dot{M}_{\rm feed}<4\times 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) have rise times similar to the observed ones for FU Ori, and the model with \(\dot{M}_{\rm feed}=8\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) (the thick red line) comes closest to the desired \(\dot{M}\) evolution.
## Appendix B TI Presence in MRI Activation Scenario
We claimed in paper I that the DZ-MRI scenario model for FU Ori should show TI outbursts, which challenges this scenario. In §5.3 we used the simplified opacity form from B23 to study bursts which would start at the inner edge of the DZ. Here we run the same model but with realistic opacities from Zhu et al. (2009a). We find that TI develops in the inner disc for all the \(\dot{M}_{\rm feed}\) values shown in Fig. 11. In Figure 11 we show the stellar mass accretion rates (top panel) and the lightcurves in various bands (bottom panel) zoomed-in on one of the MRI bursts in the model with \(\dot{M}_{\rm feed}=8\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\). The duration of the DZ-MRI burst is about 2500 yrs, but during the outburst much shorter (a few decades long) TI outbursts are present. In the inset we show a sequence of such TI outbursts. Clearly, such variable accretion is not consistent with FU Ori observations.
|
2309.15292 | Scaling Representation Learning from Ubiquitous ECG with State-Space
Models | Ubiquitous sensing from wearable devices in the wild holds promise for
enhancing human well-being, from diagnosing clinical conditions and measuring
stress to building adaptive health promoting scaffolds. But the large volumes
of data therein across heterogeneous contexts pose challenges for conventional
supervised learning approaches. Representation Learning from biological signals
is an emerging realm catalyzed by the recent advances in computational modeling
and the abundance of publicly shared databases. The electrocardiogram (ECG) is
the primary researched modality in this context, with applications in health
monitoring, stress and affect estimation. Yet, most studies are limited by
small-scale controlled data collection and over-parameterized architecture
choices. We introduce \textbf{WildECG}, a pre-trained state-space model for
representation learning from ECG signals. We train this model in a
self-supervised manner with 275,000 10s ECG recordings collected in the wild
and evaluate it on a range of downstream tasks. The proposed model is a robust
backbone for ECG analysis, providing competitive performance on most of the
tasks considered, while demonstrating efficacy in low-resource regimes. The
code and pre-trained weights are shared publicly at
https://github.com/klean2050/tiles_ecg_model. | Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemysław Kazienko, Stanisław Saganowski, Shrikanth Narayanan | 2023-09-26T22:08:19Z | http://arxiv.org/abs/2309.15292v1 | # Scaling Representation Learning from Ubiquitous ECG with State-Space Models
###### Abstract.
Ubiquitous sensing from wearable devices in the wild holds promise for enhancing human well-being, from diagnosing clinical conditions and measuring stress to building adaptive health promoting scaffolds. But the large volumes of data therein across heterogeneous contexts pose challenges for conventional supervised learning approaches. Representation Learning from biological signals is an emerging realm catalyzed by the recent advances in computational modeling and the abundance of publicly shared databases. The electrocardiogram (ECG) is the primary researched modality in this context, with applications in health monitoring, stress and affect estimation. Yet, most studies are limited by small-scale controlled data collection and over-parameterized architecture choices. We introduce **WildECG**, a pre-trained state-space model for representation learning from ECG signals. We train this model in a self-supervised manner with 275,000 10s ECG recordings collected in the wild and evaluate it on a range of downstream tasks. The proposed model is a robust backbone for ECG analysis, providing competitive performance on most of the tasks considered, while demonstrating efficacy in low-resource regimes. The code and pre-trained weights are shared publicly at github.com/klean2050/tiles_ecg_model.
Electrocardiography, Ubiquitous computing, Self-supervised learning, State-space models
## 1. Introduction
Artificial Intelligence (AI) has made significant inroads into human-centered signal modeling, notably in the realms of behavioral analysis (Bartosz, 2017) and health (Kraus et al., 2018). This progress benefits primarily from the algorithmic development of deep learning models and the substantial effort in curating publicly-shared datasets (Kraus et al., 2019). The rapid advances of deep learning in various application domains, such as computer vision (CV), speech, and natural language processing (NLP) are critically dependent on the availability of large datasets, allowing for designing and training large-scale neural networks. Within the medical domain of biosignal analysis, supervised learning algorithms have been employed to improve diagnostic performance and accelerate biomarker detection in many areas, including dermatology (Kumar et al., 2018), ophthalmology (Kumar et al., 2018), as well as in psychology, physical health and well-being (Kumar et al., 2018; Kumar et al., 2018)1.
Footnote 1: **Pre-print, currently under review**. POC: [email protected]
Driven by successful applications in multiple fields within health and well-being, AI technologies are increasingly in demand for ubiquitous sensing and modeling of human states in everyday settings, including home and workplace (Kraus et al., 2018; Kraus et al., 2018; Kraus et al., 2018; Kraus et al., 2018; Kraus et al., 2018). A significant portion of research and understanding of human biosignals has focused on modeling physiological responses to external stimuli and from constrained interaction environments. These approaches typically consider signals derived from cardiac activity, respiration patterns, body temperature, electrodermal activity and even neural (brain) activity (Kraus et al., 2018). Cardiac activity, particularly the electrocardiogram (ECG), has been a prominent modality choice due to its well-recognized signal patterns and clinically-validated significance (Kraus et al., 2018; Kraus et al., 2018). While it requires low-cost recording equipment, ECG offers enormous diagnostic potential and hence has seen increasing research efforts, notably through the creation of shared databases and novel, data-driven modeling approaches.
However, the transition from monitoring in clinical settings to sensing in the wild introduces novel challenges toward more generalized capabilities for understanding cardiac activity across diverse living contexts. Unlike text or image domains where abundant large datasets have enabled large (foundation) self-supervised models (Kumar et al., 2017), the state of the art in biosignal models lags behind. One practical issue related to data acquisition is the need for long-term recording capabilities and the resulting cost of obtaining biological signals from ecologically valid contexts (Kumar et al., 2017). This has challenged the creation of datasets with high-quality recordings from a large number of participants and diverse backgrounds. Furthermore, the intricacy of extracting meaningful insights from vast and heterogeneous ECG data, with sparse or no accompanying (meta) information, demands methods that can learn and infer from these data in self-supervised ways. Such methods are also motivated by the need to address inherent biases and subject heterogeneity in bio-behavioral responses that hinder model performance and reliability. Another critical challenge involves engineering models that can adeptly capture the structure and temporal dependencies of ECG without the need of scaling to large and overparameterized models (Kumar et al., 2017) that increase the risk of overfitting and are not practical for mobile deployment. This involves striking a delicate balance between model complexity and efficiency. Given the multifaceted nature of these challenges, there is an imminent need for developing methodologies toward practical, robust, and reliable solutions in the realm of ubiquitous ECG analysis.
## 2. Contributions
In this study, toward addressing those challenges, we propose a framework to train a model on large-scale public data to extract general-purpose vector representations for the ECG signal. Our contributions can be summarized as follows:
* Our model, called **WildECG**, is trained on TILES (Wang et al., 2017), one of the largest publicly available biosignal data collections in the wild, manifesting a wide range of variability and subject heterogeneity for better generalization.
* To minimize the impact of noisy and biased data annotations, WildECG is trained in a self-supervised manner to identify distortions automatically induced on ECG samples at training time. This enables us to test performance on a variety of downstream tasks related to ECG.
Figure 1. Our proposed model extracts vector representations from input, single-lead ECG signals, and can be used both as a backbone encoder and feature extractor across multiple different sensing tasks.
* Our model incorporates a lightweight architecture based on state-space models that are efficient in modeling sequences with long temporal dependencies. By using a small number of parameters, WildECG further reduces the risk of overfitting and is suitable for deployment on the edge. This is particularly important for user privacy and data security by handling all computations locally.
In sum, the proposed framework offers an efficient way for extracting robust ECG representations that perform competitively across multiple downstream tasks including human interaction modeling, affect recognition, and disease prediction. WildECG outperforms multiple architectures and training algorithms, while retaining discriminative information in low-resource settings and during minimal fine-tuning.
## 3. Background
### Electrocardiography
Electrocardiography is a non-invasive technique for recording the electrical activity of the heart. The resulting signal, called the electrocardiogram (ECG), provides information about the functioning and structure of the heart, including the timing and regularity of its rhythm, significant underlying conditions or abnormalities, along with psychological states such as stress or emotional arousal. ECG has a characteristic structure that consists of specifically documented signatures: the P wave, the R peak and broadly the QRS complex, and the T wave, each of which corresponds to a distinct phase of the cardiac cycle. In addition to these waves, one can also identify intervals on the ECG that hold important information, e.g., PR and QT intervals, as well as the RR interval which is used to calculate heart rate and heart rate variability [(31; 45; 46; 68)].
ECG is acquired through electrodes that are placed on the surface of the skin, usually at the chest or the wrists. The most common sensor configurations include 12-lead, 3-lead, or single-lead placements. The latter configuration, applied through wearable straps or wristbands, is a practical choice for measurements made in naturalistic settings, due to the ease of placement and minimal interference with the subject. To facilitate applications in both clinical and naturalistic domains, we restrict our study to single-lead ECG data.
### Self-Supervised Learning
Self-supervised learning (SSL) is an emerging machine learning paradigm that provides an effective way to learn meaningful data representation without the need to acquire explicit labels. In contrast to supervised learning, which relies on labeled data, SSL leverages the intrinsic structure and relationships within the data to create pseudo-labels or tasks to learn from. As such, it holds several advantages over conventional supervised approaches for our task, as it avoids the need for reliable annotations in large quantities, which would also constrain the scope of the model.
Most researchers distinguish two main types of SSL frameworks: (1) generative and (2) contrastive [(37; 38; 43; 61; 72; 74)]. The generative models (e.g., autoencoders) learn representations by reconstructing or generating the original input data using masked or corrupted data as input, which defines their pretext task. Contrastive methods, on the other hand, train a model by contrasting the representations of semantically same data (e.g., two augmented views, positive samples) to other distant data (negative samples). Additional variants of SSL have also been proposed in the literature, including predictive [(43; 72; 74)], property-based [(38)] or pretext learning [(61)] objectives, as well hybrid [(38; 74)] or cross-modal [(37; 61)] types.
Several promising approaches to SSL have been implemented, primarily in natural language processing (NLP) [(14; 3)] and computer vision [(10; 48)]. In the context of time series data, SSL has been used to learn representations for various tasks such as anomaly detection [(60)], frequency coupling [(78)], and masking [(32)]. Self-supervised learning of biosignals and ECG has already shown promising results in health applications and behavioral analysis. We include a comprehensive review of related studies in Appendix 8. In this work, we design
a custom data augmentation approach to automatically identify distortions induced on ECG samples, as a strategy to leverage the inherent structure and temporal dependencies that characterize the ECG signal.
### State-Space Models
State-space models (SSM) are a recently introduced category of deep neural networks (Krizhevsky et al., 2014) that were proposed to efficiently model long-term sequences, i.e., signals with either long duration and/or high sampling rate. Hence, the ECG modality constitutes a promising candidate for adopting state-space model architecture. SSMs draw intuition from both convolutional and recurrent network architectures. The continuous-time SSM converts a 1-D input signal \(u(t)\) into a latent state \(x(t)\) before projecting it onto a 1-D output \(y(t)\):
\[\begin{split} x^{\prime}(t)&=Ax(t)+Bu(t)\\ y(t)&=Cx(t)+Du(t)\end{split} \tag{1}\]
For discrete-time sequences that are sampled at a step \(\Delta\), Eq. 1 can be mapped to the recurrence shown in Eq. 2, using the bilinear method (Krizhevsky et al., 2014) to convert \(A\) into an approximation \(\bar{A}\):
\[\begin{split} x_{k}=\bar{A}x_{k-1}+\bar{B}u_{k}\quad y_{k}=Cx_{k} +Du_{k}\\ \bar{A}=(I-\Delta/2\cdot A)^{-1}(I+\Delta/2\cdot A)\\ \bar{B}=(I-\Delta/2\cdot A)^{-1}\Delta B\end{split} \tag{2}\]
Here \(D=0\) (Krizhevsky et al., 2014). Eq. 2 is a sequence-to-sequence map, and the recurrence allows the discrete SSM to be computed like a recurrent network with hidden state \(x_{k}\) and transition matrix \(\bar{A}\). Eq. 2 is also equivalent to a discrete convolution with kernel \(\bar{K}\) (Krizhevsky et al., 2014):
\[\bar{K}=\left(C\bar{B},C\overline{AB},C\bar{A}^{2}\bar{B},\ldots\right), \quad y=\bar{K}*u \tag{3}\]
Thus, SSMs can be viewed as special cases of convolutional and recurrent layers, inheriting their learning efficiency. Gu et al. (Gu et al., 2017) also contributed an efficient way of evaluating \(\bar{K}\).
The _Structured State Space for Sequence Modeling_ (S4) architecture was proposed in (Krizhevsky et al., 2014) to model sequences more efficiently than standard SSMs, also showing the capacity to capture long-range temporal dependencies. S4 is a particular instantiation of the SSM, where the matrix \(A\) is parameterized as diagonal plus low-rank (DPLR), which allows faster repeated computations. To capture long-range dependencies, S4 initializes \(A\) as HiPPO (Krizhevsky et al., 2014), so that the state \(x_{k}\) can memorize the history of the input \(u_{k}\). At the same time, HiPPO preserves the DPLR form, as shown in (Krizhevsky et al., 2014). Hence, the core S4 module is a linear, 1-D sequence mapping; however, it handles high-dimensional features by defining independent copies of itself, and then mixing features with a position-wise linear layer. Nonlinear activations and dropouts in-between these layers provide the non-linearity of the whole architecture.
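To make the discretization and kernel construction above concrete, the following minimal NumPy sketch implements Eqs. (2)-(3) for a toy dense \(A\). All sizes and values are illustrative placeholders; the actual S4 layer replaces this with the HiPPO/DPLR parameterization and a much faster kernel evaluation.

```python
import numpy as np

def discretize(A, B, step):
    """Bilinear discretization of the continuous SSM, as in Eq. (2)."""
    I = np.eye(A.shape[0])
    left = np.linalg.inv(I - step / 2.0 * A)
    return left @ (I + step / 2.0 * A), left @ (step * B)

def ssm_kernel(A_bar, B_bar, C, length):
    """Unroll the discrete SSM into the convolution kernel of Eq. (3)."""
    K, x = [], B_bar
    for _ in range(length):                # K_k = C @ A_bar^k @ B_bar
        K.append((C @ x).item())
        x = A_bar @ x
    return np.array(K)

# Toy sizes and matrices; S4 itself uses the structured HiPPO/DPLR A.
N, L, dt = 4, 16, 1.0
rng = np.random.default_rng(0)
A = -np.eye(N) + 0.1 * rng.standard_normal((N, N))   # stable toy dynamics
B = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
u = rng.standard_normal(L)

A_bar, B_bar = discretize(A, B, dt)
K = ssm_kernel(A_bar, B_bar, C, L)
y = np.convolve(u, K)[:L]                 # y = K * u, causal convolution
```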
## 4. Method
### Pre-Processing
We adopt a universal approach in processing all ECG data used in the pre-training and fine-tuning sessions. The following steps aim to alleviate the impact of discrepancies between different data collection processes, such as the performed task, sampling rate, equipment noise, and subject-specific and other artifacts that induce different spatiotemporal properties. First, ECG signals are downsampled to 100 Hz and smoothed with a moving average kernel to remove powerline interference (Sandel, 2017). The specific sampling frequency provides a balance between preserving relevant information and reducing computational requirements. The majority of ECG datasets are recorded at 100 Hz or higher, and it has been reported (Sandel, 2017; Wang et al., 2018) that downsampling to 100 Hz does not compromise model performance. Next, we apply a high-pass Butterworth filter at 0.5 Hz. Finally, we perform subject-wise, z-score normalization. The signals are then segmented into non-overlapping windows of 10 second duration. During pre-training, where each recording is 15 seconds long, 10 s windows are randomly extracted at training time.
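A minimal SciPy sketch of this pipeline is shown below. The moving-average length and the Butterworth filter order are assumptions not specified above, and normalization is applied per record rather than per subject for brevity.

```python
import numpy as np
from scipy.signal import butter, resample, sosfiltfilt

def preprocess_ecg(sig, fs_in, fs_out=100, win_s=10, ma_len=5):
    """Preprocessing sketch; ma_len and the filter order are assumptions."""
    sig = resample(sig, int(len(sig) * fs_out / fs_in))    # 1) to 100 Hz
    sig = np.convolve(sig, np.ones(ma_len) / ma_len,
                      mode="same")                          # 2) smoothing
    sos = butter(4, 0.5, btype="highpass", fs=fs_out, output="sos")
    sig = sosfiltfilt(sos, sig)                             # 3) 0.5 Hz high-pass
    sig = (sig - sig.mean()) / (sig.std() + 1e-8)           # 4) z-score
    n = fs_out * win_s                                      # 5) 10 s windows
    return sig[: len(sig) // n * n].reshape(-1, n)
```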
### Signal Transformations
We base our proxy tasks for pre-training on evaluating various signal transforms applied to ECG samples. To this end, we have implemented a Python module of ECG-tailored transformations that we share publicly2. The ECG-augmentations library (Beng et al., 2017) currently includes versatile transforms of multi-lead ECG signals. Implemented augmentations include the following (a simplified sketch of a few of them follows the list):
Footnote 2: [https://github.com/klean2050/ecg-augmentations](https://github.com/klean2050/ecg-augmentations)
* **Masking**: We currently support random masking or masking of PR and QRS intervals, whereas the user can also specify the ratio of intervals to be masked. Detection of R peaks is done using NeuroKit2(Nenrouz et al., 2018).
* **Cropping**: Random (\(r\)) cropping of an ECG sub-sequence given the desired length \(\lambda\): \(s^{\prime}=s[r:r+\lambda]\)
* **Noise**: We support both additive white noise and random wander, with adjustable signal-to-noise ratio.
* **Permutation**: Each ECG signal is divided into \(m\leq 10\) segments, which are shuffled, i.e., by randomly perturbing their temporal order. Each segment has a set minimum length of 10% of the total signal length.
* **Time Warping**: Randomly selected segments of the original ECG are stretched or squeezed along the temporal axis, through interpolation. The output signal is cropped or zero-padded when stretched or squeezed, respectively.
* **Scaling**: The ECG magnitude is multiplied by a random scalar \(0<\alpha<5\): \(s^{\prime}[n]=\alpha s[n]\), \(n=0,...,N\)
* **Inverting**: Implemented by reflecting the input signal along the temporal axis: \(s^{\prime}[n]=s[N-n]\), \(n=0,...,N\)
* **Reversing**: Simply implemented by scaling the input signal using \(\alpha=-1\): \(s^{\prime}[n]=-s[n]\), \(n=0,...,N\)
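The following simplified sketch illustrates a few of the listed transforms. It is a minimal approximation: the actual ECG-augmentations library operates on multi-lead signals and supports R-peak-aware interval masking, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng()

def scale(s):                # random magnitude scaling, 0 < alpha < 5
    return rng.uniform(0.0, 5.0) * s

def reverse(s):              # s'[n] = -s[n]
    return -s

def invert(s):               # s'[n] = s[N - n], flip along time
    return s[::-1].copy()

def permute(s, m=4):         # shuffle m contiguous segments
    segs = np.array_split(s, m)
    rng.shuffle(segs)
    return np.concatenate(segs)

def random_crop(s, length):  # keep a random sub-sequence of given length
    r = rng.integers(0, len(s) - length + 1)
    return s[r : r + length]
```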
### Pre-Training Objective
Most SSL studies apply either masked sample reconstruction or contrastive learning objectives to pre-train their respective models. Since there is no established training algorithm for physiological signals, WildECG considers elements from both SSL approaches. Our objective aims to identify which signal transformations are applied to a sample ECG, where each signal is augmented randomly using at most four out of all the available transforms. Each transform is selected based on a set probability, so it is possible that some samples are input without any augmentation. We formulate this task as a multi-label classification task with nine classes (eight possible transformations plus the original signal).
This task draws from both predominant SSL approaches. Our first motivation comes from masked reconstruction objectives by including masking augmentations in our pre-training framework. These include both random masking of signal patches and masking of specific ECG intervals. Second, we follow the contrastive learning paradigm in the sense that we evaluate the impact of induced augmentations in the data. However, we intentionally choose to predict applied transformations over the conventional contrastive approach, in which we would identify the similarity of two distorted samples of the same input. The reason is that this objective would focus on invariant ECG features that are primarily subject-dependent. On the other hand, identifying distortions is intuitive for our scope, since the model focuses on ECG abnormalities that could potentially hold diagnostic information.
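A minimal sketch of this objective is given below. The per-transform selection probability and the exact encoding of the "original" class are our assumptions; only the multi-label formulation with nine classes follows the description above.

```python
import torch
import torch.nn as nn

N_TRANSFORMS = 8   # masking, cropping, noise, permutation, warping,
                   # scaling, inverting, reversing

def make_example(sig, transforms, p=0.3, max_k=4):
    """Apply at most max_k random transforms; the multi-hot label covers
    the 8 transform classes plus one class for the untouched signal."""
    y = torch.zeros(N_TRANSFORMS + 1)
    picked = [i for i in range(N_TRANSFORMS) if torch.rand(1).item() < p]
    for i in picked[:max_k]:
        sig = transforms[i](sig)
        y[i] = 1.0
    if not picked:
        y[-1] = 1.0                    # original, unaugmented signal
    return sig, y

criterion = nn.BCEWithLogitsLoss()     # multi-label objective
# loss = criterion(model(x_aug), y) over a batch of augmented windows
```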
### Model Architecture
Our proposed model inherits the S4 model as the backbone architecture, as it features critical elements that are desirable and intuitive in ECG analysis. As mentioned before, S4 has demonstrated promising performance in modeling long-range sequences with dependencies over thousands of timesteps, which is a current limitation of state-of-the-art models like the Transformer (Srivastava et al., 2017) architecture. ECG is a sequence of that type, with its sampling rate ranging from 100 to 1000 Hz. Also, S4 is implicitly a continuous-time model, making it well-suited to waveform signals. Indeed, prior work (Srivastava et al., 2017) has shown that variants of S4 provide excellent performance in classifying cardiovascular conditions based on controlled ECG data.
Here we employ a simplified version of the original S4, consisting of a linear encoder, six S4 blocks, and a linear decoder. Each block consists of a Layer Normalization module, the \(\bar{K}\) estimation module, a GELU (GELU, 2018) activation, Dropout, and an output projector. The blocks are connected with residual connections, as shown in Figure 2. The input and output dimension is set to 256 and dropout layers of 20% are applied. For the pre-training phase, we attach a linear layer to the decoder output and replace it during fine-tuning.
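The following PyTorch skeleton sketches this architecture. The `S4Layer` placeholder stands in for the actual S4 sequence-mixing module (not reproduced here), and the temporal pooling before the output layer is an assumption.

```python
import torch.nn as nn

class S4Layer(nn.Module):
    """Placeholder for the actual S4 layer of Gu et al.; a depthwise
    temporal convolution keeps this sketch runnable."""
    def __init__(self, d_model, kernel=15):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel,
                              padding="same", groups=d_model)

    def forward(self, x):                       # x: (batch, time, d_model)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)

class S4Block(nn.Module):
    """One block of Figure 2: LayerNorm -> S4 -> GELU -> Dropout ->
    output projection, wrapped in a residual connection."""
    def __init__(self, d_model=256, dropout=0.2):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.s4 = S4Layer(d_model)
        self.act = nn.GELU()
        self.drop = nn.Dropout(dropout)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        return x + self.proj(self.drop(self.act(self.s4(self.norm(x)))))

class WildECG(nn.Module):
    def __init__(self, d_model=256, n_blocks=6, n_out=9):
        super().__init__()
        self.encoder = nn.Linear(1, d_model)    # sample-wise linear encoder
        self.blocks = nn.Sequential(*[S4Block(d_model) for _ in range(n_blocks)])
        self.decoder = nn.Linear(d_model, d_model)
        self.head = nn.Linear(d_model, n_out)   # swapped out at fine-tuning

    def forward(self, x):                       # x: (batch, time, 1)
        h = self.blocks(self.encoder(x)).mean(dim=1)   # pool over time
        return self.head(self.decoder(h))
```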
## 5. Experiments
Below we share details about the datasets of this study (Table 1). Our experimentation covers 7 widely used ECG datasets and targets settings where the 1-lead ECG modality is prominent and the evaluation criteria are clearly defined. We thus omitted datasets such as DEAP (Kumar et al., 2017), AMIGOS (Kumar et al., 2017), or DREAMER (Garwal et al., 2018) as EEG-oriented, and also medical datasets that depend heavily on 12-lead ECG recordings.
### Pre-training: TILES Dataset
Tracking Individual Performance with Sensors (TILES) (Zhu et al., 2017; Wang et al., 2018) is a research project that has collected multimodal data sets for the analysis of stress, task performance, behavior, and other factors pertaining to professionals in a
| **Dataset** | **ECG Setting** | **# Subjects** | **# Classes** |
| --- | --- | --- | --- |
| TILES (Zhu et al., 2017) | 24-h monitoring | 200 | N/A |
| PTB-XL (Zhu et al., 2017) | clinical acquisition | 18869 | 5 |
| LUDB (Garwal et al., 2018) | clinical acquisition | 200 | 2 |
| WESAD (Wesand, 2018) | activity engagement | 15 | 3 |
| CASE (Zhu et al., 2018) | video watching | 30 | \(\star\) |
| AVEC-16 (Garwal et al., 2018) | dyadic interaction | 27 | \(\star\) |
| SWELL-KW (Zhu et al., 2017) | workplace stress | 25 | 2 |

Table 1. Overview of the study’s datasets. \(\star\) denotes regression.
Figure 2. The architecture of the proposed ECG backbone model, following a simple version of the original S4 (Kumar et al., 2017). The model consists of six S4 blocks, connected through residual connections. Linear classifiers are attached on top.
high-stress workplace environment. Biological, environmental, and contextual data were collected from hospital nurses, staff, and medical residents both in the workplace and at home over a ten week period. Labels of human experiences, such as stress, anxiety, and affect, were collected using psychologically validated questionnaires which were administered at different times.
In the present study, we use the ECG data from the publicly available TILES 2018 dataset (Krishnan et al., 2018) to pre-train a general-purpose ECG model. Each participant had their ECG recorded for 15 seconds every 5 minutes during their work hours, for a total of 10 weeks. There were 213 participants in total, 200 of whom had agreed to wear a bioshirt that enabled high quality ECG data collection, making the aggregate number of samples conducive to pre-training a large ECG representation model. Since the ECG was collected in the wild, we apply a quality check on the available signals by attempting to identify RR intervals. All days for which the total detection rate is lower than 90% are discarded, leading to approximately 275,000 ECG samples from 168 individuals.
### Fine-tuning: In-the-wild Sensing
#### 5.2.1. AVEC-16 Multimodal Affect Recognition Sub-challenge
The multimodal affect recognition sub-challenge (MASC) of AVEC-16 (Krishnan et al., 2018) stems from the REmote COLIaborative and Affective interactions (RECOLA) dataset (Krishnan et al., 2018). RECOLA included continuous multimodal recordings during dyadic interactions via video conferences. The complete dataset contains audio, visual, and physiological information from 27 French-speaking participants. The single-channel ECG data used in this work were sampled at 250 Hz and subsequently filtered using a band-pass filter at 3-27 Hz. The labels are continuous ratings for arousal and valence at 40ms intervals, throughout the first five minutes of the complete recordings.
#### 5.2.2. SWELL-KW
This dataset (Krishnan et al., 2018) aimed at analyzing employees' emotional states and workplace stress under three scenarios: _normal_, in which participants performed various office tasks for 45 minutes, _time-pressure_, in which participants had only 30 minutes to complete the same tasks, and _interruption_, in which they were also interrupted by emails and messages. ECG signals were collected from 25 participants using the TMSI MOBI device at a sampling rate of 2048 Hz. At the end of each scenario, participants were asked to report their valence, arousal, and also other states, such as stress.
#### 5.2.3. WESAD
The dataset for WEarable Stress and Affect Detection (WESAD) (Krishnan et al., 2018) contains ECG data from 15 participants. RespiBAN Professional sensors were used to collect ECG at a sampling rate of 700 Hz. The goal was to study four different affective states (neutral, stressed, amused, and meditated). First, 20 minutes of neutral condition data were collected, during which participants were asked to do normal activities. Then participants watched 11 funny video clips (amusement) and went through public speaking and arithmetic tasks (stress). Finally, they went through a guided meditation session of 7 minutes. Upon completion of each trial, labels for the affect states were collected using the 9-scale PANAS.
### Fine-tuning: Passive Sensing
#### 5.3.1. PTB-XL
The PTB-XL dataset (Krishnan et al., 2018) is a set of 21799 clinical 12-lead ECGs from 18869 patients of 10 second length. The raw waveform data were annotated by up to two cardiologists, who assigned potentially multiple ECG statements to each record. The waveform data underlying the PTB-XL ECG dataset were collected with devices from Schiller AG over the course of nearly seven years between October 1989 and June 1996. In total 71 different ECG statements conform to the SCP-ECG standard and cover diagnostic, form, and rhythm statements. The dataset is complemented by extensive metadata on demographics, infarction characteristics, diagnostic statements, and annotated signal properties.
#### 5.3.2. LUDB
The Lobachevsky University Electrocardiography Database (LUDB) (Ludb, 2018) is an ECG dataset with annotated boundaries and peaks of P, T and QRS waves. It consists of 200 10-second ECG signals at 500 Hz,
representing different morphologies, out of which we only use the first lead, to comply with our framework. The ECG records were collected from healthy volunteers and patients of the Nizhny Novgorod City Hospital during 2017-2018. The patients had various cardiovascular diseases while some of them had pacemakers. Cardiologists annotated each record with the corresponding diagnosis. For this study, we consider the task of identifying sinus rhythm against a super-set of different abnormalities.
#### 5.3.3. CASE
The Continuously Annotated Signals of Emotion (CASE) dataset (Zhou et al., 2018) contains data from 30 participants collected in laboratory conditions. During the experiment, participants watched a series of 8 video stimuli and continuously annotated their emotions in a two-dimensional arousal-valence space using a joystick interface developed by the researchers. Additionally, a two-minute long blue-screen video served as an in-between resting phase. The ECG data were collected at 1000 Hz using Thought Technology SA9306 sensors, and affect annotations were collected at 20 Hz.
### Implementation Details
We pre-train WildECG for 100 epochs on the TILES data using a batch size of 256 samples and an AdamW optimizer with a 0.001 learning rate. A linear layer is used to map the ECG embeddings to the transform classes. We checkpoint the resulting model of the last epoch and apply it to a set of downstream tasks to evaluate the learned representations. For each task, the respective ECG data are extracted and processed akin to the TILES data (see Section 4.1), whereas the additive linear layer is replaced by a 2-layer MLP classifier that maps the pre-trained embeddings to the target space.
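A sketch of the two fine-tuning regimes reported in the result tables ("full model" vs. "projector") is given below, reusing the `WildECG` skeleton from Section 4.4. The checkpoint path, the MLP hidden size, and exactly which layers constitute the trainable "projector" are assumptions.

```python
import torch
import torch.nn as nn

n_classes, projector_only = 3, True          # e.g., the 3-way WESAD task

model = WildECG()                            # skeleton from Section 4.4
# "wildecg_pretrained.pt" is a hypothetical checkpoint path
model.load_state_dict(torch.load("wildecg_pretrained.pt"))
model.head = nn.Sequential(                  # 2-layer MLP classifier
    nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, n_classes))

if projector_only:                           # "projector" rows in Tables 2-8
    for name, p in model.named_parameters():
        # which layers count as the "projector" is our assumption
        p.requires_grad = ("head" in name) or ("decoder" in name)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```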
We evaluate each task with 5-fold cross-validation in primarily subject-agnostic settings. _Subject-agnostic_ refers to the setting where test splits do not contain samples from subjects of the training splits, whereas _mixed-subject_ denotes the opposite. We use a batch size of 256 samples in all experiments except LUDB (32 samples). The
| **Model** | **Training** | **Arousal CCC** | **Valence CCC** |
| --- | --- | --- | --- |
| Linear SVM (Wang et al., 2018) | – | 0.271 | 0.153 |
| Linear SVM (Wang et al., 2018) | – | 0.118 | 0.085 |
| Baseline S4 | full model | 0.328 | 0.162 |
| WildECG (ours) | full model | **0.356** | **0.303** |
| WildECG (ours) | projector | 0.346 | 0.289 |

Table 2. Downstream performance on **AVEC-16** dataset (Eval Split).
| **Model** | **Training** | **Accuracy (mixed-subject)** | **F1-macro (mixed-subject)** | **Accuracy (subject-agnostic)** | **F1-macro (subject-agnostic)** |
| --- | --- | --- | --- | --- | --- |
| AdaBoost (Zhou et al., 2018) | – | – | – | 0.617 ( – ) | 0.525 ( – ) |
| LDA (Zhou et al., 2018) | – | – | – | **0.663** ( – ) | 0.560 ( – ) |
| 1D-CNN (Wang et al., 2018) | projector * | 0.969 ( – ) | 0.963 ( – ) | – | – |
| Baseline S4 | full model | 0.956 (0.031) | 0.955 (0.032) | 0.489 (0.089) | 0.410 (0.097) |
| WildECG (ours) | full model | **0.978** (0.028) | **0.978** (0.028) | 0.644 (0.044) | **0.592** (0.058) |
| WildECG (ours) | projector | 0.742 (0.044) | 0.721 (0.064) | 0.600 (0.089) | 0.524 (0.075) |

Table 3. Downstream performance on **WESAD** dataset (3-way activity). * Pre-training includes WESAD. Standard deviation among folds is included in parentheses.
learning rate is tuned to each dataset separately, within \(\{0.0001,0.0005,0.001\}\). All datasets are trained for a maximum of 200 epochs with early stopping based on validation loss. Model checkpoints are selected based on the highest F1-macro or CCC in cross-validation, and lowest validation loss for PTB-XL and AVEC-16, which have specified validation and test splits.
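The distinction between the two evaluation settings can be sketched with scikit-learn splitters as follows; the use of `GroupKFold` for the subject-agnostic case is an assumption about the implementation.

```python
from sklearn.model_selection import GroupKFold, KFold

def make_splits(X, y, groups, subject_agnostic=True, k=5):
    """Subject-agnostic folds never share a subject between train and test
    splits; mixed-subject folds are plain shuffled splits."""
    if subject_agnostic:
        yield from GroupKFold(n_splits=k).split(X, y, groups)
    else:
        yield from KFold(n_splits=k, shuffle=True, random_state=0).split(X, y)
```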
## 6. Results
Below we present the results of the downstream evaluation of WildECG. Our primary objective is to highlight the performance of the proposed model when employed both as a backbone and as a feature extractor, compared to training supervised classifiers. Wherever possible, we compare our performance with available studies, and when no comparable studies were found, we benchmark the respective task.
### In-the-wild Sensing
**AVEC-16:** Table 2 includes results for the AVEC-16 dataset, quantified using the Concordance Correlation Coefficient (CCC). S4 consistently outperforms the scores reported by all prior studies, which rely on knowledge-based ECG features and conventional classifier architectures. For arousal estimation, WildECG achieves a state-of-the-art CCC of 0.356 when fully fine-tuned and 0.346 when only the projector is trained. In both cases it outperforms the S4 variant that is trained from scratch. Similar results are obtained for valence, where our proposed model surpasses 0.3 CCC.
| **Model** | **Training** | **Accuracy** | **F1-macro** |
| --- | --- | --- | --- |
| LDA [(62)] | – | 0.854 | 0.813 |
| 2D-CNN [(36)] | full model | 0.824 | 0.794 |
| Transformer [(5)] | full model | 0.804 | 0.697 |
| ECGNet [(50)] | full model | 0.908 | 0.857 |
| SVM [(25)] | – | 0.811 | 0.818 |
| Baseline S4 | full model | 0.900 | 0.899 |
| WildECG (ours) | full model | **0.967** | **0.966** |
| WildECG (ours) | projector | 0.900 | 0.891 |

Table 4. Downstream performance on **WESAD** dataset (stress vs normal – subject-agnostic).
| **Model** | **Training** | **Valence > mean: Accuracy** | **Valence > mean: F1-macro** | **Arousal > mean: Accuracy** | **Arousal > mean: F1-macro** | **Stress (N vs T/I): Accuracy** | **Stress (N vs T/I): F1-macro** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SVM * [(30)] | – | – | – | – | – | 0.641 ( – ) | – |
| Transformer [(5)] | full model | – | – | – | – | 0.581 ( – ) | 0.588 ( – ) |
| Deep ECGNet [(50)] | full model | – | – | – | – | **0.755** ( – ) | 0.688 ( – ) |
| S4 baseline | full model | 0.598 (0.098) | 0.560 (0.105) | 0.743 (0.153) | **0.731** (0.148) | 0.680 (0.075) | 0.643 (0.107) |
| WildECG (ours) | full model | **0.629** (0.050) | **0.623** (0.050) | **0.751** (0.064) | 0.704 (0.077) | 0.660 (0.080) | 0.637 (0.094) |
| WildECG (ours) | projector | 0.607 (0.118) | 0.560 (0.157) | 0.731 (0.088) | 0.698 (0.089) | 0.740 (0.102) | **0.711** (0.127) |

Table 5. Downstream performance on **SWELL-KW** dataset for the subject-agnostic setting (binary classification). Standard deviation among folds is included in parentheses. * not only ECG
**WESAD:** Table 3 presents detailed results for WESAD, evaluated in both mixed-subject and subject-agnostic settings. Here, the objective is to identify the type of activity the subject performs out of three scenarios: baseline, stress, and amusement. For the mixed-subject setting, we compare our performance with Sarkar and Etemad [60], where we observe marginal improvements of 1% to 1.5% in F1-macro. We should note that the results are close to perfect accuracy, which is attributed to the temporal correlation that each subject and each recording inherits. This is evident from the drop of 23-25 percentage points (pp) when we freeze the pre-trained encoder, as well as the drop of more than 30 pp that the subject-agnostic setting induces. Nonetheless, WildECG outperforms the literature in obtained F1-macro, reaching 59.2%, with an accuracy of 64.4%. In this case, the pre-training mechanism is critical, since the S4 baseline cannot reach an accuracy better than random chance.
In addition to evaluating the 3-way condition in WESAD, we also assess the binary task of stress versus the two other conditions and report related results in Table 4. WildECG achieves very high accuracy, reaching 96.6% F1-macro and outperforming all previous studies by a large margin. Both frozen and fully fine-tuned models outperform convolutional and transformer encoders by 11-17% F1-macro despite having fewer parameters.
**SWELL-KW:** Finally, we present results on SWELL-KW in Table 5. Here we only share results for the subject-agnostic setting, since the mixed-subject one quickly overfits to perfect 100% accuracy, with similar results shown in [60]. For SWELL-KW, we evaluate three different binary cases: valence and arousal estimation, both binarized at the mean of the obtained values, and stress, as indicated again by the activity performed by the subject. For
| **Model** | **Training** | **Arousal CCC (mixed)** | **Valence CCC (mixed)** | **Anxiety CCC (mixed)** | **Arousal CCC (agnostic)** | **Valence CCC (agnostic)** | **Anxiety CCC (agnostic)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline S4 | full model | 0.249 (0.121) | 0.231 (0.133) | 0.292 (0.168) | 0.198 (0.085) | 0.162 (0.066) | 0.310 (0.159) |
| WildECG (ours) | full model | **0.391** (0.047) | **0.439** (0.081) | **0.565** (0.089) | **0.253** (0.064) | **0.351** (0.086) | **0.424** (0.049) |
| WildECG (ours) | projector | 0.226 (0.098) | 0.114 (0.066) | 0.219 (0.112) | 0.202 (0.034) | 0.123 (0.053) | 0.205 (0.058) |

Table 6. Downstream performance on **CASE** dataset (regression task). Standard deviation is included in parentheses.
| **Model** | **Training** | **AUROC** | **F1-macro** |
| --- | --- | --- | --- |
| 12-lead LSTM [64] | full model | 0.927 | – |
| 12-lead Inception-1D [64] | full model | 0.921 | – |
| 12-lead Transformer [77] | full model | 0.887 | – |
| 12-lead S4 [77] | full model | 0.931 | – |
| 1-lead S4 baseline | full model | 0.832 | 0.457 |
| WildECG (ours) | full model | **0.845** | **0.480** |
| WildECG (ours) | projector | 0.815 | 0.346 |

Table 7. Downstream performance on **PTB-XL** dataset (sup-diag task).
| **Model** | **Training** | **Accuracy** | **F1-macro** |
| --- | --- | --- | --- |
| 1-lead S4 baseline | full model | 0.770 (0.040) | 0.618 (0.141) |
| WildECG (ours) | full model | **0.915** (0.064) | **0.894** (0.082) |
| WildECG (ours) | projector | 0.855 (0.062) | 0.792 (0.094) |

Table 8. Downstream performance on **LUDB** dataset (binary task). Standard deviation among folds is included in parentheses.
this task, \(N\) refers to the normal condition whereas \(T\) and \(I\) represent the stress conditions. WildECG performs on par with the S4 trained from scratch, with smaller variations among folds in arousal. However, most studies evaluate their methods on the latter task of stress estimation, achieving more than 75% accuracy and close to 69% F1-macro. Our models provide competitive performance with these studies, with the pre-trained model reaching state-of-the-art 71.1% F1-macro.
### Passive Sensing
Here we report our performance scores for the datasets described in Section 5.3. We begin with the CASE dataset, which incorporates target variables of affect. Table 6 contains results for regression on arousal (A), valence (V), and anxiety (N) levels, where anxiety is defined as \(N=A(1-V)\), as proposed in [(25)]. The prediction results show strong performance for WildECG, which outperforms both the S4 baseline and the frozen variant with a substantial margin in both subject-agnostic and mixed-subject settings. While the baseline is relatively stronger on arousal, WildECG is better on valence, gaining 0.21 CCC over the baseline S4. This difference is also reflected and magnified in the anxiety measures. We further observe that even though WildECG embeddings on their own offer limited performance improvements, our method leads to more consistent predictions in all tasks, with substantially lower variance than the baseline. To the best of our knowledge, no previous study provides continuous estimates of affect variables from ECG signals in CASE.
Although our system is not trained on clinical settings and data, we evaluate its performance as an out-of-distribution task on PTB-XL, one of the largest clinical ECG testbeds that is publicly available. Unfortunately, there is no extensive research assessing single-lead ECG systems for disease diagnosis. Moreover, many studies in cardiology report prominent disease biomarkers on several leads of a clinical ECG recording [(18; 52)]. With that premise, we compare our single-lead results with 12-lead ECG systems (Table 7). An S4 architecture similar to ours recently reported state-of-the-art performance [(77)] for 12-lead PTB-XL. Here we effectively benchmark the performance drop of the 1-lead S4 at about 10 pp in AUROC, reaching 83.2%. We also demonstrate that when pre-trained on TILES, our model can improve upon this baseline by about 1.3 pp in AUROC and 2.3 pp in F1-macro. We also highlight that, possibly due to the scale of the dataset, the frozen model substantially under-performs in this 5-way classification task, with a 13% drop in F1-macro. Further, in Table 8, we evaluate our model on LUDB, a much smaller medical dataset of various cardiac conditions. Despite the low-resource setting, WildECG distinguishes between healthy and non-healthy recordings, with a mean accuracy of 91.5% and F1-macro of about 90%, demonstrating data efficiency.
## 7. Discussion
### Pre-Training Settings
Our proposed framework incorporates several design parameters that contribute to positive experimental performance. In this section, we conduct a close inspection of each of these design elements by probing and comparing alternative approaches in the literature. Specifically, we compare WildECG to a network that uses a 1D ResNet [(23)] backbone, in order to assess the additive value of the selected architecture. ResNets have been shown to be superior to other modeling approaches in a recent review on ECG signals [(47)]. We create two versions, a ResNet-large of 14.7M parameters that includes 10 residual blocks and an input size of 64 filters, and a ResNet-small of 923K parameters obtained by reducing the filter sizes to \(1/4\) of ResNet-large. We note that WildECG holds 313K parameters in total. Both networks are trained like WildECG, and the obtained results for the fully trained variants are in Figure 3 (top). We observe that our proposed S4 model clearly outperforms both ResNet variants except for SWELL-KW, where the accuracy is similar. On the other hand, increasing the parameters of the ResNet model provides limited performance benefits.
We further compare the ECG representations of our pre-training algorithm with those obtained using the standard contrastive learning approach (Kumar et al., 2018). For this purpose, we train a network identical to WildECG with the alternative objective. The results on the same downstream tasks are shown in Figure 3 (bottom). The chosen objective shows clear advantages over the contrastive one, as seen by the consistently improved performance in most evaluation cases. These results support our choice of pre-training objective as more suitable and more intuitive for ECG signal analysis. In future work, it is worth investigating tuning the baseline contrastive objective in order to reduce subject bias in the representations, for which Cheng et al. (2018) have provided proof of concept by configuring subject-specific negative pairing.
### Low-Resource Scenarios
Thus far we have quantified our model's superiority against other architecture choices and across diverse applications. Herein we evaluate WildECG in low-resource settings, where we randomly restrict the number of training samples in each of the downstream tasks. Figures 4 and 5 contain a respective graph for each dataset, with the horizontal axis denoting the percentage of the training samples that were actually used. In this section, we include the same subset of labels that we used previously. We observe that WildECG incurs small or even negligible performance loss in most cases compared to the baseline S4 network. For AVEC-16, we are able to retain state-of-the-art performance even with 5% of the training data, while the baseline fails to converge. Similarly, model pre-training alleviates the performance drop for WESAD and PTB-XL, whereas for LUDB the baseline never reaches performance substantially above chance. As mentioned earlier, SWELL-KW is associated with negligible performance differences between WildECG and the S4 baseline, while our model does exhibit smaller variance in its predictions.
Since the TILES dataset incorporates a relatively large number of samples, we randomly select 10% of its data (every 10th sample of each subject) to avoid over-plotting.
We first investigate whether WildECG incorporates subject-specific biases in its representations. To that end, we provide t-SNE visualizations colored by subject ID in Figure 6. As for the pre-training data, due to the number of samples, TILES embeddings appear rather mingled, without visible clusters. However, the mean Euclidean distance for intra-subject samples is found to be lower than the mean inter-subject distance, i.e., \(7.02\pm 1.24\) vs. \(10.55\pm 2.59\), respectively; p-value \(1.77\cdot 10^{-27}\) for a paired t-Student test. This indicates that samples related to the same subject are closer to each other than to samples from other subjects. On the other hand, for CASE, WESAD, and SWELL-KW, the embeddings form well-separated, participant-related clusters. This comes in spite of the subject-wise standardization and implies that the model indeed learns strong subject- or sensor-specific characteristics from the ECG. As for WESAD, each participant typically spans two clusters, grouped by the heart rate value of the respective ECG. As shown in Figure 7, these clusters indeed correspond to the _stress_ and _no-stress_ activities performed and in some cases form super-clusters across subjects.
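The intra- vs. inter-subject distance comparison can be sketched as follows, assuming an embedding matrix `emb` and a subject-ID vector `subj`.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import ttest_rel

def subject_bias_test(emb, subj):
    """Per-subject mean intra- vs. inter-subject embedding distance,
    compared with a paired t-test (cf. the TILES numbers above)."""
    intra, inter = [], []
    for s in np.unique(subj):
        same = subj == s
        d = cdist(emb[same], emb)              # distances to all samples
        n = same.sum()
        intra.append(d[:, same].sum() / max(n * (n - 1), 1))  # skip self-pairs
        inter.append(d[:, ~same].mean())
    return np.mean(intra), np.mean(inter), ttest_rel(intra, inter)
```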
Next, we focus on whether the WildECG representations retain cardiac information, in this case heart rate (HR). Ground truth HR values were obtained from the filtered 10-second ECG samples using the NeuroKit2 [39] library. The mean HR values were grouped into bins of 10 within the acceptable range of human HR [58], i.e., from 40 to 210 bpm. In Figure 8, for the TILES dataset we observe a left-to-right transition from low to higher HR values. On the contrary, CASE embeddings, which reflect strong subject biases (Figure 6), do not present any generalized HR patterns. The transition across HR levels happens, though, within subject-specific clusters. The visualization for WESAD shows that the mixed-subject groups correspond to higher mean HR values, which are also associated with
Figure 4. Low-resource model performance for AVEC-16, WESAD, and SWELL-KW: **active sensing**.
Figure 5. Low-resource model performance for CASE, PTB-XL, and LUDB: **passive sensing**.
stress activity (Figure 7). A similar pattern is also present in SWELL-KW, where samples from different subjects related to higher HR values and stress activity are close to each other.
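A sketch of the HR extraction and binning used for Figure 8 is shown below, assuming the documented NeuroKit2 API; the exact processing options used are not specified above.

```python
import numpy as np
import neurokit2 as nk

def mean_hr(window, fs=100):
    """Mean heart rate (bpm) of one 10 s window via R-peak detection."""
    _, info = nk.ecg_peaks(window, sampling_rate=fs)
    rate = nk.ecg_rate(info["ECG_R_Peaks"], sampling_rate=fs,
                       desired_length=len(window))
    return float(np.mean(rate))

# 10-bpm bins over the plausible 40-210 bpm range used in Figure 8
bins = np.arange(40, 220, 10)
# hr_bin = np.digitize(mean_hr(w), bins) for each window w
```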
To assess the model's capability to capture physiology that is characteristic of the context (activity or stimulus), we compute the Euclidean distance between high-dimensional embeddings of different contexts (Figure 9). For each subject, we determine the average distance between samples with the same label and the average distance between samples with different labels. The most notable difference between within-context and across-context samples can be observed for WESAD and SWELL-KW, where samples related to stressful activities are distant from those of other activities. As for CASE, the distinction between the distance among scary-stimulus samples and their distance to other-context samples is not significant, but remains noticeable. Further analysis of the TILES data reveals that samples with similar HR values are situated closer to each other, while those with more significant differences in HR are positioned farther apart. This observation verifies that WildECG embeddings effectively preserve cardiac information.
### Limitations and Challenges
A robust and general-purpose representation model for the ECG is an important step towards expanding scientific research and clinical translation, notably for broad dissemination of smart and ubiquitous health applications. Equal importance however should be given to the limitations of these models, from the methodological aspects of evaluation, to fundamental questions regarding the applicability of physiological measures in estimating complex human conditions. Computational modeling of human behavior and physiology currently lacks standardized
Figure 6. T-SNE visualizations of WildECG embeddings on the 2D space for TILES (downsampled to 10%), CASE, WESAD and SWELL-KW datasets, colored by subject ID to reveal subject-specific bias.
Figure 7. T-SNE visualizations of WildECG embeddings on the 2D space for WESAD and SWELL-KW colored by type of activity.
protocols and metrics that would ensure the reproducibility and validity of the obtained results. Consequently, our study also diverged from other comparable studies on how each dataset is set up for evaluation. However, we hope that it contributes a cohesive and comprehensive testbed for studies to evaluate their approaches.
An important characteristic of physiological signals that influences the evaluation protocol is the inter-subject variability, challenging the application of machine learning techniques like transfer learning in the field. Indeed, multiple studies report that machine learning models trained on a specific dataset rarely generalize to other datasets and settings [42, 50]. Even within a specific dataset, subject bias could prevent evaluation on unseen subjects [60]. In our study, we demonstrate that pre-training with a large-scale ECG dataset in a self-supervised way can help alleviate these issues. However, our experiments and feature visualizations reveal that the learned features still reflect such biases, e.g., by forming subject-specific clusters. Hence, adopting specialized objectives to eliminate this bias is an important direction for future work.
Figure 8. Distribution of heart rate values in 2D t-SNE space for WildECG embeddings of TILES (downsampled), CASE, WESAD, and SWELL-KW samples. Brighter colors indicate higher heart rate.
Figure 9. Average Euclidean distance of within-participant samples with respect to: absolute heart rate difference for TILES (downsampled), stimuli for CASE, and activity for WESAD and SWELL-KW. Colors indicate distances between samples with same or different labels.
Taking a step back, it is crucial to underscore the limitations of relying on standalone measures like the ECG to estimate the full range of human conditions. It is well known that emotional states are heavily influenced by the social and environmental context (Han et al., 2020; Wang et al., 2020), in a way that a single-dimensional signal cannot reflect. We highlight that models like WildECG should be adopted in a holistic perspective that takes into consideration multiple views of human behavior and contextual information. For example, fusing information from multiple physiological and behavioral signal measures like electrodermal activity, speech (Wang et al., 2020), and human activity (Wang et al., 2020) has provided better performance than using ECG alone on the same datasets. Incorporating WildECG in a multimodal sensing framework is another direction of future work to be pursued.
## 8. Conclusion
Ubiquitous sensing and monitoring are already transforming digital health and well-being with new, on-demand services. Hence there is an unmet need to address challenges related to the inference and analysis of the resulting rich and diverse human bio-behavioral states. In this study, we propose WildECG, a robust and versatile AI model for ECG representation learning. By utilizing a large, diverse corpus of biosignals collected in the wild, along with a state-of-the-art state-space network and pre-training algorithm, we demonstrate competitive performance on the tasks of estimating human affect, dimensional emotion, stress levels, as well as pathological markers. We further quantify the contributions of our design factors and verify model robustness in low-resource settings. The conducted qualitative analysis reveals that WildECG indeed incorporates explainable and tractable insights related to the ECG structure and features that could prove beneficial for researchers as well as clinicians.
|
2309.05081 | Case Study of Decoherence Times of Transmon Qubit | In the past two decades, one of the fascinating subjects in quantum physics
has been quantum bits (qubits). Thanks to the superposition principle, the
qubits can perform many calculations simultaneously, which will significantly
increase the speed and capacity of the calculations. The time when a qubit
lives in an excited state is called decoherence time. The decoherence time
varies considerably depending on the qubit type and materials. Today, short
decoherence times are one of the bottlenecks in implementing quantum computers
based on superconducting qubits. In this research, the topology of the transmon
qubit is investigated, and the decoherence time caused by noise, flux, and
critical current noise is calculated by numerical method. | H. Zarrabi, S. Hajihosseini, M. Fardmanesh, S. I. Mirzaei | 2023-09-10T17:03:54Z | http://arxiv.org/abs/2309.05081v1 | Case Study of Decoherence Times of Transmon Qubit
## Abstract
In the past two decades, one of the fascinating subjects in quantum physics has been quantum bits (qubits). Thanks to the superposition principle, qubits can perform many calculations simultaneously, which significantly increases the speed and capacity of computation. The time during which a qubit lives in an excited state is called the "decoherence time." The decoherence time varies considerably depending on the qubit type and materials. Today, short decoherence times are one of the bottlenecks in implementing quantum computers based on superconducting qubits. In this research, the topology of the transmon qubit is investigated, and the decoherence time caused by charge noise, flux noise, and critical current noise is calculated by a numerical method.
## Introduction
To date, different qubit types have been introduced based on their working principles. One of the most popular is the superconducting qubit. The Cooper pair box was the first charge qubit based on superconductivity [1]. Due to its high sensitivity to charge noise, it was superseded by the transmon qubit, introduced by Jens Koch et al. [2] in 2008. The transmon circuit consists of a SQUID (two parallel Josephson junctions) and a shunt capacitor, as shown in Figure 1. The shunt capacitor (\(C_{g}\)) is much larger than in previous qubits. Although the SQUID loop allows flux noise to penetrate, which tends to reduce the coherence time, enlarging \(C_{g}\) increases the ratio \(E_{J}/E_{c}\), which is the key reason the decoherence time improves in this design. The decoherence time is governed by the interaction of the qubit with its environment, which disturbs the qubit, collapses superpositions, and relaxes excitations to the ground state. Short decoherence times lead to errors in quantum information processing, and the decay of superpositions limits the execution of complex quantum algorithms. On the other hand, there must be some weak
interaction between the qubit and its environment in order to read out quantum data. Quantum decoherence is usually characterized by measuring two times: \(T_{1}\) (relaxation time) and \(T_{2}\) (dephasing time). Although the transmon's sensitivity to charge noise is reduced, its flux sensitivity increases compared to the Cooper pair box qubit.
## Decoherence time
Different parameters, such as the anharmonicity (\(\alpha\)) and the decoherence time, need to be considered when investigating the performance of superconducting qubits. The time it takes for the coherence of the quantum state to be lost is called the decoherence time. Once it has elapsed, the qubit can no longer operate correctly.
### Dephasing time (\(T_{2}\))
Each state has an energy level, and the energy of a quantum transition between two energy levels is called the transition energy. If environmental noise, such as charge noise, flux noise, or critical current noise, causes a change in the transition energy, dephasing occurs. When noise couples to the qubit and changes the transition energy between the first two states of the system (\(E_{01}\)), the dephasing time can be estimated according to equation 1 [2].
\[T_{2}(\lambda)\approx\frac{\hbar}{A}\left|\frac{\partial E_{01}}{\partial \lambda}\right|^{-1} \tag{1}\]
where \(\lambda\) denotes the charge, flux, or critical current noise source, and \(\hbar\) is the reduced Planck constant. \(A\) is the amplitude of the \(1/f\) noise and takes different values depending on the type of noise, as listed in Table 1. The transition energies can be derived from the Hamiltonian of the transmon qubit [2].
\[\hat{H}=4E_{c}(\hat{n}-n_{g})^{2}-E_{J\Sigma}\cos\left(\pi\phi_{\text{ext}} \right)\left[\cos\hat{\varphi}+d\tan\left(\pi\phi_{\text{ext}}\right)\sin\hat{ \varphi}\right] \tag{2}\]
Figure 1: Schematic of the transmon qubit [3]
where \(\phi/\phi_{0}=\phi_{\rm ext}\), \(E_{J\Sigma}\) is the sum of the two Josephson energies (which depends on the critical current), \(E_{c}\) is the charging energy (inversely proportional to the total capacitance), and \(d\) is the asymmetry coefficient of the Josephson junctions. Depending on the fabrication technique, the junction parameters can differ, with junction asymmetries of up to 10% typically assumed [2].
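To make equation 1 concrete, the sketch below (our illustration, not code from ref. [2]; plain NumPy is used instead of QuTiP so the snippet is self-contained) diagonalizes the Hamiltonian of equation 2 in the charge basis and estimates the charge- and flux-noise dephasing times by finite differences. Energies are expressed in units of \(h\cdot\)GHz, the bias points are our own choices, and the noise amplitudes are taken from Table 1, so the printed numbers should be read as order-of-magnitude estimates only.

```python
import numpy as np

hbar = 1.054571817e-34  # J s
h = 6.62607015e-34      # J s

def transmon_levels(ng, phi_ext, Ec=0.35, EJ_sum=20.0, d=0.10, ncut=20):
    """Eigenvalues (in h*GHz) of the Hamiltonian of equation 2 in the charge basis."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * Ec * (n - ng) ** 2).astype(complex)
    # cos(phi) and sin(phi) act as raising/lowering operators in the charge basis
    t = -0.5 * EJ_sum * np.cos(np.pi * phi_ext) * (1.0 - 1j * d * np.tan(np.pi * phi_ext))
    H += np.diag(np.full(2 * ncut, t), 1) + np.diag(np.full(2 * ncut, np.conj(t)), -1)
    return np.linalg.eigvalsh(H)

def E01(ng, phi_ext):
    ev = transmon_levels(ng, phi_ext)
    return ev[1] - ev[0]  # 0-1 transition energy in h*GHz

def T2(dE01_dlam, A):
    """Equation 1: T2 = hbar / (A |dE01/dlambda|), with E01 given in h*GHz."""
    return hbar / (A * h * 1e9 * abs(dE01_dlam))  # seconds

eps = 1e-4
# charge noise: slope of E01 vs ng (steepest near ng = 1/4 in the transmon regime)
dE_dng = (E01(0.25 + eps, 0.0) - E01(0.25 - eps, 0.0)) / (2 * eps)
# flux noise: slope of E01 vs external flux, away from the flux sweet spot
dE_dphi = (E01(0.5, 0.25 + eps) - E01(0.5, 0.25 - eps)) / (2 * eps)

print("T2 (charge, A = 1e-4 e):  ", T2(dE_dng, 1e-4), "s")   # long: transmon is charge-insensitive
print("T2 (flux, A = 1e-5 Phi_0):", T2(dE_dphi, 1e-5), "s")  # much shorter away from the sweet spot
```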
The second way to describe the dephasing time uses the Bloch sphere. If environmental noise couples to the qubit along the \(\hat{z}\) axis, dephasing occurs; the \(\hat{z}\) axis represents the energy gap of the qubit. An ideal qubit would have energy levels as insensitive as possible to the noise \(\lambda\). Figure 2 depicts the dephasing process, which includes relaxation at a rate \(\Gamma_{1}/2\) and pure dephasing at a rate \(\Gamma_{\phi}\).
As shown in Figure 2, environmental noise first acts along the \(\hat{z}\) axis and randomizes the phase of the qubit state; the characteristic duration of this process is the pure dephasing time (\(T_{\phi}\)). Noise coupling along the \(\hat{x}\) and \(\hat{y}\) axes then moves the state vector to a random position on the Bloch sphere. This happens at the rate \(\Gamma_{1}/2\), which is why the term \(1/2T_{1}\) appears in equation 3.
\begin{table}
\begin{tabular}{|l|c|} \hline
**Noise source** & **Amplitude \(A\) of \(1/f\) noise** \\ \hline Charge & \(A=10^{-4}-10^{-3}\,e\) \\ \hline Flux & \(A=10^{-6}-10^{-5}\,\Phi_{0}\) \\ \hline Critical Current & \(A=10^{-7}-10^{-6}\,I_{c}\) \\ \hline \end{tabular}
\end{table}
Table 1: Different values of the amplitude of \(1/f\) noise based on charge, flux, and critical current noise. [2]
Figure 2: Dephasing time definition by Bloch sphere. [4]
In this way, equation 3 describes the dephasing phenomenon through the total dephasing rate \(\Gamma_{2}\).
\[\Gamma_{2}=\frac{\Gamma_{1}}{2}+\Gamma_{\phi}=\frac{1}{T_{2}}=\frac{1}{2T_{1}}+ \frac{1}{T_{\phi}} \tag{3}\]
For equation 1, \(d=10\%\), \(E_{J}=20\,\)GHz, and \(E_{c}=0.35\,\)GHz are assumed. Table 2 compares the dephasing times of the transmon qubit due to charge, flux, and critical current noise as calculated by Jens Koch et al. [2] with the values computed here using the Quantum Toolbox in Python (QuTiP) [5].
In addition, using equation 1 within the Quantum Toolbox in Python, we can plot the dephasing times caused by all three noise sources as a function of \(E_{J}/E_{c}\), as shown in Figures 3 and 4.
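A sketch of how such a sweep can be generated follows (it repeats the charge-basis helper from the previous snippet so that it runs on its own; the bias points and noise amplitudes are again our illustrative choices, meant only to reproduce the trends of equations 4 and 5):

```python
import numpy as np
import matplotlib.pyplot as plt

def E01(ng, phi, Ec, EJ, d=0.10, ncut=20):
    """0-1 transition energy (h*GHz) from equation 2 in the charge basis."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * Ec * (n - ng) ** 2).astype(complex)
    t = -0.5 * EJ * np.cos(np.pi * phi) * (1.0 - 1j * d * np.tan(np.pi * phi))
    H += np.diag(np.full(2 * ncut, t), 1) + np.diag(np.full(2 * ncut, np.conj(t)), -1)
    ev = np.linalg.eigvalsh(H)
    return ev[1] - ev[0]

Ec, eps = 0.35, 1e-4
ratios = np.linspace(10, 100, 40)
T2Q, T2F = [], []
for r in ratios:
    EJ = r * Ec
    dQ = abs(E01(0.25 + eps, 0.0, Ec, EJ) - E01(0.25 - eps, 0.0, Ec, EJ)) / (2 * eps)
    dF = abs(E01(0.5, 0.25 + eps, Ec, EJ) - E01(0.5, 0.25 - eps, Ec, EJ)) / (2 * eps)
    T2Q.append(1.0 / (2 * np.pi * 1e-4 * dQ * 1e9))  # equation 1, A = 1e-4 e
    T2F.append(1.0 / (2 * np.pi * 1e-5 * dF * 1e9))  # equation 1, A = 1e-5 Phi_0

plt.semilogy(ratios, T2Q, label=r"$T_2^{Q}$ (charge)")
plt.semilogy(ratios, T2F, label=r"$T_2^{\Phi}$ (flux)")
plt.xlabel(r"$E_J/E_c$"); plt.ylabel(r"$T_2$ (s)")
plt.legend(); plt.show()
```

The exponential growth of \(T_{2}^{Q}\) and the slow decrease of \(T_{2}^{\Phi}\) with \(E_{J}/E_{c}\) should emerge directly from this sweep.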
By comparing Figure 3(a) with the corresponding graph in ref. [2], one can verify the numerical results: the error of the computed curve stays below \(0.8\%\) over the entire range of \(E_{J}/E_{c}\) ratios.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Calculated by Jens Koch et al.** & **Calculation values in QuTiP** \\ \hline Charge noise & \(\approx 8\)\(s\) & \(8.667\)\(s\) \\ \hline Flux noise & \(\approx 1\)\(\mu s\) & \(1.311\)\(\mu s\) \\ \hline Critical current noise & \(\approx 35\)\(\mu s\) & \(32.104\)\(\mu s\) \\ \hline \end{tabular}
\end{table}
Table 2: A comparison of the approximate values of the dephasing time (\(T_{2}\)) reported for the transmon qubit by Jens Koch et al. [2] and values calculated in Python’s Quantum Toolbox (QuTiP) [5].
Figure 3: The diagram of dephasing time due to (a) charge noise (\(T_{2}^{Q}\)) and (b) flux noise (\(T_{2}^{\phi}\)) in terms of \(E_{J}/E_{c}\).
As Figure 3 shows, increasing the \(E_{J}/E_{c}\) ratio makes the dephasing time due to charge noise (\(T_{2}^{Q}\)) grow exponentially, while the dephasing time due to flux noise (\(T_{2}^{\phi}\)) decreases. The reason for this behavior is evident in equations 4 and 5 [2].
\[T_{2}^{Q}\propto e^{\sqrt{\frac{E_{J}}{E_{c}}}} \tag{4}\]
\[T_{2}^{\phi}\propto\left(2E_{c}E_{J\Sigma}\right)^{-\frac{1}{2}} \tag{5}\]
Regardless of the circuit topology, it is impossible to reduce the qubit's sensitivity to charge noise and to flux noise simultaneously. This trade-off remains one of the serious challenges in the field.
Figure 4: (a) Comparison of the results calculated by the numerical method with the results in ref. [2]. (b) Error percentage in terms of \(E_{J}/E_{c}\).
Another source of low-frequency noise is fluctuations of the Josephson energy; the critical current noise originates in the tunnel junction. Figure 5 shows how the dephasing time due to critical current noise changes as the \(E_{J}/E_{c}\) ratio increases. According to equation 6, which can be derived from equation 1, the dephasing time caused by critical current noise is proportional to the inverse square root of the critical current.
\[T_{2}^{I_{c}}\propto\frac{\sqrt{I_{c}}}{I_{c}}=\frac{1}{\sqrt{I_{c}}} \tag{6}\]
As a result, increasing \(I_{c}\) (or the \(E_{J}/E_{c}\) ratio) is expected to decrease the dephasing time caused by critical current noise, as shown in Figure 5. The solid blue line in Figure 5 is based on the calculated results, while the red and green dotted lines show the theoretical results for the transmon and Cooper pair box regimes [2].
Figure 5: The diagram of dephasing time changes due to critical current noise in terms of \(E_{J}/E_{c}\).
## Conclusion
Up to now, many attempts have been made to increase the dephasing and relaxation times by changing the topology of superconducting circuits. Optimizing quantum circuits while preserving their valuable properties can yield a new qubit with much better performance. To increase the overall dephasing time, all three contributions must be increased, because in reality all of the environmental noise sources couple to the qubit simultaneously.
Numerical simulations were conducted using the Quantum Toolbox in Python [5] to study the behavior of the \(T_{2}\) dephasing time for various noise sources as a function of the \(E_{J}/E_{C}\) ratio in both the Cooper pair box and transmon regimes. Employing equations from reference [2], the effects of different noise sources were examined. The simulation results demonstrate how the dephasing time responds to the different noise sources, with particular emphasis on critical current noise. The analysis suggests that an increase in \(I_{c}\) (or the \(E_{J}/E_{C}\) ratio) leads to a reduction in the dephasing time attributed to critical current noise. This trend aligns with theoretical expectations and highlights the intricate relationship between noise sources and qubit performance.
|
2309.09863 | Driven-dissipative phases and dynamics in non-Markovian nonlinear
photonics | Interactions between photons (nonlinearities) enable a powerful form of
control over the state of light. This control has enabled technologies such as
light sources at new wavelengths, ultra-short optical pulses, frequency-comb
metrology systems, even quantum light sources. Common to a wide variety of
nonlinear optical technologies is an equilibrium between an energy source, such
as an external laser, and dissipation, such as radiation loss or absorption. In
the vast majority of these systems, the coupling between the system and the
outside world (which leads to loss) is well-described as ``Markovian,'' meaning
that the outside world has no memory of its past state. In this work, we
introduce a class of driven-dissipative systems in which a nonlinear cavity
experiences non-Markovian coupling to the outside world. In the classical
regime, we show that these non-Markovian cavities can have extremely low
thresholds for nonlinear effects, as well as self-pulsing instabilities at THz
rates, and rich phase diagrams with alternating regions of stability and
instability. In the quantum regime, we show how these systems, when implemented
on state-of-the-art platforms, can enable generation of strongly squeezed
cavity states with intensity fluctuations that can be more than 15 dB below the
classical limit, in contrast to the Markovian driven-dissipative cavity, in
which the limit is 3 dB. In the regime of few-photon nonlinearity, such
non-Markovian cavities can enable a deterministic protocol to generate Fock
states of high order, which are long-desired, but still elusive at optical
frequencies. We expect that exploiting non-Markovian couplings in nonlinear
optics should in the future lead to even richer possibilities than those
discussed here for both classical and quantum light manipulations. | Jamison Sloan, Nicholas Rivera, Marin Soljačić | 2023-09-18T15:24:44Z | http://arxiv.org/abs/2309.09863v1 | # Driven-dissipative phases and dynamics in non-Markovian nonlinear photonics
###### Abstract
Interactions between photons (nonlinearities) enable a powerful form of control over the state of light. This control has enabled technologies such as light sources at new wavelengths, ultra-short optical pulses, frequency-comb metrology systems, even quantum light sources. Common to a wide variety of nonlinear optical technologies is an equilibrium between an energy source, such as an external laser, and dissipation, such as radiation loss or absorption. In the vast majority of these systems, the coupling between the system and the outside world (which leads to loss) is well-described as "Markovian," meaning that the outside world has no memory of its past state. In this work, we introduce a class of driven-dissipative systems in which a nonlinear cavity experiences non-Markovian coupling to the outside world. In the classical regime, we show that these non-Markovian cavities can have extremely low thresholds for nonlinear effects, as well as self-pulsing instabilities at THz rates, and rich phase diagrams with alternating regions of stability and instability. In the quantum regime, we show how these systems, when implemented on state-of-the-art platforms, can enable generation of strongly squeezed cavity states with intensity fluctuations that can be more than 15 dB below the classical limit, in contrast to the Markovian driven-dissipative cavity, in which the limit is 3 dB. In the regime of few-photon nonlinearity, such non-Markovian cavities can enable a deterministic protocol to generate Fock states of high order, which are long-desired, but still elusive at optical frequencies. We expect that exploiting non-Markovian couplings in nonlinear optics should in the future lead to even richer possibilities than those discussed here for both classical and quantum light manipulations.
## I Introduction
Nonlinear systems are ubiquitous across scientific disciplines, exhibiting universal phenomena such as phase transitions, synchronization, pattern formation, and chaotic behavior [1; 2; 3]. Nonlinearity also plays a central role in optics, where materials with a nonlinear polarization response enable frequency conversion, field sensing, and ultrashort pulse generation [4; 5]. The invention of the laser quickly enabled the observation of many classical nonlinear effects, including harmonic generation [6], soliton formation [7], self-focusing [8], self-phase modulation [9], and optical parametric amplification, all of which are still intensely researched to this day.
Of particular importance in optics are so-called "driven-dissipative" systems. Such systems typically consist of a nonlinear optical resonator (or multiple resonators) driven by an external light source. The simultaneous presence of nonlinearity, dissipation, and external drive lead to striking classical effects such as cavity bistability [10; 11; 12; 13], dissipative Kerr solitons in waveguides [14; 15; 16; 17; 18], and optical parametric oscillation [19]. Nonlinear optical systems also enable transformations of the quantum state of light [20], enabling key applications in metrology and quantum information processing. Such systems have also been proposed as a platform to study collective behavior and phase transitions [21; 22; 23; 24].
In the vast majority of driven-dissipative systems, the coupling between the system and its environment is assumed to be independent of frequency over the bandwidths of interest. This equivalently means that the system's interaction with its environment is assumed to be instantaneous, or "Markovian." In such systems, the outside world (environment) retains no memory of its prior state, meaning that the outside world interaction is in some sense uncorrelated. Given the importance of correlations in interacting systems for realizing useful behaviors, it is surprising that the regime of strong nonlinearity and non-Markovian dissipation remains largely unexplored. This is especially relevant given that a wide variety of platforms (free space optical filters, fiber systems, photonic crystals, integrated nanophotonics, etc.) can be readily engineered to provide exactly the types of strong frequency-dependent couplings which undermine the Markovian assumption. Thus, while the Markovian assumption is well respected in many systems, it also places considerable limitations on the space of possible driven-dissipative architectures, and the corresponding functionalities which can be realized.
In this work, we develop the physics of driven-dissipative systems with non-Markovian couplings to the environment. We introduce a general class of models which consist of an intensity-dependent (Kerr) nonlinear optical resonance coupled to a number of non-Markovian continuum channels, and solve for their classical and quantum state dynamics. We show that the resonance-frequency-dependent loss in these systems enables new driven-dissipative phases. In the classical domain, we identify low-threshold bistability, as well as self-pulsing instabilities that enable passive modulation of incoming light. In the quantum domain, the combination of bistability and non-Markovian loss enables the natural generation of strongly intensity-squeezed cavity states which are maintained in the steady-state by an external drive. We show how this effect arises from the physics of "sharp loss" which is unique to non-Markovian dissipation, and that this mechanism can produce cavity states with intensity fluctuations more than 10 dB below the shot noise limit. In systems with particularly strong nonlinearities, this behavior enables the generation of high-order Fock states, which have remained unrealized at optical frequencies, despite their importance for metrology and
quantum information. This introduction of non-Markovian coupling into the already rich space of driven-dissipative systems constitutes a new degree of control which can be used to engineer the quantum correlations of light by modifying a seemingly classical element such as frequency-dependent coupling.
## II Theory
Our results are based on a general quantum optical theory of nonlinear resonances with frequency-dependent (non-Markovian) couplings to the environment, enabling the description of a highly general class of driven-dissipative systems (Fig. 1a). In this work, we focus more specifically on the class of systems comprising two key elements: (1) frequency-dependent environment coupling which creates a resonance-frequency-dependent loss, and (2) Kerr nonlinearity which causes the frequency of resonance \(a\) to depend on the number of photons \(n\) in the resonance (Fig. 1b).
We first describe the quantum optical theory of dispersive dissipation in linear resonators. To do so, we consider a resonance \(a\) coupled to continuum channels (reservoirs) labeled \(i\) (Fig. 1b). We assume the resonance \(a\) exchanges excitations with the reservoir fields \(s_{i}\) via frequency-dependent coupling functions \(K_{c,i}(\omega)\). This action of the coupling on the cavity is encoded in a frequency-dependent loss function \(K_{l}(\omega)\): its real part gives the loss rate, and its imaginary part gives the frequency shift. Importantly, \(K_{c,i}\) and \(K_{l}\) are not independent, but rather constrained by a Kramers-Kronig relation, from which it follows that \(2\,\text{Re}\,K_{l}(\omega)=\sum_{i}|K_{c,i}(\omega)|^{2}\).
Kerr nonlinearity in the resonator equips the mode \(a\) with an intensity-dependent resonance frequency \(\omega(n)=\omega_{a}+\beta n\). Here, \(\omega_{a}\) is the bare resonance frequency of \(a\), \(n\) is the cavity photon number, and \(\beta\) is the Kerr frequency shift which results from adding one photon to the cavity. When the dispersive dissipation is simultaneously present, the resonance \(a\) obeys the following Heisenberg-Langevin equation of motion:
\[\begin{split}\dot{a}=-i\omega_{a}a-\underbrace{i\beta(a^{\dagger}a)a}_{\text{Kerr}}&-\underbrace{\int dt^{\prime}\,K_{l}(t-t^{\prime})a(t^{\prime})}_{\text{Cavity field damping with memory}}\\ &+\underbrace{\sum_{i}\int dt^{\prime}\,K_{c,i}(t-t^{\prime})s_{i}(t^{\prime})}_{\text{Coupling of input fields with memory}}.\end{split} \tag{1}\]
Here, \(K_{l,c}(t-t^{\prime})\) are the time-domain loss (coupling) kernels, related to the frequency-domain functions by \(K_{l,c}(\tau)=\frac{1}{2\pi}\int d\omega\,e^{-i\omega\tau}K_{l,c}(\omega)\).
This dissipation term is balanced by the presence of a quantum operator-valued input term. The input fields are normalized such that \([s_{i}(t),s_{j}^{\dagger}(t^{\prime})]=\delta_{ij}\delta(t-t^{\prime})\). Additionally, they can be decomposed into a sum of mean-field (c-number) and quantum fluctuation (operator) contributions \(s_{i}=\langle s_{i}\rangle+\delta s_{i}\). The c-numbers give forcing terms which drive the mean-field dynamics of \(a\), while the operator valued fluctuations generate a Langevin force term \(F(t)\equiv\sum_{i}\int dt^{\prime}\,K_{c,i}(t-t^{\prime})\delta s_{i}(t^{ \prime})\) which adds non-Markovian fluctuations into the cavity. It follows from the above that for vacuum reservoirs, the frequency-domain correlations of \(F\) are \(\langle F(\omega)F^{\dagger}(\omega^{\prime})\rangle=2\pi\cdot 2\,\text{Re}K_{l}( \omega)\delta(\omega-\omega^{\prime})\). Thus, the correlations are local in frequency space (i.e., fluctuations at different frequencies are not correlated with one another), but with a magnitude that depends on frequency through the loss rate. It can be shown (see S.I.) that the stated correlation functions of the Langevin force lead to the preservation of the equal-time commutation relation of the cavity field, namely \([a(t),a^{\dagger}(t)]=1\), indicating the self-consistency of the theory. Moreover, it can be shown that when the bandwidth of \(K_{c}(\omega)\) is large, the dynamics revert back to the standard case with frequency-independent couplings.
This quantum framework of dispersive dissipation is relevant to a diverse array of physical systems. In fact, for any system where the density of states of the outside world (the loss channels) is significantly frequency-dependent, this formalism can be applied. As examples, there are many photonic structures which can be used as sharp frequency-dependent elements, such as Bragg gratings, filters with Lorentzian and Fano profiles, photonic crystal mirrors, and so on. Additionally, schemes involving delay lines, or time-multiplexing [25], could also be used to realize non-Markovian dissipation. The key point of our work is that nonlinearity can "interact" with these resonance-frequency-dependent losses to lead to new phenomena, especially in the presence of driving.
Figure 1: **General framework for non-Markovian driven-dissipative systems.** (a) General driven-dissipative system with non-Markovian couplings to the environment, and internal resonances which may contain Kerr nonlinearity. (b) Model primarily considered in this work (Eq. 1), consisting of a nonlinear resonance \(a\) coupled to one or more continuum channels \(i\) through coupling functions \(K_{c,i}(\omega)\). These reservoir couplings give the resonance a dispersive loss \(K_{l}(\omega)\).
## III Results
As a minimal example which exhibits many important features, we introduce the nonlinear Friedrich-Wintgen (F.W.) model. The F.W. model is a temporal coupled mode theory (TCMT) model which describes two resonances \(a\) and \(d\) coupled to a common continuum (Fig. 2a) [26]. The losses of the eigenmodes of this model depend strongly on the relative frequencies of the two resonances: stated differently, the loss of the resonance depends on the resonance frequency of \(a\). Physically, the resonance-frequency-dependence of the loss arises from interference: light from the resonator \(a\) can decay by directly leaking into the continuum, or by hopping through the resonator \(d\) first. These two pathways interfere, and their relative phase depends on the frequency of \(a\).
This model has been studied by many authors for its key feature that it supports conditions where the resonances can be lossless, despite both resonators being coupled to the continuum. These lossless states are often referred to as bound states in the continuum (BICs). In contrast to standard works [27; 28; 29; 30], we consider the case in which one of the resonances is nonlinear, and consider the quantum optical consequences of this modification.
For the F.W. model, the coupling and loss functions for the resonance \(a\) are:
\[K_{c}(\omega) =\sqrt{2\kappa}\left[1-\frac{\gamma}{i(\omega_{d}-\omega)+\gamma}\right] \tag{2a}\] \[K_{l}(\omega) =\kappa\left[1-\frac{\gamma}{i(\omega_{d}-\omega)+\gamma}\right], \tag{2b}\]
where \(\kappa\) and \(\gamma\) are the respective decay rates of \(a\) and \(d\), which have frequencies \(\omega_{a,d}\). The model's key feature is that interference between radiative channels equips \(a\) with a resonance-frequency-dependent loss rate that vanishes at \(\omega_{d}\) (Fig. 2b).
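As a quick numerical check of this interference effect, the short sketch below (ours) evaluates Eq. 2b with the parameters quoted in Fig. 2 and confirms that \(\text{Re}\,K_{l}\) vanishes at \(\omega=\omega_{d}\) while approaching \(\kappa\) far from it:

```python
import numpy as np

omega_a = 1.0                      # work in units of omega_a
kappa, gamma = 1e-4, 1e-2          # decay rates, as in Fig. 2
omega_d = omega_a + gamma          # detuned dark resonance

def K_l(omega):
    """Dispersive loss of resonance a, Eq. 2b of the F.W. model."""
    return kappa * (1.0 - gamma / (1j * (omega_d - omega) + gamma))

print("Re K_l at omega_d:      ", K_l(omega_d).real)               # -> 0 (BIC)
print("Re K_l far from omega_d:", K_l(omega_a - 5 * gamma).real)   # -> ~kappa
```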
Figure 2: **Mean-field behavior of a non-Markovian nonlinear driven-dissipative cavity.** (a) Schematic of a system described by a nonlinear Friedrich-Wintgen (F.W.) model which consists of two resonances \(a\) and \(d\) coupled to a common continuum \(s\). (b) Dispersive loss profile \(\text{Re}\,K_{l}(\omega)\) of the nonlinear Friedrich-Wintgen (F.W.) model. Destructive interference in the loss channels causes the dip in the loss experienced by \(a\). Parameters used are \(\kappa/\omega_{a}=10^{-4}\), \(\gamma/\omega_{a}=10^{-2}\), and \(\omega_{d}=\omega_{a}+\gamma\). (c) Steady-state cavity photon number in the presence of a monochromatic pump at several different frequencies \(\omega_{p}\) (marked as vertical lines in (b)). Dashed lines indicate behavior in the Markovian model, while solid lines indicate behavior in the non-Markovian model. Both models exhibit bistability for some detunings. The level of nonlinearity is set to \(\beta/\omega_{a}=10^{-10}\). (d) Phase diagram of the nonlinear F.W. model. Black region indicates conventional cavity bistability, while grey region indicates self-pulsing due to modulational instability (MI). (e-g) Transient behaviors of the system operating at different pump rates corresponding to different steady-state photon numbers \(n\) shown by horizontal dashed lines, and also marked in (d). Decay to a steady state via relaxation oscillations is seen in (e, f), while self-pulsing instability is seen in (g).
### Mean-field dynamics
Non-Markovian environment coupling strongly impacts the classical (mean-field) dynamics of \(a\). These dynamics are governed by Eq. 1, with all operators taken to c-numbers, yielding a generalized TCMT model with non-Markovian loss. When light is injected into the cavity, the input fields act as source terms. For a monochromatic pump of frequency \(\omega_{p}\), the input field has mean value \(\langle s(t)\rangle=s_{0}e^{-i\omega_{p}t}\), where \(|s_{0}|^{2}\) is the incident flux. In this case, the cavity photon number \(n=\langle a^{\dagger}a\rangle\approx|\left\langle a\right\rangle|^{2}\) in the steady-state satisfies the cubic equation:
\[[(\omega_{ap}+K_{l}^{\prime\prime}(\omega_{p})+\beta n)^{2}+K_{l}^{\prime}( \omega_{p})^{2}]n=|s_{0}|^{2}|K_{c}(\omega_{p})|^{2}, \tag{3}\]
where \(\omega_{ap}\equiv\omega_{a}-\omega_{p}\), and \(K_{l}=K_{l}^{\prime}+iK_{l}^{\prime\prime}\) has been decomposed into real and imaginary parts, which respectively give the dispersive loss and phase shift. The right-hand side concerns the coupling of the input, while the left-hand side concerns the cavity response, which is sensitive to the loss and detuning at the pump frequency.
As an important point of comparison to existing literature, we note that an externally pumped cavity with Kerr nonlinearity can exhibit bistability [10]. In other words, for a given pump strength \(s_{0}\), there can be two stable solutions to Eq. 3. This occurs because the amount of light coupled into the cavity depends on the pump-cavity detuning, which in turn depends on the cavity photon number via Kerr nonlinearity. Bistability occurs when \(\delta>\sqrt{3}K_{l}^{\prime}(\omega_{p})\), where \(\delta\equiv\omega_{ap}+K_{l}^{\prime\prime}(\omega_{p})\) is the total detuning (see dashed curves in Fig. 2c).
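Since Eq. 3 is cubic in \(n\), the steady-state branches can be enumerated with a polynomial root finder. The following minimal sketch (ours) uses the F.W. expressions of Eq. 2 with the parameters of Fig. 2; the pump detuning and flux values are arbitrary illustrative choices, not those of the figure:

```python
import numpy as np

omega_a = 1.0
kappa, gamma = 1e-4, 1e-2
omega_d = omega_a + gamma
beta = 1e-10                          # Kerr shift per photon, as in Fig. 2

def K_l(w):                           # Eq. 2b
    return kappa * (1.0 - gamma / (1j * (omega_d - w) + gamma))

def K_c(w):                           # Eq. 2a
    return np.sqrt(2 * kappa) * (1.0 - gamma / (1j * (omega_d - w) + gamma))

def branches(omega_p, flux):
    """Real, non-negative roots n of Eq. 3 for incident flux |s0|^2."""
    Kl = K_l(omega_p)
    delta = (omega_a - omega_p) + Kl.imag          # total detuning
    P = flux * abs(K_c(omega_p)) ** 2
    # Eq. 3 rearranged: beta^2 n^3 + 2 beta delta n^2 + (delta^2 + K'^2) n - P = 0
    roots = np.roots([beta**2, 2 * beta * delta, delta**2 + Kl.real**2, -P])
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-6 * max(1.0, abs(r)) and r.real >= 0)

omega_p = omega_d + 2 * kappa                      # illustrative pump near the BIC
for flux in [1e4, 1e6, 1e8]:
    print(f"flux {flux:.0e}: n branches = {branches(omega_p, flux)}")
```

When three positive roots appear, the outer two are the stable branches and the middle one is unstable, reproducing the S-shaped input-output curves of Fig. 2c.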
In the non-Markovian case, both the coupling into the cavity and the corresponding loss differ for each pump frequency. As a result, the non-Markovian input-output behaviors exhibit important deviations from their Markovian counterparts. The most important distinction occurs when the pump frequency is near the frequency of the lossless mode (BIC), which is approximately \(\omega_{d}\) when \(\kappa\ll\gamma\). Here, the F.W. model exhibits vanishing loss, which drastically reduces the pump power required to maintain the steady-state at the boundary of the top stable branch. In particular, it can be shown through Eq. 3 that at the upper bistable point, the required pump flux is reduced by the same proportion as the loss (2 orders of magnitude in this example). The combination of Kerr nonlinearity and a frequency at which the loss nearly vanishes thus corresponds to a cavity intensity at which the loss nearly vanishes. Although low-threshold bistability can of course be realized in a Markovian system (by minimizing the loss), that loss rate will still be independent of the cavity intensity.
Non-Markovian dissipation also strongly impacts the transient behaviors in nonlinear systems, in particular by introducing a modulational instability (MI) which occurs specifically due to the frequency-dependence of the loss. By analyzing the response of the steady-state to perturbations, we construct the phase diagram of the driven-dissipative system in the space of pump frequency and photon number (Fig. 2d). The black region indicates the traditional unstable region which occurs when the input-output curve "bends back" on itself, splitting the input-output curve into two disconnected stable branches. The grey regions mark the MI induced by loss dispersion. In this particular example, the MI gain is highest in the vicinity of the bistable region (see S.I.).
The consequences of this modulation instability can be understood from the transient dynamics of the nonlinear system excited from the vacuum state with various pump fluxes. For parameters in the stable part of the phase diagram, transients can be described as a damped oscillation around the steady-state (referred to sometimes as "relaxation oscillations") (Figs. 2e-f). For a pump flux which nears, but does not enter, the MI regime (Fig. 2f), the decay time of the relaxation oscillations increases by an order of magnitude, compared to a point further from the instability (Fig. 2e).
Inside the MI region (Fig. 2g), the photon number pulses about the steady-state value predicted from the mean-field theory (\(n=5\times 10^{8}\)), spontaneously and indefinitely. The pulsing frequency is \(\Omega_{p}=\sqrt{\Omega^{2}-(\Gamma/2)^{2}}\), where \(\Omega=\sqrt{\Delta^{2}-(\beta n)^{2}}\) is the relaxation oscillation frequency given in terms of the detuning parameter \(\Delta=\omega_{ap}+2\beta n\), and \(\Gamma\) is the relaxation oscillation decay rate. The pulsing amplitude can be a substantial fraction of the steady-state value; for these particular parameters, the cavity photon number swings over a range which is 80% of the mean value itself.
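These transients can be reproduced numerically by unraveling the memory kernel of Eq. 1: for the F.W. loss of Eq. 2, the kernel is \(K_{l}(\tau)=\kappa\delta(\tau)-\kappa\gamma e^{-(i\omega_{d}+\gamma)\tau}\theta(\tau)\), so the convolution is captured exactly by a single auxiliary variable. The sketch below (our construction; the drive amplitude is an arbitrary illustrative value, and whether a given run damps to a fixed point or self-pulses depends on where it lands in Fig. 2d) integrates the resulting mean-field ODEs in the frame rotating at \(\omega_{p}\):

```python
import numpy as np

omega_a = 1.0
kappa, gamma, beta = 1e-4, 1e-2, 1e-10
omega_d = omega_a + gamma
omega_p = omega_a + 1.5 * gamma        # illustrative pump frequency
s0 = 3e4                               # illustrative pump amplitude, flux = |s0|^2

Kc_p = np.sqrt(2 * kappa) * (1 - gamma / (1j * (omega_d - omega_p) + gamma))
Da, Dd = omega_a - omega_p, omega_d - omega_p

def rhs(y):
    """Mean-field flow: A is the cavity amplitude, M the memory variable."""
    A, M = y
    dA = -1j * (Da + beta * abs(A) ** 2) * A - kappa * A + M + Kc_p * s0
    dM = -(1j * Dd + gamma) * M + kappa * gamma * A
    return np.array([dA, dM])

y = np.zeros(2, dtype=complex)         # start from vacuum
dt = 0.05 / gamma                      # resolve the memory time 1/gamma
n_t = []
for _ in range(200000):                # fixed-step RK4
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y = y + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    n_t.append(abs(y[0]) ** 2)

print("late-time photon number ~   ", np.mean(n_t[-1000:]))
print("late-time peak-to-peak swing:", np.ptp(n_t[-20000:]))
```

A damped trajectory gives a small late-time swing (relaxation oscillations, as in Figs. 2e-f), while a trajectory inside the MI region keeps a swing comparable to the mean itself (Fig. 2g).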
The MI which occurs here is initiated not by frequency dispersion of the index of refraction [31], but rather by dispersion of the loss of the cavity. A small number of works have explored MI induced by dispersive loss in optical fibers [32; 33; 34]. Although the MI in the system described here shares the dispersive-loss feature with known fiber systems, the cavity nature of our system results in some key differences in the way the pulsing frequency \(\Omega_{p}\) is set. In particular, the MI we describe has a pulsing frequency which depends on the steady-state photon number, in contrast to the related fiber effects, in which the MI frequency is set by the detuning of the pump from the maximum-loss frequency. These differences make it clear that the physical nature of the instability here is substantially different from that in previous fiber works, even though dispersive loss is needed in both cases. For the parameters considered here, \(\Omega_{p}\) is on the THz scale, making such sources attractive for modulating light at frequencies which are inaccessible by electronic means.
### Quantum noise dynamics
In addition to the mean-field properties described above, non-Markovian loss in driven-dissipative systems can induce important changes in quantum noise properties. Namely, amplitude and phase noise increase near the onset of MI, while amplitude noise can be strongly suppressed below the shot noise limit by the presence of sharp resonance-frequency-dependent loss.
This discussion is based on our theory for the quantum noise of the cavity in the presence of nonlinearity and non-Markovian dissipation. The theory is based on a linearization approach that considers deviations of Eq. 1 from the mean-field solution. In particular, we assume that in the
steady-state, the cavity annihilation operator can be decomposed into a mean value and operator-valued fluctuations as \(a=\langle a\rangle+\delta a(t)\). By substituting this expression into Eq. 1, maintaining contributions at linear order, and transforming into frequency space, we obtain a coupled set of equations for the frequency space noise operators:
\[\begin{pmatrix}\eta(\omega)&i\beta n\\ -i\beta n&\eta^{*}(-\omega)\end{pmatrix}\begin{pmatrix}\delta a(\omega)\\ \delta a^{\dagger}(-\omega)\end{pmatrix}=\begin{pmatrix}K_{c}(\omega_{p}+ \omega)\delta s(\omega)\\ K_{c}^{*}(\omega_{p}-\omega)\delta s^{\dagger}(-\omega)\end{pmatrix}. \tag{4}\]
Here, we have defined \(\eta(\omega)\equiv i(\omega_{a}-\omega_{p}-\omega+2\beta n)+K_{l}(\omega_{p}+\omega)\). Additionally, it is understood that \(n=\langle a^{\dagger}a\rangle\) refers to the steady-state mean-field value.
The noise properties are most compactly described in terms of the variances of quadrature operators \(X=a+a^{\dagger}\) and \(Y=-i(a-a^{\dagger})\), which we find to be:
\[(\Delta Q_{\sigma})^{2}=\int\frac{d\omega}{\pi}\frac{R_{\sigma}(\omega)}{\left[ \Omega^{2}(\omega)-\omega^{2}\right]^{2}+\Gamma^{4}(\omega)}, \tag{5}\]
\[R_{\sigma}(\omega)\equiv K_{+}(\omega)\left[(\omega_{ap}+(2-\sigma)\beta n+ \omega)^{2}+K_{-}(\omega)^{2}\right]. \tag{6}\]
Here, \(K_{\pm}(\omega)\equiv\text{Re}\;K_{l}(\omega_{p}\pm\omega)\), and \(\sigma\) denotes the quadrature with variance \((\Delta Q_{\sigma})^{2}\), with \(\sigma=1\) corresponding to \(X\) and \(\sigma=-1\) corresponding to \(Y\). Additionally, \(\Omega(\omega)\) and \(\Gamma(\omega)\) are the frequency-dependent relaxation oscillation frequency and decay rate (see S.I. for expressions). In this formulation, the phase of the pump is chosen, without loss of generality, so the mean-field steady-state \(\langle a\rangle\) is positive and real; then, \(\Delta X\) is the intensity noise, and \(\Delta Y\) the phase noise.
Figure 3: **Quantum noise dynamics in a non-Markovian driven-dissipative cavity.** (a) Amplitude (X) and phase (Y) quadrature noise of the nonlinear F.W. model as a function of the steady-state photon number \(n\). Quadrature variance is shown in reference to the shot noise level. The non-Markovian model exhibits significant noise reduction on the upper bistable branch. The pump frequency is \(\omega_{a}=\omega_{d}+\gamma\), and all system parameters are the same as those used in Fig. 2. (b) Nonlinear phase diagram which shows amplitude quadrature variance as a function of pump frequency \(\omega_{p}\) and steady-state photon number \(n\). The Fano factor is defined as \(F=(\Delta n)^{2}/n\). (c) Sharp loss interpretation of amplitude noise reduction based on the dispersive loss. Pump frequency and sidebands at \(\pm 1\) probe the contours of the loss. As photon number approaches the bistable point, the relaxation oscillation frequency \(\Omega\) approaches zero, so that the noise probes the derivative of the loss. (d) Sharpness of loss as a function of different background losses.
As a point of comparison, we briefly review the noise properties associated with an ordinary bistable cavity (Fig. 3a, Markov). On the lower bistable branch, the amplitude noise increases with photon number until eventually diverging at the lower bistable point, while the phase noise decreases with photon number until hitting a minimum value which lies a factor of 2 below the shot noise limit (SNL). On the upper bistable branch, the situation is reversed: the phase noise diverges near the bistable point, while the amplitude noise attains a minimum value of \(1/2\) relative to the SNL. At large photon numbers, the amplitude noise approaches a universal value of \(2/3\) of the SNL for all pump frequencies [10].
Dispersive dissipation introduces dramatic changes. First, both amplitude and phase noise diverge near the onset of the MI, indicating the presence of strong bunching [50]. Second, the amplitude noise at the onset of the upper bistable branch drops dramatically compared to the Markovian case. This region lends itself to the natural generation of highly intensity-squeezed states; for the set of parameters shown, the minimum amplitude variance reached is around 0.15, almost 10 dB below the shot noise limit. By tuning the detuning of \(\omega_{a}\) and \(\omega_{d}\), as well as the pump \(\omega_{p}\), it is even possible to exceed 10 dB of amplitude squeezing (see S.I.).
The physics behind this phenomenon is best understood through the lens of intensity-dependent dissipation [35]. In particular, the simultaneous presence of Kerr nonlinearity and strong resonance-frequency-dependent loss provides the cavity with an effective _photon number dependent loss_, obtained by composing the resonance-frequency-dependent loss and the intensity-dependent resonance frequency. In the quantum picture, certain number states experience much lower losses than others, effectively amplifying their presence in the steady-state. By combining this idea of intensity-dependent loss with a continuous-wave drive, it then becomes possible to indefinitely stabilize low-noise resonator states, and even states approaching intracavity Fock states (Fig. 4). That such low-noise resource states can be indefinitely maintained -- even in the presence of loss -- shows a critical advantage of considering non-Markovian nonlinear dynamics in the driven-dissipative setting.
This intuition is well supported by an analytical approximation we have derived for the amplitude noise (see S.I.):
\[(\Delta X)^{2}\approx\left(1-\frac{\beta n}{\Delta}\right)\frac{\left(\frac{ \Delta}{\Omega}\right)^{2}+r\left(\frac{\Delta}{\Omega}\right)}{1+r\left(\frac {\Delta}{\Omega}\right)}. \tag{7}\]
This approximation holds in the adiabatic regime in which the non-Markovian coupling element has a shorter time response than cavity (\(\gamma\gg\kappa\) for the F.W. model). In Eq. 7, \(r=(K_{+}-K_{-})/(K_{+}+K_{-})\) is the ratio of the difference and sum of losses at sideband frequencies \(K_{\pm}\equiv\text{Re}\,K_{l}(\omega_{p}\pm\Omega)\). Thus, while the steady-state mean-field behavior depends on the loss at the pump frequency (Eq. 3), the noise level is determined by the difference in loss between the sideband frequencies \(\omega_{p}\pm\Omega\) (Fig. 3c). A sharply resonance-frequency-dependent loss creates a nonzero \(r\), allowing the Fano factor to drop far below the Markovian limit of \(1/2\). As the upper bistability boundary is approached, the sideband frequency \(\Omega\) tends to zero, so that \(r\) depends on the slope of the dispersive loss at the pump frequency. In particular, \(r/\Omega\rightarrow\left[d\,K_{l}^{\prime}(\omega)/d\omega\right]_{\omega_{p} }/K_{l}^{\prime}(\omega_{p})\) as \(\Omega\to 0\). In this regime, the amplitude noise is determined quite directly by the "sharpness" of the dispersive loss curve: a sharper loss pushes noise further below the classical limit.
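Eq. 7 is simple enough to evaluate directly. The sketch below (ours) scans steady-state photon numbers for the F.W. parameters of Figs. 2-3 and reports the predicted amplitude variance relative to the shot-noise level; the pump frequency is one illustrative choice, and points where \(\Omega^{2}\) would be negative (outside this approximation's regime of validity) are skipped:

```python
import numpy as np

omega_a = 1.0
kappa, gamma, beta = 1e-4, 1e-2, 1e-10
omega_d = omega_a + gamma
omega_p = omega_d + gamma / 2          # illustrative pump, blue of omega_a

def ReKl(w):
    """Re K_l(w) for the F.W. model, Eq. 2b."""
    return (kappa * (1 - gamma / (1j * (omega_d - w) + gamma))).real

omega_ap = omega_a - omega_p
for n in np.logspace(7.5, 8.6, 12):
    Delta = omega_ap + 2 * beta * n
    Omega2 = Delta**2 - (beta * n) ** 2
    if Delta <= 0 or Omega2 <= 0:
        continue                        # outside the regime assumed by Eq. 7
    Omega = np.sqrt(Omega2)
    Kp, Km = ReKl(omega_p + Omega), ReKl(omega_p - Omega)
    r = (Kp - Km) / (Kp + Km)           # sideband loss asymmetry
    x = Delta / Omega
    var = (1 - beta * n / Delta) * (x**2 + r * x) / (1 + r * x)
    print(f"n = {n:.2e}:  (Delta X)^2 relative to SNL = {var:.3f}")
```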
One critical question is the maximum amplitude noise reduction below the SNL which can be achieved through coherent pumping of a nonlinear non-Markovian resonance. Unlike other methods for generating amplitude-squeezed light with Kerr nonlinearity [36; 37], the approach presented here does not have strict theoretical limits on the achievable number uncertainty. This is possible since the ideal noise reduction is set by the ratio of the sharpness of the frequency-dependent loss to the loss itself. For the F.W. model, this can be achieved by bringing the pump frequency to an \(\omega_{p}\) arbitrarily close to the BIC at \(\omega_{d}\). In practice, the limiting factor on the noise reduction will be the presence of additional loss mechanisms such as material absorption, scattering loss, or multi-photon decay processes [35]. To model this, we add a small frequency-independent background loss \(\kappa_{\text{bg}}\) to the resonator \(a\). Larger background losses decrease the maximum ratio \(r\) that can be achieved, and push the pump frequency of maximal noise reduction slightly away from the BIC (Fig. 3d).
_More general dispersive loss models._ Although we have focused on the nonlinear F.W. model as a minimal example, both the physical phenomena and theoretical tools in this work extend much more widely. As an example, we briefly show that similar phenomena can be realized in a reflection-based Fano resonance geometry with strong nonlinearity. Such structures have been realized experimentally, and even shown to exhibit classical nonlinear effects [39; 40; 41; 42], making them prime candidates for quantum optics experiments. For the particular example we consider (Fig. 4a), the loss is periodic in frequency, with period set by the free spectral range of the cavity. The loss profile can be engineered by tuning the transmission and reflection parameters of the Fano interference (Figs. 4b-c).
By pumping to photon numbers near the upper bistable onset, one can create cavity states with amplitude noise far below the classical limit. Thus the "sharp loss" mechanism of noise reduction is quite general, and can be exploited in many systems with sufficiently sharp dispersion, strong nonlinearity, and low background loss. For this particular system, pumping at a frequency which is highly detuned from \(\omega_{a}\) enables the generation of states which contain several thousand photons, with Fano factors down to \(0.02\) (equivalent to 17 dB of squeezing). Moreover, small detunings enable the generation of states which become very close to intracavity Fock states: this particular example system supports a cavity state containing \(n=70\) photons, with an uncertainty \(\Delta n\approx 1\) (Fig. 4b).
The Fano mirror loss also introduces additional complexities in the phase diagram and quantum noise landscape compared to the F.W. model (Figs. 4b-c). Since the stability behavior and quantum noise are sensitive to the sideband losses at \(\omega_{p}\pm\Omega\), varying the pump frequency and steady-state number probes the dispersive loss. The result is that both the quantum noise and MI regions produce a quilted pattern, with MI boundaries determined by the zeros of the denominator in Eq. 7. By changing the dispersion of the loss (in this case, by
changing the reflection and transmission coefficients of the Fano interference), one changes the instability pattern.
_Experimental considerations._ The architectures and parameters used in this work were chosen for their compatibility with current experimental capabilities. In particular, the parameters used in Figs. 2 and 3 are compatible with state-of-the-art integrated nanophotonic structures [39; 40]. Such structures, as well as other potential structures on silicon nitride or lithium niobate, are appealing due to their simultaneous ability to provide highly optimized classical interference for dispersive dissipation, and substantial Kerr nonlinearity with a compact footprint. Additionally, exciton-polariton condensates in nanophotonic structures [38] can provide the modal confinement and nonlinear strength required to realize these effects with tens to thousands of photons (as shown in Fig. 4). Yet other, more "macroscopic" platforms may also prove suitable hosts for the non-Markovian driven-dissipative dynamics we describe; fiber systems in particular have the advantage of long propagation lengths to provide the requisite nonlinear phase shifts.
## IV Conclusion and outlook
In summary, we have introduced non-Markovian driven-dissipative systems as a platform to engineer the classical and quantum behavior of light. In these systems, the dependence of the loss rate on resonance frequency leads to driven-dissipative phases which are not present with Markovian dissipation. In particular, we showed that dispersive dissipation in coherently driven cavities leads to an unexplored class of modulational instability. Additionally, we showed that in these systems, the amplitude noise is shaped by the dispersion of the loss, with sharp resonance-frequency-dependent losses leading to the natural steady-state production of strongly intensity squeezed, and even near-Fock cavity states.
Unlike most driven-dissipative systems which interact instantaneously with their environments, non-Markovian architectures harness temporal correlations to greatly enlarge the space of possible behaviors. Our findings point toward the potential of these systems to create quantum states of light which are not naturally produced by other means. In addition to the strong intensity squeezing described, using these platforms to generate multi-mode correlated states could open new avenues in optical quantum information processing.
Although we have focused on Kerr nonlinearity in this work, we anticipate that it will be fruitful to explore more general non-Markovian driven-dissipative systems. In particular, by incorporating phase-sensitive dynamics (such as those realized by optical parametric oscillators and amplifiers [43; 44]), an even more interesting range of behaviors is expected. Moreover, the incorporation of gain media into nonlinear cavities can provide yet another route to realizing strong non-Markovian effects [45; 46]. Finally, we note that while we considered continuous-wave driving, the physics here can also be extended to generating quantum states with pulsed driving, an ongoing topic of theoretical and experimental exploration [47; 48; 17; 49]. These platforms -- and others not yet imagined -- have the potential to bring new degrees of control to the classical and quantum behavior of light.
## V Acknowledgments
We acknowledge useful discussions with Yannick Salamin and Shiekh Zia Uddin. J.S. acknowledges previous support of a Mathworks Fellowship, as well as previous support from a National Defense Science and Engineering Graduate (ND-SEG) Fellowship (F-1730184536). N.R. acknowledges the support of a Junior Fellowship from the Harvard Society of Fellows. This work is also supported in part by the U. S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT, under Collaborative Agreement Number W911NF-18-2-0048. We also acknowledge support of Parviz Tayebati.
Figure 4: **Driven-dissipative behavior of a nonlinear Fano mirror.** (a) Photonic crystal (PhC) based Fano mirror which exhibits dispersive loss as a result of interference between a line cavity and a single defect cavity. Parameters used are \(\omega_{a}=1.03\times 10^{15}\) s\({}^{-1}\), \(\kappa=10^{-4}\omega_{a}\), \(L=5\)\(\mu\)m, and \(\beta=10^{-4}\omega_{a}\), which are chosen to represent systems with strong per-photon nonlinearity [38]. (b, c) Loss profile of the resonator \(a\) for different values of the reflection and transmission coefficients at the junction between the resonances and the external continuum. Profiles can be either symmetric (b) or asymmetric (c). Below each loss profile is the phase diagram of the system subject to continuous driving with frequency \(\omega_{p}\) to a steady-state photon number \(n\). The "wedge" region is unstable through the traditional mechanism, while the many other regions exhibit modulational instability. |
2309.07696 | Steady-state entanglement production in a quantum thermal machine with
continuous feedback control | Quantum thermal machines can generate steady-state entanglement by harvesting
spontaneous interactions with local environments. However, using minimal
resources and control, the entanglement is typically very noisy. Here, we study
entanglement generation in a two-qubit quantum thermal machine in the presence
of a continuous feedback protocol. Each qubit is measured continuously and the
outcomes are used for real-time feedback to control the local
system-environment interactions. We show that there exists an ideal operation
regime where the quality of entanglement is significantly improved, to the
extent that it can violate standard Bell inequalities and uphold quantum
teleportation. In particular, we find, for ideal operation, that the heat
current across the system is proportional to the entanglement concurrence.
Finally, we investigate the robustness of entanglement production when the
machine operates away from the ideal conditions. | Giovanni Francesco Diotallevi, Björn Annby-Andersson, Peter Samuelsson, Armin Tavakoli, Pharnam Bakhshinezhad | 2023-09-14T13:15:45Z | http://arxiv.org/abs/2309.07696v1 | # Steady-state entanglement production in a quantum thermal machine with continuous feedback control
###### Abstract
Quantum thermal machines can generate steady-state entanglement by harvesting spontaneous interactions with local environments. However, using minimal resources and control, the entanglement is typically very noisy. Here, we study entanglement generation in a two-qubit quantum thermal machine in the presence of a continuous feedback protocol. Each qubit is measured continuously and the outcomes are used for real-time feedback to control the local system-environment interactions. We show that there exists an ideal operation regime where the quality of entanglement is significantly improved, to the extent that it can violate standard Bell inequalities and uphold quantum teleportation. In particular, we find, for ideal operation, that the heat current across the system is proportional to the entanglement concurrence. Finally, we investigate the robustness of entanglement production when the machine operates away from the ideal conditions.
## I Introduction
Quantum thermal machines are quantum systems coupled to two, or several, thermal reservoirs, which exploit temperature gradients to perform useful tasks such as cooling, heating, time-keeping, and producing work [1, 2, 3]. In contrast to their classical counterparts, these machines rely on quantum features, like entanglement and tunneling. Therefore, they are promising testbeds for studying fundamental aspects of quantum physics, such as the generation, stabilization, and control of entanglement in the presence of thermal environments.
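Since the abstract relates the heat current to the entanglement concurrence, a generic numerical sketch may be helpful here; it implements Wootters' concurrence for an arbitrary two-qubit density matrix, and the Werner state used as input is merely an illustration, not the actual steady state of the machine studied below:

```python
import numpy as np

def concurrence(rho):
    """Wootters' concurrence of a two-qubit density matrix rho."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    R = rho @ Y @ rho.conj() @ Y                 # rho (sy x sy) rho^* (sy x sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Werner state: p |Psi-><Psi-| + (1 - p) I/4, entangled iff p > 1/3
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
for p in [0.2, 0.5, 0.9]:
    rho = p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4
    print(f"p = {p}: concurrence = {concurrence(rho):.3f}")   # (3p-1)/2 when positive
```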
To this end, it was shown that a minimal quantum thermal machine, consisting of two coherently interacting qubits coupled to two reservoirs at different temperatures, is able to produce stationary entangled states [4]. The word 'minimal' refers to the minimal setup required to generate entanglement. The success of this machine can be linked to the magnitude of the heat current flowing through the system [5]. However, the entanglement generated in such a machine is typically weak and noisy. For example, it is unable to perform well-known entanglement-based tasks such as teleportation or Bell inequality violation [6]. Therefore, in order to improve the entanglement production, it has been considered to supply the original autonomous system with additional resources. It has been found that heralding the output state of a multi-dimensional autonomous quantum thermal machine, via a local measurement, can generate maximally entangled states [7]. This type of approach also enables multi-partite entanglement production [8]. However, this requires coherent control of multi-level systems and the ability to perform non-demolition filter measurements. An alternative approach is to introduce a third bath that is common to both qubits [9], which leads to an improvement in the entanglement production. Another approach that improves the entanglement is to perform a population inversion process in fermionic baths [6]. This amounts to bath engineering, but can improve entanglement production to the extent that non-trivial teleportation fidelities are possi
2304.00105 | Normal forms, universal scaling functions, and extending the validity of
the RG | Our community has a deep and sophisticated understanding of phase transitions
and their universal scaling functions. We outline and advocate an ambitious
program to use this understanding as an anchor for describing the surrounding
phases. We explain how to use normal form theory to write universal scaling
functions in systems where the renormalization-group flows cannot be
linearized. We use the 2d Ising model to demonstrate how to calculate
high-precision implementations of universal scaling functions, and how to
extend them into a complete description of the surrounding phases. We discuss
prospects and challenges involved in extending these early successes to the
many other systems where the RG has successfully described emergent scale
invariance, making them invaluable tools for engineers, biologists, and social
scientists studying complex systems. | James P. Sethna, David Hathcock, Jaron Kent-Dobias, Archishman Raju | 2023-03-31T19:59:44Z | http://arxiv.org/abs/2304.00105v2 | # Normal forms, universal scaling functions, and extending the validity of the RG
###### Abstract
Our community has a deep and sophisticated understanding of phase transitions and their universal scaling functions. We outline and advocate an ambitious program to use this understanding as an anchor for describing the surrounding phases. We explain how to use normal form theory to write universal scaling functions in systems where the renormalization-group flows cannot be linearized. We use the 2d Ising model to demonstrate how to calculate high-precision implementations of universal scaling functions, and how to extend them into a complete description of the surrounding phases. We discuss prospects and challenges involved in extending these early successes to the many other systems where the RG has successfully described emergent scale invariance, making them invaluable tools for engineers, biologists, and social scientists studying complex systems.
## 1 Introduction
Half a century ago, Michael Fisher, together with Wilson and Kadanoff, introduced the renormalization group to analyze systems with emergent, fractal scale invariance. For five decades, physicists have applied these techniques to equilibrium phase transitions,
avalanche models, glasses and disordered systems, the onset of chaos, plastic flow in crystals, surface morphologies, etc. But these tools have not made a substantial impact on engineering and biology. We believe it is our duty to make these tools accessible to the broader science community.
\(\bullet\) We need to provide them with tools that allow them to describe not only the critical point, but properties of systems that exhibit incipient scale-invariant fluctuations yet are far from the critical region. These demand that we understand _corrections to scaling_, which become more important farther from the critical point. In section 3, we build on Michael's work on analytic corrections to scaling with Aharony [1, 2] and his work on the complex analytic properties of the Onsager solution [3] to extend our understanding of the 2d Ising critical point to a full description of the ferromagnetic and paramagnetic phases [4].
\(\bullet\) We need not only universal power laws, but a complete description of all behavior of the material. These demand convenient access to accurate _universal scaling functions_ that govern behavior involving more than two quantities at a time. Michael taught us about these powerful tools in his pedagogical reviews and lectures. So, magnetization as a function of temperature goes as \(M\sim t^{\beta}\), but magnetization as a function of temperature and external field goes as \(M\sim t^{\beta}\mathcal{M}(h/t^{\beta\,\delta})\), where \(\mathcal{M}(X)\) is a universal function, in principle predicted from the renormalization group (a toy demonstration of data collapse onto such a scaling function is sketched just after this list). In section 4, we present a user-friendly solution [6] for \(\mathcal{M}(X)\) in the 2d Ising model, with a systematic expansion that captures the correct singular behaviors with seven-digit accuracy.
\(\bullet\) We need rapidly converging methods that can connect our deep understanding of the singularities in phases and in universal scaling functions to quantitative predictions which make the best use of limited information away from the critical points. In modern numerical analysis, one can integrate or approximate analytic functions of one variable with exponential accuracy [7, secs. 4.6, 5.8.1] so long as one knows the singularities at the endpoints. Chebyshev, Gauss, and Romberg methods have superseded Simpson's rule and its generalizations for approximating and integrating analytic functions, and can be adapted to capture known singularities. We know that all properties are analytic in phases, and our mission for five decades has been to understand the singularities between phases. Can we explain the phases using the critical points? The exponential convergence in sections 3 and 4 demonstrates that normal form theory can do so.
Focusing on the 2d Ising model allows us to show proof of principle, but can we aspire to similar progress for other, less well studied critical points? We suggest this as a key task for the next stage of research in critical phenomena and emergent scale invariance. In section 5, we discuss progress by those using NPFRG (non-perturbative functional renormalization-group) methods [8], which have found broad application not only in equilibrium thermodynamic systems, but in avalanche models, quantum systems, turbulence, ... They explicitly coarse-grain and renormalize systems in fixed dimension, implicitly calculating the universal scaling functions and the system-specific behavior far from criticality. They have mostly been used to extract high-precision critical exponents, amplitude ratios, and proofs of success. If we can organize these calculations into user-friendly universal scaling functions, we may provide
experimentalists, simulators, and theorists in a variety of fields with tools to describe matter beyond systems tuned to a single point on the phase diagram.
In the final section 6, we delve into deep issues in the renormalization group and the prospects and challenges they provide to our mission to make the theory of critical phenomena an organizing principle for much of science.
## 2 Normal form theory, RG flows, and logs
The renormalization group takes an enormous leap of abstraction - studying emergent scale invariance as a flow under coarse-graining in the space of all possible systems. This reduces the problem to the study of fixed points for differential equations in an infinite-dimensional space (e.g., free energy \(f\), temperature \(t=(T-T_{c})\), field \(h\), other parameters \(u\),...). Near the critical point for the Ising model, coarse-graining and rescaling the free energy \(f\) by a factor \(e^{\ell}\) is described by the differential equations
\[\begin{split} df/d\ell&=df+At^{2}+\text{other nonlinear}\ldots\\ dt/d\ell&=\lambda_{t}t+\text{nonlinear}\ldots\\ dh/d\ell&=\lambda_{h}h+\text{nonlinear}\ldots\\ du/d\ell&=\lambda_{u}u+\text{nonlinear}\ldots\end{split} \tag{1}\]
The first step in most treatments of these RG flows is to linearize the flow near the fixed point. Positive eigenvalues \(\lambda_{t}\) and \(\lambda_{h}\) correspond to relevant operators like field \(h\) and temperature \(t\); negative eigenvalues \(\lambda_{u}\) correspond to 'irrelevant' perturbations like \(u\) that provide _singular corrections to scaling_ that are subdominant near the critical point. Using this linearization, this treatment then argues for universal power laws for things like the correlation length \(\xi(t)\sim t^{-\nu}=t^{-1/\lambda_{t}}\) at \(h=0\), and universal power laws times universal functions
\[f(t,h,u) \sim t^{d\nu}\mathcal{F}(h/t^{\beta\delta},ut^{\omega})=t^{d/ \lambda_{t}}\mathcal{F}(h/t^{\lambda_{h}/\lambda_{t}},ut^{-\lambda_{u}/ \lambda_{t}}),\text{ and} \tag{2}\] \[M(t,h,u) \sim t^{\beta}\mathcal{M}(h/t^{\beta\delta},ut^{\omega})=t^{(d- \lambda_{h})/\lambda_{t}}\mathcal{M}(h/t^{\lambda_{h}/\lambda_{t}},ut^{- \lambda_{u}/\lambda_{t}}) \tag{3}\]
for things like the field-dependent free energy and magnetization, that describe relations between more than two quantities.
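As a sanity check on this structure, the minimal sketch below (ours, using the mean-field equation of state \(h=tm+m^{3}\), for which \(\beta=1/2\) and \(\delta=3\)) verifies numerically that \(m/t^{\beta}\) depends on \(t\) and \(h\) only through the invariant combination \(h/t^{\beta\delta}\), the content of Eq. 3 with the irrelevant variable dropped.

```python
import numpy as np

# Mean-field toy equation of state: h = t*m + m**3, so beta = 1/2, delta = 3.
beta, delta = 0.5, 3.0

def m_of(t, h):
    roots = np.roots([1.0, 0.0, t, -h])          # m**3 + t*m - h = 0
    return roots[np.abs(roots.imag) < 1e-9].real.max()

X = np.linspace(0.1, 5.0, 9)                     # the invariant h/t**(beta*delta)
curves = np.array([[m_of(t, x*t**(beta*delta))/t**beta for x in X]
                   for t in (0.05, 0.1, 0.2)])
print(np.max(np.abs(curves - curves[0])))        # tiny: the curves collapse
```

Applied to data rather than to a toy model, the same rescaling-and-collapse is how universal scaling functions are extracted from simulations and experiments.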
This linearization is not useful in many cases (e.g., the 1d, 2d, and 4d Ising models and all models in their upper and lower critical dimensions). How to systematically formulate universal scaling functions in these cases has hitherto been mysterious. In this section, we describe the use of normal form methods from dynamical systems theory by Raju et al. [5] to understand when this linearization is possible, and how to modify the invariant arguments to the universal scaling functions when it is not.
Wegner and co-workers [9, 10] in the early days justified this linearization by changing variables to _nonlinear scaling fields_ which transform linearly under the renormalization group. Cardy [11] denotes these new variables \(u_{t}\) and \(u_{h}\), but we shall use
tildes \(\widetilde{t}\) and \(\widetilde{h}\). By choosing a suitable Taylor expansion of the change of variables,
\[\begin{split} t(\widetilde{t},\widetilde{h},\widetilde{u},\dots)&=\widetilde{t}+a_{tu}\widetilde{t}\widetilde{u}+a_{th^{2}}\widetilde{t}\widetilde{h}^{2}+\dots\\ h(\widetilde{t},\widetilde{h},\widetilde{u},\dots)&=\widetilde{h}+b_{hu}\widetilde{h}\widetilde{u}+b_{ht}\widetilde{h}\widetilde{t}+\dots\\ &\dots\end{split} \tag{4}\]
the equations simplify to \(d\widetilde{f}/d\ell=d\widetilde{f}\), \(d\widetilde{t}/d\ell=\lambda_{t}\widetilde{t}\), \(d\widetilde{u}/d\ell=\lambda_{u}\widetilde{u}\), etc. Aharony and Fisher [1, 2] ten years later noted that this change of variables leads to what we call _analytic corrections to scaling_, again subdominant near the critical point. The analytic corrections to scaling have power laws that involve integers and combinations of existing critical exponents \(\beta\), \(\nu\), \(\delta\), and include an analytic background to the free energy; these corrections can be written in terms of derivatives of the universal scaling function. The aforementioned singular corrections to scaling introduce new critical exponents, and become independent variables in the universal scaling functions.
Dynamical systems theory [12, 5] tells us that linearizing the flow can only be done for what are called hyperbolic fixed points. The change of variables \(f,t,h,u\to\widetilde{f},\widetilde{t},\widetilde{h},\widetilde{u}\) is calculated one polynomial order at a time, and its radius of convergence can be a subtle mathematical issue [13]. In section 3, we shall take on the ambitious task of attempting to describe the entire surrounding paramagnetic and ferromagnetic phases for the 2d Ising model. There we shall see that convergence is tricky, but a good choice of coordinates yields a radius of convergence that appears to extend precisely over the range from zero to infinite temperature.
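A minimal symbolic sketch (our own toy flow, not one from [9, 10]) shows how one order of this construction works, and where it fails:

```python
import sympy as sp

lt, lu, c, a = sp.symbols('lambda_t lambda_u c a')
T, U = sp.symbols('T U')       # stand-ins for the nonlinear scaling fields

t = T*(1 + a*U)                # near-identity change of variables; u = U
# If the new variables flow linearly (dT/dl = lt*T, dU/dl = lu*U), the
# chain rule gives dt/dl below; it must match the original nonlinear flow
# dt/dl = lt*t + c*t*u through first order in U.
dt_chain = lt*T*(1 + a*U) + T*a*lu*U
dt_flow = sp.expand(lt*t + c*t*U)
cross = sp.expand(dt_flow - dt_chain).coeff(T).coeff(U, 1)
print(sp.solve(sp.Eq(cross, 0), a))   # [c/lambda_u]
```

The required coefficient \(a=c/\lambda_{u}\) blows up as \(\lambda_{u}\to 0\): marginal variables and, more generally, resonances obstruct the linearization, which is the situation taken up next.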
Even for the two-dimensional Ising model, the RG fixed point cannot be linearized. The specific heat has a logarithmic singularity: often denoted \(\alpha=0(\log)\), but incompatible with a linearized flow. This arises because in 2d no polynomial change of variables can remove the \(At^{2}\) term in the flow of \(f\) in Eq. 1 (although rescaling the relative magnitudes of \(f\) and \(t\) can change the value of \(A\) to one). Somewhat messy algebra can confirm that this is due to an integer _resonance_ between the two linear eigenvalues \(\lambda_{f}=d=2=2/\nu=2\lambda_{t}\) for the 2d Ising model. The simplest normal form is thus \(d\widetilde{f}/d\ell=2\widetilde{f}-\widetilde{t}^{2}\), \(d\widetilde{t}/d\ell=\widetilde{t}\), etc. This results in a singularity in the free energy of the form \(t^{2}\log(t^{2})\) which will play an important role in sections 3 and 4.
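This singular form can be verified directly by integrating the normal-form flow out to a matching scale \(\ell^{*}\) where \(\widetilde{t}(\ell^{*})=1\), assigning the free energy an \(O(1)\) matching value there, and flowing back down; a minimal numerical sketch (ours, with conventions that produce a coefficient of \(-1/2\) for \(t^{2}\log t^{2}\); the constant multiplying \(t^{2}\) is non-universal in any case):

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(ell, y):                      # 2d Ising normal form, y = [f, t]
    f, t = y
    return [2*f - t**2, t]

def f0(t0, f_star=0.0):
    # integrate back from the matching scale l* (where t = 1) down to l = 0
    ell_star = np.log(1/t0)
    sol = solve_ivp(flow, [ell_star, 0.0], [f_star, 1.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

for t0 in (0.1, 0.01, 0.001):
    print(t0, f0(t0), -0.5*t0**2*np.log(t0**2))   # the two columns agree
```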
For the 4d Ising model, the leading irrelevant operator \(u\) becomes marginal, with a zero eigenvalue \(\lambda_{u}\). There it has always been clear that one cannot linearize the RG flow. In the dynamical systems nomenclature, the flow undergoes a transcritical bifurcation in \(d=4\) (together with the same resonance \(d=2/\nu\) seen in 2d Ising above). Our analysis [5] shows that the normal form for the RG flows is1
Footnote 1: The nonlinear term proportional to \(\widetilde{u}\widetilde{h}\) in the equation for \(d\widetilde{h}/d\ell\) needed in the general normal form is known to be zero for the 4d Ising model. Raju [14] has shown this is because there is a redundant variable proportional to the magnetization cubed in the renormalization group (see sec. 6).
\[d\widetilde{f}/d\ell=4\widetilde{f}-\widetilde{t}^{2},\ \ \ d\widetilde{t}/d\ell=2 \widetilde{t}-A\widetilde{u}\widetilde{t},\ \ \ d\widetilde{h}/d\ell=3\widetilde{h},\ \ \ d\widetilde{u}/d\ell=-\widetilde{u}^{2}+D \widetilde{u}^{3}. \tag{5}\]
The universal scaling of the free energy in a system of length \(L\) does not take the usual scaling form \(f(t,h,u,L)=L^{-d}{\cal F}(X,Y,Z)+f_{a}(t,h,u,L)\) with \(X=\widetilde{t}L^{2},Y=\widetilde{h}L^{3}\), and \(Z=\widetilde{u}/L^{\alpha/\nu}\). First, we have quite unusual scaling variables
\[X=\widetilde{t}L^{2}\left(W(yL^{1/D})/(1/(D\widetilde{u})-1)\right)^{-A},Y= \widetilde{h}L^{3},\ \ \mbox{and}\ \ Z=yL^{1/D} \tag{6}\]
Second, the free energy has a more complex form
\[f(t,h,u,L)=L^{-d}\mathcal{F}(X,Y,Z)-W(Z)^{-A}\left(\frac{W(Z)^{-A}}{1-A}-\frac{1}{A}\right)+f_{a}(t,h,u,L). \tag{7}\]
Here \(W(x)\) is the Lambert \(W\) or product log function, \(\widetilde{u}\) is the marginal quartic term in the Landau free energy, \(y\) is a messy known function of \(\widetilde{u}\), \(A\) and \(D\) are the amplitudes of nonlinear terms in the renormalization group flow (Eq. 5), and \(f_{a}\) is a non-singular, analytic function near the critical point.
Eq. 6 captures the complete, correct singularity for the 4d Ising model; the traditional log and log-log corrections in the specific heat and the susceptibility arise from expansions of the Lambert \(W\) function for large arguments. At bifurcations like \(d=4\) and resonances as at \(d=2\), normal form theory dictates the nature of the singularity at the critical point.
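A quick numerical aside (ours): the leading large-argument asymptotics of the Lambert function, \(W(z)\approx\log z-\log\log z\), is exactly where the familiar log and log-log corrections come from, and its slow convergence is an argument for keeping the resummed \(W\)-form.

```python
import numpy as np
from scipy.special import lambertw

# W(z) solves W*exp(W) = z; compare it to its leading asymptotics.
for z in (1e3, 1e6, 1e12):
    print(z, lambertw(z).real, np.log(z) - np.log(np.log(z)))
```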
It _also_ tells us that the free energy in the surrounding phase can be found by changing variables. So, for example, we know that the liquid-gas critical point at high pressures and temperatures is in the 3d Ising universality class. Hence the liquid and gas phase properties should be given by
\[f(T,P)=\widetilde{t}(T,P)^{3\nu}\mathcal{F}_{\rm 3dIsing}(\widetilde{h}(T,P)/\widetilde{t}(T,P)^{\beta\delta},\widetilde{u}(T,P)\widetilde{t}(T,P)^{\omega},\dots)+f_{a}(T,P), \tag{8}\]
where \(f_{a}\), \(\widetilde{t}\), \(\widetilde{h}\), and \(\widetilde{u}\) are analytic in temperature and pressure near the critical point. As usual, from the free energy (and a corresponding scaling form for the correlation function) one can derive all of the equilibrium and linear-response properties of the resulting phases. We shall implement a change of coordinates like this for the 2d Ising model in section 3.
We highlight the use of our normal-form methods [5] to solve a twenty-year-old puzzle - the unusual scaling of avalanches in the non-equilibrium 2d random-field Ising model [15] (see Fig. 1). Over 25 years ago, we used the random-field Ising model to study avalanches. We could understand the scaling in 3, 4, 5, 7, and 9 dimensions [16], but two dimensions made no sense. Changing the arguments from ratios of power laws to the invariant functions determined from normal form theory was one part of the puzzle.2
Footnote 2: Using random lattices to suppress faceting was the other obstacle.
The distribution of avalanche sizes in the random-field Ising model is cut off at size \(\Sigma(w)\) depending on the disorder \(w\). In three dimensions and higher it takes the traditional form \(\Sigma(w)=w^{-d_{f}\,\nu}\). In two dimensions, normal form theory predicts that \(\Sigma=(B+1/w)^{-Bd_{f}+C}\exp(d_{f}/w)\), with \(d_{f}\) a universal critical exponent, and \(B\) and \(C\) being universal constants associated with irremovable nonlinear terms in the renormalization-group flow. Using this, Fig. 1 does an excellent job explaining the behavior.
Figure 1: **Avalanche sizes** for the 2d \(T=0\) random-field Ising model (from [15]). (a) **Avalanches** for disorders \(w=0.5\), \(1.0\), and \(5.0\); each color is a separate avalanche. (b) **Scaling collapse** of the area-weighted avalanche size distribution. Note the factor of ten range in disorder \(w\), and the factor of \(2000\) range in typical avalanche size \(\Sigma\). Here the invariant scaling combination \(\Sigma(w)=\Sigma_{s}(B+1/w)^{-Bd_{f}+C}\exp(d_{f}/w)\) is not a ratio of power laws, but is derived directly from the nonlinear normal form of the renormalization-group equations in the lower critical dimension (as in Eq. 5). The area-weighted avalanche size distribution is thus \(sA(s|w)=(s/\Sigma(w))^{x}{\cal A}((s/\Sigma(w))^{y})\), with \({\cal A}\) a universal scaling function.
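To make the 2d form concrete, the sketch below evaluates \(\Sigma(w)\) with placeholder constants (the fitted values of \(d_{f}\), \(B\), and \(C\) are in [15]); the essential point is the \(\exp(d_{f}/w)\) factor, which replaces the power-law divergence of the cutoff at the lower critical dimension.

```python
import numpy as np

def Sigma(w, d_f=2.0, B=0.4, C=0.2):     # placeholder constants, not the
    # fitted 2d random-field Ising values of Ref. [15]
    return (B + 1/w)**(-B*d_f + C) * np.exp(d_f/w)

for w in (0.5, 1.0, 5.0):
    print(w, Sigma(w))                    # exponential growth at small disorder
```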
## 3 Changing coordinates to describe phases: Matching Onsager
As foreshadowed in section 2, normal form theory tells us how to transform universal scaling forms to describe the entire phases surrounding a critical point. This section summarizes our recent results [4], which implement this for the 2d Ising model in zero field, where Onsager's exact solution [17] for the free energy enables quantitative validation of the results.
Given the asymptotic scaling form for the free energy (or any other quantity) in normal form coordinates, \(\widetilde{f}(\widetilde{t},\widetilde{h},\widetilde{u})\), the free energy as a function of physical temperature, field, and irrelevant perturbations is \(f(t,h,u)=\widetilde{f}(\widetilde{t}(t,h,u),\widetilde{h}(t,h,u),\widetilde{u}(t,h,u))+f_{a}(t,h,u)\). The functions \(\widetilde{t}(t,h,u)\) and \(\widetilde{h}(t,h,u)\) are given by precisely the analytic change of variables that inverts Eq. 4, mapping the nonlinear scaling variables back to their physical counterparts. We must also add a term \(f_{a}(t,h,u)\) accounting for the analytic background of the free energy. Note that the change of coordinates is linear at lowest order: for example, \(\widetilde{t}\sim t\) and \(\widetilde{h}\sim h\). Therefore, \(f(t,h,u)\sim\widetilde{f}(t,h,u)\) near the critical point; in other words, the normal form free energy \(\widetilde{f}\) is the asymptotic scaling form near the critical point, usually computed using RG and related techniques.
We have already heard that the 2d Ising model in zero field has a nonlinear normal form due to the resonance between the free energy and temperature, \(d\widetilde{f}/d\ell=2\widetilde{f}-\widetilde{t}^{2}\), \(d\widetilde{t}/d\ell=\widetilde{t}\), and that these flow equations give rise to a logarithmic singularity, \(\widetilde{f}=-\widetilde{t}^{2}\log(\widetilde{t}^{2})\). Following the procedure outlined above, the free energy as a function of temperature is simply
\[f(t)=-\widetilde{t}(t)^{2}\log(\widetilde{t}(t)^{2})+f_{a}(t). \tag{9}\]
This result is consistent with Onsager's exact solution, which is known to have the form \(a(t)\log t^{2}+b(t)\) for some analytic functions \(a\) and \(b\). Comparing these expressions for the free energy, we need to find the coordinate change \(\widetilde{t}(t)\) and analytic background \(f_{a}(t)\) by solving
\[a(t)=-\widetilde{t}(t)^{2},\qquad b(t)=\widetilde{t}(t)^{2}\log(\widetilde{t}( t)^{2}/t^{2})+f_{a}(t). \tag{10}\]
Note that because \(\widetilde{t}(t)\) is linear to lowest order in \(t\), the term \(\log\left(\widetilde{t}(t)^{2}/t^{2}\right)\) is indeed analytic.
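Solving Eq. 10 is mechanical once \(a(t)\) and \(b(t)\) are known; a sympy sketch with hypothetical analytic inputs standing in for the functions extracted from Onsager's solution:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a = -t**2*(1 + t)     # hypothetical a(t); the true one also vanishes as -t^2
b = 1 + t + t**2      # hypothetical analytic b(t)

ttil = sp.sqrt(-a)                       # first half of Eq. (10)
fa = b - (-a)*sp.log(-a/t**2)            # second half of Eq. (10), rearranged
print(sp.series(ttil, t, 0, 4))          # t + t**2/2 - t**3/8 + ...
print(sp.series(fa, t, 0, 4))            # analytic, as claimed in the text
```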
We recently [4] computed the free energy Eq. 9, by perturbatively expanding Onsager's exact free energy around the critical point. A key question is the radius of convergence of the coordinate transformation \(\widetilde{t}(t)\) and \(f_{a}(t)\) to the normal form. Unlike Taylor expansions about analytic points, the radius of convergence of this normal-form analytic expansion about a singular point is not simply the distance in the complex plane to the next-nearest singularity. One might hope that physics would govern the convergence - perhaps the distance to zero temperature, infinite temperature, or the nearest other phase transition. Indeed, in each of the expansions discussed below, the critical point is closest to zero temperature, with this distance determining the radius of convergence.
Our investigations showed the importance of the choice of variable used to parameterize the distance to the critical point (Fig. 2(a)). Expanding \(\widetilde{t}\) and \(f_{a}\) in temperature \(t=(T-T_{c})\), for example, converges in an estimated range \(-T_{c}<t<T_{c}\) (or
\(0<T<2T_{c}\)). This is not the only natural choice, however. The low temperature expansions for the Ising model are expressed in powers of \(X=\exp(-2/T)\). \(X\) is also a natural variable in that the zeros of the 2d Ising partition function in the complex \(X\) plane form a circle passing through \(X_{c}\), as Fisher explained [3]. Using the Onsager solution to expand \(\widetilde{t}\) and \(f_{a}\) in terms of \(x=X-X_{c}\) yields a radius of convergence that extends all the way from zero temperature to \(X=2X_{c}\) (corresponding to \(T\approx 4.7\,T_{c}\)), but fails to describe higher temperatures.
Here we used special properties of the 2d Ising model to identify a new variable
\[V=\frac{5-3\sqrt{2}+X}{1+\sqrt{2}+X}\qquad v=V-V_{c}, \tag{11}\]
which allows our estimated radius of convergence to cover the full physical temperature range, from zero to infinity (Fig. 2(a)). Our coordinate \(v\), unlike \(t\) or \(x\), respects the self-dual symmetry of the 2d Ising model. Fisher's circle of zeros in \(X\) (and thus \(x\)) breaks this self-dual symmetry. The linear fractional transformation in Eq. 11 precisely unwraps this circle of zeros into a straight line, extending the self-dual symmetry to the complex plane. The circle of zeros becomes the branch cut of the logarithm in the scaling function for the free energy (Eq. 9).
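A short numerical check (ours) makes this symmetry explicit: with \(X=e^{-2/T}\) and \(T_{c}=2/\log(1+\sqrt{2})\), the coordinate \(v\) vanishes at \(T_{c}\) and reaches equal and opposite values at zero and infinite temperature, so a disk of convergence of radius \(\approx 0.1005\) about \(v=0\) covers the whole physical temperature axis.

```python
import numpy as np

Tc = 2/np.log(1 + np.sqrt(2))           # exact 2d Ising critical temperature

def V(T):
    X = np.exp(-2/T)
    return (5 - 3*np.sqrt(2) + X)/(1 + np.sqrt(2) + X)

Vc = V(Tc)                               # equals sqrt(2) - 1, just like X_c
for T in (1e-6, Tc, 1e6):
    print(T, V(T) - Vc)                  # about -0.1005, 0.0, +0.1005
```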
Fig. 2a compares the expansions \(f(t)\), \(f(x(t))\), and \(f(v(t))\) to the exact free energy. In accordance with the discussion above, we see improved convergence for the \(x\) and \(v\) expansions. The expansion in \(v\) numerically appears to have a radius of convergence that extends to the entire physical range, zero to infinite temperature. Physics here does determine the range of convergence of normal form theory.
The above results leave two questions: can we improve our approximation for the free energy further, such that it converges, even at low orders, across all temperatures? Furthermore, can we approximate the coordinate change and analytic background without knowledge of the exact solution or the nonlinear terms in the RG flows? To resolve these challenges, we again expand the free energy in \(v\) using the form given in Eq. 9, but fix the expansion coefficients of \(\widetilde{t}\) and \(f_{a}\) by matching to low-temperature expansions of the free energy, instead of expanding the exact solution. Importantly, this approach requires minimal knowledge about the critical point (only the asymptotic scaling form), with most of the information coming from deep within the low-temperature phase. Because the matching guarantees the correct low temperature behavior and has the correct log-singularity at the critical point built into the expansion, we see uniform convergence across all temperatures. For example, by sixth order, the approximation differs from the true free energy by at most \(0.5\%\) (red dashed line in Fig. 2a) and we see exponential convergence as we add additional terms (Fig. 2b).
Our approach for extending critical scaling forms to the neighboring phases should naturally generalize to unsolved statistical physics models and experimental systems. For example, low-temperature expansions are relatively easy to compute in all dimensions and for a variety of systems and can be used to approximate the analytic corrections to established critical scaling forms. Candidates for this method include the 3d Ising model, where critical exponents are known to high precision from conformal bootstrap, and the 2d Ising model in a field, whose scaling function is computed to high precision in section 4 below. Finally, it is credible that the liquid phase, long known as being challenging because there is no 'small parameter', could be described as a perturbation of the liquid-gas critical point, and captured as the Ising critical point plus analytic corrections (determined, for example, by matching to virial expansions).
## 4 2d Ising critical point in a field
Ever since Onsager solved the zero-field 2d Ising model, people have searched for a high-precision approximate solution for the 3d Ising model, and occasionally also for the 2d Ising model in a field. The most successful of these attempts make use of parametric coordinates to interpolate behavior in temperature and field around the critical point in such a way that the singularity is naturally incorporated. [19, 18] These coordinates, first introduced many decades ago, [20] are polar-like, with a radius-like component \(R\) that
Figure 2: **Capturing the entire phase with analytic corrections, 2d Ising** (from [4]). (a) **Radius of convergence depends on the expansion variable**. Expanding Onsager’s exact free energy (black line) in \(t\), \(x\), and \(v\) (colored solid lines) leads to increasing radii of convergence, with the convergence of the \(v\)-expansion estimated to cover the entire physical temperature range. Each expansion shown is 20th order. Determining analytic corrections in \(v\) by matching to the low-temperature expansion does even better, accurately reproducing the free energy across all temperatures even at low orders (red dashed line, 6th order). (b) **Exponential convergence.** Adding analytic corrections to the universal scaling function at \(T_{c}\) by fitting to the low-temperature expansion results in exponential convergence to the Onsager solution as we add more terms.
controls the proximity to the critical point, and an angle-like coordinate \(\theta\) that rotates around the temperature-field plane. When taken to parameterize the temperature-_magnetization_ plane, such coordinates can be periodic in \(\theta\) like true polar coordinates, something once studied by Michael and collaborators to describe the coexistence region. [21] However, when parameterizing the temperature-_field_ plane, there is an inevitable cut resulting from the discontinuity between the up and down states at low temperatures, and \(\theta\) is not a periodic coordinate but ends at the abrupt transition at some \(\theta_{0}\).
The power of adopting these coordinates is that, as a function of \(\theta\), the critical singularity does not appear, and one can reasonably well-approximate the scaling function in this coordinate as a simple analytic function. Using this principle, Caselle _et al._ did a remarkably careful job of creating a function that satisfied lots of known properties and measured values (in yellow of Fig. 3). Their method was doomed to slow convergence at small external fields, though, because while the singularity of the critical point was indeed removed by the coordinate change, other more subtle singularities in the free energy remain, including a key essential singularity as one crosses the \(h=0\) line for \(t<0\). Fig. 3 shows a comparison between the approximate form of Caselle _et al._ and that of our work including this singularity.
In order to rigorously realize the idea of a free energy scaling function broken into singular and analytic pieces, two singularities in the scaling function must be accounted for in addition to the critical one. One is ancient and well known to the experts: the Yang-Lee edge singularity, for \(T>T_{c}\) in the complex plane.[22, 23] The other has only recently been realized to be relevant, but has a nice physical picture. As one crosses \(h=0\), the equilibrium magnetization jumps. But if you cross just a little bit, the up-spin
Figure 3: **Comparison between parametric approximations to the scaling functions with and without the appropriate singularities** (from [6]). Scaling functions for the magnetization and susceptibility plotted as functions of the scaling invariant \(h|t|^{-\beta\delta}\). The errors in the blue curves (Kent-Dobias) are estimated to be roughly \(10^{-7}\). Caselle’s earlier work [18] that we build upon has significant errors near \(h=0\) (red point); these are due to the essential singularity at \(h=0\) and \(t<0\) that we address. These discrepancies grow larger with higher derivatives of the free energy, as shown in the right panel of Fig. 5.
magnetized state is metastable, lasting for a good while until a bubble of down spins forms and grows to flip the system. (Think of supersaturated vapor - \(101\%\) humidity - and the nucleation of raindrops.) Here the surface tension cost between the up and down regions for small droplets is bigger than the bulk free energy gain of aligning the interior with the external field. The energy barrier \(B\) to reach large sizes diverges as \(h\) vanishes, \(B/k_{B}T=c/h\) for some constant \(c\). Just like in quantum mechanics, where the lifetime of a state gives the energy an imaginary part, the free energy becomes complex for \(h<0\) with an essential singularity \(\mathrm{Im}f\sim e^{(c/h)}\). One can use a Kramers-Kronig transform to see that the real part also has an essential singularity, influencing \(f\) for \(h>0\) as well.
We can incorporate these two singularities into the universal scaling function by first generating a 'simplest' functional form that has the correct singularities, and then changing variables \(\theta\to\widetilde{\theta}=g(\theta)\) and adding an overall analytic function \(\mathcal{F}_{a}(\theta)=G(\theta)\), in a precise analogy with the analytic corrections we introduce to match the universal scaling function to describe the surrounding phases (sec. 3).
The 'simplest' form is taken from writing the most singular part of the imaginary part of the free energy, and then taking advantage of a Kramers-Kronig like relation to find the corresponding real part. The requisite contour integral in the complex-\(\theta\) plane is shown in Fig. 4.
Though complicated, this procedure pays dividends. By incorporating these singularities and matching the power series terms in \(g(\theta)\) and \(G(\theta)\) (analogous to \(\widetilde{t}(t)\) and \(f_{a}(t)\) in section 3), we are able to achieve exponential convergence of the scaling function to exactly known values at \(t=0\), and also achieve exponential convergence of derivatives. The left-hand side of Fig. 5 shows this convergence in the free energy itself as a function of the scaling invariant \(th^{-1/\beta\delta}\) by subtracting our best converged approximation \(\mathcal{F}^{[6]}\) from the lower-order approximations \(\mathcal{F}^{[n]}(\theta)\) for \(n\in\{2,3,4,5\}\). These provide evidence that our seven-digit convergence at \(t=0\) extends to the whole scaling function. On the right of this plot, we can see the origin of this good behavior: the series expansion for the free energy around the abrupt transition point has zero radius of convergence, but this is captured naturally by our approximate scaling form.
Figure 4: **Contour in the complex \(\theta\) plane** to generate a free energy scaling form with the correct singularities (from [6]).
Figure 5: **Convergence of the universal scaling function** (from [6]). Left: The difference between the \(n\)th order approximation for the free energy scaling function in a field and the 6th order approximation as a function of the scaling invariant \(th^{-1/\beta\delta}\). The 5th and 6th order approximations differ by at most \(2\times 10^{-6}\). Higher derivatives behave similarly. Right: The Taylor series coefficients \({\cal F}_{-}^{(m)}(0)\) of the free energy scaling function at the abrupt transition line (the location of the essential singularity) as a function of derivative \(m\). By incorporating the essential singularity into the scaling function explicitly, the approximate forms share the series’ zero radius of convergence, matching numeric measurements of these coefficients.
## 5 Interpolating scaling functions between dimensions
Figure 6 illustrates the traditional thermodynamic critical points. Over five decades, these have been extensively explored using \(\epsilon\)-expansions, \(1/N\) expansions, cluster expansions, etc. The universal critical exponents are known essentially exactly from conformal bootstrap methods.3 Can we do the same for the universal scaling functions as functions of dimension \(d\) and the number of spin components \(N\)? And for (say) the random-field Ising model [24], or turbulence [25]?
Footnote 3: In the same spirit, Kent-Dobias’ solution [6] for the scaling function in 2d in section 4 is also essentially exact: both are well-defined algorithms that generate an exponentially converging approximation in a form useful for applications. After all, this is what we call the ‘exact’ solution for \(\sin(x)\).
Interpolation between dimensions could have several benefits. First, much is known analytically in two and four dimensions about the universal scaling functions that is only known numerically in three. Second, there are important features in scaling functions and their corrections to scaling that are a clear foreshadowing of properties in other dimensions. So the leading correction to scaling in 3d is the echo of the marginal variable of the Wilson-Fisher transcritical bifurcation in 4d. And, in a more dramatic example, the universal scaling function for the avalanche size distribution in the 3d random-field Ising model has a striking feature (it grows by a factor of ten before cutting off exponentially [16]), which is clearly related to the unusual scaling of the avalanche sizes in two dimensions [15] (Fig. 1).
In ordinary differential equations, normal form theory provides not only the behavior at the transition, but also an unfolding of the behavior near the bifurcation (up to a smooth coordinate transformation). Here we may expect to use this, not just to describe the phases near the critical points, but (following the lead of Wilson and Fisher) to describe
Figure 6: **Interpolating between dimensions.** A schematic showing the dimensions and spin components where traditional power-law scaling will break down. This leads to logarithms and exponentials replacing power laws, and rich but complicated invariant scaling combinations as arguments for the universal scaling functions [5]. On the other hand, these complexities also promise to inform and accelerate convergence of the universal scaling functions for dimensions in between.
the universal scaling functions as they evolve between dimensions.
How would we interpolate scaling functions, when even the arguments of the scaling functions vary with dimension (as in the \(W\)-functions we needed in section 2 for the 4d Ising model)? Consider taking the 4d flow equations (Eq. 5) about the mean-field fixed point, and keeping the unremovable nonlinear terms to describe nonlinear flows about the mean-field fixed point in \(d\) dimensions:
\[d\widetilde{f}/d\ell=d\widetilde{f}-\widetilde{t}^{2},\quad d\widetilde{t}/d\ell=2\widetilde{t}-A\widetilde{u}\widetilde{t},\quad d\widetilde{u}/d\ell=(4-d)\widetilde{u}-\widetilde{u}^{2}+D\widetilde{u}^{3}. \tag{12}\]
Things to note. 1) The normal-form procedure of removing nonlinear terms allows us to set the removable nonlinear terms to any values we wish. 2) We know the Wilson-Fisher fixed point for Eqs. 12 at \((\widetilde{t}^{*},\widetilde{u}^{*})\) must have a linearization whose eigenvalues \(1/\nu\) and \(-\omega\), as functions of dimension, match the conformal bootstrap values (or the exact values in 2d). This constrains the two nonlinear terms \(A(d)\) and \(D(d)\). Serendipitously, the \(\widetilde{t}^{2}\) resonance term in the 4d free energy flow is also needed to get the logarithmic specific heat in 2d. A high-precision universal scaling function for the free energy using the nonlinear scaling variables \(\widetilde{f}\), \(\widetilde{t}\), and \(\widetilde{u}\) in Eq. 12 would describe the universal crossover scaling from mean field to short-range magnetization in all dimensions between two and four. One could then linearize that scaling function about the 3d Wilson-Fisher fixed point to extract the traditional scaling functions.
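As a sketch of this constraint counting (ours, taking the structure of Eq. 12 at face value): the fixed-point condition \((4-d)-\widetilde{u}^{*}+D\widetilde{u}^{*2}=0\) together with the eigenvalue conditions \(2-A\widetilde{u}^{*}=1/\nu\) and \((4-d)-2\widetilde{u}^{*}+3D\widetilde{u}^{*2}=-\omega\) determines \(\widetilde{u}^{*}\), \(A\), and \(D\) in closed form from the bootstrap inputs \(\nu(d)\) and \(\omega(d)\).

```python
def couplings(d, nu, omega):
    eps = 4 - d
    u_star = 2*eps - omega          # eliminate D between the fixed-point
                                    # condition and the u-eigenvalue condition
    D = (eps - omega)/u_star**2
    A = (2 - 1/nu)/u_star
    return u_star, A, D

print(couplings(3, 0.6300, 0.830))  # 3d Ising conformal-bootstrap inputs
```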
How would one calculate such scaling functions? Here serendipity strikes in recent advances in non-perturbative functional renormalization-group (NPFRG) calculations [8], whose critical exponents [26] are almost competitive with those of conformal bootstrap, but which in the process also compute a functional form for the coarse-grained free energy as a function of magnetization.
Matthieu Tissier and colleagues at the Jussieu campus of the Sorbonne and the first author have been exploring how to calculate high-precision universal scaling functions by combining the strengths of the normal-form theory with the systematically improvable scaling functions of NPFRG. In initial work, we have found that the NPFRG for the 4d Ising model should indeed have a scaling function whose arguments are of the exotic form in Eq. 6. We are now learning to solve the partial differential equations for \(\widetilde{t}(t,u)\), \(\widetilde{u}(t,u)\), and \(\widetilde{f}(t,u)\) in 3d to extract the universal scaling functions from the simplest of NPFRG models. We hope then to use the technology in section 4 to demonstrate how to tabulate universal scaling functions in a form convenient for experimentalists and simulators.
The NPFRG method has had striking success in high-precision calculations of equilibrium thermodynamic systems, disordered thermodynamic and avalanche models, fully developed turbulence, quantum many-body systems, and in QCD and electroweak models [8]. Our hope and vision is to inspire our colleagues to gather and tabulate their results into universal scaling functions that can describe the behaviors both near and far from points of emergent scale invariance, in a way accessible to engineers, biologists, and social scientists studying complex systems.
## 6 Prospects and challenges for future work
We have summarized what we believe to be promising indications that we can indeed generate usable, high-precision universal scaling functions in a systematic way, and can extend them systematically into high-precision descriptions of the surrounding phases. If this continues to work, one imagines our community will be able to provide quantitative theories of liquids, materials plasticity and failure, fluctuations in biological systems, and potentially turbulence and glass behavior. In all these cases, we must mesh our understanding of the universal, emergent critical behavior and our system-dependent knowledge of properties away from criticality to describe the entire phases. Will this work in practice?
To show that our method works to high precision, we chose to use a problem where we knew a great deal about the answer: the 2d Ising model. The reader should legitimately be skeptical that this is the easiest case. In particular, the 2d Ising model is special in that it (1) is self-dual, (2) is analytically solvable in zero field, and (3) has no singular corrections to scaling. Let us discuss causes for concern that our methods will become general tools, and reasons for optimism.
(1) _The 2d Ising model is self-dual._ In section 3, we found exponential convergence of the Ising free energy \(f(t)\) in zero field using a power series of the Onsager solution about the critical point. To do so, we expanded in a variable \(v(t)\) that was fine-tuned to the self-dual and complex analytic properties of the 2d Ising model. We then used the low-temperature expansion to avoid any use of Onsager's solution. Will we need to find the 'right' variable in 3d to proceed? It turns out that matching to properties outside the radius of convergence stabilizes the expansion. In preliminary work, matching both high and low temperature series using the standard variable \(x=X-X_{c}=e^{-2/T}-e^{-2/T_{c}}\) in 2d (ignoring duality), we found exponential convergence in the entire temperature region [4].
(2) _The 2d Ising model in zero field has an exact solution_, and also has an exact solution at \(T_{c}\) as a function of field.4 But will the external field prevent us from finding an exponentially converging solution using analytic corrections to scaling \(\widetilde{t}(t,h)\), \(\widetilde{h}(t,h)\), and \(f_{a}(t,h)\), by matching the 2d universal scaling function from sec. 4 to the low \(T\)/high \(h\) and high \(T\)/low \(h\) cluster expansions, as we did for zero field in sec. 3?
Footnote 4: It may seem in retrospect clear that the essential singularity at \(h=0\) would frustrate a search for an exact solution in a field below \(T_{c}\).
We have an exact solution for the magnetization - the first derivative with respect to field. This has allowed us to determine \(\widetilde{t}\) and \(\widetilde{h}\) to linear order in \(h\). The susceptibility \(\chi(T)\) at zero field is not known exactly, but a remarkable amount is known [27]: there are formal expansions strongly indicating subdominant logarithmic corrections to scaling, and a possible natural boundary at the Fisher circle of zeros in the complex temperature plane [28, 29, 30, 27]. Log corrections to scaling are known to arise from resonances between relevant and irrelevant eigenvalues (sec. 2); the observed logs could arise from irrelevant variables whose resonances contribute only in \(O(h^{2})\) to the free energy. It appears that we can use these results [27] not only to determine \(\widetilde{t}\) and \(\widetilde{h}\) to quadratic order in \(h\), but also to extract information about irrelevant variables and their singular corrections to scaling in 2d (see below).
The universal scaling function in 3d presumably should succumb to the same kind of systematic approximation that we used in sec. 4 for 2d. All the ingredients appear to be available.5 The Yang-Lee singularity in 3d [31] has yielded to a high-precision NPFRG calculation. The essential singularity calculation depends only on the value of the surface tension near the critical point. In 2d, we used estimates of derivatives of the universal scaling function near \(h=0\) for \(T>T_{c}\) and \(T<T_{c}\). The latter (shown in Fig. 5b) are challenging to calculate using NPFRG methods (perhaps because of the essential singularity), but the former are now available [32]. Indeed, these authors used their estimates and the traditional Schofield coordinates to estimate the scaling function; adding the proper droplet singularity and Yang-Lee singularities could be straightforward and hopefully will yield convincing exponential convergence.
Footnote 5: And, of course, high-precision NPFRG calculations of the critical exponents [26] implicitly have calculated the universal scaling function.
Can we expect to extend a high-precision 3d Ising scaling function to describe the in-field behavior in the entire surrounding phases? Using analytic corrections to extend the critical singularity could break down at the roughening transition.6 However, this subtle transition should in principle also impede the use of Dlog Padé methods using the low-temperature expansion of the magnetization to describe the critical-point behavior, which works well in practice. In any case, the roughening transition does not arise in isotropic systems, so these concerns will not interfere with finally extracting a quantitative theory of liquids by adiabatic continuation of our understanding of the 3d Ising critical point.
Footnote 6: While one might argue that the behavior of an interface between spin up and spin down regions should not affect the bulk free energy, the (incredibly subtle) essential singularity as one crosses \(h=0\) (sec. 4) depends on the surface energy of the critical droplet, which will itself have an essential singularity as a function of temperature as it develops facets at the roughening temperature.
(3) _The Onsager solution exhibits no singular corrections to scaling._ First, this statement is more subtle than it seems. Onsager's solution has a pure logarithmic singularity, but conceivably there could be irrelevant eigenvalues with integer exponents that would lead to analytic terms in the free energy and magnetization. Also, there are irrelevant anisotropies in the correlation functions and the susceptibilities due to the square lattice, indeed with integer exponents. Even more subtle, these anisotropies are singular corrections to scaling in Wilson's RG formulation, but are clearly present at the fixed point of the real-space RG. Any coarse-graining procedure maintaining a square lattice will have a short-range square anisotropy.
Experts will remember that some of the irrelevant operators in the RG are _redundant_, and these redundant operators often involve flows between different fixed points in the same universality class.7 The idea of redundant variables was first explored by Wegner [36], who pointed out that an RG transformation would always have certain operators that are redundant; they correspond to infinitesimal redefinitions of the field, and do not contribute to the singular part of the free energy.8
The first author's first graduate student, Mohit Randeria, worked for Michael Fisher before shifting to the first author's group. The two of them wrote a comment on a paper by Swendsen [40], which claimed that one could modify the RG transformation to move the fixed point anywhere on the critical surface. Randeria and Fisher asserted that this could not in general be correct, because the amplitudes of the singular corrections to scaling are zero at the fixed points. To quote them,
"One fixed point may be mapped into another by a change of redundant variables and two RGs, say \({\cal R}\) and \(\widetilde{\cal R}\), may produce _formally_ different fixed points by this mechanism. Nevertheless, a general point on a critical manifold _cannot_ be transformed into a fixed point nor _vice versa_."
In the context of the 2d Ising model, there seem to be no singular corrections in the exact solution. Barma and Fisher had predicted that there should be logarithmic corrections to scaling coming from irrelevant variables [41]. They had also found evidence for an irrelevant operator with exponent \(-4/3\)[42]. Conformal field theory predicts a large number of irrelevant variables which come from descendant operators but does not predict the \(-4/3\) exponent. It has been conjectured in the past that all of these descendant operators are redundant [43]. This may explain why they do not lead to any logarithmic corrections to scaling (which otherwise are generically expected [14]). We would consider all of these descendant operators as constituting 'gauge' corrections to scaling, which can be removed by an appropriate coordinate choice, whereas the \(-4/3\) exponent would be a genuine singular correction to scaling.
In sec. 3 we succeeded in solving for the free energy of the square-lattice Ising model at all temperatures by applying a normal-form change of variables to the universal scaling function for the critical point. Is that success due to the lack of (non-redundant) singular corrections to scaling for this particular universality class?9 It is clear that in 3d we shall need to incorporate the (genuine) singular corrections to scaling in order to accurately describe the behavior away from the fixed point. Indeed, these contributions are subdominant near the fixed point, meaning conversely that they grow faster as one leaves the fixed point than do the relevant operators. Thus we will need a universal scaling function that depends not only on \(\widetilde{t}\) and \(\widetilde{h}\), but on potentially an infinite family of irrelevant variables \(\widetilde{u}\). Can we nonetheless hope for exponential accuracy?
Footnote 9: It is believed that the powers of logarithms in the 2d Ising susceptibility vanish in isotropic systems. Since all irrelevant operators in 2d have rational eigenvalues, and the redundant ones cannot generate resonant logarithms, one suspects that all the isotropic corrections to scaling are redundant.
The NPFRG methods mentioned in sec. 5, as it happens, precisely implement a flow with many irrelevant eigendirections at the fixed point. Applying our normal form analysis, we have been able to linearize this flow in the simplest case, allowing us to extract a functional form for the leading correction subdominant as \(\widetilde{t}^{\omega}\). It seems likely that one could generate exponential convergence for all properties of the phases by combining increasingly sophisticated NPFRG calculations, adding more irrelevant operators to our scaling functions, and matching to higher orders of low and high temperature expansions (or virial expansions, \(1/N\) expansions...).
Finally, there are intriguing hints of deep connections between normal form theory, redundant operators, and the construction of universal scaling functions.
We noted in sec. 4 that the procedure we used to generate the universal scaling function recapitulated exactly the same steps as the procedure we used in sec. 3 to generate the Onsager solution from the critical point. In both cases, we knew the singularities of the functions involved, and fit parameters in two functions (an analytic change of variables and an additive analytic background) to match known properties measured separately. If this systematic procedure can be generally used as a best practice, rapid progress could be made.
There is also a striking analogy between redundant operators (removing some singular corrections to scaling by reparameterizing the space of predictions) and our normal form theory (removing analytic corrections to scaling by reparameterizing the control parameters). Can we simultaneously remove both with a joint transformation? How are they related?
In thermodynamics, whether you write the free energies as a function of the external field and temperature or as a function of magnetization and temperature is considered a matter of choice. A Legendre transformation from control parameters \((f,t,h)\) to \((u,t,m)\) swaps the control parameter \(h\) with the prediction \(m\): does it swap normal-form transformations to redundant ones? (Indeed, the NPFRG methods coarse-grain the fixed-magnetization Gibbs free energy \(u(t,m)\), rather than Wilson's free energy at fixed field.) When we use our normal form theory to remove nonlinear terms in the RG flows by changing \(h\) to \(\widetilde{h}(t,h)\), we change the predicted magnetization. Is that change a redundant one?
Hankey and Stanley [44] showed that if you Legendre transform the generalized homogeneous functions we use for hyperbolic fixed points, you get another homogeneous function. Later, others, studying finite-size scaling of the Ising model [45, 46] in the microcanonical ensemble, predicted that it would be possible to speak of the entropy as a scaling function of the energy instead of the free energy as a function of the temperature.
Our preliminary work on this question ran into a difficulty in generalizing these ideas to the nonlinear normal forms necessary in the upper and lower critical dimensions. We have found that it is possible to Legendre transform RG flows at the linear level, but nonlinear flows generically lead to non-analyticities in the RG flow [14, 47].10 Whether this clarifies or confuses the correspondence between redundant and normal-form changes of variables needs to be explored.
Footnote 10: As noted in the footnote on page 4, even in four dimensions the RG flow of the magnetization is linearizable. Presumably this means that implementing an RG using flows in \((f,t,h)\) and \((u,t,m)\) will be possible even in four dimensions, while we find \((f,t,h)\) to \((S,E,h)\) leads to non-analytic RG flows.
As a long-time colleague of Michael Fisher, the lead author hopes that Fisher would have enjoyed this deep plunge into the complex history of the field and our ambitions for the future. He also hopes that Fisher's friends, colleagues, and collaborators who are contributing to this book, and of course our colleagues of the future who are reading it, will find this chapter more illuminating than obscure, and more useful than misleading or misguided.
## Acknowledgments
JPS would like to thank Matthieu Tissier for his insights and patience explaining the NPFRG methods, and for continuing collaborations in this work, Colin Clement for careful early work on the 2d Ising model, and Jacques Perk for helpful correspondence. JPS benefited by funding from NSF DMR-1719490 and CNRS (the French National Center for Scientific Research). DH was partially supported by an NSF Graduate Research Fellowship Grant No. DGE-2139899. JK-D acknowledges Simons Foundation Grant No. 454943. AR acknowledges support from the Department of Atomic Energy, India under project No. RTI4006, and the Simons Foundation Grant No. 287975.
|
2307.16551 | High-speed data processing onboard sunrise chromospheric infrared
spectropolarimeter for the SUNRISE III balloon telescope | The Sunrise Chromospheric Infrared spectroPolarimeter (SCIP) has been
developed for the third flight of the SUNRISE balloon-borne stratospheric solar
observatory. The aim of SCIP is to reveal the evolution of three-dimensional
magnetic fields in the solar photosphere and chromosphere using
spectropolarimetric measurements with a polarimetric precision of 0.03\%
(1$\sigma$). Multiple lines in the 770 and 850 nm wavelength bands are
simultaneously observed with two 2k$\times$2k CMOS cameras at a frame rate of
31.25 Hz. Stokes profiles are calculated onboard by accumulating the images
modulated by a polarization modulation unit, and then compression processes are
applied to the two-dimensional maps of the Stokes profiles. This onboard data
processing effectively reduces the data rate. SCIP electronics can handle large
data formats at high speed. Before the implementation into the flight SCIP
electronics, a performance verification of the onboard data processing was
performed with synthetic SCIP data that were produced with a numerical
simulation modeling the solar atmospheres. Finally, we verified that the
high-speed onboard data processing was realized on ground with the flight
hardware by using images illuminated by natural sunlight or an LED. | Masahito Kubo, Yukio Katsukawa, David Hernández Expósito, Antonio Sánchez Gómez, María Balaguer Jimenéz, David Orozco Suárez, José M. Morales Fernández, Beatriz Aparicio del Moral, Antonio J. Moreno Mantas, Eduardo Bailón Martínez, Jose Carlos del Toro Iniesta, Yusuke Kawabata, Carlos Quintero Noda, Takayoshi Oba, Ryohtaroh T. Ishikawa, Toshifumi Shimizu | 2023-07-31T10:30:48Z | http://arxiv.org/abs/2307.16551v1 | High-speed data processing onboard sunrise chromospheric infrared spectropolarimeter for the Sunrise III balloon telescope
###### Abstract
The Sunrise Chromospheric Infrared spectroPolarimeter (SCIP) has been developed for the third flight of the Sunrise balloon-borne stratospheric solar observatory. The aim of SCIP is to reveal the evolution of three-dimensional magnetic fields in the solar photosphere and chromosphere using spectropolarimetric measurements with a polarimetric precision of 0.03% (1\(\sigma\)). Multiple lines in the 770 and 850 nm wavelength bands are simultaneously observed with two 2k\(\times\)2k CMOS cameras at a frame rate of 31.25 Hz. Stokes profiles are calculated onboard by accumulating the images modulated by a polarization modulation unit, and then compression processes are applied to the two-dimensional maps of the Stokes profiles. This onboard data processing effectively reduces the data rate. SCIP electronics can handle large data formats at high speed. Before implementation in the flight SCIP electronics, a performance verification of the onboard data processing was performed with synthetic SCIP data that were produced with a numerical simulation modeling the solar atmospheres. Finally, we verified that the high-speed onboard data processing was realized on the ground with the flight hardware by using images illuminated by natural sunlight or an LED.
Infrared radiation, Infrared spectroscopy, Polarization, Magnetism, Solar processes
*Masahito Kubo, [email protected]
## 1 Introduction
The Sunrise balloon-borne stratospheric solar observatory is an international project to observe the Sun at a high spatial resolution with a 1-m telescope during balloon flight [1, 2]. The flight altitude is \(\sim\)35 km above the Atlantic Ocean and the flight time is more than 5 days from Sweden to Canada. The Sunrise Chromospheric Infrared spectroPolarimeter (SCIP) [3] is a slit-scanning spectropolarimeter, and one of the three focal plane instruments [4, 5] developed for the third flight of Sunrise. The SCIP runs multi-wavelength spectropolarimetric observations with a polarimetric
precision of 0.03% (1\(\sigma\)) and a spatial resolution of 0.21 arcsec [6, 7, 8, 9]. The spatial resolution of the SCIP corresponds to the diffraction limit of 1 m for the telescope at the wavelength of 850 nm. Two orthogonal linearly polarized beams are spatially separated in the direction perpendicular to the spectral dispersion by a polarization beam-splitter and simultaneously recorded by two spectropolarimeter (SP) cameras [10]: SP1 camera for Ca II lines in the 850 nm wavelength band and SP2 camera for K I lines in the 770 nm wavelength band. In addition to these two SP channels, a slit-jaw (SJ) imager observes two-dimensional solar images on the slit. The onboard data processing, such as demodulation, bit compression, and image compression, significantly reduces the data rate for spectropolarimetric observations. The usefulness of the onboard data processing for space-borne spectropolarimetry has been successfully demonstrated by the spectropolarimeter [11] of the Solar Optical Telescope (SOT) [12] on the Hinode mission [13]. An advancement over the Hinode/SOT spectropolarimeter is that the onboard data processing of the SCIP can handle images from cameras at a data rate 100 times higher than that of the Hinode/SOT spectropolarimeter. The size of the read-out frame is 1024\(\times\)112 pixels for each of the two CCD sensors for the Hinode/SOT spectropolarimeter and 2048\(\times\)2048 pixels for each of the two CMOS sensors for the SCIP. The frame rate is 10 Hz for the Hinode/SOT spectropolarimeter and 31.25 Hz for the SCIP. The polarization modulation is done by continuously rotating waveplates in both instruments, implying that the camera is read continuously during the polarization modulation. A larger data format is often requested for cameras because a larger field-of-view and wider wavelength coverage can provide deeper scientific insights. Particularly for SCIP, the orthogonal polarization of many Zeeman-sensitive absorption lines is measured by one camera to obtain the three-dimensional magnetic field structures from the photosphere to chromosphere [14, 15, 16]. Moreover, a fast polarization modulation is essential to detecting the temporal evolution of magnetic field structures related to the dynamical chromospheric
phenomena. This fast polarization modulation requires a fast modulator and a camera readout with a high frame rate. Thus, the high-speed data processing is important for state-of-art spectropolarimeters.
### 1.1 Observation Mode
Two observing modes are available for the SCIP, as shown in Fig. 1. The onboard data processing, camera exposures, and scan mirror mechanism (SMM)[17, 18] synchronously co-operate with the polarization modulation unit (PMU)[19, 20] in the observing sequence. The PMU continuously rotates a pair of waveplates at a constant rate of 0.512 s/rotation and sends a pulse ("phase signal from Polarization Modulator Unit" in Fig. 1) every 22.5 deg to the SCIP electronics unit. The length
Figure 1: Control sequence of observations in (a) normal mode and (b) rapid mode.
of the pulse represents the phase of the PMU rotation: the longest, mid-long, and nominal pulses arrive every 360 deg, 90 deg, and 22.5 deg, respectively. The camera starts an exposure of the first column in its sensor at the leading edge of the pulses and then sends images to the data processing unit (DPU), which consists of a system controller and a frame grabber, in the SCIP electronics unit.
A set of Stokes I, Q, U, V, and R states is produced in the normal mode. The Stokes parameters represent the intensity (Stokes I), linear polarization (Stokes Q and U), and circular polarization (Stokes V) of light. The R state is used for the polarization calibration of Stokes V (see Sec. 2). The size of the output is 2048\(\times\)2048 pixels in the SP1 (850 nm) and SP2 (770 nm) cameras and 640\(\times\)640 pixels in the SJ camera. The field-of-view of SJ is 60 arcsec \(\times\) 60 arcsec with a pixel sampling of 0.09 arcsec, and the slit length, which is the same on the two SP channels, is 58 arcsec. The frame rate is 16 exposures per PMU rotation, i.e., 32 ms per frame at a constant rate. The acquisition of a set of images starts at the first longest or mid-long pulse, followed by their demodulation in the DPU. The shortest accumulation is 32 images during two PMU rotations (i.e., 1.024 s), and the largest accumulation is 640 images during 40 PMU rotations (i.e., 20.48 s). A requirement on the polarization sensitivity for the SCIP is 0.1% (1\(\sigma\)) at 1.024 s integration and 0.03% (1\(\sigma\)) at 10.24 s integration. The scan mirror is moved to the next position at the edge of the PMU pulse just after the camera sensor readout.
In the rapid mode, only Stokes I is obtained, at eight frames per PMU rotation, i.e., every 64 ms, for the SP1 and SP2 cameras. The full field-of-view (58 arcsec \(\times\) 58 arcsec) can be covered in approximately 40 s by the slit-scanning observations in the rapid mode. The size of the sensor readout is 2048\(\times\)2048 and 2048\(\times\)100 pixels for the SP1 and SP2 cameras, respectively. The size in the wavelength direction of the SP2 camera is kept small to reduce the data rate, which is limited by the gigabit ethernet connection with the data storage in the instrument control system (ICS). The readout of 100 pixels in the wavelength direction covers the most important line (K I D1) in the 770 nm wavelength band. The number of pixels for the SP2 camera can be extended if a higher data rate is allowed. For the SJ camera, the image size is the same as that in the normal mode, but the frame rate is eight images per PMU rotation. The images of the SP/SJ cameras are taken only on the even pulses of the PMU, and the motion of the SMM is conducted on the odd pulses, with the longest pulse of the phase signal from the PMU defined as the 0th pulse.
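The pulse scheduling described above can be summarised in a short sketch (Python, for illustration only; the names are ours and not those of the flight software):

```python
PULSE_PERIOD_S = 0.512 / 16          # one pulse per 22.5 deg of rotation

def pulse_type(k):
    """Classify the k-th PMU phase pulse (k = 0, 1, 2, ...).

    The longest pulse arrives every 360 deg (every 16 pulses), the
    mid-long pulse every 90 deg (every 4 pulses), and the nominal
    pulse every 22.5 deg; the longest pulse defines the 0th pulse.
    """
    if k % 16 == 0:
        return "longest"
    if k % 4 == 0:
        return "mid-long"
    return "nominal"

# In rapid mode, camera exposures occur on even pulses and the scan
# mirror mechanism moves on odd pulses:
exposure_pulses = [k for k in range(16) if k % 2 == 0]
```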
### Onboard Processing
Figure 2 shows the onboard data flows for the SCIP. Demodulation (Sec. 2), bit compression (Sec. 3), and image compression (Sec. 4) are carried out for the images of the SP1 and SP2 cameras in the normal mode. Only image compression is applied to the SP cameras in the rapid mode. Demodulation and bit compression are always skipped for the outputs from the SJ camera, but image compression is applied. These onboard processes are applied in the same way to both science and calibration data. The SCIP data are sent to the data storage in the ICS. The data rate from the SCIP electronics unit must fit within the gigabit ethernet connection to the data storage in the ICS. The data recorded in the data storage will be recovered after the balloon flight. After that, the calibration of the science data will be performed on the ground to produce the final Stokes data.
The onboard processing is carried out in the frame grabber within the SCIP DPU. This frame grabber is a custom design implemented on a field-programmable gate array (FPGA) provided by Xilinx, the Kintex Ultrascale XCKU040. The processing pipeline is programmed in a combination of custom cores and third-party cores. The custom cores are described in the VHDL language to implement the main functionalities of the data pipeline, i.e., the demodulation (integration), bit
compression and image compression cores. The third-party cores are supplied by Xilinx in the Vivado development tool and are mainly responsible for data movement between processing cores, external memory access, and fixed-point mathematical operations (adder, multiplier, and square root). In addition, the frame grabber is equipped with an embedded soft processor, MicroBlaze, also provided by Xilinx, which is programmed in C.
Figure 2: Onboard data flows in (a) normal mode and (b) rapid mode.
## 2 Onboard Demodulation
A scheme of the integration and demodulation is shown in Fig. 3. In the normal mode, the camera takes an image every 22.5 deg of the PMU rotation. This image records the modulated intensity, which is a combination of the Stokes parameters weighted by factors corresponding to the phase of the PMU rotation. Therefore, the Stokes parameters can be obtained by accumulating multiple images taken at different PMU phases in the demodulation buffers. Before accumulation, the image taken at one phase of the PMU rotation is multiplied by a demodulation coefficient ("intermediate data" in Fig. 3). The demodulation coefficient takes the value -1, -1/2, 1/2, or 1 according to the phase of the PMU rotation. As a minimum dataset, the SCIP is designed to calculate the Stokes parameters from the 16 images obtained during one PMU rotation.
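A minimal sketch of this multiply-and-accumulate scheme is given below (Python for illustration, not the flight VHDL). The coefficient table is a placeholder: the actual per-state sequences, with entries in {-1, -1/2, 1/2, 1}, are those of Fig. 3(b).

```python
import numpy as np

# Placeholder demodulation coefficients for the 16 PMU phases of one
# rotation; the flight values are given in Fig. 3(b).
COEFFS = {
    "I": np.full(16, 0.5),                      # illustrative only
    "Q": np.zeros(16), "U": np.zeros(16),
    "V": np.zeros(16), "R": np.zeros(16),
}

def demodulate(frames, coeffs=COEFFS):
    """Accumulate modulated camera frames into demodulation buffers.

    `frames` is a sequence of 2-D arrays, one per 22.5-deg PMU phase,
    in acquisition order; its length is a multiple of 16 (from 32
    frames for 1.024 s up to 640 frames for 20.48 s).
    """
    buffers = {state: 0.0 for state in coeffs}
    for k, frame in enumerate(frames):
        phase = k % 16
        for state, c in coeffs.items():
            # weighted frame: the "intermediate data" of Fig. 3(a)
            buffers[state] = buffers[state] + c[phase] * frame
    return buffers
```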
The CMOS sensor readout is not performed for all parts of the image simultaneously; instead, each column is read out in parallel along the wavelength direction. A new parameter, R, is introduced to compensate for this rolling-shutter effect of the sensor [21]. The R-parameter is defined to have a phase shift of 45 deg with respect to Stokes V. The speed of the rolling shutter is 0.010 ms/column, which produces a gap of about 21 ms between the first and final columns along the wavelength direction. This time gap corresponds to a phase shift of the PMU rotation of up to 15 deg. The phase shift of the PMU causes crosstalk between Stokes Q and U and between Stokes V and the R-parameter. The rolling shutter is precisely controlled at a constant speed, so the phase shift due to the rolling shutter is accurately known. This allows the same demodulation matrix to be applied to all pixels of the image. Stokes V is then corrected using the R-parameter for the phase shift due to the rolling shutter.
The SCIP can output raw data with the onboard demodulation disabled. For the verification of the onboard demodulation, known polarized light was fed into the SCIP optical unit by a test optical system [21]. Datasets with and without the onboard demodulation were sequentially obtained, and we confirmed that they were consistent with each other. An example of the data after the onboard demodulation is shown in Fig. 4. This dataset was obtained on the ground with the SP1 channel when natural sunlight was fed into the SCIP optical unit through the Sunrise III telescope, after the SCIP was mounted on the telescope. The two illuminated areas correspond to the orthogonal polarization beams. The most prominent horizontal dark lines in the Stokes I map correspond to the Ca II 854.2 and 849.8 nm lines. Relatively large linear polarizations (Stokes Q and U) are induced by folding mirrors in the telescope and the light-distribution optics located upstream of the SCIP optical unit. The decrease in Stokes Q and the increase in Stokes U along the wavelength direction are due to the rolling shutter of the CMOS sensor. The instrumental polarization and the effect of the rolling shutter can be calibrated using the polarization calibration data precisely measured before the flight [21].
Figure 3: (a) Demodulation scheme and (b) demodulation coefficients.
Figure 4: Example dataset of the SP1 channel (850 nm) observed in the normal mode. The panels in the first and third rows show the results of the onboard demodulation. The horizontal and vertical axes represent the spatial and wavelength directions, respectively. The panels in the second and fourth rows show the Stokes I, Q, U, V, and R profiles along the vertical dashed line in their upper panels.
## 3 Bit Compression
### Bit-Compression Algorithm
The raw images are provided by the cameras in a 12-bit word format. In the normal mode, the maximum accumulation time in the onboard demodulation procedure is 20.48 s. Considering the demodulation coefficients, the number of bits needed after the demodulation is 22 bits (unsigned) for Stokes I and 21 bits (signed) for Stokes Q, U, V, and the R-parameter. However, the input range of the standard image compression algorithm described in Sec. 4.1 is 16 bits. Thus, it is necessary to perform bit compression to 16 bits before the image compression.
A nonlinear image transformation based on a square-root function, similar to that used in the Hinode/SOT spectropolarimeter [11], is employed. The bit-compression algorithm is described by the following equations:

\[X=N\qquad(N\leq N_{c}), \tag{1}\]

\[X=\mathrm{round}\left(a+\sqrt{bN+c}\right)\qquad(N>N_{c}), \tag{2}\]

where \(N\) is the input pixel value to the bit-compression process, \(X\) is the value after bit compression, \(N_{c}\) is the boundary between the linear and square-root compression regions, and \(a\), \(b\), and \(c\) are constants.
The bit-compression parameters (\(a\), \(b\), and \(c\)) are calculated to satisfy the following three equations with a given \(N_{c}\):

\[N_{c}=a+\sqrt{bN_{c}+c}, \tag{3}\]

\[dX/dN=1\quad\text{at }N=N_{c}, \tag{4}\]

\[M_{max}=a+\sqrt{bN_{max}+c}, \tag{5}\]
where \(M_{max}\) is \(2^{16}\) counts and \(N_{max}\) is the maximum input value (\(2^{22}\) counts in the unsigned Stokes I case). The linear range (\(N_{c}\)) should be chosen so that the compression errors are as small as possible. The photon noise for a 20 s integration (up to 22 bits) is \(\sim\)470 DN (1\(\sigma\)), considering the camera conversion factor (gain) of 0.052 DN/\(e^{-}\). We choose the linear range to be approximately three times the photon noise.
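Equations (3)-(5) determine \(a\), \(b\), and \(c\) in closed form, since Eq. (4) gives \(\sqrt{bN_{c}+c}=b/2\). The sketch below shows this derivation (the variable names are ours); for the unsigned case it reproduces the Table 1 entries to within fixed-point rounding, under the assumption noted in the comments.

```python
def bit_compression_params(n_c, n_max, m_max=2 ** 16):
    """Solve Eqs. (3)-(5) for the bit-compression constants a, b, c.

    Eq. (4) yields sqrt(b*n_c + c) = b/2, hence a = n_c - b/2 and
    c = b**2 / 4 - b * n_c; substituting into Eq. (5) then gives b
    in closed form.
    """
    b = (m_max - n_c) ** 2 / (n_max - m_max)
    a = n_c - b / 2
    c = b ** 2 / 4 - b * n_c
    return a, b, c

# Unsigned 22-bit case (#1 in Table 1): n_c = 1280, n_max = 2**22.
a, b, c = bit_compression_params(1280, 2 ** 22)
# a ~ 780 and b ~ 1000; (a + 0.5) * 2**5 ~ 24976 matches a' in Table 1
# if the round() offset of Eq. (2) is folded into a' (our inference).
```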
In the Hinode case, the bit-compression process was implemented through look-up tables (LUTs) because of limited processing capabilities. In this method, a LUT is computed once for the entire input range and stored in memory. The 22-bit input pixels of the SCIP would require a memory usage of \(\sim\)8 MB for each LUT, which is 85 times larger than that in the Hinode case. The internal memory resources of the SCIP FPGA are insufficient for storing such a LUT. Thus, the bit-compression function is computed pixel-wise using the FPGA resources directly. We employ a simple and efficient algorithm to calculate the square-root function in Eq. (2) using a coordinate rotation digital computer (CORDIC) approach. The following calculation is implemented on the FPGA for an input value greater than \(N_{c}\):
\[X^{\prime} = \frac{a^{\prime}+cordic\_sqrt(b^{\prime}N-c^{\prime})}{2^{5}}, \tag{6}\]
where the new constants (\(a^{\prime}\), \(b^{\prime}\), and \(c^{\prime}\)) are in fixed-point representation and \(cordic\_sqrt\) is a module
that computes the square root in fixed-point precision using a CORDIC library1. The parameters for the bit-compression hardware implementation are displayed in Table 1. The SCIP data processing pipeline is configured in three different modes: no bit compression, bit compression for unsigned images, and bit compression for signed images. Figure 5 shows the bit-compression functions computed on the FPGA.
Footnote 1: [https://www.xilinx.com/products/intellectual-property/cordic.html](https://www.xilinx.com/products/intellectual-property/cordic.html)
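For illustration, Eqs. (1) and (6) can be evaluated in plain integer arithmetic as below; Python's `math.isqrt` stands in for the CORDIC square-root core, whose fixed-point rounding may differ slightly from this sketch.

```python
import math

def bit_compress_fixed(n, a_p, b_p, c_p, n_c):
    """Evaluate Eq. (1) below n_c and Eq. (6) above it.

    a_p, b_p, c_p are the fixed-point constants a', b', c'; the final
    division by 2**5 in Eq. (6) becomes a 5-bit right shift.
    """
    if n <= n_c:
        return n                                     # linear region
    return (a_p + math.isqrt(b_p * n - c_p)) >> 5    # square-root region

# Unsigned case (#1 in Table 1): the top of the 22-bit range maps to
# the top of the 16-bit output range.
x = bit_compress_fixed(2 ** 22 - 1, 24976, 1023984, 1054707814, 1280)
print(x)  # ~65535
```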
### Bit-Compression Test Results
The bit-compression process is a lossy and irreversible method, and it introduces errors. The SCIP has a function to generate a dummy image containing the bit-compression function. Using this function, we apply the bit-compression process to reference sets covering the entire input ranges for both the unsigned (#1) and signed (#2) cases. The errors are calculated as the difference between the reference set and the results after bit decompression, as shown in Fig. 6. These errors are compared with the photon noise, which is calculated as the square root of the reference sets with the camera gain of 0.052 DN/\(e^{-}\). The errors produced by the bit-compression process are five times smaller than the photon noise (1\(\sigma\)) for both the unsigned (#1) and signed (#2) cases. This implies that the inherent features of the data remain intact even under irreversible compression.

| # | Mode | a\(^{\prime}\) (15 bits) | b\(^{\prime}\) (20 bits) | c\(^{\prime}\) (30 bits) | \(N_{c}\) | Note |
|---|------|--------------------------|--------------------------|--------------------------|-----------|------|
| 0 | No compression | - | - | - | - | for rapid mode |
| 1 | 22U to 16U | 24976 | 1023984 | 1054707814 | 1280 | for Stokes I |
| 2 | 21S to 16U | 25359 | 999427 | 1035405312 | 1280 | for Stokes QUV and R |

Table 1: Bit-compression parameters implemented on the FPGA for the SCIP.

Figure 5: Bit-compressed output after applying the compression function (a) for Stokes I and (b) for Stokes QUV and the R-parameter.
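The round-trip error computation described above can be sketched as follows (`compress` is any implementation of Eqs. (1)-(2); the photon-noise convention is our reading of the text):

```python
import numpy as np

GAIN = 0.052  # camera conversion factor, DN per electron

def decompress(x, a, b, c, n_c):
    """Invert Eq. (2) on the ground: N ~ ((X - a)**2 - c) / b above N_c."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= n_c, x, ((x - a) ** 2 - c) / b)

def compression_error_vs_noise(n_ref, compress, a, b, c, n_c):
    """Compare round-trip bit-compression errors with photon noise.

    Photon noise is taken as sqrt(N * GAIN) in DN, i.e., the square
    root of the electron count converted back to DN (our reading).
    """
    err = decompress(compress(n_ref), a, b, c, n_c) - n_ref
    photon_noise = np.sqrt(np.maximum(n_ref, 0) * GAIN)
    return err, photon_noise
```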
For the verification of bit compression with the flight hardware, we sequentially took images in the normal mode with and without bit compression at different integration times. In this test, the slit was uniformly illuminated by a white-light LED. Because the intensity of the LED was lower than that of natural sunlight, the maximum integration time of 163.84 s was greater than the nominal range. The maximum intensity reached slightly higher than 20 bits. Figure 7 shows a comparison of the data taken with and without the bit-compression process. The bit-compressed data follow the ideal bit-compression functions for both SP channels. The deviation from the bit-compression functions is mainly caused by the temporal variation between the data taken with and without bit compression.

Figure 6: Bit-compression errors for (a, c) Stokes I and (b, d) Stokes QUV and the R-parameter. The errors are shown in units of DN in the upper panels and relative to the photon noise in the lower panels.

Figure 7: (a) Comparison of the observed Stokes I intensity with and without the bit-compression process for SP1 (850 nm). The red dashed line is the bit-compression function. (b) Residual of the Stokes I intensity observed with bit compression from the ideal bit-compression function implemented on the FPGA. (c) and (d) are the same as (a) and (b), respectively, but for Stokes QUV and the R-parameter. Panels (e)-(h) are the same as the top four panels, but for SP2 (770 nm).
## 4 Image Compression
### Image Compression Algorithm
The image compression algorithm is based on the Lossless Data Compression CCSDS 121.0-B-2 standard proposed by the Consultative Committee for Space Data Systems (CCSDS) [22]. This algorithm is a low-complexity solution for the lossless compression of any type of digital data, including 2D images, that requires a moderate data-rate reduction. It typically reaches compression factors of 1.5 to 2, without any loss during compression/decompression.
The compressor consists of two functional parts, a preprocessor and an adaptive entropy coder, that process the input data partitioned into blocks of \(J\) samples with a dynamic range of \(n\) bits per sample. An uncompressed sample, called the reference sample, is included at intervals of \(r\) blocks to enable decompression and to initialize the process. For the SCIP, the preprocessor is implemented with a unit-delay predictor, and the entropy encoder processes the image in blocks of \(J=16\) samples with \(n=16\) bits per sample. The reference sample interval is configured to \(r=512\) [23].
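A simplified sketch of the preprocessor stage is shown below. The adaptive entropy coder that follows it, which selects among Rice-like coding options per block, is omitted, and the residual mapper is reduced to plain sign interleaving, whereas the standard's mapper also saturates near the dynamic-range limits.

```python
def preprocess(samples, j=16, r=512):
    """Unit-delay prediction and residual mapping, simplified from
    CCSDS 121.0-B-2.  Every r-th block starts from an uncompressed
    reference sample; blocks hold j samples each.
    """
    mapped, pred = [], None
    for i, x in enumerate(samples):
        if i % (j * r) == 0:
            mapped.append(x)          # reference sample, left uncompressed
        else:
            delta = x - pred          # unit-delay prediction error
            # sign interleaving: small |delta| -> small code values
            mapped.append(2 * delta if delta >= 0 else -2 * delta - 1)
        pred = x
    return mapped
```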
### Image Compression Results
The compression efficiency of our compression algorithm is first verified with synthetic SCIP data. Then, the performance of the onboard image compression process is confirmed on the ground with the flight hardware. An advantage of using synthetic data is that it reproduces the fine-scale structures, such as granulation and magnetic elements, in the solar atmosphere. These fine-scale structures can be resolved by the SCIP during the balloon flight, but it is difficult to detect them in our laboratories during the Sunrise III testing on the ground because of the poor atmospheric seeing conditions.
#### 4.2.1 Synthetic data
We use the Rybicki-Hummer (RH) statistical equilibrium and radiative transfer code [24, 25] to synthesize the Stokes profiles. The geometry package used in RH is the 1-D plane-parallel geometry. We assume non-local thermodynamic equilibrium and complete redistribution for computing the atomic populations of the K I [26] and Ca II lines observed with the SCIP. The atomic information for the different spectral lines can be found in previous studies [27, 14]. We employ an enhanced network simulation [28] computed with the Bifrost code [29] as the model solar atmosphere. The size of the Bifrost simulation is 24 Mm \(\times\) 24 Mm on the Sun with a pixel size of 48 km, which corresponds to 0.07 arcsec. The RH calculations are done at the original resolution of the Bifrost simulation. The instrumental effects of the SCIP are included in the synthetic data through the following steps. The resulting synthetic SCIP data are shown in Figs. 8-10.
1. Spatial and spectral degradation
The synthetic SP and SJ data are spatially degraded using a point spread function (PSF). We assume an ideal PSF corresponding to the Airy pattern produced by a 1-m telescope. Spatial binning is done to match the pixel size of the SCIP (0.09 arcsec). Because the box size of the Bifrost simulation is smaller than the slit length of 58 arcsec, periodic boundary conditions are assumed for replicating the field-of-view of the simulation. The illuminated areas for the orthogonal polarization are embedded in 2k\(\times\)2k images to keep their positions identical to the ones observed with each SP camera. The spectral sampling of the SCIP is 2\(\times\)10\({}^{5}\), which corresponds to 42.5 and 38.4 mÅ at the central wavelengths for SP1 and SP2, respectively. We convolve the full spectrum with the spectral PSF, assuming a Gaussian shape, to represent instrument degradation along the spectral direction. The synthetic data contain all of the important absorption lines, although their positions are not perfectly identical to the observed ones because of small errors in the spectral-line database.

Figure 8: Synthetic dataset of SP1 (850 nm) in the normal mode. The horizontal and vertical axes of the panels in the first and third rows represent the spatial and wavelength directions, respectively. The panels in the second and fourth rows show the Stokes I, Q, U, V, and R profiles along the vertical dashed line in their upper panels. The vertical dashed line is located at a small magnetic element with strong field strength.

Figure 9: Synthetic dataset of SP2 (770 nm) in the normal mode. The horizontal and vertical axes of the panels in the first and third rows represent the spatial and wavelength directions, respectively. The panels in the second and fourth rows show the Stokes I, Q, U, V, and R profiles along the vertical dashed line in their upper panels. The vertical dashed line is located at a small magnetic element with strong field strength.
2. Polarization modulation
To demonstrate the onboard demodulation, the modulated intensity (\(I^{\prime}(\lambda)\)) is calculated from the synthetic Stokes I, Q, U, and V maps. We assume an ideal PMU: the retardation of the waveplate is 127 deg without internal reflection, and the PMU phase angle takes discrete values in steps of 22.5 deg. The phase shift due to the rolling shutter is also considered in the calculation of the modulated intensity.
3. Intensity scaling
The intensity of the SP channels in DN units is calculated as

\[I^{\prime}_{DN}(\lambda)=I^{\prime}(\lambda)\,T_{h}\,G\,t_{exp}. \tag{7}\]

The intensity of the original synthetic Stokes profiles is normalized by the local continuum value of each spectral window. The expected throughput (\(T_{h}\)) for disk center observations is 1.15\(\times\)10\({}^{6}\)\(e^{-}\) pixel\({}^{-1}\) s\({}^{-1}\) and 2.10\(\times\)10\({}^{6}\)\(e^{-}\) pixel\({}^{-1}\) s\({}^{-1}\) for the SP1 (850 nm) and SP2 (770 nm) channels, respectively [3]. The exposure time per frame (\(t_{exp}\)) is 32 and 10 ms in the normal and rapid modes, respectively. We use a default gain (\(G\)) of 0.052 DN/\(e^{-}\). The average intensity of the synthetic SJ image is scaled to 1500 DN.

Figure 10: (a) A synthetic SJ image. (b) Elongated image of the illuminated area in panel (a). The size of the illuminated area is 640\(\times\)640 pixels. The central vertical dark line represents the slit. (c) Intensity profile along the horizontal dashed line in panel (a).
4. Photon noise
We compute the photon noise as the square root of the total number of electrons for the modulated intensity (\(I^{\prime}(\lambda)\,T_{h}\)) at each pixel. The calculated photon noise is added to the Stokes IQUV and R-parameter maps (steps 3 and 4 are illustrated in the sketch after this list).
5. Demodulation
We apply a demodulation process similar to the onboard one (Sec. 2). Datasets with integrations of 32 images (1.024 s) and 320 images (10.24 s) were prepared for testing the image compression.
6. Flat field
The synthetic demodulated data were multiplied by a flat field to add the observed patterns into the synthetic images. The flat field was created from a dataset obtained with the real hardware in the normal mode with an integration time of 10.24 s, when the SJ field-of-view including the slit was almost uniformly illuminated by the white-light LED [8]. After the dark subtraction and gain correction, the Stokes I map was smoothed with a width of \(3\times 3\) pixels in the illuminated areas to reduce noise. The smoothed image was normalized by the intensity averaged over the illuminated areas. The non-illuminated area was set to one in the flat-field image. The SP and SJ synthetic images were multiplied by the corresponding flat-field images.
7. Dark
The measured dark images were added to the Stokes parameter maps and the R-parameter maps. The integration time of the dark images was the same as that of the Stokes parameter maps. As a result, the final products of the synthetic SP data contain readout noise, dark current, and bias noise. The measured dark images were added to the synthetic SJ images in the same way.
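Steps 3 and 4 of this list amount to the following scaling and noise injection (a sketch; the SP1 normal-mode values quoted in the text are used as defaults):

```python
import numpy as np

def scale_and_add_noise(i_norm, t_h=1.15e6, g=0.052, t_exp=0.032, seed=0):
    """Apply Eq. (7) and add photon noise (steps 3 and 4).

    i_norm : modulated intensity normalised to the local continuum.
    t_h    : throughput in electrons/pixel/s (1.15e6 for SP1 at disk
             center); g is the gain in DN per electron; t_exp is the
             per-frame exposure time in seconds (32 ms in normal mode).
    """
    rng = np.random.default_rng(seed)
    electrons = np.asarray(i_norm, float) * t_h * t_exp
    noisy = electrons + rng.normal(0.0, np.sqrt(electrons))
    return noisy * g        # Eq. (7), intensity in DN
```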
The synthetic Stokes I map for SP1 (Fig. 8) looks similar to the observed one (Fig. 4). However, the bias signals of the synthetic Stokes Q and U data are much smaller than the observed ones, and they fluctuate around zero. The large signals in the observed maps are polarization induced by the instrument, which is not included in the synthetic data. On the other hand, large spiky signals can be seen in the synthetic Stokes V and R-parameter data. These signals are related to fine-scale magnetic elements, which cannot be observed with the SCIP on the ground because of the poor seeing conditions.
Bit compression and image compression, similar to the onboard ones, are applied to the final products of the synthetic SP data in the normal mode. For the synthetic SJ data, only image compression is applied. The resulting compression efficiencies, in units of bits/pixel, are summarized in Table 2. The compression efficiencies match the values used for the prediction of the data rate ("presumed values" in Table 2). The results with a larger integration have slightly larger values of bits/pixel but are still smaller than the expected values. The size of the compressed images for SJ is smaller than that for SP because its bit depth before the image compression (12 bits) is smaller than that of SP (16 bits). We can assume that the size of the compressed images for the SP channels in the rapid mode is similar to the SJ result in Table 2 because the exposure time and bit depth are the same.
#### 4.2.2 Data with flight hardware
We took the datasets for SP1 in the normal mode with bit compression and image compression enabled during the sunlight test. The set of Stokes maps was obtained at six different integration times from 1.024 to 10.24 s. The result at the 10.24 s integration is shown in Fig. 4. It is confirmed that the onboard processing will allow the SCIP to achieve 0.1% and 0.03% sensitivities with 1 s and 10 s integration times, respectively (Fig. 11(b)). The compression efficiency for 0.03% sensitivity is \(\sim\)9 bits/pixel for Stokes IQU and 10 bits/pixel for Stokes V and the R-parameter (Figs. 11(c) and (d)). These values are slightly larger than those obtained using the synthetic data but are consistent with the presumed ones. Datasets useful for the verification of the image compression were not obtained for SP2 and SJ during the sunlight test because of technical problems. Because the opportunity for a sunlight test is limited, we verified the compression efficiency using images illuminated by the white-light LED. The integration time for SP2 and the exposure time for SJ were adjusted to obtain the number of photons corresponding to that with natural sunlight. As summarized in Table 2, their compression efficiencies are similar to the presumed values, as for SP1.

| Camera | Integration | Output | Presumed values (bits/pixel) | Synthetic data (bits/pixel) | Flight hardware (bits/pixel) |
|--------|-------------|--------|------------------------------|-----------------------------|------------------------------|
| SP1 | 1 s | Stokes I | 10 | 7.2 | 7.5 |
| SP1 | 1 s | Stokes QUVR | 10 | 7.4 | 7.8 |
| SP1 | 10 s | Stokes I | 10 | 8.7 | 8.7 |
| SP1 | 10 s | Stokes QUVR | 10 | 9.1 | 9.1 |
| SP2 | 1 s | Stokes I | 10 | 7.6 | 7.4 |
| SP2 | 1 s | Stokes QUVR | 10 | 7.8 | 8.3 |
| SP2 | 10 s | Stokes I | 10 | 9.0 | 8.8 |
| SP2 | 10 s | Stokes QUVR | 10 | 9.2 | 9.4 |
| SJ | 10 ms | - | 8 | 5.8 | 7.4 |

Table 2: Summary of image compression results in the normal mode.
Figure 11: Results of the natural sunlight test for SP1 (850 nm) in normal mode. (a) Continuum intensity of Stokes I as a function of integration time. The continuum intensity in units of DN is averaged over the area without the absorption lines. (b) Standard deviation of Stokes V normalized by the continuum intensity as a function of the integration time. (c) Size of the compressed images as a function of the integration time and (d) the standard deviation of Stokes V normalized by the continuum intensity.
## 5 Conclusion
We have verified with the SCIP flight hardware that the performance of the onboard data processing satisfies the required data reduction. The maximum data rate in the normal mode is estimated to be 512 Mbits/s in total. In this estimation, the integration time of the normal mode is 1.024 s, and we assumed 10 and 8 bits/pixel for the outputs from the SP and SJ cameras, respectively. The data rate in the rapid mode was 601 Mbits/s in total, with the assumption of 8 bits/pixel for all three cameras. The data rate in both observing modes has a sufficient margin with respect to the 1 Gbit/s limit imposed by the gigabit ethernet connection to the data storage. Onboard demodulation reduces the post-processing options available on the ground, so the polarization induced by the instruments must be controlled more precisely to achieve precise polarization measurements. Nevertheless, the onboard demodulation with bit compression and image compression can significantly reduce the telemetry rate. For example, for a 10.24 s integration time, the data rate is reduced by a factor of 77 in the SCIP case: 320 images with 12 bits/pixel versus 5 images with 10 bits/pixel. This reduction is essential for obtaining 2048 pixels in the wavelength direction. Note that the wide wavelength coverage is one of the key advantages of the SCIP observations for obtaining three-dimensional magnetic field structures from the solar photosphere to the chromosphere. Another approach to reducing the data rate is to simply sum the equivalent eight states from the half rotations of the PMU. The advantage of this approach is that the full polarimetric correction can be done on the ground. However, the data rate becomes higher than that of the demodulation into the five states: the number of output images increases from 5 to 8, while the intensity of the polarization-modulated images does not change much because of the weak polarization signals. A larger bit depth per pixel is then required for all 8 images when the integration time becomes long. On the other hand, the bit depth of the Stokes QUV and R-parameter images can be reduced by calculating the differences of the modulated images.
The speed of the onboard processing is also important. As mentioned in Sec. 1, fast modulation and demodulation are necessary for polarization measurements that investigate the rapidly changing dynamics in the solar chromosphere. The SCIP electronics unit can process 2k\(\times\)2k images at 31.25 Hz from two cameras. Generally, the data rate is a more serious constraint in satellite missions than in balloon experiments. The high-speed onboard processing of the SCIP will therefore be important for future spectropolarimetric observations in satellite missions.
_Code, Data, and Materials Availability_
The data that support the findings of this article are not publicly available due to the data policy of the Sunrise III team. The data can be requested from the authors.
_Acknowledgments_
The authors are grateful to two anonymous reviewers for their comments that improved the manuscript. The balloon-borne solar observatory Sunrise III is a mission of the Max Planck Institute for Solar System Research (MPS, Germany), and the Johns Hopkins Applied Physics Laboratory (APL, United States). Sunrise III looks at the Sun from the stratosphere using a 1-meter telescope, three scientific instruments, and an image stabilization system. Significant contributors to the mission are a Spanish consortium, the National Astronomical Observatory of Japan (NAOJ, Japan), and the Leibniz Institute for Solar Physics (KIS, Germany). The Spanish consortium is led by the Instituto de Astrofisica de Andalucia (IAA, Spain) and includes the Instituto Nacional de Tecnica Aeroespacial (INTA), Universitat de Valencia (UV), Universidad Politecnica de Madrid (UPM) and the
Instituto de Astrofisica de Canarias (IAC). Other partners include NASA's Wallops Flight Facility Balloon Program Office (WFF-BPO) and the Swedish Space Corporation (SSC). Sunrise III is supported by funding from the Max Planck Foundation, NASA (Grant No. 80NSSC18K0934), Spanish FEDER/AEI/MCIU (Grant No. RTI2018-096886-C5) and a "Center of Excellence Severo Ochoa" award to IAA-CSIC (Grant No. SEV-2017-0709), and the ISAS/JAXA Small Mission-of-Opportunity program and JSPS KAKENHI (Grant No. JP18H05234), and NAOJ Research Coordination Committee, NINS. We would also like to acknowledge the technical support from the Advanced Technology Center (ATC), NAOJ. We would like to thank Editage (www.editage.com) for English language editing.
|
2306.17566 | Imputing phylogenetic trees using tropical polytopes over the space of
phylogenetic trees | When we apply comparative phylogenetic analyses to genome data, it is a
well-known problem and challenge that some of the given species (or taxa) often
have missing genes. In such a case, we have to impute a missing part of a gene
tree from a sample of gene trees. In this short paper we propose a novel method
to infer a missing part of a phylogenetic tree using an analogue of a classical
linear regression in the setting of tropical geometry. In our approach, we
consider a tropical polytope, a convex hull with respect to the tropical metric
closest to the data points. We give a condition under which we can guarantee that an
estimated tree from our method is within Robinson-Foulds (RF) distance four
of the ground truth, and computational experiments with simulated data show
our method works well. | Ruriko Yoshida | 2023-06-30T11:39:48Z | http://arxiv.org/abs/2306.17566v2 | # Imputing phylogenetic trees using tropical polytopes over the space of phylogenetic trees
###### Abstract
When we apply comparative phylogenetic analyses to genome data, it is a well-known problem and challenge that some of given species (or taxa) often have missing genes. In such a case, we have to impute a missing part of a gene tree from a sample of gene trees. In this short paper we propose a novel method to infer a missing part of a phylogenetic tree using an analogue of a classical linear regression in the setting of tropical geometry. In our approach, we consider a tropical polytope, a convex hull with respect to the tropical metric closest to the data points. We show a condition that we can guarantee that an estimated tree from our method has at most four Robinson-Foulds (RF) distance from the ground truth and computational experiments with simulated data show our method works well.
## 1 Introduction
Due to new technologies, today we are able to generate genome sequences at lower cost. At the same time, however, we face a great challenge in analyzing large-scale genomic datasets. In phylogenomics, a new field which applies tools from phylogenetics to genome datasets, we often conduct comparative phylogenetic analyses, that is, we compare evolutionary histories among a set of taxa between different genes in a genome (for example, see [5]). However, in this process we often face the problem that some taxa in the dataset have missing gene(s) [13]. When this happens, systematists infer the missing part of a gene tree from other gene trees using a supervised learning method, such as a linear regression model.
A phylogenetic tree is a weighted tree which represents the evolutionary history of a given set of taxa (or species). In phylogenetics, leaves represent species or taxa in the present time, which we can observe, and internal nodes in the tree, which represent ancestors of the species, do not have labels. A gene tree is a phylogenetic tree reconstructed from an alignment of a gene in a genome. Gene trees on the same set of species or taxa do not have to have the same tree topology, since each gene might have different mutation rates due to selection pressures, etc. [9]. In a comparative phylogenetic analysis, we often compare gene trees (for example, we compare how they differ, how their mutation rates differ, and often we are interested in inferring the species tree).
To infer a missing part of a gene tree, we often apply a supervised method to regress the missing part. In this process, we first compute a unique vector representation of each gene tree. Then we infer the missing components of the vector of the tree from the vectors computed from the other gene trees in the dataset using a regression model, such as multiple linear regression [13].
However, the set of all such vectors realizing all possible phylogenetic trees, which is called the _space of phylogenetic trees_, is not Euclidean. In fact, the space of phylogenetic trees is a union of polyhedral cones of large co-dimension, so it is not even convex with respect to the Euclidean metric. Therefore, it is not appropriate to apply classical regression models, such as linear regression or neural networks, since they assume convexity in the sense of Euclidean geometry. Thus, in this short paper, we propose an analogue of classical multiple linear regression in the setting of tropical geometry with the max-plus algebra: an application of tropical polytopes to the inference of the missing part of a phylogenetic tree.
_Equidistant trees_ are used to model gene trees under the multi-species coalescent model [9]. Therefore, in this paper we focus on equidistant trees, that is, rooted phylogenetic trees in which the total weight of the unique path from the root to each leaf is the same, and on the space of all possible equidistant trees. It is well-known that the space of all possible equidistant trees is a _tropical Grassmannian_, which means that it is a tropically linear space with respect to the _tropical metric_ [1, 12, 11]. Therefore, with the tropical metric under the max-plus algebra, we can conduct statistical analyses using tropical linear algebra, an analogue of classical linear algebra. In fact, there has been much development in statistical learning over the space of phylogenetic trees using tools from tropical geometry [11, 17, 10, 2, 6, 14, 16].
Since a tropical polytope is tropically convex and the space of equidistant trees is tropically convex, a tropical polytope whose vertices are all equidistant trees is contained in the space of equidistant trees. Thus, in this paper, we propose to use a tropical polytope over the space of equidistant trees to infer the missing part of a phylogenetic tree. Our proposed method has four main steps: (1) compute the induced trees on the set of leaves observed in \(T\), the tree with missing leaf (or leaves), from a training sample of trees; (2) compare \(T\) with these induced trees; (3) compute a tropical polytope from the trees with the full set of leaves whose induced trees have tree topologies closest to that of \(T\); and (4) project \(T\) onto the tropical polytope computed in Step (3).
In Section 2 we discuss basics from tropical geometry, and in Section 3 we discuss basics from phylogenetics. In Section 4, we present our novel method to impute a missing part of a phylogenetic tree. Then, in Section 5, we give a theoretical condition on \(T\) under which, in the worst case, the tree estimated by our method has _Robinson-Foulds distance_ at most 4 from the truth. Finally, Section 6 presents computational experiments comparing our method against other methods, including multiple linear regression, and shows that our method performs well.
## 2 Basics in Tropical Geometry
In this section, we discuss basics from tropical geometry. We consider the _tropical projective torus_ \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), where \(\mathbf{1}:=(1,1,\ldots,1)\) is the all-ones vector in \(\mathbb{R}^{e}\). This means that any vector in \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) is invariant under translation by multiples of \(\mathbf{1}\), i.e., \((v_{1}+c,\ldots,v_{e}+c)=(v_{1},\ldots,v_{e})=v\) for any element \(v:=(v_{1},\ldots,v_{e})\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) and any \(c\in\mathbb{R}\). For more details, see [4] and [7].
Under the tropical semiring \(\left(\,\mathbb{R}\cup\{-\infty\},\oplus,\odot\right),\) the tropical arithmetic operations of addition and multiplication are defined as:
\[a\oplus b:=\max\{a,b\},\ \ \ \ a\odot b:=a+b\ \ \ \ \ \text{where}\ a,b\in \mathbb{R}\cup\{-\infty\}.\]
For any scalars \(a,b\in\mathbb{R}\cup\{-\infty\}\) and for any vectors \(x=(x_{1},\ldots,x_{e}),\ y=(y_{1},\ldots,y_{e})\in\mathbb{R}^{e}/\mathbb{R} \mathbf{1}\), we have tropical scalar multiplication and tropical vector addition defined as:
\[a\odot x\oplus b\odot y:=(\max\{a+x_{1},b+y_{1}\},\ldots,\max\{a+x_{e},b+y_{e} \}).\]
**Definition 1**.: _Suppose we have a set \(S\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\). If_
\[a\odot x\oplus b\odot y\in S\]
_for any \(a,b\in\mathbb{R}\) and for any \(x,y\in S\), then \(S\) is called tropically convex. Suppose we have a finite subset \(V=\{v^{1},\ldots,v^{s}\}\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\). Then, the smallest tropically-convex subset containing \(V\) is called the tropical convex hull or tropical polytope of \(V\). \(\operatorname{tconv}(V)\) can also be written as:_
\[\operatorname{tconv}(V)=\{a_{1}\odot v^{1}\oplus a_{2}\odot v^{2}\oplus \cdots\oplus a_{s}\odot v^{s}\mid a_{1},\ldots,a_{s}\in\mathbb{R}\}.\]
_A tropical line segment, \(\Gamma_{v^{1},v^{2}}\), between two points \(v^{1},\,v^{2}\) is a tropical convex hull of \(\{v^{1},v^{2}\}\)._
**Remark 2**.: _By the definition, if a set \(S\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) is tropically convex, then a tropical line segment between any two points in \(S\) must be contained in \(S\)._
**Definition 3**.: _For any points \(v:=(v_{1},\ldots,v_{e}),\,w:=(w_{1},\ldots,w_{e})\in\mathbb{R}^{e}/\mathbb{R} \mathbf{1}\), the tropical metric \(d_{\mathrm{tr}}\) between \(v\) and \(w\) is defined as:_
\[d_{\mathrm{tr}}(v,w):=\max_{i\in\{1,\ldots,e\}}\bigl{\{}v_{i}-w_{i}\bigr{\}}- \min_{i\in\{1,\ldots,e\}}\bigl{\{}v_{i}-w_{i}\bigr{\}}.\]
**Definition 4**.: _Let \(V:=\{v^{1},\ldots,v^{s}\}\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) and let \(P=\text{tconv}\,(v^{1},\ldots,v^{s})\subseteq\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) be a tropical polytope with its vertex set \(V\). For \(x\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), let_

\[\pi_{P}(x):=\bigoplus_{l=1}^{s}\lambda_{l}\odot v^{l},\qquad\text{where }\lambda_{l}=\min_{i\in\{1,\ldots,e\}}\bigl{(}x_{i}-v^{l}_{i}\bigr{)}. \tag{1}\]

_Then \(\pi_{P}(x)\in P\), and \(d_{\mathrm{tr}}(x,\pi_{P}(x))\leq d_{\mathrm{tr}}(x,y)\) for all \(y\in P\); that is, \(\pi_{P}(x)\) is a point in \(P\) closest to \(x\) with respect to the tropical metric, called the tropical projection of \(x\) onto \(P\)._
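Both the tropical metric and the tropical projection are straightforward to compute; a minimal sketch (vertices stored as the rows of a matrix):

```python
import numpy as np

def d_tr(v, w):
    """Tropical metric on R^e / R1 (Definition 3)."""
    diff = np.asarray(v, float) - np.asarray(w, float)
    return diff.max() - diff.min()

def pi_P(x, V):
    """Tropical projection of x onto tconv(V) (Definition 4).

    V is an (s, e) array whose rows are the vertices v^1, ..., v^s.
    """
    x, V = np.asarray(x, float), np.asarray(V, float)
    lam = (x[None, :] - V).min(axis=1)        # tropical coefficients
    return (lam[:, None] + V).max(axis=0)     # tropical combination

# Example: project a point onto a tropical segment in R^3 / R1.
V = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 5.0]])
x = np.array([0.0, 3.0, 1.0])
print(pi_P(x, V), d_tr(x, pi_P(x, V)))   # [0. 0. 1.]  3.0
```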
## 3 Basics in Phylogenetic Trees
Let \([m]:=\{1,\ldots,m\}\). A phylogenetic tree \(T\) on \([m]\) is a weighted tree with \(m\) leaves labeled by \([m]\), whose internal nodes do not have labels. A subtree \(T^{\prime}\) of \(T\) on \(a\subset[m]\) is a subtree of \(T\) with leaf set \(a\). An _equidistant tree_ on \([m]\) is a rooted phylogenetic tree on \([m]\) such that the total weight of the path from its root to leaf \(i\) is the same for each \(i\in[m]\). In this paper, we focus on equidistant trees.
In order to conduct a statistical analysis, we have to convert a phylogenetic tree into a vector. We now discuss one way to do this.
**Definition 5**.: _Suppose we have a dissimilarity map \(D:[m]\times[m]\to\mathbb{R}\) such that_

\[D(i,j)\begin{cases}\geq 0&\text{if }i\neq j\\ =0&\text{otherwise.}\end{cases}\]

_If there exists a phylogenetic tree on \([m]\) such that \(D(i,j)\) is the total weight of the unique path from leaf \(i\in[m]\) to leaf \(j\in[m]\), then we call \(D\) a tree metric._
**Remark 6**.: _Since a tree metric of a phylogenetic tree on \([m]\) is symmetric and its diagonal is 0, we consider the upper-triangular part of the tree metric and regard it as a vector in \(\mathbb{R}^{e}\), where \(e=\binom{m}{2}\)._
**Definition 7**.: _Let \(D:[m]\times[m]\to\mathbb{R}\) be a metric on \([m]\), namely, a map from \([m]\times[m]\) to \(\mathbb{R}\) such that_

\[D(i,j)=D(j,i)\quad\text{for all }i,j\in[m],\]

\[D(i,j)=0\quad\text{if and only if }i=j,\]

\[D(i,j)\leq D(i,k)+D(j,k)\quad\text{for all }i,j,k\in[m].\]
_Suppose \(D\) is a metric on \([m]\). Then if \(D\) satisfies_
\[\max\{D(i,j),D(i,k),D(j,k)\} \tag{2}\]
_is attained at least twice for any \(i,j,k\in[m]\), then \(D\) is called an ultrametric._
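The three-point condition (2) can be checked directly; a small sketch:

```python
from itertools import combinations

def is_ultrametric(D, m, tol=1e-9):
    """Check condition (2): for all i, j, k the maximum of D(i,j),
    D(i,k), D(j,k) is attained at least twice."""
    for i, j, k in combinations(range(m), 3):
        vals = sorted([D[i][j], D[i][k], D[j][k]])
        if vals[2] - vals[1] > tol:       # maximum attained only once
            return False
    return True
```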
It is well-known that if we have an ultrametric on \([m]\), then there is a unique equidistant tree on \([m]\), by the following theorem:

**Theorem 8** ([3]).: _Suppose we have a rooted phylogenetic tree \(T\) with leaf label set \([m]\), and let \(D(i,j)\) for \(i,j\in[m]\) be the total weight of the path from leaf \(i\) to leaf \(j\). Then \(D\) is an ultrametric if and only if \(T\) is an equidistant tree on \([m]\)._
Therefore by Theorem 8, in this paper, we consider the set of ultrametrics, \(\mathcal{U}_{m}\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), on \(m\) as the space of equidistant trees on \([m]\).
**Definition 9**.: _Let \(a,b\subset[m]\) be such that \(a\cup b=[m]\) and \(a\cap b=\emptyset\). Suppose we have an equidistant phylogenetic tree \(T\) with leaf set \([m]\). A clade of \(T\) with leaves \(a\subset[m]\) is the equidistant tree on \(a\) constructed from \(T\) by taking all interior nodes of \(T\) that are common ancestors of combinations of leaves in \(a\) only (excluding common ancestors involving any leaf from \([m]-a\)), together with all edges of \(T\) connecting these interior nodes and the leaves \(a\)._
**Definition 10**.: _For a rooted phylogenetic tree, a nearest-neighbor interchange (NNI) is an operation that changes the tree topology by picking three mutually exclusive leaf sets \(X_{1},X_{2},X_{3}\subset X\) and rearranging the topology of the clade, possibly the whole tree, consisting of the three distinct clades with leaf sets \(X_{1}\), \(X_{2}\), and \(X_{3}\)._

**Remark 11**.: _Since there are three possible ways of connecting three distinct clades, an NNI move can create up to two new tree topologies on \([m]\)._
**Definition 12**.: _Suppose we have rooted phylogenetic trees \(T_{1},T_{2}\) on \([m]\). The Robinson-Foulds (RF) distance between \(T_{1}\) and \(T_{2}\) is the number of clades that appear in exactly one of the two trees, that is, the cardinality of the symmetric difference of the sets of clades of \(T_{1}\) and \(T_{2}\)._
**Remark 13**.: _The RF distance is always divisible by 2._
**Remark 14**.: _One can clearly see that the RF distance between two trees which are one NNI move apart is 2, since they differ in only one internal edge in each tree._
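With the clade-set formulation of Definition 12, the RF distance reduces to a symmetric difference; a minimal sketch:

```python
def rf_distance(clades1, clades2):
    """RF distance as the size of the symmetric difference of the clade
    sets of two rooted trees on the same leaf set; each argument is a
    set of frozensets of leaf labels."""
    return len(clades1 ^ clades2)

# Two rooted trees on {1, 2, 3, 4} that differ by one NNI move:
t1 = {frozenset({1, 2}), frozenset({1, 2, 3})}
t2 = {frozenset({2, 3}), frozenset({1, 2, 3})}
print(rf_distance(t1, t2))  # 2, consistent with Remark 14
```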
## 4 Method
In this section we introduce our method for inferring a missing part of an equidistant tree using tools from tropical geometry. Let \(RF(T_{1},T_{2})\) denote the RF distance between \(T_{1}\) and \(T_{2}\). The algorithm of our method is summarized in Algorithm 1.
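Algorithm 1 itself is not reproduced here; the following is a minimal sketch of the procedure assembled from the four steps listed in the Introduction, reusing `pi_P` from the sketch in Section 2. The comparison function and the completion of the missing coordinates are passed in as arguments, since these details are not pinned down in the text shown here; our choices are assumptions.

```python
import numpy as np

def impute_ultrametric(u_obs, U_train, topo_dist, fill):
    """Sketch of the four-step method (our reading, not the authors'
    verbatim Algorithm 1).

    u_obs     : ultrametric of the observed, incomplete tree T.
    U_train   : (n, e) array of training ultrametrics (full leaf set).
    topo_dist : compares T with the tree a training ultrametric induces
                on the observed leaves, e.g. via the RF distance
                (steps 1 and 2).
    fill      : completes u_obs to a full-length vector, e.g. a
                baseline mean fill (assumption).
    """
    dists = np.array([topo_dist(u_obs, u) for u in U_train])  # (1)+(2)
    V = U_train[dists == dists.min()]                         # (3)
    return pi_P(fill(u_obs, V), V)                            # (4)
```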
## 5 Theoretical Results
Let \(a,b\subset[m]\) be such that \(a\cup b=[m]\) and \(a\cap b=\emptyset\). Let \(\{T_{1},\ldots,T_{n}\}\) be a sample of equidistant trees with \(m\) leaves. For \(i=1,\ldots,n\), let \(T^{\prime}_{i}\) be the equidistant tree on \(a\) obtained by dropping the tips \(b\) from \(T_{i}\), i.e., \(T^{\prime}_{i}\) is the induced tree on \(a\).
**Theorem 15**.: _Suppose \(\{T_{1},\ldots,T_{n}\}\) is a sample of equidistant trees on \([m]\), and write \(T_{i}=T^{\prime}_{i}\cup T^{\prime\prime}_{i}\), where \(T^{\prime}_{i}\) is the equidistant subtree on \(a\) obtained by dropping the tips \(b\) from \(T_{i}\), and \(T^{\prime\prime}_{i}\) is the subgraph on \(b\) obtained by adding all common ancestral interior nodes of combinations of leaves in \(b\) only, excluding common ancestors involving any leaf from \([m]-b\), in \(T_{i}\), for \(i=1,\ldots,n\). Suppose \(T^{\prime}_{i}\) and \(T^{\prime}\) have the same tree topology for \(i=1,\ldots,n\). If \(T^{\prime\prime}_{i}\) is a clade in \(T_{i}\) for \(i=1,\ldots,n\) and \(T^{\prime\prime}\) is also a clade in \(T\), then the estimated tree \(\hat{T}\) obtained by our method with the tropical polytope \(P:=\text{tconv}\,(T_{1},\ldots,T_{n})\) and the true tree \(T\) differ by RF distance at most 4._
Proof.: Since \(T_{i}^{\prime\prime}\) is a connected tree for \(i=1,\ldots,n\), \(T_{i}^{\prime\prime}\) forms a clade in \(T_{i}\) for \(i=1,\ldots,n\); likewise, \(T^{\prime\prime}\) is a connected tree, so \(T^{\prime\prime}\) is a clade in \(T\). This means that \(T_{i}\) and \(T_{j}\) are at most one NNI move apart for any \(i,j\in\{1,\ldots,n\}\). Since \(T_{i}^{\prime}\) and \(T^{\prime}\) have the same tree topology and \(T^{\prime\prime}\) is a clade in \(T\), \(T_{i}\) and \(T\) are also at most one NNI move apart. Note that \(T_{i}\) and \(T_{j}\) have RF distance at most 2, since they differ by at most one NNI move, and the same holds for \(T\). Let \(u_{i}\) be the ultrametric of the tree \(T_{i}\) for \(i=1,\ldots,n\), and take any tropical line segment \(\Gamma_{u_{i},u_{j}}\). Since \(T_{i}\) and \(T_{j}\) differ by at most one NNI move, by Theorem 8 in [15], any tree realized by an ultrametric in \(\Gamma_{u_{i},u_{j}}\) has the same tree topology as \(T_{i}\) or \(T_{j}\). Since \(P\) is tropically convex, any point in \(P\) is a tropical convex combination of the \(u_{i}\) for \(i=1,\ldots,n\). Thus the tree topology of any tree realized by an ultrametric in \(P\) is at most one NNI move away from each \(T_{i}\). Since the estimate \(\hat{T}\in P\), any tree realized by an ultrametric in \(P\) is at most two NNI moves away from \(T\). Thus, we have the result.
## 6 Computational Experiments
In this section, we apply our method to simulated data sets and compare its performance with the baseline model, which uses means of each missing element in an ultrametric computed from a tree, and multiple linear regression model.
### Simulated Data
To assess the performance of our method, we use simulated datasets generated from the multi-species coalescent model using the software Mesquite [8].
Under the multi-species coalescent model, there are two parameters: species depth \(SD\) and effective population size \(N_{e}\). In this paper we fix the effective population size as \(N_{e}=10,000\) and we vary \(SD\) as we vary the ratio
\[R=\frac{SD}{N_{e}}.\]
### Experimental Design
Here we vary \(R=0.25,0.5,1,2,5,10\). For this experiment, we fix the number of leaves at 10; therefore \(e=45\). For each value of \(R\), we first generate a random species tree via the Yule model. Then we generate a set of 1000 gene trees from the multi-species coalescent model given the species tree. Therefore, for each \(R\), we have a simulated dataset of size 1000.
Note that when \(R\) is larger, the species tree imposes tighter constraints on the gene tree topologies. Therefore, the variance in the generated gene trees is not large, and it is easier to estimate the missing part of a gene tree. On the other hand, if \(R\) is small, then the variance in gene tree topologies is large, and the coalescent model behaves more like a random process [9].
To estimate the performance of our method as the number of missing leaves varies, we set up three cases: one leaf out of 10 leaves is removed, two leaves out of 10 leaves are removed, and three leaves out of 10 leaves are removed. For each scenario in terms of \(R\) and the number of leaves removed, we pick 200 random observations from the dataset of 1000 trees as a test set.
To compare the performance of our method, we use the baseline model, i.e., we fill the missing values of an ultrametric with the means over the observations with the full set of leaves, and the multiple linear regression model. For the multiple linear regression model, we set a missing element as the response variable and the observed elements of the ultrametric as predictors [13].
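For concreteness, the two comparison methods can be sketched as follows (our reading; `U_train` holds the training ultrametrics, one row per tree, and `missing_idx` indexes the coordinates involving the removed leaves):

```python
import numpy as np

def impute_baseline(U_train, u_obs, missing_idx):
    """Baseline model: fill each missing coordinate of the ultrametric
    with its mean over the training ultrametrics."""
    u = u_obs.copy()
    u[missing_idx] = U_train[:, missing_idx].mean(axis=0)
    return u

def impute_linear(U_train, u_obs, missing_idx):
    """Multiple linear regression: regress each missing coordinate on
    the observed coordinates, as in [13] (least-squares sketch)."""
    obs_idx = [i for i in range(U_train.shape[1])
               if i not in set(missing_idx)]
    X = np.hstack([np.ones((len(U_train), 1)), U_train[:, obs_idx]])
    u = u_obs.copy()
    for j in missing_idx:
        beta, *_ = np.linalg.lstsq(X, U_train[:, j], rcond=None)
        u[j] = np.concatenate([[1.0], u_obs[obs_idx]]) @ beta
    return u
```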
### Results
To assess the performance of our method against the baseline and linear regression models, we use the RF distance between the estimated tree \(\hat{T}\) and \(T\). The results are shown in Table 1 and Figure 1. Note that the smaller the RF distance between two trees, the closer their tree topologies are; when the RF distance is 0, their tree topologies are the same.
According to our computational experiments with simulated datasets, shown in Table 1 and Figure 1, our method attains smaller RF distances than the other methods in all cases. It is interesting that the number of leaves removed strongly affects the results in general, while \(R\) clearly affects the performance of all three methods we compare.
Figure 1: This figure shows performance on the baseline (Left) and our method using a tropical polytope (Right). For each category, we infer 200 trees from 800 trees. The x-axis represents the ratio \(R\) and the y-axis shows the average RF distances between estimated trees and true trees for 200 trees. The smaller the RF distance is, we have better performance.
Figure 2: This figure shows performance on the baseline model (Left) and linear regression models (Right). For each category, we infer 200 trees from 800 trees. The x-axis represents the ratio \(R\) and the y-axis shows the average RF distances between estimated trees and true trees for 200 trees. The smaller the RF distance is, we have better performance. As one can see, these results are very close to each other for all \(R\).
If only one leaf is missing and \(R\) is larger, the average RF distance between inferred trees and true trees is often less than 1, because the condition of Theorem 15 is often satisfied owing to the very strong constraints on the gene tree topologies.
## 7 Discussion
In this short paper, we have presented a novel method to impute a missing part of an equidistant tree on \([m]\) using a tropical polytope, an analogue of linear regression in the setting of tropical geometry. Using simulated data generated from the multi-species coalescent model, we show that this method works very well. In addition, we give a condition under which the estimated tree and the true tree have RF distance at most 4 (Theorem 15).
In the future, we plan to investigate applying the "tropical principal component analysis (PCA)" proposed by Yoshida et al. in [17] to the imputation of trees, since classical PCA can be viewed as a multivariate linear regression model with orthogonal projections.
|
2309.03396 | Detection of open cluster rotation fields from Gaia EDR3 proper motions | Context. Most stars form in groups which with time disperse, building the
field population of their host galaxy. In the Milky Way, open clusters have
been continuously forming in the disk up to the present time, providing it with
stars spanning a broad range of ages and masses. Observations of the details of
cluster dissolution are, however, scarce. One of the main difficulties is
obtaining a detailed characterisation of the internal cluster kinematics, which
requires very high quality proper motions. For open clusters, which are
typically loose groups with some tens to hundreds of members, there is the
additional difficulty of inferring kinematic structures from sparse and
irregular distributions of stars. Aims. Here, we aim to analyse internal
stellar kinematics of open clusters, and identify rotation, expansion or
contraction patterns. Methods. We use Gaia Early Data Release 3 (EDR3)
astrometry and Integrated Nested Laplace Approximations to perform vector-field
inference and create spatio-kinematic maps of 1237 open clusters. The sample is
composed of clusters for which individual stellar memberships were known, thus
minimising contamination from field stars in the velocity maps. Projection
effects were corrected using EDR3 data complemented with radial velocities from
Gaia Data Release 2 and other surveys. Results. We report the detection of
rotation patterns in 8 open clusters. Nine additional clusters display possible
rotation signs. We also observe 14 expanding clusters, with 15 other objects
showing possible expansion patterns. Contraction is evident in two clusters,
with one additional cluster presenting a more uncertain detection. In total, 53
clusters are found to display kinematic structures. Within these, elongated
spatial distributions suggesting tidal tails are found in 5 clusters.
[abridged] | Pedro Guilherme-Garcia, Alberto Krone-Martins, André Moitinho | 2023-09-06T23:08:15Z | http://arxiv.org/abs/2309.03396v1 | # Detection of open cluster rotation fields
###### Abstract
Context: Most stars form in groups which with time disperse, building the field population of their host galaxy. In the Milky Way, open clusters have been continuously forming in the disk up to the present time, providing it with stars spanning a broad range of ages and masses. Observations of the details of cluster dissolution are, however, scarce. One of the main difficulties is obtaining a detailed characterisation of the internal cluster kinematics, which requires very high-quality proper motions. For open clusters, which are typically loose groups with tens to hundreds of members, there is the additional difficulty of inferring kinematic structures from sparse and irregular distributions of stars.
Aims:Here, we aim to analyse internal stellar kinematics of open clusters, and identify rotation, expansion, or contraction patterns.
Methods:We use Gaia Early Data Release 3 (EDR3) astrometry and integrated nested Laplace approximations to perform vector-field inference and create spatio-kinematic maps of 1237 open clusters. The sample is composed of clusters for which individual stellar memberships were already known, thus minimising contamination from field stars in the velocity maps. Projection effects were corrected using EDR3 data complemented with radial velocities from Gaia Data Release 2 and other surveys.
Results:We report the detection of rotation patterns in eight open clusters. Nine additional clusters display possible rotation signs. We also observe 14 expanding clusters, with 15 other objects showing possible expansion patterns. Contraction is evident in two clusters, with one additional cluster presenting a more uncertain detection. In total, 53 clusters are found to display kinematic structures. Within these, elongated spatial distributions suggesting tidal tails are found in five clusters. These results indicate that the approach developed here can recover kinematic patterns from noisy vector fields, as those from astrometric measurements of open clusters or other stellar or galactic populations, thus offering a powerful probe for exploring the internal kinematics and dynamics of these types of objects.
Conclusions:
## 1 Introduction
Explaining how galaxies build up is one of the central quests in astrophysics. In this context, most stars are believed to be formed in clusters (e.g. Lada & Lada 2003; Kruijssen 2012), which with time ultimately disintegrate, building up the galactic field population. In the Milky Way, globular clusters (GCs), which were formed in the early stages of our galaxy, are one of the contributors to populating the halo. In contrast, open clusters (OCs) and associations have been continuously forming in the disk over the last \(\sim 10\) Gyr, enriching it with stars spanning a broad range of ages and masses.
As groups of stars resulting from the gravitational collapse and fragmentation of a parent molecular cloud, cluster stars can remain bound for some time under the balance of their collective gravitational field and the pressure arising from their dynamics. Analytical and numerical N-body simulations (e.g. Lamers et al. 2005; Gieles & Baumgardt 2008) have shed light on how clusters would then evolve, showing how factors such as encounters with giant molecular clouds and spiral arms, galactic tidal forces, and secular evolution (also referred to as evaporation) lead to quick disruption or gradual dissolution of star clusters.
On the observational side, studying the disintegration process poses significant challenges. On the one hand, stars that have been stripped or those that have escaped to the field and no longer belong to the cluster become distributed in low-brightness halos and tails, which are hard to detect observationally. On the other hand, detailed kinematic characterisation of the remaining cluster has traditionally been difficult due to the small relative proper motions of the stars (except for the closest clusters), which limited studies to line-of-sight radial velocities and/or to clusters seen in crowded fields.
Despite the challenges, some observational studies have yielded detections of these elusive patterns. Examples include the detection of tidal tails in OCs (e.g. Bergond et al. 2001; Davenport & Sandquist 2010; Dalessandro et al. 2015) as well as tens of GCs (e.g. Grillmair et al. 1995; Leon et al. 2000; Chun et al. 2010; Chen & Chen 2010; Jordi & Grebel 2010; Carballo-Bello et al. 2018), and rotation of GCs (e.g. van Leeuwen et al. 2000; Anderson & King 2003; van de Ven et al. 2006; Massari et al. 2013; Bellini et al. 2017) in proper motions.
These difficulties are now being gradually overcome thanks to the European Space Agency (ESA) Gaia mission (Gaia Collaboration et al. 2016b). Gaia is one of the most ambitious astronomical all-sky surveys from space today. The main objective of the mission is to bring a better understanding of the formation and evolution of the Milky Way. To this end, Gaia has already released a succession of the deepest, most accurate, and complete all-sky astrometric and photometric catalogues ever (Gaia Collaboration et al., 2016, 2018, 2021).
With Gaia, recent studies have now detected rotation patterns in over 20 GCs (e.g. Bianchini et al., 2018; Sollima et al., 2019; Vasiliev and Baumgardt, 2021; Dalessandro et al., 2021; Szigeti et al., 2021). Open clusters, however, typically have many fewer members, ranging from tens of members to a few OCs with over a thousand identified members (Dias et al., 2002; Cantat-Gaudin et al., 2018, 2019; Dias et al., 2021). This leads to sparser distributions, making the detection of spatial and kinematic patterns much harder, even more so considering that OCs are often seen against the crowded background of the Galactic disk. It is thus impressive how the high quality of the Gaia data is now easily revealing tails and coronae in OCs (e.g. Meingast and Alves, 2019; Meingast et al., 2021). Still, even with Gaia, very few measurements of OC rotation have been accomplished: in their kinematic study of 28 OCs using Gaia Data Release 2 (DR2), Kuhn et al. (2019) conclude that only one OC displayed signs of rotation; Loktin and Popov (2020), also using Gaia DR2, measured the rotation of Praesepe. Thorough searches of the literature have not revealed other examples, indicating that if there are more published determinations, their number must be low.
Concurrently with the new availability of high-quality data in huge volumes, from Gaia and other surveys, we are also witnessing an explosion of advanced statistical and computational methods together with the necessary computing power. These new methods, or novel applications of older methods, are both bringing new insights and enabling the analysis of very large data sets.
The focus of this work is to assess the dynamical state of large numbers of OCs, namely identifying signatures of rotation as well as expansion and contraction detectable with Gaia Early Data Release 3 (Gaia Collaboration et al., 2021, hereafter EDR3). For this, we developed a procedure for reconstructing the velocity fields of clusters based on the application of the integrated nested Laplace approximation (INLA) method (Rue et al., 2009) to positional and kinematic measurements of cluster members. The analysis was performed on 1237 clusters for which suitable data are available.
We now follow with Sect. 2, in which we present the data sources and selection processes. Sect. 3 details the methods developed for the reconstruction of the proper motion vector fields. The analysis and results are presented in Sect. 4. We close with a summary of the results and our conclusions in Sect. 5. Plots including the reconstructed fields of clusters with a detected kinematic structure are given in the Appendix.
## 2 Data
In this article we use data from the Gaia EDR3, which contains proper motions precise at the hundreds of \(\mu\)as/yr level for more than a billion stellar sources, enabling kinematic and dynamic studies of OCs on large scales. These studies require membership lists, and here we use the detailed OC membership lists derived by Cantat-Gaudin et al. (2018, 2019) for several clusters by applying the UPMASK method (Krone-Martins and Moitinho, 2014) to Gaia DR2 proper motions (Gaia Collaboration et al., 2018). Based on these memberships, we extracted the following data from EDR3: positions \(\alpha\), \(\delta\), proper motions \(\mu_{\alpha*}\), \(\mu_{\delta}\), parallaxes \(\varpi\), the associated astrometric errors \(\sigma_{\alpha},\sigma_{\delta},\sigma_{\mu_{\alpha*}},\sigma_{\mu_{\delta}},\sigma_{\varpi}\), correlations from EDR3, and cluster membership probabilities \(p_{memb}\) for the Cantat-Gaudin et al. (2018, 2019) members available for 1275 OCs. In addition to astrometric information, cluster radial velocities were required to correct the effect of perspective acceleration on the observed kinematics. Although this effect is small for most objects (e.g. Brown et al., 1997; van Leeuwen, 2009; Gaia Collaboration et al., 2018; Kuhn et al., 2019), it should be taken into account if the aim is to avoid systematics and probe into the noise limits. To account for the perspective effects in proper motions, we adopted radial velocity estimates from Dias et al. (2021) for 965 clusters, and we further estimated bulk cluster radial velocities from the median of the radial velocities of cluster member stars using Gaia DR2 (Katz et al., 2019) for 265 clusters, LAMOST Data Release 4 (Wu et al., 2017) for 33 clusters, RAVE Data Release 5 (Kunder et al., 2017) for seven clusters, and APOGEE Data Release 14 (Holtzman et al., 2018) for five clusters. As detailed in Appendix B, we corrected the perspective and projection effects in the measured proper motions following van Leeuwen (2009).
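For readers implementing a similar correction, the fragment below sketches only the leading-order perspective term in Python; the correction actually applied in this work follows van Leeuwen (2009) (see Appendix B) and also accounts for the full projection of the bulk space motion, so this function (its name and interface are ours) should be read as a simplified approximation.

```python
import numpy as np

# Standard astrometric constant: v[km/s] = KAPPA * mu[mas/yr] * d[kpc].
KAPPA = 4.740470446

def perspective_term(d_ra, d_dec, v_r, parallax):
    """Leading-order apparent expansion/contraction induced by the bulk
    radial velocity of a cluster (small-angle approximation).

    d_ra, d_dec : angular offsets of the stars from the cluster centre [rad]
    v_r         : bulk cluster radial velocity [km/s]
    parallax    : cluster parallax [mas]
    Returns the perspective proper-motion components [mas/yr] that should be
    subtracted from the observed residual proper motions.
    """
    # A receding cluster (v_r > 0) appears to contract: d(rho)/dt = -rho * v_r / d.
    factor = -v_r * parallax / KAPPA
    return factor * np.asarray(d_ra), factor * np.asarray(d_dec)
```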
## 3 Vector field reconstruction method
To study the internal kinematics of OCs, we searched the data for a statistically significant pattern defining the proper motion field shared by the cluster stars. To do so, we needed to reconstruct the underlying vector field and estimate its uncertainty, using the observed stellar proper motions and their uncertainties.
As OC members share a common motion and as external dynamical influences suffered by the cluster during its lifetime introduce spatially continuous perturbations, the underlying proper motion vector field can be considered mostly spatially correlated and continuous. However, observing this field is challenging due to the sparsity of stars, measurement errors, and the peculiar component of the stellar motions leading to internal motion dispersion, both of the order of hundreds of \(\mu\)as/yr, and thus of the same order as, or greater than, the smooth signal that we seek to study. Thus, it is natural to adopt methods that can profit from the expected spatial correlation and continuity conditions to try to infer the underlying field from such noisy data.
The INLA method (Rue et al., 2009) is one such method. It was created to model spatial data and it has been successfully used in different applications: from the mapping of the spread of disease (Schrodle et al., 2011) to the prediction of heavy rainfalls (Opitz et al., 2018). INLA has also shown great potential in astronomy, being used to reconstruct scalar fields of galaxy property maps from integral field unit observations (Gonzalez-Gaitan et al., 2018).
INLA is a faster alternative to Markov chain Monte Carlo (MCMC) methods for Bayesian inference, as it approximates the solution in a fraction of the time MCMC requires. However, the posterior distribution must be assumed to be Gaussian (in which case the solution is exact), or nearly Gaussian (and thus INLA approximates the solution). This constitutes one of the assumptions we have made for our work. We have also assumed that internal cluster underlying proper motion fields are continuous and spatially correlated. Thus, we can take advantage of the fact that a continuous and correlated spatial field can be approximated by a Gaussian Markov random field when it is a solution of a stochastic partial differential equation (SPDE) with a Matern correlation function (Lindgren and Rue, 2015). This correlation function encodes how much one point in space is influenced by all other points depending only on their relative distances.
INLA allows the aforementioned assumptions to be considered, but it was created to analyse scalar fields while we are interested in vector fields. So we created a simple pipeline to reconstruct separate scalar fields from the projections of the vector field onto the right ascension and declination coordinates, and then to join the inferred fields into a vector field. This reconstruction
strategy provides a fast first approximation to the reconstruction, although it does not account for covariances between the proper motion components or for conditions such as curl or divergence properties of the vector field.
Our starting point is an uncertain and non-homogeneous sampling of the underlying vector field, comprising vector data measured at specific points in space and the uncertainty estimates for each position and vector measurement. Here we have used the positions of the stars in an equatorial coordinate system as covariates for the spatial model. The errors of the celestial sphere projected positions (\(\alpha,\delta\)) have been ignored, as they are much smaller than the size of the clusters. One important aspect to retain is that a model that takes spatial correlation and local information into account is essential to study these fields, as this correlation represents a proxy for signatures of rigid-body-like rotations of the cluster, gravitational bonds between the cluster stars, external gravitational influences from which the cluster may be suffering, etc. Here this has been achieved using a Matern function (Matern, 1960), which is a flexible correlation structure that includes Gaussian and exponential correlations as special cases (e.g. Handcock and Stein, 1993; Guttorp and Gneiting, 2006).
Our pipeline to perform the vector field reconstruction was implemented in the R language (R Core Team, 2019) and adopted R-INLA1 to reconstruct the individual scalar projections of each component of the proper motion vectors. It works as follows (a simplified Python stand-in for the inference step is sketched after the list):
Footnote 1: www.r-inla.org
1. For each cluster, we retrieved the relevant information for each star in its field: the right ascension, declination, proper motions in right ascension and declination as well as their errors, and the membership probability. To ensure the reconstruction focussed on stars that are more likely members of the cluster, stars with a membership probability \(\lesssim 50\%\) were rejected.
2. Then we removed the bulk cluster proper motion from the individual star proper motions by simply subtracting the cluster proper motion determined by Cantat-Gaudin et al. (2018, 2019) using all the stars. This allowed us to focus on the analysis of the cluster internal kinematics.
3. Next we created a two-dimensional triangular mesh in spatial coordinates, and in this mesh a representation of the scalar components of the vector field was subsequently estimated. The mesh covers the entire data region, with cutoff values preventing low mesh densities near observations (which could result in lower accuracy in the inference step), and maximum edges with respect to the distances between data points in spatial coordinates. Also, we refrained from extrapolating in the regions lacking data coverage.
4. Then, we created a weight matrix representing the error of the data at the positions of each star on the mesh, and an SPDE model from the Matern correlation matrix. Here, a scale parameter proportional to the membership probability divided by the standard error was used to give more weight to data points with lower uncertainties in the measurements and with higher cluster membership probability.
5. Afterwards, we applied R-INLA using a linear predictor structure with the SPDE model, which includes the effects of measurement errors and spatial correlations.
6. Finally, we projected the resulting fields of \(\mu_{\alpha*}\) and \(\mu_{\delta}\) on the positions of the original stars and on a regular grid.
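Since the pipeline itself is written in R on top of R-INLA, the following Python fragment is only a conceptual stand-in for steps 4-6: it replaces the SPDE/INLA inference with a Gaussian-process regressor using a Matern kernel and heteroscedastic per-star noise (similar in spirit to the simpler validation method described in Sect. 4). All names and parameter values are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def reconstruct_component(xy, mu, sigma_mu, p_memb, grid_xy):
    """Reconstruct one scalar proper-motion component on a grid from noisy
    per-star measurements (stand-in for the R-INLA/SPDE inference)."""
    # Step 4 analogue: down-weight uncertain or low-probability stars through
    # the per-point noise term (weight ~ p_memb / sigma).
    noise_var = (np.asarray(sigma_mu) / np.clip(p_memb, 1e-3, 1.0)) ** 2
    gp = GaussianProcessRegressor(
        kernel=Matern(length_scale=0.1, nu=1.5),  # Matern correlation structure
        alpha=noise_var,                          # heteroscedastic measurement noise
        normalize_y=True,
    )
    gp.fit(np.asarray(xy), np.asarray(mu))
    # Step 6 analogue: posterior mean field and its uncertainty on the grid.
    return gp.predict(np.asarray(grid_xy), return_std=True)
```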
The application of this method results in a discretised field. At each position of this field, we had access to the inferred posteriors of the proper motion components, as represented by their means and standard deviations. So we could promptly reconstruct the most probable value of the proper motion vector at each position of the field and its error.
In addition to the field reconstruction, we used this field to estimate smooth velocity curves as a function of the projected radius in the plane of the sky for all clusters. The curves were derived directly from the Gaia EDR3 catalogue data and from the INLA reconstructed fields. We adopted a smooth weighted local linear regression (Cleveland, 1979) through the fANCOVA package (Wang, 2010). The weight for each star was selected as \(w_{i}=p_{memb,i}/\sigma_{\mu,i}^{2}\) and the smoothing length was selected based on generalised cross-validation (Golub et al., 1979). The curves were estimated for the total proper motion \(\mu_{tot}\), and for a polar decomposition into a radial component \(\mu_{\rho}\) that indicates expansion and contraction, and into an angular component \(\mu_{\theta}\) that indicates anticlockwise and clockwise rotation.
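The polar decomposition of the reconstructed field into \(\mu_{\rho}\) and \(\mu_{\theta}\) is a direct computation; a minimal numpy sketch, with the sign convention that positive \(\mu_{\rho}\) denotes expansion and positive \(\mu_{\theta}\) denotes anticlockwise rotation, is:

```python
import numpy as np

def polar_decompose(x, y, mu_x, mu_y):
    """Decompose residual proper motions about the cluster centre into a
    radial component (positive = expansion) and a tangential component
    (positive = anticlockwise rotation)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.hypot(x, y)
    r = np.where(r > 0, r, np.nan)        # undefined at the exact centre
    mu_rho = (x * mu_x + y * mu_y) / r    # projection onto the radial unit vector
    mu_theta = (x * mu_y - y * mu_x) / r  # projection onto the tangential unit vector
    return mu_rho, mu_theta
```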
## 4 Analysis
We applied the method to the membership lists of 1275 clusters derived by Cantat-Gaudin et al. (2018, 2019) and were able to reconstruct 1237 proper motion fields. To visualise these reconstructions, we represented each field with scatter plots of the cluster member positions and added the vectors representing the inferred smooth field direction and magnitude at the position of each member star. We also created the distributions of the polar field decomposition (\(\mu_{\theta}\) and \(\mu_{\rho}\)) as functions of the distance to the cluster centre. We call these plots spatio-kinematic diagrams. All spatio-kinematic diagrams for the reconstructed fields were then visually inspected, and we looked for rotation-like as well as expansion- and contraction-like spatio-kinematic patterns.
We also created standard deviation maps from the reconstructed fields, and when the magnitude of the estimated standard deviation of the field was greater than the pattern that indicated the kinematic signal, we refrained from drawing conclusions. Additionally, in this work we concentrate on the clearest patterns, as the EDR3 data can have systematic errors that are spatially correlated, of the order of a few tens of \(\mu\)as/yr on scales of \(\lesssim 0.5^{\circ}\), due to non-astrophysical reasons (e.g. Lindegren et al., 2021).
We show in Figures 1 and 2 the original Gaia EDR3 data and the vector field reconstructions for the clusters ASCC 114 and Collinder 140. These figures indicate that spatio-kinematic patterns can be perceived, upon careful inspection, even in the original Gaia EDR3 data. By considering the errors in the proper motions and the assumption that members of the same cluster should share a common overall motion, and thus that their proper motions should be physically correlated, the method described herein makes the spatio-kinematic patterns of these vector fields stand out more clearly, and reveals interesting rotational as well as expansion and contraction patterns in several of the analysed OCs.
For the detection of kinematic patterns, we note that clusters with clear rotational as well as expansion and contraction patterns should have higher projected absolute velocities in the \(\theta\) (rotation) and \(\rho\) (radial expansion or contraction) components. Moreover, reliable signals should display smooth patterns throughout the cluster's radial direction. Taking these aspects into account, we adopted two types of indicators. The first type consists of the components \(\mu_{\theta}\) and \(\mu_{\rho}\) of the INLA reconstructed field. The second type consists of the areas under the INLA reconstructed velocity curves. For determining the area under the data points, a locally estimated scatter-plot smoothing (LOESS; Cleveland, 1979) regression was performed and the
area under the fitted curve was calculated. We sorted the reconstructed fields using these indicators. Then, by visual inspection, we identified those with clear kinematic patterns and used them to set threshold values for the areas under the INLA reconstructed velocity curves, \(A_{\theta}\) and \(A_{\rho}\). Rotation candidates satisfy both of the following criteria: \(|\mu_{\theta}|\geq 0.02\ mas/yr\) and \(A_{\theta}\geq 0.74\), while expansion and contraction candidates satisfy both \(|\mu_{\rho}|\geq 0.058\ mas/yr\) and \(A_{\rho}\geq 0.4\).
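A rough sketch of the area indicator is given below; it uses an unweighted LOESS fit with a fixed smoothing fraction, rather than the weighted fit with generalised cross-validation described above, so it only approximates our actual computation.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def curve_area(radius, component, frac=0.5):
    """Area under a LOESS fit of |velocity component| vs. projected radius,
    an approximation of the A_theta / A_rho indicators (thresholds quoted
    in the text: A_theta >= 0.74 for rotation candidates, A_rho >= 0.4 for
    expansion/contraction candidates)."""
    fit = lowess(np.abs(component), radius, frac=frac, return_sorted=True)
    return np.trapz(fit[:, 1], fit[:, 0])
```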
From the initial set of 1237 reconstructions, applying these criteria results in 98 candidates with detections of kinematic patterns. A final inspection of the candidates was performed to identify and remove spurious reconstructions, resulting in a list of 53 clusters. In Appendix A we summarise the detected patterns of these 53 clusters. In Appendix C, we show the spatio-kinematic diagrams for all these clusters.
Within the 53 clusters, eight show clear rotation, with nine other objects presenting a less clear rotation signal. Expansion is clearly seen in 14 objects, while two display contraction. An additional 15 objects and another one show less clear expansion and contraction patterns, respectively. Finally, there are 17 objects
for which the reconstruction showed spatio-kinematic patterns not compatible with any of the above behaviours (rotation, contraction, expansion). These unexpected patterns can appear due to factors such as remaining field contamination, errors in catalogued cluster centres, asymmetries introduced by variable extinction, external gravitational disturbances, or multiple populations with different kinematics. Multiple populations may appear because of the alignment of more than one cluster along the same line of sight (e.g. Trumpler 22), or from substructure expected in younger groups. This last possibility seems to be the case for objects such as NGC 2244, NGC 6193, NGC 6531, NGC 6871, FSR 0904, Gulliver 9, IC 1396, van den Bergh 92, Stock 8, and Trumpler 16, all with age estimates \(\lesssim 20\) Myr (e.g. Bossini et al. 2019; Dias et al. 2021). The age spread of clusters presenting unexpected reconstructions is, however, large, spanning almost the entire age interval of the sample considered in this work. We note that contraction and expansion patterns can appear as artefacts created by non-homogeneous distributions of the available samples of cluster members and/or under- or over-corrected perspective effects due to the bulk radial velocity of the cluster. This under- or over-correction can be seen as a source of concern for the detection of the effect in some objects, as it is driven by the cluster radial velocities, which in many cases are ill-constrained, with errors at the level of several km/s. For the clusters for which we report the detection of a kinematic pattern, we used radial velocities from Dias et al. (2021), which were double-checked. Finally, we note that within the group of 53 clusters for which we have found kinematic signals, we identify clear elongated spatial distributions, suggesting tidal tails, in five clusters: Platais 3, Platais 8, NGC 6991, Mamajek 4, and IC 4655.
The age distribution of the clusters with detected spatio-kinematical patterns is presented in Fig. 3. The age determinations used therein are mostly from Bossini et al. (2019), complemented by Monteiro & Dias (2019) for Alessi 13 and by Kharchenko et al. (2005) for Alessi 9. Although the spread is large, the age distribution of clusters presenting rotation patterns shows two groups, a younger one with \(\log_{10}(\mathrm{age})\sim 7.5-8\) and an older group at \(\log_{10}(\mathrm{age})\sim 8.5-9\). At the age ranges of these groups, there appears to be a marginal tendency to favour rotation at the younger ages if only certain detections are included in the analysis (upper panel of Fig. 3). One possible mechanism could be the enhanced destruction of clusters that rotate in the same direction as the Galactic rotation (Ossipkov 2014). Thus, with time the disruption of clusters with unfavourable rotation would lead to a smaller fraction of older rotating clusters. However, if we consider the cases with possible rotation (middle panel of Fig. 3), this picture becomes blurred. At this moment, we consider it an observational suggestion for which a statistical or physical explanation requires further investigation.
The correlation of the ages and the possible kinematic patterns detected in this sample, including the less certain pattern detections, is represented in Fig. 4. This figure indicates that more than half of the clusters for which a possible rotation was detected presented no detectable expansion or contraction pattern in the reconstructions based on Gaia EDR3 data. It also indicates that half of the objects older than 100 Myr in this sample are possibly rotating, and that most objects that are possibly rotating and at the same time might be expanding are younger than \(\sim 100\) Myr. Finally, Fig. 4 also indicates that no rotation pattern was detected for the majority of objects with a possible detection of expansion, and that the large majority of these objects have ages \(\lesssim 100\) Myr.
To validate the results, we also performed an additional test using a different method and implementation of the reconstruction. We created reconstructions using a simpler spatial Gaussian process with a Laplace approximation and an exponential spatial correlation structure, with its length determined by generalised cross-validation. The results obtained with this simpler method were similar to those resulting from INLA, in part because the correlation structure and the posterior approximation are similar. Other methods that can be adopted for spatially correlated vector field reconstructions, such as \(\epsilon-\)support vector regression with matrix-valued kernels (e.g. Macedo & Castro 2008), can further provide interesting physical information, as the stars acting as support vectors could be interpreted as naturally indicating boundaries that define distinct kinematic behaviours in the OCs. Such methods can also provide faster estimation, perhaps enabling internal kinematics and dynamics to be considered in iterative cluster membership analyses such as UPMASK, in addition to a rigorous vector formulation providing the possibility to enforce curl and divergence properties as optional constraints. This is, however, at the expense of the posterior distribution inference, as these methods are based on strict mathematical optimisation paradigms. Finally, methods such as NIFTy (Selig et al. 2013; Arras et al. 2019) may enable the Gaussian approximation for the posterior to be relaxed, and also provide conditions to be placed on the power spectrum of the distribution and full three-dimensional inference, possibly including reconstructions of the internal positions of the objects within the cluster, in exchange for higher computational complexity. These avenues remain to be explored in future works making use of the upcoming Gaia Data Releases.
Figure 3: Upper plot: Age distribution of the clusters for which the detection of a rotation pattern was more certain. Middle plot: Same distribution, but including the clusters for which the rotation detection is much less certain – with the inclusion of these possible rotation cases, the age distribution of the clusters with detected rotation just seems to raise, without significantly changing its shape. Bottom plot: Cluster age distribution for objects with some apparent pattern of contraction or expansion.
## 5 Conclusions
We report the detection of rotation patterns in eight OCs, with another nine possibly rotating, from Gaia EDR3 data. Additionally, we also detected expansion in 14 OCs and contraction in two, with an additional 15 objects possibly expanding and one possibly contracting. In addition to the kinematic patterns, we also identify clear elongated spatial distributions in five clusters, suggesting tidal tails. The signals reported here are above EDR3 systematic error levels, suggesting that there are many more interesting objects and effects yet to be revealed with the increased astrometric accuracy and precision of the upcoming Gaia Data Releases.
To detect these patterns, we implemented a method to reconstruct OC proper motion vector fields using INLA. The method reveals spatial correlations in vector fields, which, in the case of astronomical objects such as OCs, are expected to exist for physical reasons due to the object kinematics and dynamics. We applied this method to astrometric data of OC members derived from Gaia data, resulting in the detection of objects with clear and interesting patterns in their internal proper motion fields, corresponding to the detection of systematic internal motions of stars within such a large set of OCs.
The vector-field reconstruction methods used in this work represent another step in the kinematic and dynamical study of star clusters. Application of these methodologies to precise astrometry from the upcoming Gaia Data Releases and proposed missions such as JASMINE (Gouda, 2011), GaiaNIR (Hobbs et al., 2016), and Theia (The Theia Collaboration et al., 2017), open a path for future dynamical studies of astronomical systems, such as stellar clusters, streams, nearby dwarfs, the entire Milky Way, or even flows of larger-scale cosmological structures.
###### Acknowledgements.
We wish to thank the anonymous referee for constructive comments. This work was partially supported by the Portuguese Fundacao para a Ciencia e a Tecnologia (FCT) through the Portuguese Strategic Programme IDID/FIS/00099/2020 for CENTRA. AAM additionally acknowledges the support from the Portuguese Fundacao para a Ciencia e a Tecnologia (FCT) through grants SFRH/BPD/74697/2010, PTDC/FIS-AST/31546/2017, EXPL/FIS-AST/1368/2021, and from the Caltech Division of Physics, Mathematics and Astronomy for hosting research leaves during 2017-2018 and 2019, when some of the ideas and codes underlying this work were initially developed. This work has made use of results from the ESA space mission _Gaia_, the data from which were processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. The _Gaia_ mission website is [http://www.cosmos.esa.int/gaia](http://www.cosmos.esa.int/gaia). Some of the authors are members of the _Gaia_ Data Processing and Analysis Consortium (DPAC). This research has made use of data obtained from the GES Data Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council. Funding for RAVE (www.rave-survey.org) has been provided by institutions of the RAVE participants and by their national funding agencies. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work has made use of APOGEE data. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg),
Max-Planck-Institut fur Astrophysik (MPA Garching), Max Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This work has made use of GALAH data, based on data acquired through the Australian Astronomical Observatory, under programmes-X/2013B/13 (The GALAH pilot survey). A/2014/A25, A/2015/A19, and A2017/A18(The GALAH survey). We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present.
|
2309.13185 | Visualizing Topological Importance: A Class-Driven Approach | This paper presents the first approach to visualize the importance of
topological features that define classes of data. Topological features, with
their ability to abstract the fundamental structure of complex data, are an
integral component of visualization and analysis pipelines. However, not all
topological features present in data are of equal importance. To date, the
default definition of feature importance is often assumed and fixed. This work
shows how proven explainable deep learning approaches can be adapted for use in
topological classification. In doing so, it provides the first technique that
illuminates what topological structures are important in each dataset in
regards to their class label. In particular, the approach uses a learned metric
classifier with a density estimator of the points of a persistence diagram as
input. This metric learns how to reweigh this density such that classification
accuracy is high. By extracting this weight, an importance field on persistent
point density can be created. This provides an intuitive representation of
persistence point importance that can be used to drive new visualizations. This
work provides two examples: Visualization on each diagram directly and, in the
case of sublevel set filtrations on images, directly on the images themselves.
This work highlights real-world examples of this approach visualizing the
important topological features in graph, 3D shape, and medical image data. | Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa | 2023-09-22T21:20:41Z | http://arxiv.org/abs/2309.13185v1 | # Visualizing Topological Importance: A Class-Driven Approach
###### Abstract
This paper presents the first approach to visualize the importance of topological features that define classes of data. Topological features, with their ability to abstract the fundamental structure of complex data, are an integral component of visualization and analysis pipelines. However, not all topological features present in data are of equal importance. To date, the default definition of feature importance is often assumed and fixed. This work shows how proven explainable deep learning approaches can be adapted for use in topological classification. In doing so, it provides the first technique that illuminates what topological structures are important in each dataset in regards to their class label. In particular, the approach uses a learned metric classifier with a density estimator of the points of a persistence diagram as input. This metric learns how to reweigh this density such that classification accuracy is high. By extracting this weight, an importance field on persistent point density can be created. This provides an intuitive representation of persistence point importance that can be used to drive new visualizations. This work provides two examples: Visualization on each diagram directly and, in the case of sublevel set filtrations on images, directly on the images themselves. This work highlights real-world examples of this approach visualizing the important topological features in graph, 3D shape, and medical image data.
Topological Data Analysis, Persistence Diagrams, Metric Learning, Classification
## 1 Introduction
Topological data analysis (TDA) [17] is a crucial component of many data analysis and visual analytics pipelines. Features from TDA, extracted using persistent homology [18], contour trees [8], Reeb graphs [4, 40], and Morse(-Smale) complexes [22, 14], provide insights into data structure in applications ranging from physics [28, 21, 36, 6] and chemistry [3, 20] to medicine [32, 33, 34, 35], to name a few. As a natural consequence of the importance of these features, researchers often want to use data topology to drive analysis tasks such as classification.
However, there is little intuition about which topological features are important to define a class, or to what degree. For example, it is commonly assumed that the persistence (lifetime in function value) of a feature is a good weight for importance, as low-persistence features' ephemeral lifetimes are often attributed to noise. But, contrary to this assumption, work [41, 7] has shown that low-persistence features are more important for some types of data. Therefore, the current practice of determining which features to target and which to discount is to assume persistence, make an educated guess, or, worse, determine the correct weights for features as the result of a trial-and-error process. In response to the diversity of datasets, it is necessary to develop visualization approaches that can aid users in understanding which
features define important structures in a dataset.
In this work, we introduce an approach to provide such visualizations. The core of our work is to use proven, explainable deep learning methods from computer vision on unweighted, vectorized density estimators of the points in persistence diagrams. Our metric learning approach automatically learns the regional importance of topological features in a diagram and the weights on densities that are necessary for accurate classification. Rather than assume importance weights (persistence) or find them through trial-and-error, we learn them. As a result of using explainable deep learning, our approach provides an importance field over a diagram that allows TDA researchers for the first time to determine which features define a class and which do not matter. As an initial step towards interpreting topological features, understanding the importance field across a diagram has dual benefits. It enhances both the field of TDA-based machine learning and TDA-based visualization by encoding more meaningful topological information. Using this field, new visualizations can be designed to illuminate the critical features but also challenge any preconceived assumptions about fundamental structure in data. For example, as our results will show, the commonly assumed single, uniform weighting strategy on diagram points is insufficient as importance varies by both class and dataset. While using deep learning for classification with interpretation is not new, this work is the first to apply such an approach to topological features and use said result to visualize topological importance.
This work has the following novel contributions:
* A field over the space of a dataset's persistence diagram that highlights regions of importance in defining a class;
* An approach that utilizes this field to visualize feature importance directly for zero-dimensional features of a sublevel-set filtration of a scalar field;
* A deep metric learning approach for classification using topological features that outperforms the accuracy of the state-of-the-art topological-based methods; and
* Examples of our visualization approach highlighting, for the first time, the importance of topological features for classes of graph, shape, and medical data.
## 2 Background and Related Work
In this section, we begin with a brief introduction to an abstraction of topological features widely used in TDA, persistence diagrams. We then detail how these diagrams are extended to encode a richer set of features. For more details on this concept, we refer the reader to [17]. Next, we discuss the weight functions on topological features and the need to learn these weights, vectorizations of persistence diagrams, and a brief overview of the approaches to visualize topological features.
### _Topological Features and Persistence Diagrams_
Homology is a concept from algebraic topology that describes the _holes_ (connected components, cycles, voids, etc.) of a topological space. Notationally, for each integer \(k\), we let \(H_{k}(X)\) denote the \(k\)-th homology group of a domain, \(X\); see [23, 37] for details. For our purposes, we use \(\mathbb{Z}_{2}\) coefficients, and so, the \(k\)-dimensional homology groups are vector spaces that describe the \(k\)-dimensional holes of \(X\).
A _filtration_ is an ordered family of topological spaces, connected by inclusion maps between them. For example, if we let \(X_{i}:=\{x\in X\mid f(x)<\alpha_{i}\}\) denote the sublevel set of \(f\) with respect to \(\alpha_{i}\in\mathbb{R}\), then for increasing values \(\alpha_{1}\leq\cdots\leq\alpha_{n}\) we get a nested sequence of topological spaces, \(X_{1}\subseteq X_{2}\subseteq\cdots\subseteq X_{n}=X\). The inclusion \(X_{i}\hookrightarrow X_{j}\) for \(i<j\) induces a linear map \(H_{k}(X_{i})\to H_{k}(X_{j})\) between the corresponding \(k\)-th homology groups. The two most common filtrations are the sublevel sets of scalar functions (e.g., for image data) and the evolution of a Vietoris-Rips complex for unstructured data (e.g., for point clouds) [24, 51].
Persistent homology [17], roughly, encodes the lifetime of a homological feature in this nested sequence. Homology provides a static description of topology, while persistent homology captures topology evolution over multiple scales through filtration and tracks changes in homology groups. This is accomplished by recording where a feature first appears (_birth_) and where it is subsumed by an older feature (_death_). Plotting this lifetime in \(\mathbb{R}^{2}\) (birth as the \(x\)-coordinate and death as the \(y\)-coordinate) gives a _persistence diagram_. The diagram \(D\) is composed of a set of points in the plane, where each point \((b,d)\) represents a feature. The feature corresponds to a \(k\)-dimensional homological structure that is created at the filtration value \(X_{b}\) and destroyed at the filtration value \(X_{d}\). In the case of a sublevel set filtration of a scalar field, these coordinates are always function values of critical points. For example, births for zero-dimensional (0D) features (\(H_{0}\), i.e., connected components) occur at local minima. We call the difference between
Fig. 3: An illustration of an extended persistence diagram. (a) A graph with height filtration, where each node is associated with a filter value. (b) The ordinary and extended barcode.1 (c) The persistence diagram with extended filtration. (d) The persistence diagram with standard filtration. The extended persistence diagram highlights the effectiveness of the extended filtration function in capturing the additional topological information beyond the standard filtration.
Fig. 2: The progression of a sublevel-set (\(L_{i}^{-}\)) of a scalar field for increasing function values (\(i\)). The green feature is born at the minimum introduced at \(2\), and dies when it merges with an older feature (shown in purple) at 5. The birth and death are represented as a point in the 0D persistence diagram \((2,5)\). The lifetime \((5-2=3)\) of this feature is its persistence.
the birth time (\(b\)) and the death time (\(d\)), \(|d-b|\), the _persistence_ of the feature. See Fig. 2.
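Such diagrams are straightforward to compute in practice. The sketch below uses the GUDHI library (our choice for illustration; no specific tool is prescribed here) to obtain the 0D sublevel-set persistence of a small scalar field:

```python
import numpy as np
import gudhi

def sublevel_diagram_0d(scalar_field):
    """0D persistence diagram of the sublevel-set filtration of a 2D scalar
    field, returned as (birth, death) pairs."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=np.asarray(scalar_field))
    cc.compute_persistence()
    return cc.persistence_intervals_in_dimension(0)

field = np.array([[3., 2., 3.],
                  [2., 5., 2.],
                  [3., 2., 4.]])
# One component is born at each local minimum; the oldest never dies (death = inf).
print(sublevel_diagram_0d(field))
```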
Extended persistence. In the sublevel set filtration, the homology group of every topological space is captured by going upward in function value. However, this filtration may be insufficient in some contexts to encode the topology of the domain \(X\). For example, let \(X\) be a graph; in the case where \(X\) contains cycles, the homology group of \(X\) remains unchanged, as the cycles never die.
To address this limitation, an _extended persistence diagram_[13] is proposed using an extended filtration. This approach ensures that every feature that appears in the space eventually disappears. We use relative homology theory and consider both upward and downward directions. Specifically, we compute the homology group going upward and the relative homology group coming back down. This results in paired births and deaths, where every feature that appears in the space eventually dies, and all births are paired with corresponding deaths.
In persistent homology, the extended filtration distinguishes between three categories of topological features: ordinary features that are born and die going upward, relative features that are born and die going downward, and extended features that are born going upward and die coming downward.
This approach is best illustrated with a simple example. See Fig. 3 (a), which shows a graph with a scalar height function on the nodes. First, we compute the persistence diagram using the standard filtration by going upward. The corresponding topological features with finite lifetime under this filtration are defined as ordinary features; here, the filtration captures two 0D features (i.e., connected components). Specifically, one feature is born at height \(c\) and dies at \(e\), while the other is born at height \(d\) and dies at \(e\). These two features are represented by the yellow and green lines in the ordinary barcode of Fig. 3 (b).
Additionally, three topological features are born and never die under this filtration, namely one 0D feature born at height \(a\) and two 1D features born at heights \(e\) and \(g\). We utilize relative homology theory to pair the death times of these topological features. Intuitively, if such a feature is also created by going downward, then the corresponding time denotes the death time. This is because the downward-created feature represents a relative death time with respect to the upward-created feature that disappears. In the extended barcode of Fig. 3 (b), there is a 0D feature that is born at height \(a\) going upward and dies at height \(f\) coming downward (shown as the blue line). Additionally, there are two unpaired 1D features: one is born at height \(e\) going upward and dies at height \(b\) going downward, while the other is born at height \(g\) going upward and dies at height \(d\) going downward (shown in purple and orange). Note that 1D features are born by going "up" and die by going "down." Therefore their birth time is larger than their death time. These features are encoded below the diagonal of the persistence diagram, as represented by the purple and orange diamonds in Fig. 3 (c).
Compared with the persistence diagram under the standard filtration in Fig. 3 (d), we observe that the extended persistence diagram in Fig. 3 (c) captures additional topological features. Specifically, the extended persistence diagram pairs three 0D and 1D topological features that are not paired in the standard filtration.
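Extended persistence can likewise be computed with standard tools. The sketch below uses GUDHI's SimplexTree on a small height-filtered graph; the graph and its filter values are generic placeholders rather than the exact graph of Fig. 3, and the API usage reflects our reading of the library documentation:

```python
import gudhi

heights = [1.0, 2.0, 3.0, 4.0, 5.0]               # filter value per node
edges = [(0, 2), (1, 2), (0, 3), (1, 4), (3, 4)]  # one independent cycle

st = gudhi.SimplexTree()
for v, h in enumerate(heights):
    st.insert([v], filtration=h)
for u, v in edges:
    # An edge appears once both of its endpoints have appeared.
    st.insert([u, v], filtration=max(heights[u], heights[v]))

st.extend_filtration()  # required before querying extended persistence
ordinary, relative, ext_plus, ext_minus = st.extended_persistence()
print(ordinary)   # features born and dying going upward
print(ext_minus)  # 1D features born going up and dying coming down
```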
Wasserstein Distance. The classic distance between persistence diagrams is the \(p\)-Wasserstein distance [12]. At a high level, given two diagrams, this distance accumulates the cost of an optimal point-wise matching between the points of the two diagrams. The diagonals of the persistence diagrams are also viewed as having an infinite number of points. As low persistence points are close to the diagonal, they do not significantly add to the accumulation when not matched. High persistence features are far from the diagonal and therefore incur a steeper penalty when they do not have a good match. Therefore, this distance naturally encodes persistence as a measure of importance.
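For illustration, and assuming GUDHI's Wasserstein module (built on the POT optimal-transport backend) is available, the distance between two toy diagrams can be computed as follows; note how the extra near-diagonal point in the second diagram contributes little to the cost:

```python
import numpy as np
from gudhi.wasserstein import wasserstein_distance

d1 = np.array([[0.0, 4.0], [1.0, 3.0]])
# d2 adds a low-persistence point (2.0, 2.1) close to the diagonal.
d2 = np.array([[0.0, 4.1], [1.1, 3.0], [2.0, 2.1]])

print(wasserstein_distance(d1, d2, order=1.0, internal_p=2.0))
```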
### _Weighting Topological Features_
Weighting topological features is essential for extracting more meaningful and relevant information from complex topological structures. Traditionally, the weight function is defined as the persistence of a feature, but as mentioned, persistence may not always be the most appropriate weight. Moreover, uniformly weighting all data does not account for any variance in the importance of topological features with respect to a dataset or class label. For instance, Hofer et al. [25] also noticed that the weight function of a persistence diagram should not be pre-fixed (i.e., weighted based on persistence). Similarly, both Harish et al. [16] and Hamish et al. [9] proposed methods that enable users to interactively define the importance of topological features. However, these methods require prior domain knowledge and do not integrate with any learning approaches. Zhao et al. [56] have also shown a real-world scenario in the atomic configurations of molecules where low persistence features are most important and, therefore, should be given a larger weight. Finally, Riihimaki and Licon-Saliaz [43] also highlighted the significance of low persistence features in their design of contour metrics for topological features.
### _Persistence Images_
In order to utilize topological features for downstream tasks, such as machine learning, it is necessary to transform them into vector representations. To accomplish this, several methods have been proposed that convert topological features into vectors [10, 1, 2, 7, 31]. One such vectorization is a _persistence image_, which is used by our approach.
Given a persistence diagram \(D\) in birth-death \((b,d)\) coordinates, let \(T:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}\) be the linear transformation \(T(b,d)=(b,d-b)\), and let \(T(D)\) be the transformed multiset in birth-persistence coordinates 2. Let \(\phi_{\mu}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be a differentiable probability distribution with mean \(\mu=(\mu_{b},\mu_{d})\in\mathbb{R}^{2}\) and bandwidth \(\sigma\).
Footnote 2: In our experiments, we exclude points that correspond to features with infinite persistence since they are less informative than features with a defined birth and death time.
The corresponding _persistence surface_ is the function \(\Phi:\mathbb{R}^{2}\rightarrow\mathbb{R}\) defined by \(\Phi(z)=\sum_{\mu\in T(D)}\mathbf{w}(\mu)\phi_{\mu}(z)\) for any \(z\in\mathbb{R}^{2}\), where \(\phi_{\mu}(\cdot)\) is the Gaussian kernel function as described above and \(\mathbf{w}(\cdot)\) is a weight function, which is typically a piecewise linear function. The _persistence image_ [1] is obtained by discretizing \(\Phi\) and taking samples over a fixed regular grid. To be precise, we choose a rectangular region in the plane with a collection of \(n\times n\) pixels, and compute the value of each pixel \(p\) as \(I(D)_{p}:=\iint_{p}\Phi(z)\,dy\,dx\), where \(x\) and \(y\) are the directions of the grid. The resulting image is denoted as \(I(D)\). For simplicity, we drop the function notation and refer to a persistence image as just \(I\).
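A minimal numpy sketch of this construction with the uniform weight \(\mathbf{w}(\cdot)=1\) (the variant used as input to our model) is shown below; for simplicity it samples the persistence surface at pixel centres instead of integrating over each pixel:

```python
import numpy as np

def persistence_image(diagram, res=40, sigma=0.1, extent=(0.0, 1.0, 0.0, 1.0)):
    """Unweighted persistence image: Gaussian density of diagram points in
    birth-persistence coordinates, sampled on a res x res pixel grid."""
    diagram = np.asarray(diagram, dtype=float)
    b = diagram[:, 0]
    p = diagram[:, 1] - diagram[:, 0]             # T(b, d) = (b, d - b)
    xs = np.linspace(extent[0], extent[1], res)
    ys = np.linspace(extent[2], extent[3], res)
    X, Y = np.meshgrid(xs, ys)
    img = np.zeros((res, res))
    for bi, pi in zip(b, p):                      # w(.) = 1 for every point
        img += np.exp(-((X - bi) ** 2 + (Y - pi) ** 2) / (2 * sigma ** 2))
    return img / (2 * np.pi * sigma ** 2)         # Gaussian normalisation
```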
In the original paper, the weight function \(\mathbf{w}(\cdot)\) is defined as the persistence of a feature. Persistence images with such a weight we refer to as _persistence-weighted persistence images_. This weight function is also commonly utilized in other proposed methods for vectorizing topological features. As previously mentioned, persistence may not always be the appropriate weight. To enable a more flexible weight function, Divol et al. [15] first proposed a cross-validation method to select a better weight function of persistence images for different datasets; their results showed that a customized weight function for each dataset leads to better accuracy when using topological representations in classification.
In this work, we also do not assume that persistence is the measure of importance but instead build models to learn the correct weight. Similar work has been pursued by Zhao et al. [56], who proposed a kernel method to learn a similarity metric for persistence images based on class labels. The learned metric on persistence images is then applied to graph classification. However, this work only investigated a non-deep distance metric on topological features, without considering how to interpret the importance of topological features. In contrast, we propose a deep metric learning model, which combines a deep neural network and metric learning. As we show, our deep network approach outperforms this previous work in classification accuracy. More importantly, using a deep metric allows explainable deep learning approaches to extract the importance of the topological features used in the classification. We use this importance to provide, for the first time, a visualization of what topological features define a class.
### Visualizing Topological Features
Persistence diagrams are specialized scatter plots; therefore, their visualization is straightforward and generally has not changed since their inception. The majority of work on visualizing topological features has focused on features that have a direct geometric interpretation. For instance, it is common to visualize manifolds and cells of uniform gradient flow in a Morse-Smale complex or the branching structures of contour trees [8] and Reeb graphs [4, 40]. There has also been work to visualize the generators from homology groups [27, 38] (or cohomology groups [52]) as a way of aiding the analysis of data. Finally, systems [39, 50] for topological analysis allow the visualization of topological features (critical point pairs) embedded directly in the scalar fields that produce them. As mentioned previously, persistence is often the default measure of importance. Therefore, visualizations produced by users of these systems commonly color or resize these pairs based on persistence [19, 30]. In this work, we provide the first approach to visualize a proxy for the actual _importance_ of topological features in classification. In addition, we show how our work can drive in-image visualizations with an approach to illustrating the importance of 0D features of sublevel-set filtrations.
## 3 Learning and Visualizing Topological Importance
As we discussed in Section 2, persistence as a weight for importance is not the best choice for some applications. Rather than assume that importance can be guessed in advance, it is better to build an approach that learns the best weight for topological features. Since we need a basis to learn these weights, we restrict our approach to the classification of known and unknown class labels. A learned weight function will also provide insight into which topological features are important in determining the class label. To accomplish our goal, we propose a deep metric model using a convolutional neural network (CNN) with an attention module. After this model is trained, we utilize explainable machine learning techniques to visualize the importance of topological features. At a high level, our approach has two parts: learning a weight on topological features in Section 3.1 and visualizing the learned weight in Section 3.2.
### Metric Learning for Topological Classification
We use persistence images [1] as initial vectorized density estimators of diagram points. Rather than use the typical persistence weights, we use a uniform weight, \(\mathbf{w}(\cdot)=1\). This allows our CNN to learn how to re-weight the pixels of these _unweighted_ persistence images such that classes are well-separated.
To achieve this goal, we introduce our deep metric learning framework, shown in Fig. 4, which contains the following modules: a CNN with a metric learning loss function, as described in Section 3.1.1, and an attention module, as outlined in Section 3.1.2.
#### 3.1.1 Deep Metric Learning
Here we give a more concrete overview of the deep metric model used in this work. Given a set of labeled unweighted persistence images, the goal is to learn a weight that can distinguish between similar and dissimilar samples. This learned weight is used as the basis for our visualization of topological feature importance. Our model uses a deep neural network to learn a feature vector and then uses a metric loss function to learn a similarity metric based on these features.
We tested two potential CNNs for our deep metric model: one standard CNN and VGG16 [46], which contains 13 convolutional layers. For both of our CNN architectures, we applied an attention module (see Section 3.1.2) for refinement. In our testing, we found that the feature vectors produced for unseen data by the standard CNN were slightly more accurate (+1%) than those of VGG16. Therefore, our deep metric model uses the smaller CNN with six convolutional layers and one fully-connected layer, as shown in Fig. 4.
We use triplet loss as the metric loss function in our model due to its aptitude for learning meaningful feature representations. Triplet loss excels in comparing instances, making it ideal for capturing topological structures. By utilizing anchor, positive, and negative examples, it guides the model to create embeddings that respect data topology. This aligns with our goal of visualizing and classifying topological features. Additionally, triplet loss enables us to integrate domain-specific knowledge by selecting instances strategically, enhancing interpretability and performance.
**Triplet Loss.** This loss is computed using three input examples, chosen at random: 1) a target image \(I_{T}\); 2) a positive example \(I_{P}\) that has the same class label as the target; and 3) a negative example \(I_{N}\) that has a different class label from the target. Following the previous work [26], the triplet loss function \(L(\cdot)\) can be formulated as:
\[L(I_{T},I_{P},I_{N}):=\max(\|f(I_{T})-f(I_{P})\|^{2}-\|f(I_{T})-f(I_{N})\|^{2}+\beta,0),\]
where \(f(\cdot)\) is the learned weight function of the deep learning model and \(\beta\) is the margin for the loss, which sets the minimum distance between positive and negative examples. In the training, positive and negative examples are randomly sampled, given a target image.
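A minimal PyTorch sketch of this loss is given below. It uses squared Euclidean distances to match the formula above; our experiments instead use a cosine-similarity distance with an \(L_{p}\) regulariser (see the parameter details). PyTorch's built-in torch.nn.TripletMarginLoss implements a closely related, non-squared Euclidean variant.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_t, f_p, f_n, beta=0.1):
    """Triplet loss over batches of embedded persistence images f(I)."""
    d_pos = (f_t - f_p).pow(2).sum(dim=1)  # squared distance to the positive
    d_neg = (f_t - f_n).pow(2).sum(dim=1)  # squared distance to the negative
    return F.relu(d_pos - d_neg + beta).mean()

# Example with random 128-dimensional embeddings for a batch of 64 triplets:
f = torch.randn(3, 64, 128)
print(triplet_loss(f[0], f[1], f[2]))
```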
#### 2.1.2 Attention Module
In order to improve the learned weight of our model, an attention module is applied to re-weight the activation map of the CNN, which gives _attentional_ importance to each neuron. An activation map in a CNN is a 2D representation of the output of a specific layer in the network. It shows the level of activation of each neuron in the layer. The attention module integrated into the CNN enables the network to assign different weights to various regions of the activation map, allowing it to concentrate on the most informative areas that were crucial in determining the final classification decision. Attentional importance is inspired by visual neuroscience, where the most informative neurons suppress the activities of the surrounding neurons. This concept is applied to our CNN through an energy function \(e\) that calculates the linear separability between a target neuron and the others to estimate the importance of individual neurons. See [54] for a more detailed description of the energy function and its approximate solution. The energy function enhances our learned weights and visualization by determining the importance of each neuron and re-weighting them accordingly. Specifically, in our testing, we observed that using this function led to a higher classification result (+3%) compared to not using it.
Given an activation map \(A\in\mathbb{R}^{C\times H\times W}\), where \(C\) is the number of channels and \(W\), \(H\) are the width and height of the convolutional layer, respectively, the attention module re-weights the activation map. The new \(\hat{A}\in\mathbb{R}^{C\times H\times W}\) can be calculated as:
\[\hat{A}=\operatorname{sigmoid}\left(\frac{1}{E}\right)\odot A,\]
where \(\odot\) is a scaling operator (multiplication) and \(E\) aggregates all energy function \(e\) values across the channel and spatial dimensions. We add this attention module to the third and the last two CNN layers, as in the original paper.
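For reference, a minimal PyTorch sketch of this re-weighting, following the parameter-free energy formulation of [54]; the regularization constant `lam` is an assumed default:

```python
import torch

def attention_reweight(A, lam=1e-4):
    # A: activation map of shape (B, C, H, W); returns sigmoid(1/E) * A
    n = A.shape[2] * A.shape[3] - 1
    d = (A - A.mean(dim=(2, 3), keepdim=True)).pow(2)
    v = d.sum(dim=(2, 3), keepdim=True) / n  # per-channel spatial variance
    e_inv = d / (4 * (v + lam)) + 0.5        # 1/E for each neuron
    return torch.sigmoid(e_inv) * A
```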
**Parameter Details.** Our deep metric learning model is trained from scratch without fine-tuning. We randomly initialize the model's weights to fully explore its parameter space without the biases or constraints imposed by a pre-existing model. The model inputs are unweighted persistence images with the size of \(40\times 40\) and \(\sigma=0.1\), which are the same parameters used in [56]. Both ordinary and extended persistence diagrams can be used to generate persistence images for our input. To train our deep metric model, we set the learning rate to 0.001 and the batch size to 64. The Adam optimizer is used to speed up the gradient calculation, and dropout regularization is also used to avoid over-fitting. The Rectified Linear Unit (ReLU), \(\max(0,x)\), where \(x\) is the value of the activation map, is used as our activation function because we are only interested in features that have a positive impact on the class label. We use a standard setting for the triplet loss hyperparameters: a margin of 0.1 and cosine similarity to measure the distance between examples in the embedding space. An \(L_{p}\) regularizer term is applied in the triplet loss calculation. For the attention module, we use the same parameter settings as [54]. Our implementation is based on PyTorch.

Figure 4: The architecture of our deep metric model includes a CNN with attention modules and a metric loss function (triplet loss), where the input is unweighted persistence images. The number at the bottom of a layer denotes the number of channels. FC means fully-connected layer and the rectified linear activation function (ReLU) is a piecewise linear function. In training, a target is chosen with a randomly sampled positive and negative example.
### Visualization of Topological Importance
As a direct benefit of using a CNN in our deep metric model, we provide the first approach to visualize the importance of topological features in classification. Note that this approach can be applied not only to our seen, training data, but also to any unseen, new data. In particular, we leverage an explainable CNN method to highlight regions in our input persistence images that contribute the most to the model's decision-making. In this section, we introduce the explainable CNN technique used by our approach, Grad-CAM [44], and how it can be used to create a field describing the importance of topological features. This importance field can be visualized directly, mapped back to the original points in the persistence diagram, or even mapped to features in the original data, as shown below.
#### 2.2.1 Field of Topological Importance
To visualize the learned weight function in our model such that the most significant regions of topological features in the persistence image are highlighted, we apply the Grad-CAM method to the last convolutional layer. The last layer is chosen because its activation maps are the most meaningful, combining information from all other layers.
**Grad-CAM**[44] Given our _attentionally_ weighted activation maps \(\hat{A}\in\mathbb{R}^{C\times H\times W}\), where \(C\) is the number of channels and \(W\), \(H\) are the width and height of the convolutional layer, respectively, let \(A^{c}\subseteq\hat{A}\) refer to the activation map in the \(c\)-th channel produced by the last convolutional layer. We first calculate the gradient of the class label score \(y^{k}\) as \(\frac{dy^{k}}{dA^{c}}\), where \(k\) is the class label and \(y^{k}\) is the predicted probability of \(k\) given by the network.
Then the average gradient of the class label score, \(\alpha\), can be computed under global-average-pooling as:
\[\alpha_{c}^{k}=\frac{1}{M}\sum_{i=1}^{W}\sum_{j=1}^{H}\frac{dy^{k}}{dA^{c}_{ ij}},\]
where \(i\) and \(j\) index the width \(W\) and height \(H\), \(A^{c}_{ij}\) is the activation weight at location \((i,j)\) of the activation map \(A^{c}\), and \(M=H\times W\). Finally, we can weight the activation maps across all channels in the CNN through a linear combination with ReLU:
\[\text{ReLU}(\sum_{c=1}^{C}\alpha_{c}^{k}\cdot A^{c}),\]
where the ReLU function is added to filter out negative influences on the pixel of interest.
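In code, once the activations and class-score gradients of the last convolutional layer have been captured (e.g. with PyTorch hooks, which we omit here), the field is a weighted sum; this sketch shows only the computation above:

```python
import numpy as np

def grad_cam_field(activations, gradients):
    # activations, gradients: arrays of shape (C, H, W) for a class k
    alpha = gradients.mean(axis=(1, 2))                       # global average pooling
    field = (alpha[:, None, None] * activations).sum(axis=0)  # linear combination
    return np.maximum(field, 0)                               # ReLU
```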
_The resulting weighted activation map provides a field over the space of a persistence image that defines regions of importance._ This field can then be used as a proxy to define the importance of topological features. The following paragraphs will describe how to design visualizations of this field, including how it can be used to drive an in-image visualization of pixel importance for sublevel set filtrations.
#### 2.2.2 Visualization of Importance Field
We can visualize the importance field of topological features by directly displaying the weighted activation map as a colored (_magma_) heatmap, similar to how it is done in other works on explainable CNNs. To help orient the visualization with respect to the original persistence diagram, we add the standard diagonal line on our map visualization. To give better intuition on the shape and amount of importance of each region, we overlay _green_ isocontours. We draw contours for three isovalues set to \(50\%\), \(70\%\), and \(90\%\) of the maximum importance weight. For example, the region inside the innermost contour line has importance values greater than \(90\%\) of the maximum importance weight. We can visualize the map directly (see Fig. 6) or as an overlay on the plot of the persistence diagram (see Fig. 7).
**In-image Visualization of the Topological Importance.** To provide better insight into what topological features drive a classification, we show how our importance field can be used to design an in-image visualization of topological feature importance. We target the most interpretable dimension and filtration: 0D features via a sublevel set filtration of an image.
In this case, there is a natural correspondence between 0D topological features and critical pairs (minima and saddles) that define them. This correspondence, combined with our importance field over the space of a diagram, can provide an intuitive visualization of what structure defines a class in original data.
Given an image and its persistence diagram, we can use the diagram to obtain critical pair information for each point \((b,d)\) in the diagram, which corresponds to an interlevel set in the image. Specifically, let \(p_{b}\) be the sublevel set in the image corresponding to the grayscale value \(b\) (the minimum point), and let \(p_{d}\) be the sublevel set corresponding to the grayscale value \(d\) (the saddle point). Each point \((b,d)\) can be used to look up the importance directly in our importance field (discrete, but linearly interpolated). Based on these pixel values, the corresponding pixel locations can be plotted and visualized directly, say by plotting the minimum of each point as in previous work [11, 19, 30, 47]. However, this only provides an intuition of the extremes of a feature, not the 0D topological feature each pair represents. Therefore, for our approach, we visualize features by drawing the interlevel set between \(p_{b}\) and \(p_{d}\) for each pair.
We now discuss how to visualize the interlevel set of 0D topological features. To begin, we consider the sublevel set filtration for the image, which involves constructing a nested sequence of topological spaces based on the image's grayscale values. Given an image with grayscale values ranging from 0 to 255, a function can be defined that assigns each pixel its grayscale value. The sublevel set filtration of the image is then defined as the nested sequence of sublevel sets: \(p_{0}\subseteq p_{1}\subseteq\cdots\subseteq p_{255}\), where the sublevel set \(p_{k}\) corresponds to the set of pixels in the image with grayscale values less than or equal to \(k\).
Given a point \((b,d)\) in the persistence diagram, the corresponding interlevel set can be determined as \(p_{b}\subseteq\cdots\subseteq p_{d}\), where \(p_{b}\) and \(p_{d}\) are critical pairs in the image. This interlevel set captures the birth and death of a 0D topological feature in the image, providing insight into its lifetime. By visualizing the interlevel set, we aim to gain a deeper understanding of the topological features present in the image.
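A hedged SciPy sketch of this extraction: we approximate a feature's interlevel set by the connected component of the sublevel set \(p_{d}\) containing a minimum of value \(b\); the component bookkeeping in our actual implementation may differ:

```python
import numpy as np
from scipy import ndimage

def interlevel_region(image, b, d):
    # region of the 0D feature born at grayscale value b and dying at d
    sub = image <= d                   # the sublevel set p_d
    labels, _ = ndimage.label(sub)
    seeds = np.argwhere(image == b)    # minima with grayscale value b (assumed nonempty)
    comp = labels[tuple(seeds[0])]     # component containing one such minimum
    return labels == comp
```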
We color each interlevel set based on the importance value of the diagram point in our field, again using the _magma_ color map. In cases where an older feature subsumes a younger feature in the filtration, we assign the same color to both features. This is because, in our extraction, the interlevel set of the older feature includes the 0D feature of the interlevel set of the younger feature.
To highlight high-importance regions, we process the set of persistence points, rendering the interlevel sets of their 0D features in inverse order of importance. Thus, the most important regions are in front. Given that our image data has discrete function values, several sets may receive the same importance value. Since they share the same color, their relative ordering does not matter.
## 4 Results
In this section, we demonstrate the effectiveness of our approach in: (1) learning a metric for topological features, such that features are
weighted for accurate classification; and (2) visualizing topological importance such that key structures for the classification are highlighted. The real-world datasets evaluated in our approach are detailed in Section 4.1. We begin with a study in Section 4.2 of a scenario where we assume to have prior knowledge of the meaningful importance weight function (persistence), and show that our method can learn that weight. In practice, however, this prior knowledge cannot be assumed; therefore, importance must be _learned_. We evaluate our learned weight on a variety of datasets and provide topological classification results in Section 4.3. We compare these results against other state-of-the-art approaches and show that they are more accurate. Finally, we provide examples of using the importance field extracted from the learned weight to visualize topological importance in Section 4.4. _All examples in the following figures use data unseen by our model_. Our code is available in an OSF repository. Our importance field is presented with a _magma_ colormap, but to keep our results distinct, we present all other persistence images (weighted, unweighted) using _viridis_.
### Evaluation Datasets
We evaluate our approach on five datasets from graph, shape, and medical imaging, which includes a range of filtration functions and dimensions of topological features.
**3D Shape**[48] This dataset contains 6 different 3D shape classes including faces, human heads, camels, horses, cats, and elephants. There are 1,200 persistence diagrams in total with 200 persistence diagrams for each class. Diagrams of 0D features are produced using the implementation of [11] that uses a Vietoris-Rips filtration.
**PROTEINS**[5]: This graph dataset of protein molecules contains 1,113 graphs with 2 classes: enzymes and non-enzymes. Nodes of each graph are amino acids and edges connect pairs that are less than 6 Angstroms apart. Following [56], the Jaccard-index function on graph edges allows extended persistence diagrams to be computed using sublevel-set and superlevel-set filtration to extract 0D and 1D features.
**COLLAB**[53]: This is a graph dataset denoting scientific collaborations in High Energy Physics, Condensed Matter Physics, and Astro Physics. This set has 5,000 graphs with 3 labels that indicate the research area. Similar to the PROTEINS dataset, extended persistence diagrams with 0D and 1D features were produced for each graph.
**Prostate Cancer**[32]: This set includes 5,182 region-of-interest images from hematoxylin & eosin (H&E) stained histological images with 3 classes that denote the progression of cancer (Gleason score 3, 4, and 5). Persistence diagrams of 0D features were produced for each image via sublevel set filtration using the Giotto-tda library [49].
**Colorectal Cancer**[29]: This is a set of 1,800 region-of-interest images from H&E stained histological images with 9 classes. Similar to the prostate images, diagrams are obtained for 0D features via sublevel set filtration using Giotto-tda library [49].
### Learning Persistence Weights
Our deep metric model is designed to learn the best weight for diagram point density for classification. We present a scenario in which persistence is the appropriate weighting for topological features, and demonstrate how our learned weight can effectively capture "persistence". To evaluate this ability, we generated two synthetic datasets, each containing diagrams of a distinct class, with one high persistence feature present in all members of that class. Additionally, each diagram contains 100 randomly generated low persistence points, representing random noise. This scenario tests the efficacy of using persistence as a measure of importance when one high-persistence feature defines a class amidst low-persistence noise.
Fig. 5 illustrates our results. Fig. 5 (a) gives an example diagram from each class where the important high persistence feature is denoted with a red arrow. Fig. 5 (b) shows the average for all class members of the standard persistence-weighted persistence image where the high persistence features receive a larger weight. Fig. 5 (c) is the average unweighted persistence image that gives the density of points, where all points are considered equal. This is the average image of the inputs to our approach. Note that the diagonal noise dominates as it contains the highest density of points. As persistence is the ideal weight for this scenario, if our approach works as it should, the average importance field produced should be similar to the average persistence-weighted persistence image. Fig. 5 (d) shows that this is the case since high persistence features are deemed important and low persistence features are discounted. Therefore, our approach can learn persistence weighting if that weight is the right one for a dataset, but it is not limited to only considering that measure of importance.
We provide another, real-world example of our approach learning a persistence-based weight in Fig. 6. This dataset consists of 3D shapes and has been previously used for feature tracking based on high-persistence topological features [11]. Given this prior use, we can assume that standard persistence-based weighting would yield satisfactory results. This assumption is supported by the accuracy of persistence-based weighting strategies in our classification results, which are discussed in Section 4.3. As shown in Fig. 6, our deep metric model indeed learns to assign importance based on persistence. High-persistence features are given more weight, while low-persistence features are discounted.
### Learned Weight Accuracy
For our visualization to be effective, the deep metric classifier on which it is based should be accurate. To this end, we evaluate the accuracy of our learned weight by comparing it to other commonly used topological representations in classification. Similar to the state-of-the-art approach in learned topological classification [56], we employ an SVM-kernel classifier and adopt a 90/10 training-test data split. In order to ensure a fair comparison, we use the same training and testing dataset for both our deep metric model and the classifier. Accuracy results are based on the classification of the unseen, test data.

Fig. 5: (a) Example persistence diagrams (PD) for 2 classes with 0D features. Each class has one high persistence point and a random distribution of many low persistence points. In this case, a persistence weight would be ideal for classification. (b) A persistence-weighted persistence image (PI). (c) Given a uniform density distribution, (d) our approach can learn to weight by persistence.

Fig. 6: 3D Shape examples along with a visualization of topological importance for their classification. The top portion of each figure shows the persistence diagram for the 3D shape example with 0D features, while the bottom portion shows our visualization of the importance field. In this case, our deep learning model learns a persistence-like weight of features.
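Concretely, the evaluation amounts to embedding each persistence image with the trained network and fitting a kernel SVM on the embeddings; a scikit-learn sketch (names and the RBF kernel choice are illustrative assumptions):

```python
import torch
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate(model, X_train, y_train, X_test, y_test):
    # X_*: tensors of unweighted persistence images, shape (n, 1, 40, 40)
    model.eval()
    with torch.no_grad():
        e_train = model(X_train).cpu().numpy()  # learned embeddings
        e_test = model(X_test).cpu().numpy()
    clf = SVC(kernel="rbf").fit(e_train, y_train)
    return accuracy_score(y_test, clf.predict(e_test))
```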
We compare our method to other topological representations frequently used in classification, such as persistence diagrams using the 1-Wasserstein distance (W1), persistence-weighted persistence images (PWPI) [1], Betti curves (BC) [42], persistence-weighted Gaussian kernels (PWGK) [31], and sliced Wasserstein kernels (SWK) [10]. Additionally, we compare our method to the previous state-of-the-art in learned weights for topological classification: the weighted persistence image kernel (WKPI) [56]. We use the same parameter settings for persistence images (size \(40\times 40\), \(\sigma=0.1\)) and Betti curve (BC) size (\(40\times 40\)) as described in [56]. Training our method is at least 5 times faster than the WKPI method for learning the weight, as we employ a more streamlined model. To illustrate, when considering the Colorectal dataset with 1800 datapoints, our training process takes approximately 11.25 minutes, while WKPI requires over an hour.
A sensitivity analysis was conducted to evaluate how changes in parameter settings affect the persistence images and Betti curves in prostate cancer image classification. The sizes of the persistence images and Betti curves were varied over the range \([10\times 10,\,100\times 100]\) in increments of 1, and the \(\sigma\) of the persistence image was varied over the range \([0.001,1]\) in steps of 0.001. The results showed that the classification accuracy was not significantly affected, with a difference of less than 1%.
The classification results in Table 1 demonstrate that our approach outperforms traditional W1 distance of persistence diagrams in terms of accuracy. In fact, our approach achieves a significant improvement in accuracy over the next best method in the graph classification task for the COLLAB dataset, with an increase of +6%. Although W1 already provides accurate results for 3D shape classification, our method further improves accuracy to achieve perfect classification results. For the prostate imagery dataset, our method achieves an increase in accuracy of +7% over the next best method, resulting in an overall classification accuracy of 95%. Our approach also yields a +4% improvement in classification accuracy for the colorectal cancer dataset.
Of particular note is our accuracy for the PROTEINS graph classification (87%). This not only outperforms the other representations (an increase of +8% compared to the next best), but also outperforms the best-known machine learning approach [55] (85%) according to the Papers with Code website at the time of submission.
Our results illustrate how our learned weight outperforms approaches that assume persistence as the measure of importance (PWPI, BC, PWGK, SWK). This implies that persistence is not the ideal weight in these datasets. In addition, our approach is comparable to or better than the state-of-the-art in learned weights for topological classification, WKPI. This indicates that our approach's use of a deep learning network is more effective than the previous work.
The accuracy of our classification results motivates our next step: visualizing the importance of topological features in order to understand what topological features are key to defining classes.
However, our importance field visualization highlights that the most important features for classification are the 0D features that are born at low function values, which are similar in both examples. Therefore, our _learned_ metric is superior for classification, and the evaluation results in Table 1 support this conclusion.
Fig. 9 shows the importance field visualization for graph examples from scientific collaboration networks in High Energy Physics, Condensed Matter Physics, and Astro Physics (COLLAB). For each class, the plots show two example persistence diagrams from each class overlaid with the importance field visualization.
For High Energy Physics in Fig. 9, the first example demonstrates that the classification is determined by both low persistence 0D and 1D features that are born and die at low function values. The second example highlights the significance of both low persistence 0D and 1D features as well, while they are born and die at medium function values. In Condensed Matter Physics, both examples have a similar importance field, with low-medium persistence 0D features born at low-medium function values being the most significant. In Astro Physics, both examples show that the 1D features (lower triangle in the diagram) are important in classification, despite being in different ranges of persistence and birth-death locations. This implies that collaboration loops between authors are likely more indicative in Astro Physics than other classes (i.e., Condensed Matter Physics). These examples illustrate the use of a mix of high, medium, and low persistence features in classification, suggesting that a single weighting scheme (persistence, inverse-persistence, or other) would not yield high-quality classifications. This is supported by our quality measures in Table 1.
Our visualization results for medical imaging datasets provide an answer to the third question by demonstrating the in-image visualization of topological importance, which effectively highlights the medically significant structures within the data. This visualization is interpreted by our biomedical collaborators.
In Fig. 1 and Fig. 10, we present the visualization results for digital pathology images of prostate cancer. The dataset includes examples of Gleason 3 to 5, where higher grades correspond to more advanced stages of the disease. A hallmark of this disease is that well-formed prostate glands deteriorate and lose structure, such that at more advanced stages, no glandular structure is present. As our visualization shows, this is indeed the case as the stroma defining the glandular structure are the most important features in the classification of Gleason 3. As cancer progresses to Gleason 4 and as the glands break down, the important features become more cellular, involving both the semi-structural stroma and nuclei. At the final stage of progression in Gleason 5 where glands have entirely deteriorated, no structure is present, and the important features become local/cellular information. These important features are not captured by common measures of importance, such as persistence, and can be identified and visualized for the first time with our approach. For example, prostate calcifications only occur in well-formed glands. In Fig. 1, these are important features in our field (red arrow). Furthermore, as Fig. 10 illustrates, important regions in the diagram vary in different examples, while our in-image visualizations show that they correspond to similar structures. This provides further evidence that a single weighting strategy is not ideal for this dataset. This is supported by the fact that our learned weight achieves 95% accuracy, as shown in Table 1.
Our final example, shown in Fig. 11, demonstrates the application of our method to colorectal cancer image classification with 9 classes. Each class has various structural arrangements and distributions. Our visualization provides a first step toward interpreting the structural differences between them by highlighting the importance of topological features in distinguishing their classes. For instance, normal colon mucosa (NORM) features a uniform and regular arrangement of epithelial cells with glandular structure, while cancer-associated stroma (STR) displays a more disorganized arrangement that disrupts the normal tissue structure. As shown in this figure, our visualization emphasizes such structural importance. Moreover, in comparison with the NORM class, colorectal adenocarcinoma epithelium (TUM) also has glandular structure, but with abnormal epithelial cells. Our in-image visualization highlights the glandular structure in both NORM and TUM, indicating the normal cells in NORM and the abnormal cells in TUM.
Other examples further demonstrate how our visualization aligns with the important structural characteristics of each class. For instance, the background (BACK) example shows that the important features are mainly artifacts, indicating that this class is predominantly noise. Adipose (ADI) tissue comprises adipocytes and connective tissue, and our in-image visualization highlights the connective tissue structure. Debris (DEB) refers to damaged tissue that has broken down, which is also reflected in the lack of structure in the in-image visualization. Next, our importance visualization highlights the fibers found in smooth muscle (MUS) and the sparse tissue of mucus (MUC). Lymphocytes (LYM) are a type of white blood cell, and our in-image visualization emphasizes that the cell structure is crucial in determining class.

Fig. 8: PROTEINS graph classification with class enzymes. (left) Example persistence diagrams with 0D and 1D features overlaid with our visualization of the importance field for the same class, which has the largest W1 distance. (right) Visualization of the corresponding persistence-weighted persistence images.

Fig. 9: COLLAB graph classification with three classes. (left) Example persistence diagrams with 0D and 1D features overlaid with our visualization of the importance field from 3 collaboration networks in physics. (right) Example persistence diagrams overlaid with our visualization of the importance field for the same class.
All persistence diagrams overlaid with our learned importance in this example are distinct, indicating the need to learn the weight function instead of relying on a pre-fixed one (e.g., persistence). This example demonstrates how our visualization can effectively highlight the medically significant structures in a complex dataset, a conclusion supported by feedback from our biomedical collaborators.
## 5 Discussion
In this paper, we introduced the first visualization of the importance of topological features, which includes the visualization of an importance field through the learned weight function and in-image visualization of topological significance. This allows TDA researchers to gain insight into the topological features that drive dataset classification for the first time. Rather than an assumed, fixed weighting, our novel deep metric model optimizes the weight function given labeled data. Furthermore, our model outperforms other topological representations, including those that use persistence-based weights or learned kernel weight functions.
However, our novel approach also has limitations. The persistence image, acting as a density estimator, may lead to density and importance overlapping along the diagonal due to smoothing. This could pose challenges for points near the diagonal on extended diagrams. We demonstrated how our field can drive in-image visualization of 0D features through sublevel set filtrations on images. While our field is dimension and filtration agnostic, mapping diagram points back to original data is not always straightforward. For instance, visualizing generators even for 1D features remains an active research area [27, 35, 38]. Additionally, visualizing importance in unstructured datasets is an open question. These topics offer exciting avenues for future research. For example, our approach could extend ongoing work on visualizing 1D features or enhance visualizations of the Morse-Smale complex.
Our results highlight the variability of topological importance across domains, classes, and datasets. Thus, a single fixed weighting strategy may not be optimal for various datasets. This emphasizes the need for further research in this domain, where our approach serves as the pioneering visualization tool.
Figure 11: Colorectal cancer image examples from 9 classes. Each column is an example image per class, its persistence diagram with 0D features overlaid with our learned importance, and in-image visualization of topological importance.
Figure 10: Prostate cancer medical image classification. Each class has two examples in Gleason 3,4 and 5. From top to bottom are the original image, its persistence diagram with 0D features overlaid with our learned importance, and in-image visualization of topological importance.
## Acknowledgments
This work has received support from the Department of Energy, the National Science Foundation, and the National Institutes of Health (DOE ASC DE-SC0022873, NSF-IIS 2136744, NIH R01GM143789, NSF CCF 2046730, and NSF DMS 1664858).
|
2309.14826 | From CT scans to 4-manifold topology | In this survey paper the ultrahyperbolic equation in dimension four is
discussed from a geometric, analytic and topological point of view. The
geometry centres on the canonical neutral metric on the space of oriented
geodesics of 3-dimensional space-forms, the analysis discusses a mean value
theorem for solutions of the equation and presents a new solution of the Cauchy
problem over a certain family of null hypersurfaces, while the topology relates
to generalizations of codimension two foliations of 4-manifolds. | Brendan Guilfoyle | 2023-09-26T10:46:52Z | http://arxiv.org/abs/2309.14826v1 | # From CT scans to 4-manifold topology
###### Abstract.
In this survey paper the ultrahyperbolic equation in dimension four is discussed from a geometric, analytic and topological point of view. The geometry centres on the canonical neutral metric on the space of oriented geodesics of 3-dimensional space-forms, the analysis discusses a mean value theorem for solutions of the equation and presents a new solution of the Cauchy problem over a certain family of null hypersurfaces, while the topology relates to generalizations of codimension two foliations of 4-manifolds.
Key words and phrases:Ultrahyperbolic equation, neutral geometry, X-ray transform, 4-manifold topology 2010 Mathematics Subject Classification: 53A25,35Q99
_The air is full of an infinity of straight lines and rays which cut across each other without displacing each other and which reproduce on whatever they encounter the true form of their cause._
Leonardo da Vinci
MS. A. 2v, 1490
## 1. Introduction
Our starting point is, as the title suggests, the acquisition of density profiles of biological systems using the loss of intensity experienced by a ray traversing the system. Basic mathematical physics arguments imply that this loss is modelled by the integral of the density function along the ray. One goal of Computerized Tomography is to invert the X-ray transform: reconstruct a real-valued function on \(\mathbb{R}^{3}\) from its integrals over families of lines.
The reconstruction of a function on the plane from its value on all lines, or more generally, a function on Euclidean space from its value on all hyperplanes, dates back at least to Johann Radon [62]. One could argue that Allan MacLeod Cormack's 1979 Nobel prize for the theoretical results behind CAT scans [11] is the closest that mathematics has come to winning a Nobel prize, albeit in Medicine. The choice of axial rays reduces the inversion of the X-ray transform to that of the Radon transform over planes in \(\mathbb{R}^{3}\)[43].
The basic problems of tomography - acquisition and reconstruction - arise far more widely than just medical diagnostics, finding application in industry [74], geology [70], archaeology [58] and transport security [56]. Indeed, advances in CT technology, trialed in Shannon Airport recently, could warrant the removal of the 100ml liquid rule for airplane travellers globally [63].
Rather surprisingly, sitting behind the X-ray transform and its many applications is a largely unstudied second order differential equation: the ultrahyperbolic equation. For a function \(u\) of four variables \((X_{1},X_{2},X_{3},X_{4})\) the equation is
\[\frac{\partial^{2}u}{\partial X_{1}^{2}}+\frac{\partial^{2}u}{\partial X_{2}^{ 2}}-\frac{\partial^{2}u}{\partial X_{3}^{2}}-\frac{\partial^{2}u}{\partial X_ {4}^{2}}=0. \tag{1}\]
The reasons for the relative paucity of mathematical research on the equation despite the link to tomography will be discussed below.
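As a quick sanity check, one can verify symbolically that the neutral analogue of the \(1/r^{2}\) potential, namely \(u=(X_{1}^{2}+X_{2}^{2}-X_{3}^{2}-X_{4}^{2})^{-1}\), solves equation (1) away from the cone where it is singular; a short SymPy computation:

```python
import sympy as sp

X1, X2, X3, X4 = sp.symbols('X1 X2 X3 X4', real=True)
Q = X1**2 + X2**2 - X3**2 - X4**2
u = 1 / Q  # singular on the null cone Q = 0
box_u = (sp.diff(u, X1, 2) + sp.diff(u, X2, 2)
         - sp.diff(u, X3, 2) - sp.diff(u, X4, 2))
print(sp.simplify(box_u))  # prints 0
```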
The purpose of this mainly expository paper is to describe recent research on the ultrahyperbolic equation, its geometric context and its applications. It turns out that the ultrahyperbolic equation is best viewed in terms of a conformal class of neutral metrics and that in this context it advances new paradigms that can contribute to the understanding of four dimensional topology. We now discuss the mathematical background of this undertaking before giving a more detailed summary of the paper.
### Background
The _X-ray transform_ of a real-valued function on \(\mathbb{R}^{3}\) is defined by taking its integral over (affine) lines of \(\mathbb{R}^{3}\). That is, given a real function \(f:\mathbb{R}^{3}\to\mathbb{R}\) and a line \(\gamma\) in \(\mathbb{R}^{3}\), let
\[u_{f}(\gamma)=\int_{\gamma}fdr,\]
where \(dr\) is the unit line element induced on \(\gamma\) by the Euclidean metric on \(\mathbb{R}^{3}\).
Thus we can view the X-ray transform of a function \(f\) (with appropriate behaviour at infinity) as a map \(u_{f}:\mathbb{L}(\mathbb{R}^{3})\to\mathbb{R}:\gamma\mapsto u_{f}(\gamma)\), where \(\mathbb{L}(\mathbb{R}^{3})\), or \(\mathbb{L}\) for short, is the space of oriented lines in \(\mathbb{R}^{3}\). Here we pick an orientation on the line to simplify later local constructions, much as Leonardo does when invoking _rays_ as distinct from lines, and note that the space \(\mathbb{L}\) double covers the space of lines.
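Numerically, the transform is simply a line integral; a minimal NumPy sketch for a rapidly decaying density \(f\) (the truncation radius and sample count are ad hoc choices):

```python
import numpy as np

def xray_transform(f, point, direction, r_max=10.0, n=2001):
    # integrate f along r -> point + r * direction, with |direction| = 1
    r = np.linspace(-r_max, r_max, n)
    pts = point[None, :] + r[:, None] * direction[None, :]  # (n, 3) samples
    return np.trapz(f(pts), r)

# example: an isotropic Gaussian density
f = lambda p: np.exp(-np.sum(p**2, axis=1))
u = xray_transform(f, np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```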
In comparison, the _Radon transform_ takes a real-valued function on \(\mathbb{R}^{3}\) and integrates it over _planes_ in \(\mathbb{R}^{3}\). By elementary considerations, the space of affine planes in \(\mathbb{R}^{3}\) is three dimensional, equal to the dimension of the underlying space, while the space of oriented lines is four dimensional.
Thus, by dimension count, if we consider the problem of inverting the two transforms, given a function on planes one can reconstruct the original function on \(\mathbb{R}^{3}\), while the problem is over-determined for functions on lines. The consistency condition for a function on line space to come from an integral of a function on \(\mathbb{R}^{3}\) is exactly the ultrahyperbolic equation [46].
Viewed simply as a partial differential equation, equation (1) is neither elliptic nor hyperbolic, and so many standard techniques of partial differential equations do not apply. Indeed, in early editions of their influential classic _Methods of Mathematical Physics_, Richard Courant and David Hilbert showed that the ultrahyperbolic equation in \(\mathbb{R}^{2,2}\) has an ill-posed Cauchy boundary value problem when the boundary has Lorentz signature, thus dismissing the equation as unphysical in a mechanical sense.
It was Fritz John who in 1937 proved that, to the contrary, the ultrahyperbolic equation can have a well-posed characteristic boundary value problem if the boundary 3-manifold is assumed to be _null_, rather than Lorentz [46]. Later editions of Courant and Hilbert's book acknowledge John's contribution and his discovery of
the link to line space, but study of the ultrahyperbolic equation never took off in the way that it did for elliptic and hyperbolic equations.
On the other hand, by reducing the X-ray transform to the Radon transform for certain null configurations of lines, Cormack side-stepped the ultrahyperbolic equation altogether. Moreover, for applied mathematicians, the equation, or its associated John's equations, arises mainly as a compatibility condition if more than a 3-manifold's worth of data is acquired. Its possible utility from that perspective therefore is to check such excess data, rather than to help reconstruct the function.
Our first goal, contained in Section 2, is the geometrization of the ultrahyperbolic equation. In particular, we view it as the Laplace equation of the canonical metric \(\mathbb{G}\) of signature \((++--)\) on the space \(\mathbb{L}\) of oriented lines in \(\mathbb{R}^{3}\)[36]. The fact that \(\mathbb{G}\) is conformally flat and has zero scalar curvature means that a conformal multiple of a harmonic function satisfies the flat ultrahyperbolic equation (1). Fritz John did not explicitly use the neutral metric, at the cost of introducing unmotivated multiplicative factors into his calculations, factors that can now be identified with the conformal factor of the metric.
The introduction of the neutral metric not only clarifies the ultrahyperbolic equation, but it highlights the role of the conformal group in tomography. Properties such as conformal flatness of a metric, zero distance between points or nullity of a hypersurface are properties of the conformal class of a metric. Moreover, mathematical results can be extended by applying conformal maps [9].
Section 2 describes how these neutral conformal structures arise in the space of oriented geodesics of any 3-dimensional space-form, namely \(\mathbb{R}^{3}\), \(\mathbb{S}^{3}\) and \(\mathbb{H}^{3}\). The commonality between these three spaces allows one to apply many of the results (mean value theorem, doubly ruled surfaces, null boundary problems) to non-flat spaces. Surprisingly, electrical impedance tomography calls for negative curvature and so tomography in hyperbolic 3-space is not quite as fanciful as it may at first seem - see [4]. The link between the ultrahyperbolic equation and the neutral metric on the space of oriented geodesics in \(\mathbb{H}^{3}\) as given in Theorem 8 is new and so the full proof is given below.
In Section 3 conformal methods are used to extend both a classical mean value theorem and its interpretation in terms of doubly ruled surfaces in \(\mathbb{R}^{3}\). Aside from the discussion of the conformal extension of the mean value theorem, the section contains a new geometric formula for a solution of the ultrahyperbolic equation given only values on the null hypersurface formed by lines parallel to a fixed plane. In fact, this example was considered by John, but the geometric version we present using the null cone of the neutral metric has not appeared elsewhere.
The final Section turns to global aspects of complex points on Lagrangian surfaces in \(\mathbb{L}\) and an associated boundary value problem for the Cauchy-Riemann operator. This proof of the Caratheodory Conjecture using the canonical neutral metric on the space of oriented lines [35] is under review, but significant parts of the arguments have now appeared in print. In particular, the essence as to _why_ the Conjecture is true - namely the size of the Euclidean group - has been established [30] and shown to be sharp [26].
The efficacy of second order parabolic methods in higher codimension has also been proven in this context for both interior [32] and boundary problems [28]. The final argument hinges on the technical point as to whether a hyperbolic angle condition in codimension two in dimension four can be made _sticky_
enough to confine the boundary of a line congruence evolving under mean curvature flow. This is the sole remaining part of the proof under review.
Having established the _why_, this approach to the Caratheodory Conjecture also lends itself to other independent methods of completion - one needs only to establish the existence of enough holomorphic discs attached to a given Lagrangian surface and the Conjecture follows. Indeed, a local index bound [34] and a conjecture of Toponogov [31] would also follow from existence of such families. This could be proven, for example, by the use of the method of continuity and pseudo-holomorphic curves [25], which would be a first order rather than second order proof. In any event, the acceptance that this infamous Conjecture has been finally put to rest will probably only come about when it has been proven at least twice.
A positive outcome of these developments has been the first application of differential geometry in the theory of complex polynomials: the index bound for an isolated umbilic point on a real analytic surface has been shown to restrict the number of zeros inside the unit circle for a polynomial with self-inversive second derivative [29]. This and related issues are discussed in more detail in Section 4.
The reason codimension two has a special significance in four dimensional topology is briefly discussed, and the final section considers topological obstructions to neutral metrics as applied to closed 4-manifolds. In the case where the 4-manifold is compact with boundary, many open questions remain about what geometric information from a neutral metric can be seen at the boundary, and whether, for a neutral 4-manifold with null boundary, coming full circle, it is possible to X-ray the inside and explore its topology.
## 2. The Geometry of Neutral Metrics
This section discusses the geometry of metrics of indefinite signature \((++--)\). While the study of positive definite metrics and Lorentz metrics are very well-developed, the neutral signature case is less well understood, even in dimension four. Rather than the general theory, of which [13] is a good survey, the section will focus on spaces of geodesics and the invariant neutral structures associated with them.
### The Space of Oriented Lines
The space \(\mathbb{L}\) of oriented lines (or _rays_) of Euclidean \(\mathbb{R}^{3}\) can be identified with the set of tangent vectors of \(\mathbb{S}^{2}\) by noting that
\[\mathbb{L}=\{\vec{U},\vec{V}\in\mathbb{R}^{3}\mid\ |\vec{U}|=1\ \text{ and }\ \vec{U}\cdot\vec{V}=0\ \}=T\mathbb{S}^{2}, \tag{2}\]
where \(\vec{U}\) is the direction vector of the line and \(\vec{V}\) the perpendicular distance vector to the origin.
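Passing from a parametrized line to the \((\vec{U},\vec{V})\) description of (2) is elementary; a small NumPy helper, for illustration:

```python
import numpy as np

def line_to_UV(p, u):
    # oriented line through the point p with direction u -> (U, V) as in (2)
    U = u / np.linalg.norm(u)
    V = p - np.dot(p, U) * U  # foot of the perpendicular from the origin
    return U, V               # U.V = 0 by construction
```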
Topologically, \(\mathbb{L}\) is a non-compact simply connected 4-manifold which can be viewed as the two dimensional vector bundle over \(\mathbb{S}^{2}\) with Euler number two. One can see the Euler number by taking the zero section, which is the 2-sphere of oriented lines through the origin and perturbing it to another sphere of oriented lines (the oriented lines through a nearby point, for example). The two spheres are easily seen to intersect in two oriented lines, hence the Euler number of the bundle is two.
This space comes with a natural projection map \(\pi:\mathbb{L}\to\mathbb{S}^{2}\) which takes an oriented line to its unit direction vector \(\vec{U}\). In fact, there is a wealth of canonical geometric structures on \(\mathbb{L}\), where canonical means invariant under the Euclidean
group. These include a neutral Kahler structure, a fibre metric and an almost paracomplex structure. All three have a role to play in what follows and so we take some time to describe them in detail.
To start with the Kahler metric on \(\mathbb{L}\), one has
**Theorem 1**.: _[_36_]_ _The space \(\mathbb{L}\) of oriented lines of \(\mathbb{R}^{3}\) admits a metric \(\mathbb{G}\) that is invariant under the Euclidean group acting on lines. The metric is of neutral signature \((++--)\), is conformally flat and scalar flat, but not Einstein._
_It can be supplemented by a complex structure \(J_{0}\) and symplectic structure \(\omega\), so that \((\mathbb{L},\mathbb{G},J_{0},\omega)\) is a neutral Kahler 4-manifold._
Here the complex structure \(J_{0}\) is defined at a point \(\gamma\in\mathbb{L}\) by rotation through \(90^{\circ}\) about the oriented line \(\gamma\). This structure was considered in a modern context first by Nigel Hitchin [42], who dated it back at least to Karl Weierstrass in 1866 [73].
The symplectic structure \(\omega\) is by definition a non-degenerate closed 2-form on \(\mathbb{L}=T\mathbb{S}^{2}\), and it can be obtained by pulling back the canonical symplectic structure on the cotangent bundle \(T^{*}\mathbb{S}^{2}\) by the round metric on \(\mathbb{S}^{2}\).
These two structures are invariant under Euclidean motions acting on line space and fit nicely together in the sense that \(\omega(J\cdot,J\cdot)=\omega(\cdot,\cdot)\). The metric obtained by their composition, \(\mathbb{G}(\cdot,\cdot)=\omega(J\cdot,\cdot)\), is however of neutral signature \((++--)\). The existence of a Euclidean invariant metric of this signature on line space was first noted by Eduard Study in 1891 [68], but its neutral Kahler nature wasn't discovered until 2005 [36]. Interestingly, the space of oriented lines in Euclidean \(\mathbb{R}^{n}\) admits an invariant metric iff \(n=3\), and in this dimension it is pretty much unique [64]. This accident of low dimensions offers an alternative geometric framework to investigate the semi-direct nature of the Euclidean group in dimension three, one which expresses three dimensional Euclidean quantities in terms of neutral geometric quantities in four dimensions.
This is but one of the many accidents that arise in the classification of invariant symplectic structures, (para)complex structures, pseudo-Riemannian metrics and (para)Kahler structures on the space of oriented geodesics of a simply connected pseudo-Riemannian space of constant curvature or a rank one Riemannian symmetric space [1].
Returning to oriented line space, the neutral metric \(\mathbb{G}\) at a point \(\gamma\in\mathbb{L}\) can be interpreted as the angular velocity of any line near \(\gamma\). If the angular velocity is zero - and hence the oriented lines are null-separated - then the lines either intersect or are parallel. One can adopt the projective view, which arises quite naturally, that parallel lines intersect at infinity, and then nullity of a curve with respect to the neutral metric implies the intersection of the underlying infinitesimal lines in \(\mathbb{R}^{3}\). Nullity for higher dimensional submanifolds will be discussed in the next section.
The invariant neutral metric is not flat, although its scalar curvature is zero and its conformal curvature vanishes. The non-zero Ricci tensor has zero neutral length, but its interpretation in terms of a recognisable energy momentum tensor is lacking. Given the difference of signature to Lorentz spacetime, it is also difficult to see the usual physical connection as in general relativity.
Since the metric is conformally flat, there exist local coordinates \((X_{1},X_{2},X_{3},X_{4})\) and a strictly positive function \(\Omega\) so that it can be written as
\[ds^{2}=\Omega^{2}(dX_{1}^{2}+dX_{2}^{2}-dX_{3}^{2}-dX_{4}^{2}). \tag{3}\]
Such a metric has zero scalar curvature iff \(\Omega\) satisfies the ultrahyperbolic equation, thus characterising a Yamabe-type problem for neutral metrics [50]. Such coordinates \((X_{1},X_{2},X_{3},X_{4})\) were first constructed using the Plucker embedding on the space of lines by John [46], who showed that the compatibility condition for a function on line space to be the integral of a function on \(\mathbb{R}^{3}\) is exactly the flat ultrahyperbolic equation in these coordinates.
Write \(\mathbb{R}^{2,2}\) for \(\mathbb{R}^{4}\) endowed with the flat neutral metric. In Section 3 the ultrahyperbolic equation will be considered in more detail and an explicit formula presented for data prescribed on a certain null hypersurface.
A peculiarity of neutral signature metrics in dimension four is the existence of 2-planes on which the induced metric is identically zero, so-called _totally null_ 2-planes. In \(\mathbb{R}^{2,2}\) there is a disjoint union of two \(S^{1}\)'s worth of totally null 2-planes, termed \(\alpha-\)planes and \(\beta-\)planes.
One way to see these is to consider the null cone \(\mathcal{C}_{0}\) at the origin. This is a cone over the 2-torus \(S^{1}\times S^{1}\) given by
\[X_{1}^{2}+X_{2}^{2}-X_{3}^{2}-X_{4}^{2}=0.\]
An \(\alpha-\)plane is a cone over a diagonal in the torus \(t\mapsto(X_{1}+iX_{2},X_{3}+iX_{4})=(e^{it},e^{i(t+t_{0})})\), while a \(\beta-\)plane is a cone over an anti-diagonal in the torus \(t\mapsto(X_{1}+iX_{2},X_{3}+iX_{4})=(e^{it},e^{-i(t+t_{0})})\).
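One can confirm symbolically that such a cone is totally null for the flat metric \(\mathrm{diag}(1,1,-1,-1)\); a SymPy sketch for the \(\alpha-\)plane case:

```python
import sympy as sp

r, t, t0 = sp.symbols('r t t_0', real=True)
g = sp.diag(1, 1, -1, -1)
# cone over a diagonal of the torus: an alpha-plane
X = sp.Matrix([r*sp.cos(t), r*sp.sin(t), r*sp.cos(t + t0), r*sp.sin(t + t0)])
Xr, Xt = X.diff(r), X.diff(t)
for v, w in [(Xr, Xr), (Xt, Xt), (Xr, Xt)]:
    print(sp.simplify((v.T * g * w)[0]))  # all three inner products vanish
```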
This null structure exists in the tangent space at a point in any neutral four manifold and if one can piece it together in a geometric way there can be global topological consequences. One natural question is whether the \(\alpha-\)planes or \(\beta-\)plane fields are integrable in the sense of Frobenius, thus having surfaces to which the plane fields are tangent. These are guaranteed for the invariant neutral metrics endowed on the space of oriented geodesics of any 3-dimensional space-form, as they are all conformally flat [15].
Roughly speaking, an \(\alpha-\)surface in a geodesic space is the set of oriented geodesics through a fixed point, while \(\beta-\)surfaces are the oriented geodesics contained in a fixed totally geodesic surface in the ambient 3-manifold. Thus a neutral metric on a geodesic space allows for the geometrization of both intersection and containment.
Restricting our attention to \(\mathbb{R}^{3}\), the \(\alpha-\)planes in \(\mathbb{L}\) are the oriented lines through a point or the oriented lines with the same fixed direction. The latter are the 2-dimensional fibres of the canonical projection \(\pi:\mathbb{L}\to\mathbb{S}^{2}\) taking an oriented line to its direction.
The distance between parallel lines in \(\mathbb{R}^{3}\) induces a fibre metric on \(\pi^{-1}(p)\) for \(p\in\mathbb{S}^{2}\). If \(\xi\) is a complex coordinate about the North pole of \(\mathbb{S}^{2}\) given by stereographic projection and \(\eta\) the complex fibre coordinate in the projection \(T\mathbb{S}^{2}\to\mathbb{S}^{2}\), then the fibre metric has the form
\[d\tilde{s}^{2}=\frac{4d\eta\;d\bar{\eta}}{(1+\xi\bar{\xi})^{2}}. \tag{4}\]
In Section 3.3 this arises in the X-ray transform from certain null data.
Note that the complex coordinates \((\xi,\eta)\) on \(\mathbb{L}\) are essentially the vectors \(U\) and \(V\) in definition (2), the direction and the perpendicular distance to the origin. They are related to John's conformally flat coordinates \((X_{1},X_{2},X_{3},X_{4})\) by
**Proposition 2**.: _[_8_]_ _For complex coordinates \((\xi,\eta)\) on \(T\mathbb{S}^{2}\), over the upper hemisphere \(|\xi|^{2}<1\) the conformal coordinates \((X_{1},X_{2},X_{3},X_{4})\) are_
\[X_{1}+iX_{2}=\frac{2}{1-\xi^{2}\bar{\xi}^{2}}\left(\eta+\xi^{2}\bar{\eta}-i(1+ \xi\bar{\xi})\xi\right)\]
\[X_{3}+iX_{4}=\frac{2}{1-\xi^{2}\bar{\xi}^{2}}\left(\eta+\xi^{2}\bar{\eta}+i(1+ \xi\bar{\xi})\xi\right).\]
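For computations, Proposition 2 translates directly into code; a sketch valid on the coordinate patch \(|\xi|<1\):

```python
import numpy as np

def john_coords(xi, eta):
    # John's conformal coordinates (X1, X2, X3, X4) from (xi, eta), |xi| < 1
    d = 1 - abs(xi) ** 4  # equals 1 - xi^2 * conj(xi)^2
    c = eta + xi ** 2 * np.conj(eta)
    z1 = 2 * (c - 1j * (1 + abs(xi) ** 2) * xi) / d
    z2 = 2 * (c + 1j * (1 + abs(xi) ** 2) * xi) / d
    return z1.real, z1.imag, z2.real, z2.imag
```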
We turn now to null \(3\)-manifolds (or hypersurfaces) in a neutral \(4\)-manifold. An example of such is the null cone of a point in \(\mathbb{L}\). Fix any oriented line \(\gamma_{0}\in\mathbb{L}\) and define its _null cone_ to be
\[C_{0}(\gamma_{0})=\{\gamma\in\mathbb{L}\mid\,Q(\gamma_{0},\gamma)=0\},\]
where \(Q\) is the neutral distance function introduced by John [46]. For convenience introduce the complex conformal coordinates given in terms of the real conformal coordinates of equation (3) by
\[Z_{1}=X_{1}+iX_{2}\qquad\qquad Z_{2}=X_{3}+iX_{4}.\]
If two oriented lines \(\gamma,\tilde{\gamma}\) have complex conformal coordinates \((Z_{1},Z_{2})\) and \((\tilde{Z}_{1},\tilde{Z}_{2})\) then the neutral distance function is
\[Q(\gamma,\tilde{\gamma})=|Z_{1}-\tilde{Z}_{1}|^{2}-|Z_{2}-\tilde{Z}_{2}|^{2}.\]
Two oriented lines have zero neutral distance iff either they are parallel or they intersect. The null cone arises in the formula for the ultrahyperbolic equation in Theorem 13.
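In these coordinates the distance is one line of code; combined with the coordinate change of Proposition 2, one can test numerically that intersecting or parallel lines are null-separated (a sketch, assuming the conventions above):

```python
def neutral_distance(Z, W):
    # Q between lines with complex conformal coordinates Z = (Z1, Z2), W = (W1, W2)
    return abs(Z[0] - W[0]) ** 2 - abs(Z[1] - W[1]) ** 2
```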
More generally, null hypersurfaces in \(\mathbb{L}\) can be understood as \(3\)-parameter families of oriented lines in \(\mathbb{R}^{3}\) as follows. The degenerate hyperbolic metric induced on a null hypersurface \(\mathcal{H}\) at a point \(\gamma\) defines a pair of totally null planes intersecting on the null normal of the hypersurface in \(T_{\gamma}\mathcal{H}\), one an \(\alpha-\)plane, one a \(\beta-\)plane. These plane fields can be integrable or contact, as explored in [20].
There is a unique \(\alpha-\)surface in \(\mathbb{L}\) containing \(\gamma\) with tangent plane agreeing with the \(\alpha-\)plane at \(\gamma\). Such a holomorphic Lagrangian surface is either the oriented lines through a point, or the oriented lines in a fixed direction. This is the neutral metric interpretation of the classical surface statement that a totally umbilic surface is either a sphere or a plane.
Thus, the \(\alpha-\)plane at \(\gamma\in\mathbb{L}\) identifies a point on each \(\gamma\subset\mathbb{R}^{3}\) (albeit at infinity) which is the centre of the associated \(\alpha-\)surface. The locus of all these centres in \(\mathbb{R}^{3}\) as one varies over \(\mathcal{H}\) will be called the _focal set_ of the null hypersurface. A null hypersurface is said to be _regular_ if the focal set is a submanifold of \(\mathbb{R}^{3}\).
**Proposition 3**.: _A regular null hypersurface \(\mathcal{H}_{n}\) with focal set of dimension \(n\) must be one of the following:_
* _The set of oriented lines parallel to a fixed plane,_
* _The set of oriented lines through a fixed curve,_
* _The set of oriented lines tangent to a fixed surface._
Assuming the fixed curve and fixed surface are convex, we have \(\mathcal{H}_{0}=\mathcal{H}_{2}=S^{1}\times\mathbb{R}^{2}\) and \(\mathcal{H}_{1}=S^{2}\times\mathbb{R}\). The null cone of a point \(\gamma\in\mathbb{L}\) is clearly an example of null hypersurface \(\mathcal{H}_{1}\), the fixed curve being the line \(\gamma\subset\mathbb{R}^{3}\).
On the other hand, the formula presented in Section 3.3 assumes data on a null hypersurface \(\mathcal{H}_{0}\). Both the \(\alpha-\) and \(\beta-\)planes in \(\mathcal{H}_{0}\) are integrable, so it can be foliated by \(\alpha-\)surfaces (all the oriented lines in a fixed direction) and by \(\beta-\)surfaces (all oriented lines contained in a plane parallel to the fixed plane).
The \(\alpha\)-foliation underpins the projection operator in the formula and it is not clear how the formula would look for data on null hypersurfaces of type \(\mathcal{H}_{1}\) or \(\mathcal{H}_{2}\), as the \(\alpha-\)planes are not in general integrable.
In Figure 1 the three types of regular null hypersurfaces \(\mathcal{H}_{0},\mathcal{H}_{1},\mathcal{H}_{2}\) are shown. The left null hypersurface is \(\mathcal{H}_{0}\), the standard configuration for acquiring data in CT scans, and is discussed in Section 3.3.
Reconstruction using either of the other two null hypersurfaces would have advantages if one seeks to reduce the amount of radiation exposure during the scan. In particular, using the oriented lines \(\mathcal{H}_{1}\) through a fixed line would reduce the exposure of each point to a semi-circle of radiation rather than the full circle in the \(\mathcal{H}_{0}\). On the other hand, using the oriented lines \(\mathcal{H}_{2}\) tangent to a convex surface would leave the interior occluded, and hence shielded completely from radiation. Whether either of these two configurations can be practically acquired by a physical scanner is another matter.
### Paracomplex Structures
The complex structure \(J_{0}\) on the space of oriented geodesics of a 3-dimensional space form evaluated at an oriented geodesic is obtained by rotation through \(90^{\circ}\) about the geodesic. This almost complex structure is integrable in the sense of Nijenhuis, which for any almost complex structure \(J\) says
\[N^{k}_{ij}=J^{m}_{j}\partial_{m}J^{k}_{i}-J^{m}_{i}\partial_{m}J^{k}_{j}+J^{k} _{m}(\partial_{i}J^{m}_{j}-\partial_{j}J^{m}_{i})=0,\]
and thus a complex structure. This is due to the fact that the ambient space has constant curvature [42].
Figure 1. Regular null hypersurfaces in oriented line space

One can also take reflection of an oriented line in a fixed oriented line \(\gamma\in\mathbb{L}\) to generate a map \(J_{1}:T_{\gamma}\mathbb{L}\to T_{\gamma}\mathbb{L}\) such that \(J_{1}^{2}=1\) and the \(\pm 1-\)eigenspaces are 2-dimensional. This _almost paracomplex structure_ is not integrable in the sense of Nijenhuis and thus not a _paracomplex structure_. It is however anti-isometric with respect to the canonical neutral metric \(\mathbb{G}\):
\[\mathbb{G}(J_{1}\cdot,J_{1}\cdot)=-\mathbb{G}(\cdot,\cdot).\]
**Theorem 4**.: _[_19_]_ _The space of oriented lines of Euclidean 3-space admits an invariant commuting triple \((J_{0},J_{1},J_{2})\) of a complex structure, an almost paracomplex structure and an almost complex structure, respectively, satisfying \(J_{2}=J_{0}J_{1}\). The complex structure \(J_{0}\) is isometric, while \(J_{1}\) and \(J_{2}\) are anti-isometric. Only \(J_{0}\) is parallel w.r.t. \(\mathbb{G}\), and only \(J_{0}\) is integrable._
_Composing the neutral metric \(\mathbb{G}\) with the (para)complex structures \(J_{0},J_{1},J_{2}\) yields closed 2-forms \(\Omega_{0}\) and \(\Omega_{1}\), and a conformally flat, scalar flat, neutral metric \(\tilde{\mathbb{G}}\), respectively. The neutral 4-manifolds \((\mathbb{L},\mathbb{G})\) and \((\mathbb{L},\tilde{\mathbb{G}})\) are isometric. Only \(J_{0}\) is parallel w.r.t. \(\tilde{\mathbb{G}}\)._
An almost paracomplex structure is an example of an _almost product structure_, in which a splitting of the tangent space at each point of the manifold is given, in this case \(4=2+2\). Such pointwise splittings can only be extended over a manifold subject to certain geometric and topological conditions. For example
**Theorem 5**.: _[_19_]_ _A conformally flat neutral metric on a 4-manifold that admits a parallel anti-isometric or isometric almost paracomplex structure has zero scalar curvature._
The parallel condition for an isometric almost paracomplex structure can be expressed in terms of the first order invariants of the eigenplane distributions:
**Theorem 6**.: _[_19_]_ _Let \(j\) be an isometric almost paracomplex structure on a pseudo-Riemannian 4-manifold. Then \(j\) is parallel iff the eigenplane distributions are tangent to a pair of mutually orthogonal foliations by totally geodesic surfaces._
Canonical examples for neutral conformally flat metrics are the indefinite product of two surfaces of equal constant Gauss curvature, which have exactly this double foliation. It is instructive in this case to use the isometric paracomplex structure \(j=I\oplus-I\) to flip the sign of the product metric. The result is a Riemannian metric which turns out to be Einstein. This construction holds more generally:
**Theorem 7**.: _[_19_]_ _Let \((M,g)\) be a Riemannian \(4\)-manifold endowed with a parallel isometric paracomplex structure \(j\), and let the associated neutral metric be \(g^{\prime}(\cdot,\cdot)=g(j\cdot,\cdot)\). Then, \(g^{\prime}\) is locally conformally flat if and only if \(g\) is Einstein._
This transformation will be used in Section 4.3 to find global topological obstructions to parallel isometric paracomplex structures.
### The Space of Oriented Geodesics of Hyperbolic 3-Space
In this section we consider the space \(\mathbb{L}(\mathbb{H}^{3})\) of oriented geodesics in three dimensional hyperbolic space \(\mathbb{H}^{3}\) of constant sectional curvature \(-1\). The canonical neutral metric on this space has been considered in detail [22][23][65], but its relation to the
ultrahyperbolic equation has not. To illustrate the ideas of this paper, and explore the commonality with the flat case, proofs are provided in this section.
The space \(\mathbb{L}(\mathbb{H}^{3})\) of oriented geodesics in hyperbolic 3-space is diffeomorphic to that of oriented lines \(\mathbb{L}(\mathbb{R}^{3})\) in Euclidean 3-space \(\mathbb{L}(\mathbb{H}^{3})=\mathbb{L}(\mathbb{R}^{3})=T\mathbb{S}^{2}\), but the projection map does not have the same geometric significance. In fact each oriented geodesic has _two_ Gauss maps (the beginning and end directions at the boundary of the ball model for \(\mathbb{H}^{3}\)) and there is a natural embedding into \(S^{2}\times S^{2}\). Thus it is natural to view \(\mathbb{L}(\mathbb{H}^{3})\) as \(S^{2}\times S^{2}\) with the diagonal removed or, more geometrically, the _reflected_ diagonal removed [23].
The canonical neutral metric \(\tilde{\mathbb{G}}\) on \(\mathbb{L}(\mathbb{H}^{3})\) is conformally flat and scalar flat, thus relating the solutions of the flat ultrahyperbolic equation with harmonic functions, as in the case of \(\mathbb{L}(\mathbb{R}^{3})\).
**Theorem 8**.: _For any compactly supported or asymptotically constant function \(f\) on hyperbolic 3-space, its X-ray transform is harmonic with respect to the canonical neutral metric:_
\[\triangle_{\tilde{\mathbb{G}}}u_{f}=0,\]
_where \(\triangle_{\tilde{\mathbb{G}}}\) is the Laplacian of \(\tilde{\mathbb{G}}\)._
Proof.: Consider the upper half-space model of hyperbolic 3-space \(\mathbb{H}^{3}\), that is \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3},x_{3}\in\mathbb{R}_{>0}\) with metric
\[ds^{2}=\frac{dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}}{x_{3}^{2}}.\]
We can locally model the space of oriented geodesics in this model by \((\xi,\eta)\in\mathbb{C}^{2}\) where the unit parameterised geodesic is [23]
\[z=x_{1}+ix_{2}=\eta+\frac{\tanh r}{\bar{\xi}}\qquad\qquad x_{3}=\frac{1}{|\xi|\cosh r}. \tag{5}\]
With respect to these coordinates the neutral metric is
\[ds^{2}=-\frac{i}{4}\left(\frac{1}{\xi^{2}}d\xi^{2}-\frac{1}{\bar{\xi}^{2}}d\bar{\xi}^{2}+\bar{\xi}^{2}d\eta^{2}-\xi^{2}d\bar{\eta}^{2}\right),\]
and the Laplacian is
\[\triangle_{\tilde{\mathbb{G}}}u=8\,\mathrm{Im}\left(\frac{1}{\bar{\xi}^{2}}\partial_{\eta}^{2}u+\partial_{\xi}(\xi^{2}\partial_{\xi}u)\right).\]
Note that
\[\frac{\partial}{\partial r}=\frac{1}{\cosh^{2}r}\left(\frac{1}{\bar{\xi}}\frac{\partial}{\partial z}+\frac{1}{\xi}\frac{\partial}{\partial\bar{z}}-\frac{\sinh r}{|\xi|}\frac{\partial}{\partial t}\right).\]
Now a straight-forward calculation establishes the following identity
\[\triangle_{\tilde{\mathbb{G}}}u_{f}=4i\int_{-\infty}^{\infty}\frac{\partial}{\partial r}\left(\frac{1}{\bar{\xi}}\partial_{z}f-\frac{1}{\xi}\partial_{\bar{z}}f\right)dr=4i\left[\frac{1}{\bar{\xi}}\partial_{z}f-\frac{1}{\xi}\partial_{\bar{z}}f\right]_{-\infty}^{\infty}.\]
Thus, by integration by parts, as long as the transverse gradient of \(f\) falls off at the boundary faster than \(|\xi|\), the boundary terms vanish and we get
\[\triangle_{\tilde{\mathbb{G}}}u_{f}=0.\]
In Section 3.1 unit (pseudo-)circles in flat planes are proven to be the domains of integration of a mean value theorem for solutions of the ultrahyperbolic equation and to generate doubly ruled surfaces in the underlying \(\mathbb{R}^{3}\). We now present a local conformally flat coordinate system for \(\mathbb{L}(\mathbb{H}^{3})\) using the hyperboloid model of hyperbolic 3-space \(\mathbb{H}^{3}\), which lets one explicitly construct such doubly ruled surfaces in \(\mathbb{H}^{3}\).
In the hyperboloid model in Minkowski space \(\mathbb{R}^{3+1}\), \(\mathbb{H}^{3}\) is the hyperboloid \(x_{0}^{2}-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}=1\) and the oriented geodesics are the intersections with oriented planes of Lorentz signature through the origin in \(\mathbb{R}^{3+1}\).
An oriented geodesic in \(\mathbb{H}^{3}\) in the ball model can be uniquely determined by the directions at the boundary \((\mu_{1},\mu_{2})\in\mathbb{S}^{2}\times\mathbb{S}^{2}\). These directions \((\mu_{1},\mu_{2})\) are exactly the null directions on the Lorentz plane.
The relationship between the complex coordinates \((\mu_{1},\mu_{2})\in\mathbb{C}^{2}\) obtained by stereographic projection on each \(\mathbb{S}^{2}\) factor and the complex coordinates \((\xi,\eta)\) introduced in Theorem 8 is
\[\xi=\tfrac{1}{2}\left(\bar{\mu}_{1}+\tfrac{1}{\mu_{2}}\right)^{-1}\qquad \qquad\eta=\tfrac{1}{2}\left(-\mu_{1}+\tfrac{1}{\bar{\mu}_{2}}\right).\]
**Proposition 9**.: _If \((\mu_{1},\mu_{2})\) are the standard holomorphic coordinates on \(\mathbb{L}(\mathbb{H}^{3})\), consider the complex combination_
\[Z_{1}=\frac{(1+\mu_{2}\bar{\mu}_{2})\bar{\mu}_{1}+(1+\mu_{1}\bar{\mu}_{1})\bar {\mu}_{2}+i[(1-\mu_{2}\bar{\mu}_{2})\bar{\mu}_{1}-(1-\mu_{1}\bar{\mu}_{1})\bar {\mu}_{2}]}{1-\mu_{1}\bar{\mu}_{1}\mu_{2}\bar{\mu}_{2}}\]
\[Z_{2}=\frac{(1+\mu_{2}\bar{\mu}_{2})\bar{\mu}_{1}+(1+\mu_{1}\bar{\mu}_{1})\bar {\mu}_{2}-i[(1-\mu_{2}\bar{\mu}_{2})\bar{\mu}_{1}-(1-\mu_{1}\bar{\mu}_{1}) \bar{\mu}_{2}]}{1-\mu_{1}\bar{\mu}_{1}\mu_{2}\bar{\mu}_{2}}.\]
_The flat neutral metric \(ds^{2}=dZ_{1}d\bar{Z}_{1}-dZ_{2}d\bar{Z}_{2}\) pulled back by the above is equal to \(\Omega^{2}\tilde{\mathbb{G}}\) where_
\[\Omega=\frac{|1+\mu_{1}\bar{\mu}_{2}|^{2}}{1-|\mu_{1}|^{2}|\mu_{2}|^{2}}.\]
_The inverse mapping, recovering \((\mu_{1},\mu_{2})\) from \((Z_{1},Z_{2})\), is given by_
\[\mu_{1}=\tfrac{1}{2}(\bar{A}+\bar{B})-\frac{\bar{A}-\bar{B}}{2|A-B|^{2}}\left( |A|^{2}-|B|^{2}+2-\sqrt{(|A|^{2}-|B|^{2}+2)^{2}-|A-B|^{2}|A+B|^{2}}\right) \tag{6}\]
\[\mu_{2}=\tfrac{1}{2}(\bar{A}-\bar{B})-\frac{(\bar{A}+\bar{B})}{2|A+B|^{2}} \left(|A|^{2}-|B|^{2}+2-\sqrt{(|A|^{2}-|B|^{2}+2)^{2}-|A-B|^{2}|A+B|^{2}}\right) \tag{7}\]
_where \(A=\tfrac{1}{2}(Z_{1}+Z_{2})\) and \(B=\tfrac{1}{2i}(Z_{1}-Z_{2})\)._
Proof.: A direct calculation.
In Section 3.2 these transformations will be used to construct surfaces in \(\mathbb{H}^{3}\) that are ruled by geodesics in two distinct ways - doubly ruled surfaces.
## 3. The Ultrahyperbolic Equation
In this section solutions of the ultrahyperbolic equation (1) are studied. A mean value property for such solutions is presented along with its interpretation in terms of doubly ruled surfaces in \(\mathbb{R}^{3}\). Classically it was known that a non-flat doubly ruled surface in \(\mathbb{R}^{3}\) is either a one-sheeted hyperboloid or a hyperbolic paraboloid [40]. The construction of doubly ruled surfaces is extended to hyperbolic 3-space and the analogue of the 1-sheeted hyperboloid is exhibited. An explicit geometric formula is then given for the ultrahyperbolic equation with data given on a certain null hypersurface.
### Mean Value Theorem
The X-ray transform takes a function \(f:\mathbb{R}^{3}\to\mathbb{R}\) to \(u_{f}:\mathbb{L}\to\mathbb{R}\) by integrating over lines. In 1937 Fritz John showed that if a function \(f\) satisfies certain fall-off conditions at infinity (which hold for compactly supported functions), then \(u_{f}\) satisfies the ultrahyperbolic equation (1) [46].
The link between the ultrahyperbolic equation (1) and the neutral metric is
**Theorem 10**.: _[_8_]_ _Let \(u:\mathbb{R}^{2,2}\to\mathbb{R}\) and \(v:\mathbb{L}\to\mathbb{R}\) be related by \(v=\Omega^{-1}u\), where \(\Omega\) is the conformal factor._
_Then \(u\) is a solution of the ultrahyperbolic equation (1) iff \(v\) is in the kernel of the Laplacian of the neutral metric: \(\Delta_{\mathbb{G}}v=0\)._
Leifur Asgeirsson [2] had earlier shown that solutions of the ultrahyperbolic equation satisfy a mean value property. In particular, for \(u:\mathbb{R}^{2,2}\to\mathbb{R}\) a solution of equation (1) satisfies
\[\int_{0}^{2\pi}u(a+r\cos\theta,b+r\sin\theta,c,d)\ d\theta=\int_{0}^{2\pi}u(a, b,c+r\cos\theta,d+r\sin\theta)\ d\theta, \tag{8}\]
for all \(a,b,c,d\in\mathbb{R}\) and \(r>0\). The two domains of integration are circles of equal radius lying in a pair of orthogonal planes \(\pi,\pi^{\perp}\) in \(\mathbb{R}^{2,2}\) with definite induced metrics on them.
It can be shown that the mean value theorem holds over a much larger class of curves, namely the image of these circles under any conformal map of \(\mathbb{R}^{2,2}\). We refer to such curves as _conjugate conics_ and these turn out to be pairs of circles, hyperbolae and parabolae lying in orthogonal planes of various signatures:
**Theorem 11**.: _[_8_]_ _[_9_]_ _Let \(S\) and \(S^{\perp}\) be curves contained in orthogonal affine planes \(\pi\) and \(\pi^{\perp}\) in \(\mathbb{R}^{2,2}\), respectively, which are one of the following pairs:_
1. _Circles with equal and opposite radii_ \(\pm r_{0}\) _when the two planes are definite,_
2. _Hyperbolae with equal and opposite radii_ \(\pm r_{0}\) _when the two planes are indefinite,_
3. _Parabolae in non-intersecting degenerate affine planes determined by the property that every point on_ \(S\subset\pi\) _is null separated from every point on_ \(S^{\perp}\subset\pi^{\perp}\)_._
_Then the following mean value property holds for any solution \(u\) of the ultrahyperbolic equation:_
\[\int_{S}u\ dl=\int_{S^{\perp}}u\ dl,\]
_where \(dl\) is the line element induced on the curves by the flat metric \(g\)._
One can view this as a conformal extension of the original mean value theorem, one that intertwines the classical conic sections, the ultrahyperbolic equation and neutral geometry.
### Doubly Ruled Surfaces
John also pointed out the relationship between the two circles in Asgeirsson's theorem and the double ruling of the hyperboloid of \(1\) sheet [46]. In fact, conjugate conics have been shown to correspond to the pairs of families of lines of all non-planar doubly ruled surfaces in \(\mathbb{R}^{3}\).
**Theorem 12**.: _[_9_]_ _Let \(S,S^{\perp}\) be two curves in \(\mathbb{R}^{2,2}\) representing the two one-parameter families of lines \(L,L^{\perp}\) in \(\mathbb{R}^{3}\). Then \(S,S^{\perp}\) are a pair of conjugate conics in \(\mathbb{R}^{2,2}\) if and only if \(L\) and \(L^{\perp}\) are the two families of generating lines of a non-planar doubly ruled surface in \(3\)-space._
The geometric reason these curves yield a doubly ruled surface is that every point on one curve is zero distance from every point on the other curve - this follows from the neutral Pythagoras Theorem! Since, as mentioned earlier, zero distance between oriented lines implies that they intersect (or are parallel), every line of one ruling intersects every line of the other ruling, hence the double ruling.
While this result was originally proven in \(\mathbb{R}^{3}\), it holds in any \(3\)-dimensional space of constant curvature, where the canonical neutral Kahler metric plays the same role. To demonstrate this, let us construct doubly ruled surfaces in \(3\)-dimensional hyperbolic space \(\mathbb{H}^{3}\).
Recall the conformal coordinates for \(\mathbb{L}(\mathbb{H}^{3})\) given in equations (6) and (7). To generate the hyperbolic equivalent of the \(1\)-sheeted hyperboloid, the two curves (parameterized by \(u\)) are circles of radii \(\pm r_{0}\) in two definite planes:
\[Z_{1}=r_{0}e^{iu}\qquad\qquad Z_{2}=0,\]
and
\[Z_{1}=0\qquad\qquad Z_{2}=r_{0}e^{iu}.\]
For these curves we can view the doubly ruled surfaces in either the upper half-space model or the ball model of \(\mathbb{H}^{3}\). For the former, one uses the equations (5), while for the latter one can use
\[x_{1}+ix_{2}=\frac{\mu_{2}(1+\mu_{1}\bar{\mu}_{1})e^{v}-\mu_{1}(1+\mu_{2}\bar {\mu}_{2})e^{-v}}{(1+\mu_{1}\bar{\mu}_{1})(1+\mu_{2}\bar{\mu}_{2})\cosh v+[(1+ \mu_{1}\bar{\mu}_{2})(1+\mu_{2}\bar{\mu}_{1})(1+\mu_{1}\bar{\mu}_{1})(1+\mu_{ 2}\bar{\mu}_{2})]^{\frac{1}{2}}}\]
\[x_{3}=\frac{(1+\mu_{1}\bar{\mu}_{1})(1-\mu_{2}\bar{\mu}_{2})e^{v}-(1+\mu_{2} \bar{\mu}_{2})(1-\mu_{1}\bar{\mu}_{1})e^{-v}}{2\left((1+\mu_{1}\bar{\mu}_{1})( 1+\mu_{2}\bar{\mu}_{2})\cosh v+[(1+\mu_{1}\bar{\mu}_{2})(1+\mu_{2}\bar{\mu}_{ 1})(1+\mu_{1}\bar{\mu}_{1})(1+\mu_{2}\bar{\mu}_{2})]^{\frac{1}{2}}\right)}.\]
Plotting the resulting surface in either the upper half-space model or the ball model of hyperbolic \(3\)-space exhibits the hyperbolic equivalent of the \(1\)-sheeted hyperboloid, although these surfaces satisfy a fourth order (rather than second order) polynomial equation.
### Cauchy Problem for the Ultrahyperbolic Equation
One way to reconcile the difference between the dimension of \(\mathbb{L}(\mathbb{R}^{3})\) and that of \(\mathbb{R}^{3}\) is to consider the problem of determining the value of a solution \(v:\mathbb{L}\to\mathbb{R}\) of the Laplace equation
\[\triangle_{\mathbb{G}}v=0,\]
on all of oriented line space \(\mathbb{L}\), given only the values of the function on a null hypersurface \(\mathcal{H}\subset\mathbb{L}\).
Consider the case where the data is known on the hypersurface generated by all oriented lines parallel to a fixed plane \(P_{0}\subset\mathbb{R}^{3}\) - the case of regular dimension zero focal set \(\mathcal{H}_{0}\) in Proposition 3.
This null hypersurface is suitable as a boundary for the Cauchy problem, as proven by John [46]. In fact, it can be foliated both by \(\alpha-\)planes and \(\beta-\)planes - the former being the oriented lines parallel to \(P_{0}\) in a fixed direction, while the latter are all oriented lines parallel to \(P_{0}\) at a fixed height.
Denote
\[\mathcal{H}=\{\gamma\in\mathbb{L}\mid\ \gamma\parallel P_{0}\ \}.\]
Clearly \(\mathcal{H}=\mathbb{S}^{1}\times\mathbb{C}\) and for convenience, suppose that \(P_{0}\) is horizontal in standard coordinates, so that in complex coordinates the hypersurface is \(\xi=e^{i\theta}\), since the only restriction on the oriented line is that its direction lies along the equator.
The distance between parallel lines in \(\mathbb{R}^{3}\) induces the metric (4) and associated distance function \(\|.\|\). In fact, there is an invariant metric on \(\mathcal{H}\) with volume form \(d^{3}Vol=d\eta\,d\bar{\eta}\,d\theta\).
Suppose that \(\gamma_{0}\notin\mathcal{H}\) and consider the intersection of this null hypersurface with the null cone \(C_{0}(\gamma_{0})\cap\mathcal{H}=\mathbb{S}^{1}\times\mathbb{R}\). This surface intersects each fibre in an affine line. Let \(Pr_{0}(\gamma)\) be the projection of \(\gamma\) onto this affine line with respect to the fibre metric: \(Pr_{0}:\mathbb{S}^{1}\times\mathbb{R}^{2}\to\mathbb{S}^{1}\times\mathbb{R}\).
We now prove the following explicit geometric formula that determines the value of a solution of the ultrahyperbolic equation from its value on the null hypersurface of type \(\mathcal{H}_{0}\) in \(\mathbb{L}\):
**Theorem 13**.: _If \(v:\mathbb{L}\to\mathbb{R}\) is a function satisfying the ultrahyperbolic equation, then at an oriented line \(\gamma_{0}\)_
\[v(\gamma_{0})=-\tfrac{1}{2\pi^{2}}\iiint_{\gamma\in\mathcal{H}}\frac{v(\gamma)- v(Pr_{0}(\gamma))}{\|\gamma-Pr_{0}(\gamma)\|^{2}}\;d^{3}Vol,\]
_where \(Pr_{0}(\gamma)\) is projection onto the intersection of the null cone of \(\gamma_{0}\) with the \(\alpha\)-plane through \(\gamma\) that lies in the null hypersurface \(\mathcal{H}\)._
Proof.: Our starting point is Fritz John's formula (equation (13) of [46]) which gives the solution of the ultrahyperbolic equation at an oriented line \(\gamma_{0}\) by the cylindrical average over all planes parallel to \(\gamma_{0}\):
\[v(\gamma_{0})=-\tfrac{1}{\pi}\int_{0}^{\infty}\frac{F(R)-F(0)}{R^{2}}dR, \tag{9}\]
where
\[F(R)=\tfrac{1}{2\pi}\int_{0}^{2\pi}\iint_{P_{(R,\alpha)}}\rho(r,s)drds\;d\alpha,\]
\(P_{(R,\alpha)}\) is the plane parallel to \(\gamma_{0}\) at a distance \(R\) and angle \(\alpha\), and \((r,s)\) are flat coordinates on that plane.
Consider the map
\[z=\frac{1}{1+\nu\bar{\nu}}\left(2\nu R+(e^{iA}-\nu^{2}e^{-iA})r+i(e^{iA}+\nu^{ 2}e^{-iA})s\right) \tag{10}\]
\[x_{3}=\frac{1}{1+\nu\bar{\nu}}\left((1-\nu\bar{\nu})R-(\bar{\nu}e^{iA}+\nu e^{ -iA})r-i(\bar{\nu}e^{iA}-\nu e^{-iA})s\right). \tag{11}\]
For fixed \(R\in\mathbb{R}\), \(\nu\in\mathbb{C}\) and \(A\in[0,2\pi)\), the map \((r,s)\mapsto(z(r,s),x_{3}(r,s))\in\mathbb{R}^{3}\) parameterizes the plane a distance \(R\) from the origin with normal direction \(\nu\). Changing \(A\) rotates the \(r\)- and \(s\)-axes in the plane.
By a translation we can assume \(\gamma_{0}\) contains the origin and so has complex coordinates (\(\xi=\xi_{0},\eta=0\)). Let us restrict attention to planes that are parallel to \(\gamma_{0}\). Since the normal direction of \(P_{(R,\nu)}\) must then be perpendicular to the direction of \(\gamma_{0}\), we have
\[\nu=\frac{\xi_{0}+e^{i\alpha}}{1-\bar{\xi}_{0}e^{i\alpha}},\]
where \(\alpha\in[0,2\pi)\).
The quantity \(R\) is then just the distance from the plane to the line \(\gamma_{0}\). Finally we want to rotate the ruling by \(s\) on the plane so that it is horizontal and thus a curve in \(\mathcal{H}\). Clearly this is achieved by
\[\nu=r_{0}e^{iA},\]
or more explicitly
\[A=\tfrac{1}{2i}\ln\left[\frac{(\xi_{0}+e^{i\alpha})(1-\xi_{0}e^{-i\alpha})}{( \bar{\xi}_{0}+e^{-i\alpha})(1-\bar{\xi}_{0}e^{i\alpha})}\right]\qquad r_{0}= \left[\frac{(\xi_{0}+e^{i\alpha})(\bar{\xi}_{0}+e^{-i\alpha})}{(1-\xi_{0}e^{- i\alpha})(1-\bar{\xi}_{0}e^{i\alpha})}\right]^{\frac{1}{2}}.\]
The first of these is invertible for fixed \(\xi_{0}\), \(A\leftrightarrow\alpha\).
The horizontal ruling for \(P_{(R,\alpha)}\) is
\[z=\frac{2\nu}{1+\nu\bar{\nu}}R+\frac{1-\nu\bar{\nu}}{1+\nu\bar{\nu}}re^{iA}+ise ^{iA}\]
\[x_{3}=\frac{1-\nu\bar{\nu}}{1+\nu\bar{\nu}}R-\frac{2|\nu|}{1+\nu\bar{\nu}}r.\]
The direction of the ruling is
\[\frac{\partial}{\partial s}=ie^{iA}\frac{\partial}{\partial z}-ie^{-iA}\frac{ \partial}{\partial\bar{z}}\]
so that the complex coordinates are \(\xi=ie^{iA}\) and
\[\eta=\tfrac{1}{2}(z-2x_{3}\xi-\bar{z}\xi^{2})=-(r-iR)\left(\frac{r_{0}-i}{r_{0} +i}\right)e^{iA}.\]
Thus we have parameterized \(\mathcal{H}\) by coordinates \((R,\alpha,r)\) and a straightforward calculation shows that the fibre metric is simply
\[d\eta d\bar{\eta}=dR^{2}+dr^{2}\qquad\qquad\text{and}\qquad\qquad d^{3}Vol= drdRd\alpha.\]
The null cone of \(\gamma_{0}\) consists of all lines that either intersect or are parallel to it. For non-horizontal \(\gamma_{0}\) the null cone intersects the null hypersurface \(\mathcal{H}\) at the lines that intersect \(\gamma_{0}\), namely those with coordinates \((R=0,\alpha,r)\) which is a line through the origin in each fibre. We have chosen \(\gamma_{0}\) to contain the origin in \(\mathbb{R}^{3}\), which is why the line in the fibre is through the origin. More generally the intersection of the null cone with a fibre is an affine line (not necessarily through the origin), as claimed.
Thus the fibre projection is simply \(Pr_{0}(R,\alpha,r)=(0,\alpha,r)\) and
\[R=\|\gamma-Pr_{0}(\gamma)\|.\]
Now putting this together with the integral formula
\[v(\gamma_{0})= -\tfrac{1}{2\pi^{2}}\int_{0}^{\infty}\int_{0}^{2\pi}\frac{1}{R^{2}}\left[\iint_{P_{(R,\alpha)}}\rho(r,s)drds-\iint_{P_{(0,\alpha)}}\rho(r,s)drds\right]\,dRd\alpha\] \[= -\tfrac{1}{2\pi^{2}}\int_{0}^{\infty}\int_{0}^{2\pi}\int_{-\infty}^{\infty}\frac{v(R,\alpha,r)-v(0,\alpha,r)}{R^{2}}drdRd\alpha\] \[= -\tfrac{1}{2\pi^{2}}\iiint_{\gamma\in\mathcal{H}}\frac{v(\gamma)-v(Pr_{0}(\gamma))}{\|\gamma-Pr_{0}(\gamma)\|^{2}}\,d^{3}Vol,\]
as claimed.
## 4. Topological Considerations
In this section global topological aspects of neutral metrics and almost product structures are explored. These include the relationship between umbilic points on surfaces in \(\mathbb{R}^{3}\) and complex points on Lagrangian surfaces in \(\mathbb{L}\), and an associated boundary value problem for the Cauchy-Riemann operator. The significance of these constructions for a number of conjectures from classical surface theory is indicated.
Some background on the problems of 4-manifold topology is discussed with particular attention to codimension two. The significance of neutral metrics to these issues is that they are uniquely capable of quantifying codimension two topological phenomena, and thus can be used as geometric tools to resolve certain long-standing questions. For the case of closed 4-manifolds, we end with a discussion of topological obstructions that arise to certain neutral geometric structures.
### Global Results
Topological aspects of neutral metrics become evident in the identification of complex points on Lagrangian surfaces in \(\mathbb{L}\) with umbilic points on surfaces in \(\mathbb{R}^{3}\)[37].
The Lagrangian surface \(\Sigma\subset\mathbb{L}\) is formed by the oriented normal lines to the surface \(S\subset\mathbb{R}^{3}\) and the index \(i(p)\in\mathbb{Z}/2\) of an isolated umbilic point \(p\in S\) on a convex surface is exactly one half of the complex index of the corresponding complex point \(\gamma\in\Sigma\): \(I(\gamma)=2i(p)\in\mathbb{Z}\). Thus problems of classical surface theory can be explored through studying Lagrangian surfaces in the four dimensional space of oriented lines \(\mathbb{L}\) with its neutral metric \(\mathbb{G}\).
The metric induced on a Lagrangian surface is Lorentz or degenerate - the degenerate points being the umbilic points of \(S\) and the null cone at \(\gamma\) being the principal directions of \(S\) at \(p\). The indices of isolated umbilic points carry geometric information from the neutral metric and vice versa.
If an isolated umbilic point \(p\) has half-integer index then the principal foliation around \(p\) is non-orientable - it defines a line field rather than a vector field about the umbilic point. The foliation is orientable if the index is an integer. The following theorem establishes a topological version of a result of Ferdinand Joachimsthal [45] for surfaces intersecting at a constant angle:
**Theorem 14**.: _[_33_]_ _If \(S_{1}\) and \(S_{2}\) are smooth convex surfaces intersecting with constant angle along a curve that is not umbilic in either \(S_{1}\) or \(S_{2}\), then the principal foliations of the two surfaces along the curve are either both orientable, or both non-orientable._
_That is, if \(i_{1}\in\mathbb{Z}/2\) is the sum of the umbilic indices inside the curve of intersection on \(S_{1}\) and \(i_{2}\in\mathbb{Z}/2\) is the sum of the umbilic indices inside the curve of intersection on \(S_{2}\) then_
\[2i_{1}=2i_{2}\bmod 2.\]
Pushing deeper, if one considers the problem of finding a holomorphic disc in \(\mathbb{L}\) whose boundary lies on a given Lagrangian surface \(\Sigma\), one encounters a classical problem of Riemann-Hilbert for the Cauchy-Riemann operator. Given a totally real surface \(\Sigma\) in a complex surface \(\mathbb{M}\), the Riemann-Hilbert problem seeks a map \(f:(D,\partial D)\to(\mathbb{M},\Sigma)\) which is holomorphic: it lies in the kernel of the Cauchy-Riemann operator \(\bar{\partial}f=0\). For this to be an elliptic boundary value problem it is required that the boundary surface \(\Sigma\) be totally real i.e. has no complex points. In the Riemannian case Lagrangian implies totally real, and so Lagrangian boundary conditions are often used when the ambient metric is Riemannian.
In our case, due to the neutral signature of the metric formed by the composition of the symplectic structure (which defines Lagrangian) and the complex structure (which defines holomorphic), new features arise. In particular, Lagrangian surfaces may not be totally real, and therefore at complex points they are not suitable as a boundary condition for the \(\bar{\partial}\)-operator. If, however, the boundary surface is assumed to be spacelike with respect to the metric, then by the neutral Wirtinger identity it is also totally real and is suitable.
The deformation from Lagrangian to spacelike by the addition of a holomorphic twist can be achieved over an open hemisphere. This _contactification_ of the problem throws away the surface \(S\) in \(\mathbb{R}^{3}\), as the perturbed spacelike surface \(\tilde{\Sigma}\) in \(\mathbb{L}(\mathbb{R}^{3})\) forms a 2-parameter family of twisting oriented lines in \(\mathbb{R}^{3}\) that are not orthogonal to any
surface. Any holomorphic disc with boundary lying on \(\tilde{\Sigma}\) yields a holomorphic disc with boundary lying on \(\Sigma\) by subtracting the holomorphic twist and so the problems are equivalent over a hemisphere.
The Riemann-Hilbert problem then follows the standard case, with the linearisation at a solution defining an elliptic boundary value problem with analytic index \(\mathcal{I}\) given by
\[\mathcal{I}=\mathrm{Dim\ Ker\ }\bar{\partial}-\mathrm{Dim\ Coker\ }\bar{\partial}.\]
The analytic index for the problem is well-known to be related to the Keller-Maslov index \(\mu(\partial D,\Sigma)\) along the boundary by
\[\mathcal{I}=\mu+2.\]
The Keller-Maslov index in the case of a section of \(\mathbb{L}\) is given by the sum \(i\) of the umbilic indices inside the curve \(\partial D\) in the boundary \(\Sigma\), as viewed in \(\mathbb{R}^{3}\)[37]:
\[\mu=4i.\]
For the Keller-Maslov class to control the dimension of the space of holomorphic discs, one needs the dimension of the cokernel to be zero. If the problem is Fredholm regular, by a small perturbation the cokernel vanishes and the space of holomorphic discs is indeed determined by the number of enclosed umbilic points.
Remarkably, the Riemann-Hilbert problem associated with a convex sphere containing a single umbilic point _is_ Fredholm regular:
**Theorem 15**.: _[_30_]_ _Let \(\Sigma\subset\mathbb{L}\) be a Lagrangian sphere with a single isolated complex point. Then the Riemann-Hilbert problem with boundary \(\Sigma\) is Fredholm regular._
The reason behind this result is that the Euclidean isometry group acts holomorphically and symplectically on \(\mathbb{L}\), thus preserving the problem. The action is also transitive and so fixing the single complex point one considers the equivariant problem, the result being that it is Fredholm regular, as in the totally real case.
The non-existence of a convex sphere containing a single umbilic point is the famous conjecture of Constantin Caratheodory, and Theorem 15 gives the reason the Conjecture is true. Namely, were such a remarkable surface \(S\) to exist, the Riemann-Hilbert problem with boundary given by the normal lines \(\Sigma\) would be Fredholm regular and so have the property that the dimension of the space of parameterised holomorphic discs with boundary lying on it would be entirely determined by the number of umbilic points enclosed on \(S\).
\[\mathcal{I}=\mathrm{Dim\ Ker\ }\bar{\partial}=4i+2 \tag{12}\]
This property would also hold for a dense set of perturbations of \(S\) in an appropriate function space. To show that such a surface \(S\) cannot exist, one can seek to find violations of equation (12), in particular, a holomorphic disc which encloses a totally real disc on the boundary \(\Sigma\).
By equation (12), if the boundary encloses a totally real disc, then \(\mathcal{I}=2\). However, since the Mobius group acts on the space of parameterized holomorphic discs, the dimension of the space of unparameterized holomorphic discs is \(2-3=-1\). Thus, over an umbilic-free region of the remarkable surface \(S\) it should be impossible to solve the \(\bar{\partial}\)-problem.
The proof of the Caratheodory Conjecture in [35] follows from the existence of holomorphic discs with boundary enclosing umbilic-free regions, as established
by evolving to them using mean curvature flow of a spacelike surface in \(\mathbb{L}\), thus disproving equation (12).
At this point in time two thirds of the proof given in [35] has appeared in print, with the final part containing the boundary estimates for mean curvature flow currently under review.
In fact, the interior estimates required to prove long time existence and convergence hold for more general spacelike mean curvature flow with respect to indefinite metrics satisfying certain curvature conditions [32].
The final step of the proof of the Conjecture is the establishment of boundary estimates for mean curvature flow in \(\mathbb{L}\) and sufficient control to show that the flow weakly converges in an appropriate function space to a holomorphic disc. The boundary conditions used for mean curvature flow (a second order system) include a constant angle condition and an asymptotic holomorphicity condition.
The constant angle condition is defined between a pair of spacelike planes that intersect along a line and is hyperbolic in nature. The asymptotic holomorphicity condition ensures that the ultimate disc is holomorphic rather than just maximal.
The sizes of the constant hyperbolic angle and the added holomorphic twist are free parameters in the evolution and can be used to control the flowing surface. If one views it as a codimension two capillary problem, the effect of the parameter changes is to increase the friction at the boundary, stopping it from skating off the hemisphere, thus preserving strict parabolicity.
An analogous result in the rotationally symmetric case for mean curvature flow in the space of oriented lines with Dirichlet and Neumann boundary conditions shows that the evolving surface can be made to converge to a holomorphic disc - in this case to a family of holomorphic discs called the Bishop family [6] - or to a maximal surface, depending on the boundary condition imposed [28].
For the full flow one can then show that:
**Theorem 16**.: _[_35_]_ _Let \(S\) be a \(C^{3+\alpha}\) smooth oriented convex surface in \(\mathbb{R}^{3}\) without umbilic points and suppose that the Gauss image of \(S\) contains a closed hemisphere. Let \(\Sigma\subset\mathbb{L}\) be the oriented normal lines of \(S\) forming a Lagrangian surface in the space of oriented lines._
_Then \(\exists f:D\to\mathbb{L}\) with \(f\in C^{1+\alpha}_{loc}(D)\cap C^{0}(\overline{D})\) satisfying_
1. \(f\) _is holomorphic,_
2. \(f(\partial D)\subset\Sigma\)_._
This would conclude the proof of the Caratheodory Conjecture for \(C^{3+\alpha}\) smooth surfaces.
The appearance of Gauss hemispheres here is noteworthy, for this meets with a conjecture of Victor Toponogov that a complete convex plane must have an umbilic point, albeit at infinity [71]. Toponogov showed that such planes have hemispheres as Gauss image and established his conjecture under certain fall-off conditions at infinity.
In fact, the same reasoning as above that pits Fredholm regularity against mean curvature flow proves the Toponogov Conjecture:
**Theorem 17**.: _[_31_]_ _Every \(C^{3+\alpha}\)-smooth complete convex embedding of the plane \(P\), satisfies \(\inf_{P}|\kappa_{1}-\kappa_{2}|=0\)._
The proof follows from applying Theorem 16 in this case, while Fredholm regularity is established easily, as a putative counter-example is by assumption totally real (even at infinity).
Without the high degree of symmetry of the Euclidean group, one would not expect Fredholm regularity to hold and this obstructs the generalisation of the Caratheodory Conjecture to non-Euclidean ambient metrics. This turns out to be the case and the delicate nature of the problem is revealed:
**Theorem 18**.: _[_26_]_ _For all \(\epsilon>0\), there exists a smooth Riemannian metric \(g\) on \(\mathbb{R}^{3}\) and a smooth strictly convex 2-sphere \(S\subset\mathbb{R}^{3}\) such that_
* \(S\) _has a single umbilic point,_
* \(\|g-g_{0}\|^{2}\leq\epsilon\)_,_
_where \(\|.\|\) is the \(L_{2}\) norm on \(\mathbb{R}^{3}\) with respect to the flat metric \(g_{0}\)._
The proof here is constructive: the Euclidean metric is deformed while keeping the standard round 2-sphere fixed (although not round in the deformed metric) and one can essentially brush the principal foliation of the surface into any configuration one chooses by changing the ambient geometry.
Finally, establishing the local index bound \(i(p)\leq 1\) for any isolated umbilic point \(p\) has long been the preferred route to proving the Caratheodory Conjecture in the real analytic case [38][44]. The above methods can also be used to find a slightly weaker local index bound for isolated umbilics on smooth surfaces:
**Theorem 19**.: _[_34_]_ _The index of an isolated umbilic \(p\) on a \(C^{3,\alpha}\) surface in \(\mathbb{R}^{3}\) satisfies \(i(p)<2\)._
The proof follows from the extension of Theorem 15 to surfaces of higher genus by removing hyperbolic umbilic points and adding totally real cross-caps to the Lagrangian section. The existence of holomorphic discs over open hemispheres again contradicts Fredholm regularity and the local index bound follows.
Once again, the role of the Euclidean isometry group is paramount, and even a small perturbation of the ambient metric means that the index bound does not hold.
**Theorem 20**.: _[_26_]_ _For all \(\epsilon>0\) and \(k\in\mathbb{Z}/2\), there exists a smooth Riemannian metric \(g\) on \(\mathbb{R}^{3}\) and a smooth embedded surface \(S\subset\mathbb{R}^{3}\) such that_
* \(S\) _has an isolated umbilic point of index_ \(k\)_,_
* \(\|g-g_{0}\|^{2}\leq\epsilon\)_,_
_where \(\|.\|\) is the \(L_{2}\) norm on \(\mathbb{R}^{3}\) with respect to the flat metric \(g_{0}\)._
Finally, the local umbilic index bound \(i(p)\leq 1\) of Hamburger [38] for real analytic surfaces has recently been used to prove results on the zeros of certain holomorphic polynomials. In particular, a polynomial whose zero set is invariant under inversion in the unit circle is called _self-inversive_[7][47][57][66][72].
**Theorem 21**.: _[_29_]_ _Let \(P_{N}\) be a polynomial of degree \(N\) with self-inversive second derivative and suppose that none of the roots of \(P_{N}\) lies on the unit circle. Then
the number of roots (counted with multiplicity) of \(P_{N}\) inside the unit circle is less than or equal to \(\lfloor N/2\rfloor+1\)._
This result is in the spirit of a converse to the Gauss-Lucas theorem [51] in which the zeros of the first derivative of a polynomial are restricted by the zeros of the polynomial. Here, however, by methods of differential geometry, the locations of the zeros of the second derivative restrict the zeros of the polynomial - the first such application. It is also worth noting that the result is sharp.
The method of proof is to take a polynomial with self-inversive second derivative and to construct a real analytic strictly convex surface with an isolated umbilic point whose index is determined by the number of zeros inside the unit circle.
### Four Manifold Topology
The proof by Grigori Perelman of Thurston's Geometrization Conjecture [59][60][61] naturally raises the question as to whether closed 4-dimensional manifolds can be geometrized in some way. The approach in three dimensions, however, does not apply in higher dimensions and even basic things are harder.
For example, any finitely presented group can be the fundamental group of a smooth closed 4-manifold, while the fundamental group of a prime 3-manifold must be a quotient of the isometry group of one of the eight Thurston homogenous geometries [69], and so it is clear that new geometric paradigms are required.
To make matters worse, while in three dimensions there is no distinction between smooth, piecewise-linear and topological structures on closed manifolds, in higher dimension this may not be true. If one considers open manifolds, these problems are compounded further. In each dimension \(n\geq 3\) there are uncountably many _fake_\(\mathbb{R}^{n}\)'s - open topological manifolds that are homotopy equivalent to, but not homeomorphic to \(\mathbb{R}^{n}\)[12][24][54]. While many of these involve infinite constructions, an example of Barry Mazur in dimension four requires only the attachment of two thickened cells [53].
Four dimensions also has its share of peculiar problems that do not arise in higher dimensions. In particular, the Whitney trick, in which closed loops are contracted to a point across a given disc, plays a major role in many higher dimensional results, for example Stephen Smale's proof of the h-cobordism theorem [67]. The issue is that, while in dimensions five and greater a generic 2-disc is embedded, in dimension four a generic 2-disc is only immersed and will have self-intersections, making it unsuitable to contract loops across.
Against this array of formidable difficulties, the Disc Theorem of Michael Freedman [16] utilizes a doubly infinite codimension two construction to claim that there is a topological work-around for the Whitney trick. This result leads to the proof of the topological Poincare Conjecture in dimension four, as well as the classification of all simply connected closed topological 4-manifolds based almost entirely on their intersection form in the second homology.
Contradictions with Donaldson's ground-breaking work on smooth 4-manifolds [14] lead to extraordinary families of exotic manifolds (homeomorphic but not diffeomorphic) not seen in any other dimension. Since the work of John Milnor [55] it has been known that exotic differentiable structures in dimensions seven and above
exist, but only in finitely many families. According to the Disc Theorem exotic differentiable structures in dimension four occur in uncountable families - indeed, no 4-manifold is known to have only countably many distinct differentiable structures.
Both the original Disc Theorem [16] and subsequent attempts to complete it [3][17][18][27][39] have depended upon the iterative attachment of 1- and 2-handles or their generalizations, as one attempts to push unwanted codimension two intersections to infinity. The ultimate homeomorphism that is sought is shown to exist using Bing shrinking and what is called decomposition space theory [5].
One key aspect of these efforts is that they all involve codimension two constructions - gluing in thickened 2-discs or more general surfaces into 4-manifolds. The work in this survey involves geometric paradigms associated with neutral metrics with which one can gain more control over these codimension two constructions.
Unlike Riemannian metrics which exist on all smooth manifolds, neutral metrics see the topology of the underlying manifold and can be used to express topological invariants. The next section considers closed 4-manifolds and illustrates the manner in which the existence of certain neutral metrics restricts the topology of the underlying 4-manifold. These are modest steps in the direction of understanding a tiny part of the wild world of 4-manifolds in which there is a splitting \(4=2+2\).
### Closed Neutral 4-manifolds
The simplest topological invariant of a closed 4-manifold \(M\) is its _Euler number_\(\chi(M)\). Let \(H_{n}(M,\mathbb{R})\) be the \(n^{th}\) homology group of \(M\) with real coefficients and \(b_{n}\) be the associated Betti numbers \(n=0,1,...,4\). For a closed connected 4-manifold we have \(b_{0}=b_{4}=1\), and \(b_{3}=b_{1}\) by Poincare duality and the Euler number is defined
\[\chi(M)=\sum_{n=0}^{4}(-1)^{n}\ \text{dim}\ H_{n}(M,\mathbb{R})=2-2b_{1}+b_{2}.\]
The Chern-Gauss-Bonnet Theorem states that one can express this geometrically as
\[\chi(M)=\frac{\epsilon}{32\pi^{2}}\int_{M}|W(g)|^{2}-2|Ric(g)|^{2}+\tfrac{2}{ 3}S^{2}\ d^{4}V_{g},\]
for any metric \(g\) of definite (\(\epsilon=1\)) or neutral signature (\(\epsilon=-1\)) [49].
On a closed 4-manifold there is a natural symmetric bilinear pairing on the integral second homology \(H_{2}(M,\mathbb{Z})\), called the _intersection form_. It is given by the signed count of transverse intersection points between two surfaces representing the homology classes.
The intersection form can be diagonalised over \(\mathbb{R}\) and the number of positive and negative eigenvalues is denoted \(b_{+}\) and \(b_{-}\), respectively. Thus \(b_{2}=b_{+}+b_{-}\) and the _signature_\(\tau(M)=b_{+}-b_{-}\) is another topological invariant of \(M\).
The existence of a neutral metric on a closed 4-manifold is equivalent to the existence of a field of oriented tangent 2-planes on the manifold [52]. Moreover:
**Theorem 22**.: _[_41_]__[_48_]__[_52_]_ _Let \(M\) be a closed 4-manifold admitting a neutral metric. Then_
\[\chi(M)+\tau(M)=0\ \text{mod}\ 4\qquad\text{and}\qquad\chi(M)-\tau(M)=0\ \text{mod}\ 4. \tag{13}\]
_If \(M\) is simply connected, these conditions are sufficient for the existence of a neutral metric._
Thus, neither \(\mathbb{S}^{4}\) nor \(\mathbb{C}P^{2}\) admit a neutral metric, while the K3 manifold does.
Given a neutral metric \(g^{\prime}\) on \(M\), the Euler number and signature can be expressed in terms of curvature invariants by
\[\chi(M)=\frac{-1}{32\pi^{2}}\int_{M}|W^{+}|^{2}+|W^{-}|^{2}-2|Ric|^{2}+\tfrac{2} {3}S^{2}\;d^{4}V_{g}.\]
\[\tau(M)=b_{+}-b_{-}=\frac{1}{48\pi^{2}}\int_{M}|W^{+}|^{2}-|W^{-}|^{2}\;d^{4}V_ {g}.\]
where \(W^{\pm}\) is the Weyl curvature tensor split into its self-dual and anti-self-dual parts, \(Ric\) is the Ricci tensor and \(S\) is the scalar curvature of \(g^{\prime}\).
From these and Theorem 7, the following can be proven
**Theorem 23**.: _[_19_]_ _Let \((M,g^{\prime})\) be a closed, conformally flat, scalar flat, neutral 4-manifold. If \(g^{\prime}\) admits a parallel isometric paracomplex structure, then_
\[\tau(M)=0\qquad\qquad\text{and}\qquad\qquad\chi(M)\geq 0.\]
_If, moreover, the Ricci tensor of \(g^{\prime}\) has negative norm \(|Ric(g^{\prime})|^{2}\leq 0\), then \(M\) admits a flat Riemannian metric._
On the other hand, Theorem 7 can also be used on Riemannian Einstein 4-manifolds to find obstructions to parallel isometric paracomplex structures:
**Theorem 24**.: _[_19_]_ _Let \((M,g)\) be a closed Riemannian Einstein 4-manifold._
_If \(g\) admits a parallel isometric paracomplex structure, then \(\tau(M)=0\)._
The \(K3\) 4-manifold, as well as the 4-manifolds \(\mathbb{C}P^{2}\#k\overline{\mathbb{C}P}^{2}\) for \(k=3,5,7\), admit Riemannian Einstein metrics and isometric almost paracomplex structures, but, as a consequence of Theorem 24, these almost paracomplex structures cannot be parallel.
**Acknowledgements**:
Most of the work described in this paper was carried out in collaboration with Guillem Cobos, Nikos Georgiou and Wilhelm Klingenberg, with whom it has been a pleasure to learn. Thanks are due to Morgan Robson for assistance with the Figures. Any opinions expressed are entirely the author's.
|
2310.20438 | The Phase Transition Phenomenon of Shuffled Regression | We study the phase transition phenomenon inherent in the shuffled (permuted)
regression problem, which has found numerous applications in databases,
privacy, data analysis, etc. In this study, we aim to precisely identify the
locations of the phase transition points by leveraging techniques from message
passing (MP). In our analysis, we first transform the permutation recovery
problem into a probabilistic graphical model. We then leverage the analytical
tools rooted in the message passing (MP) algorithm and derive an equation to
track the convergence of the MP algorithm. By linking this equation to the
branching random walk process, we are able to characterize the impact of the
signal-to-noise-ratio ($\mathsf{snr}$) on the permutation recovery. Depending on
whether the signal is given or not, we separately investigate the oracle case
and the non-oracle case. The bottleneck in identifying the phase transition
regimes lies in deriving closed-form formulas for the corresponding critical
points, but only in rare scenarios can one obtain such precise expressions. To
tackle this technical challenge, this study proposes the Gaussian approximation
method, which allows us to obtain the closed-form formulas in almost all
scenarios. In the oracle case, our method can fairly accurately predict the
phase transition $\mathsf{snr}$. In the non-oracle case, our algorithm can predict the
maximum allowed number of permuted rows and uncover its dependency on the
sample number. | Hang Zhang, Ping Li | 2023-10-31T13:21:14Z | http://arxiv.org/abs/2310.20438v1 | # The Phase Transition Phenomenon of Shuffled Regression
###### Abstract
We study the phase transition phenomenon inherent in the shuffled (permuted) regression problem, which has found numerous applications in databases, privacy, data analysis, etc. For the permuted regression task: \(\mathbf{Y}=\mathbf{\Pi}^{\natural}\mathbf{X}\mathbf{B}^{\natural}\), the goal is to recover the permutation matrix \(\mathbf{\Pi}^{\natural}\) as well as the coefficient matrix \(\mathbf{B}^{\natural}\). It has been empirically observed in prior studies that when recovering \(\mathbf{\Pi}^{\natural}\), there exists a phase transition phenomenon: the error rate drops to zero rapidly once the parameters reach certain thresholds. In this study, we aim to precisely identify the locations of the phase transition points by leveraging techniques from _message passing_ (MP).
In our analysis, we first transform the permutation recovery problem into a probabilistic graphical model. We then leverage the analytical tools rooted in the message passing (MP) algorithm and derive an equation to track the convergence of the MP algorithm. By linking this equation to the branching random walk process, we are able to characterize the impact of the _signal-to-noise-ratio_ (snr) on the permutation recovery. Depending on whether the signal is given or not, we separately investigate the oracle case and the non-oracle case. The bottleneck in identifying the phase transition regimes lies in deriving closed-form formulas for the corresponding critical points, but only in rare scenarios can one obtain such precise expressions. To tackle this technical challenge, this study proposes the Gaussian approximation method, which allows us to obtain the closed-form formulas in almost all scenarios. In the oracle case, our method can fairly accurately predict the phase transition snr. In the non-oracle case, our algorithm can predict the maximum allowed number of permuted rows and uncover its dependency on the sample number.
Our numerical experiments reveal that the observed phase transition points are well aligned with our theoretical predictions. It is anticipated that our study will motivate exploiting MP algorithms (and the related techniques) as an effective tool for solving the permuted regression problems, which have found many applications in machine learning, privacy, databases, etc.
## 1 Introduction
In this paper, we consider the following permuted (shuffled) linear regression problem:
\[\mathbf{Y}=\boldsymbol{\Pi}^{\natural}\mathbf{X}\mathbf{B}^{\natural}+\sigma \mathbf{W}, \tag{1}\]
where \(\mathbf{Y}\in\mathbb{R}^{n\times m}\) denotes the matrix of observations, \(\boldsymbol{\Pi}^{\natural}\in\{0,1\}^{n\times n}\) is the permutation matrix, \(\mathbf{X}\in\mathbb{R}^{n\times p}\) is the design matrix, \(\mathbf{B}^{\natural}\in\mathbb{R}^{p\times m}\) is the matrix of signals (regressors), \(\mathbf{W}\in\mathbb{R}^{n\times m}\) denotes the additive noise matrix (with unit variance), and \(\sigma^{2}\) is the noise variance. The task is to recover both the signal matrix \(\mathbf{B}^{\natural}\) and the permutation matrix \(\boldsymbol{\Pi}^{\natural}\). The research on this challenging permuted regression problem dates back at least to the 1970s under the name "broken sample problem" (DeGroot et al., 1971; Goel, 1975; DeGroot and Goel, 1976, 1980; Bai and Hsing, 2005). Recent years have witnessed a revival of this problem due to its broad spectrum of applications in (e.g.,) privacy protection, data integration, etc. (Unnikrishnan et al., 2015; Pananjady et al., 2018; Slawski and Ben-David, 2019; Pananjady et al., 2017; Slawski et al., 2020; Zhang and Li, 2020).
Specifically, this paper will focus on studying the "phase transition" phenomenon in recovering the whole permutation matrix \(\boldsymbol{\Pi}^{\natural}\): the error rate for the permutation recovery sharply drops to zero once the parameters reach certain thresholds. In particular, we leverage techniques in the _message passing_ (MP) algorithm literature to identify the precise positions of the phase transition thresholds. The bottleneck in identifying the phase transition regimes lies in deriving closed-form formulas for the corresponding critical points. This is a highly challenging task because only in rare scenarios can one obtain such precise expressions. To tackle the difficulty, we propose the Gaussian approximation method which allows us to obtain the closed-form formula in almost all scenarios. We should mention that, in previous studies (Slawski et al., 2020; Slawski and Ben-David, 2019; Pananjady et al., 2017; Zhang et al., 2022; Zhang and Li, 2020), this phase transition phenomenon was empirically observed.
### Related work
The problem we study in this paper simultaneously touches two distinct areas of research: (A) permutation recovery, and (B) message passing (MP). In the literature of permuted linear regression, essentially all existing works used the same setting (1). Pananjady et al. (2018); Slawski and Ben-David (2019) consider the single observation model (i.e., \(m=1\)) and prove that the _signal-to-noise-ratio_ (\(\mathsf{snr}\)) for the correct permutation recovery is \(\mathbb{O}_{\mathrm{P}}\left(n^{c}\right)\), where \(c>0\) is some positive constant. Slawski et al. (2020); Zhang and Li (2020); Zhang et al. (2022) investigate the multiple observations model (i.e., \(m>1\)) and suggest that the \(\mathsf{snr}\) requirement can be significantly decreased, from \(\mathbb{O}_{\mathrm{P}}\left(n^{c}\right)\) to \(\mathbb{O}_{\mathrm{P}}\left(n^{c/m}\right)\). In particular, Zhang and Li (2020) develop an estimator which we will leverage and analyze for studying the phase transition phenomenon. Our analysis leads to the precise identification of the locations of the phase transition thresholds.
Another line of related research comes from the field of statistical physics. For example, using the replica method, Mezard and Parisi (1985, 1986) study the _linear assignment problem_ (LAP), i.e., \(\min_{\boldsymbol{\Pi}}\sum_{i,j}\boldsymbol{\Pi}_{ij}\mathbf{E}_{ij}\) where \(\boldsymbol{\Pi}\) denotes a permutation matrix and \(\mathbf{E}_{ij}\) is an i.i.d random variable uniformly distributed in \([0,1]\). Martin et al. (2005) then generalize LAP to multi-index matching and present an investigation based on the MP algorithm. Recently, Caracciolo et al. (2017); Malatesta et al. (2019) extend the distribution of \(\mathbf{E}_{ij}\) to a broader class. However, all the above works exhibit no phase transition. In Chertkov et al. (2010), this method is extended to the particle tracking problem, where a phase transition phenomenon is observed. Later, Semerjian et al. (2020) modify it to fit the graph matching problem, which paves the way for our work in studying the permuted linear regression problem.
### Our contributions
We propose the first framework to identify the precise locations of phase transition thresholds associated with permuted linear regression. In the oracle case where \(\mathbf{B}^{\natural}\) is known, our scheme is able to determine the phase transition \(\mathsf{snr}\). In the non-oracle case where \(\mathbf{B}^{\natural}\) is not given, our method will also predict the maximum allowed number of permuted rows and uncover its dependence on the ratio \(p/n\). In our analysis, we identify the precise positions of the phase transition points in the large-system limit, e.g., \(n\), \(m\), \(p\) all approach to infinity with \(m/n\to\tau_{m}\), \(p/n\to\tau_{p}\). Interestingly, numerical results well match predictions even when \(n,m,p\) are not large. There is one additional contribution. In the graphical model based on the linear assignment problem, we can modify the graph and design a scheme for partial recovery, which is a separate contribution and may be further analyzed for future study.
Here, we would also like to briefly mention the technical challenges. Compared with the previous works (Mezard and Parisi, 1986, 1987; Parisi and Ratieville, 2002; Linusson and Wastlund, 2004; Mezard and Montanari, 2009; Talagrand, 2010; Semerjian et al., 2020), where the edge weights are relatively simple, our edge weights usually involve high-order interactions across Gaussian random variables and are densely correlated. To tackle this issue, our proposed approximation method to compute the phase transition thresholds consists of three parts: 1) performing Gaussian approximation; 2) modifying the leave-one-out technique; and 3) performing size correction. A detailed explanation can be found in Section B. Hopefully, our approximation method will serve independent technical interests for researchers in the machine learning community.
### Notations
In this paper, \(a\stackrel{{\mathrm{a.s.}}}{{\longrightarrow}}b\) denotes that \(a\) converges almost surely to \(b\). We denote \(f(n)\simeq g(n)\) when \(\lim_{n\to\infty}f(n)/g(n)=1\), and \(f(n)=\mathbb{O}_{\mathrm{P}}\left(g(n)\right)\) if the sequence \(f(n)/g(n)\) is bounded in probability, and \(f(n)=o_{\mathrm{P}}\left(g(n)\right)\) if \(f(n)/g(n)\) converges to zero in probability. The inner product between two vectors (resp. matrices) is denoted as \(\left\langle\cdot,\cdot\right\rangle\). For two distributions \(d_{1}\) and \(d_{2}\), we write \(d_{1}\cong d_{2}\) if they are equal up to normalization. Moreover, \(\mathcal{P}_{n}\) denotes the set of all possible permutation matrices: \(\mathcal{P}_{n}\triangleq\{\mathbf{\Pi}\in\{0,1\}^{n\times n},\sum_{i}\mathbf{\Pi}_{ij}=1,\sum_{j}\mathbf{\Pi}_{ij}=1\}\). The _signal-to-noise-ratio_ is \(\mathsf{snr}=\frac{\left\|\mathbf{B}^{\natural}\right\|_{\mathrm{F}}^{2}}{m\cdot\sigma^{2}}\), where \(\left\|\cdot\right\|_{\mathrm{F}}\) is the Frobenius norm and \(\sigma^{2}\) is the variance of the sensing noise.
## 2 Permutation Recovery Using the Message Passing Algorithm
Inspired by Mezard and Montanari (2009); Chertkov et al. (2010); Semerjian et al. (2020), we leverage tools from statistical physics to identify the locations of the phase transition thresholds. We start this section with a brief review of the _linear assignment problem_ (LAP), which reads as
\[\widehat{\mathbf{\Pi}}=\ \operatorname*{argmin}_{\mathbf{\Pi}\in\mathcal{P}_{n }}\left\langle\mathbf{\Pi},\mathbf{E}\right\rangle, \tag{2}\]
where \(\mathbf{E}\in\mathbb{R}^{n\times n}\) is a fixed matrix and \(\mathcal{P}_{n}\) denotes the set of all possible permutation matrices. We follow the approach in Mezard and Montanari (2009); Semerjian et al. (2020) and introduce a probability measure over the permutation matrix \(\mathbf{\Pi}\), which is written as
\[\mu(\mathbf{\Pi})=\ \frac{1}{Z}\prod_{i}\mathbbm{1}\bigg{(}\sum_{j}\mathbf{\Pi}_{ij}=1\bigg{)}\prod_{j}\mathbbm{1}\bigg{(}\sum_{i}\mathbf{\Pi}_{ij}=1\bigg{)}\times\exp\bigg{(}-\beta\sum_{i,j}\mathbf{\Pi}_{ij}\mathbf{E}_{ij}\bigg{)}, \tag{3}\]
where \(\mathbbm{1}(\cdot)\) is the indicator function, \(Z\) is the normalization constant of the probability measure \(\mu(\mathbf{\Pi})\), and \(\beta>0\) is an auxiliary parameter.
It is easy to verify the following two properties:
* the ML estimator in (2) can be rewritten as \(\widehat{\boldsymbol{\Pi}}=\operatorname*{argmax}_{\boldsymbol{\Pi}}\mu( \boldsymbol{\Pi})\);
* the probability measure \(\mu(\boldsymbol{\Pi})\) concentrates on \(\widehat{\boldsymbol{\Pi}}\) as \(\beta\to\infty\).
In the next three subsections, we study the impact of \(\{\mathbf{E}_{ij}\}\) on the reconstructed permutation \(\widehat{\boldsymbol{\Pi}}\) with the _message passing_ (MP) algorithm. First, we associate a probabilistic graphical model with the probability measure defined in (3). Then, we rewrite the solution of (2) in the language of the MP algorithm. Finally, we derive an equation (7) to track the convergence of the MP algorithm. By exploiting the relation of (7) to the _branching random walk_ (BRW) process, we identify the phase transition points corresponding to the LAP in (2).
### Construction of the graphical model
Firstly, we construct the factor graph associated with the probability measure in (3). Adopting the same strategy as in Chapter 16 of Mezard and Montanari (2009), we conduct the following operations:
* associating with each variable \(\boldsymbol{\Pi}_{ij}\) a variable node \(v_{ij}\);
* connecting the variable node \(v_{ij}\) to a function node representing the term \(e^{-\beta\boldsymbol{\Pi}_{ij}\mathbf{E}_{ij}}\);
* linking each constraint \(\sum_{i}\boldsymbol{\Pi}_{ij}=1\) to a function node, and similarly for the constraint \(\sum_{j}\boldsymbol{\Pi}_{ij}=1\).
A graphical representation is available in Figure 1.
Now we briefly review the MP algorithm. Informally speaking, MP is a local algorithm for computing the marginal probabilities over a graphical model. In each iteration, a variable node \(v\) transmits a message to an incident function node \(f\) by multiplying all incoming messages except the message along the edge \((v,f)\). A function node \(f\) transmits a message to an incident variable node \(v\) by computing a weighted sum of all incoming messages except the message along the edge \((f,v)\). For a detailed introduction to MP, we refer readers to Kschischang et al. (2001), Chapter 16 in MacKay et al. (2003), and Chapter 14 in Mezard and Montanari (2009).
Figure 1: The constructed graphical model. The circle icons represent the variable nodes and the square icons represent the function nodes: a blue square for the constraint on the rows of \(\boldsymbol{\Pi}\), a green square for the constraint on the columns of \(\boldsymbol{\Pi}\), and a red square for the function \(e^{-\beta\pi\mathbf{E}_{ij}}\).
It is known that MP obtains the exact marginals for singly connected graphical models (Mezard and Montanari, 2009). For other types of graphs, however, whether MP can obtain the exact solution remains an open problem (Cantwell and Newman, 2019; Kirkley et al., 2021). At the same time, there is ample numerical evidence that MP yields meaningful results for graphs with loops; particular examples include applications in coding theory (Chung, 2000; Richardson and Urbanke, 2001, 2008) and the LAP (which happens to be our case) (Mezard and Montanari, 2009; Chertkov et al., 2010; Caracciolo et al., 2017; Malatesta et al., 2019; Semerjian et al., 2020).
### The message passing (MP) algorithm
Next, we perform permutation recovery via MP. The following derivation follows the standard procedure, which can be found in previous works (Mezard and Montanari, 2009; Semerjian et al., 2020). We denote the message flow from the function node \(i^{\mathsf{L}}\) to the variable node \((i^{\mathsf{L}},j^{\mathsf{R}})\) as \(\widehat{m}_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(\cdot)\) and that from the variable node \((i^{\mathsf{L}},j^{\mathsf{R}})\) to the node \(i^{\mathsf{L}}\) as \(m_{(i^{\mathsf{L}},j^{\mathsf{R}})\to i^{\mathsf{L}}}(\cdot)\). Similarly, we define \(\widehat{m}_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(\cdot)\) and \(m_{(i^{\mathsf{L}},j^{\mathsf{R}})\to j^{\mathsf{R}}}(\cdot)\) as the message flows transmitted between the function node \(j^{\mathsf{R}}\) and the variable node \(\left(i^{\mathsf{L}},j^{\mathsf{R}}\right)\). Here the superscripts \(\mathsf{L}\) and \(\mathsf{R}\) indicate the positions of the nodes (left and right, respectively). Roughly speaking, these transmitted messages can be viewed as (unnormalized) conditional probabilities \(\mathbb{P}(\mathbf{\Pi}_{ij}=\pi\mid\cdot)\), \(\pi\in\{0,1\}\), with the joint PDF defined in (3). The message transmission process iteratively computes these conditional probabilities.
First, we consider the message flows transmitted between the functional node \(i^{\mathsf{L}}\) and the variable node \(\left(i^{\mathsf{L}},j^{\mathsf{R}}\right)\), which are written as
\[m_{(i^{\mathsf{L}},j^{\mathsf{R}})\to i^{\mathsf{L}}}(\pi)\cong\widehat{m}_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(\pi)\,e^{-\beta\pi\mathbf{E}_{i^{\mathsf{L}},j^{\mathsf{R}}}},\]
\[\widehat{m}_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(\pi)\cong\sum_{\{\pi_{i^{\mathsf{L}},k^{\mathsf{R}}}\}_{k^{\mathsf{R}}\neq j^{\mathsf{R}}}}\prod_{k^{\mathsf{R}}\neq j^{\mathsf{R}}}\widehat{m}_{k^{\mathsf{R}}\to(i^{\mathsf{L}},k^{\mathsf{R}})}(\pi_{i^{\mathsf{L}},k^{\mathsf{R}}})\cdot e^{-\beta\pi_{i^{\mathsf{L}},k^{\mathsf{R}}}\mathbf{E}_{i^{\mathsf{L}},k^{\mathsf{R}}}}\,\mathbbm{1}\bigg{(}\pi+\sum_{k^{\mathsf{R}}\neq j^{\mathsf{R}}}\pi_{i^{\mathsf{L}},k^{\mathsf{R}}}=1\bigg{)}, \tag{4}\]
where \(\pi\in\{0,1\}\) is a binary value. Similarly, we can write the message flows between the function node \(j^{\mathsf{R}}\) and the variable node \(\left(i^{\mathsf{L}},j^{\mathsf{R}}\right)\), which are denoted as \(m_{(i^{\mathsf{L}},j^{\mathsf{R}})\to j^{\mathsf{R}}}(\pi)\) and \(\widehat{m}_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(\pi)\), respectively. Parametrizing the messages by their log-ratios, we define
\[h_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}\triangleq\frac{1}{\beta} \log\frac{\widehat{m}_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(1)}{ \widehat{m}_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(0)},\qquad h_{ j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}\triangleq\frac{1}{\beta} \log\frac{\widehat{m}_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(1)}{ \widehat{m}_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}(0)}.\]
Denote \(\zeta_{i^{\mathsf{L}},j^{\mathsf{R}}}\triangleq h_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}+h_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}-\mathbf{E}_{i^{\mathsf{L}},j^{\mathsf{R}}}\). We select the edge \(\left(i^{\mathsf{L}},j^{\mathsf{R}}\right)\) according to the probability \(m_{(i^{\mathsf{L}},j^{\mathsf{R}})}(\pi)\triangleq\frac{\exp(\pi\cdot\beta\zeta_{i^{\mathsf{L}},j^{\mathsf{R}}})}{1+\exp(\beta\zeta_{i^{\mathsf{L}},j^{\mathsf{R}}})},\ \pi\in\{0,1\}\). Provided \(m_{(i^{\mathsf{L}},j^{\mathsf{R}})}(1)>m_{(i^{\mathsf{L}},j^{\mathsf{R}})}(0)\), or equivalently,
\[\zeta_{i^{\mathsf{L}},j^{\mathsf{R}}}>0, \tag{5}\]
we pick \(\widehat{\pi}(i^{\mathsf{L}})=j^{\mathsf{R}}\); otherwise, we have \(\widehat{\pi}(i^{\mathsf{L}})\neq j^{\mathsf{R}}\). Since \(\mu(\mathbf{\Pi})\) concentrates on \(\widehat{\mathbf{\Pi}}\) when \(\beta\) is sufficiently large, we can rewrite the MP update equation as
\[h_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}=\min_{k^{\mathsf{R}}\neq j ^{\mathsf{R}}}\mathbf{E}_{i^{\mathsf{L}},k^{\mathsf{R}}}-h_{k^{\mathsf{R}}\to(i^ {\mathsf{L}},k^{\mathsf{R}})},\qquad h_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{ \mathsf{R}})}=\min_{k^{\mathsf{L}}\neq i^{\mathsf{L}}}\mathbf{E}_{k^{\mathsf{L}},j^{\mathsf{R}}}-h_{k^{\mathsf{L}}\to(k^{\mathsf{L}},j^{\mathsf{R}})}, \tag{6}\]
which is attained by letting \(\beta\to\infty\).
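For concreteness, the min-sum update (6) admits a direct implementation. The following is a minimal sketch (the function name and the vectorization via a two-smallest trick are ours, not part of the derivation):

```
import numpy as np

def mp_iteration(E, h_L, h_R):
    """One synchronous sweep of the min-sum update (6).

    E[i, j] is the weight of edge (i^L, j^R); h_L[i, j] stores
    h_{i^L -> (i^L, j^R)} and h_R[i, j] stores h_{j^R -> (i^L, j^R)}.
    """
    n = E.shape[0]
    new_h_L, new_h_R = np.empty_like(E), np.empty_like(E)
    cand_L = E - h_R  # candidates E_{i,k} - h_{k^R -> (i^L, k^R)}
    for i in range(n):
        order = np.argsort(cand_L[i])
        m1, m2 = cand_L[i, order[0]], cand_L[i, order[1]]
        # min over k != j: use the second-smallest where j is the argmin
        new_h_L[i] = np.where(np.arange(n) == order[0], m2, m1)
    cand_R = E - h_L  # candidates E_{k,j} - h_{k^L -> (k^L, j^R)}
    for j in range(n):
        order = np.argsort(cand_R[:, j])
        m1, m2 = cand_R[order[0], j], cand_R[order[1], j]
        new_h_R[:, j] = np.where(np.arange(n) == order[0], m2, m1)
    return new_h_L, new_h_R
```

Starting from, e.g., all-zero messages, one iterates mp_iteration until (approximate) convergence and then applies the decision rule (5) row-wise to read off \(\widehat{\pi}\).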
### Identification of the phase transition threshold
To identify the phase transition phenomenon inherent in the MP update equation (6), we follow the strategy in Semerjian et al. (2020) and divide all edges \(\left(i^{\mathsf{L}},j^{\mathsf{R}}\right)\) into two categories according to whether the edge corresponds to the ground-truth permutation matrix \(\mathbf{\Pi}^{\natural}\) or not. Within each category, we assume that the edge weights and the message flows along them can be represented by independent and identically distributed random variables.
For an edge \((i^{\mathsf{L}},\pi^{\natural}(i^{\mathsf{L}}))\) corresponding to the ground-truth correspondence, we denote the random variable associated with the weight \(\mathbf{E}_{i^{\mathsf{L}},\pi^{\natural}(i^{\mathsf{L}})}\) by \(\Omega\). The random variable for the message flow along this edge is denoted \(H\) (for both \(h_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}\) and \(h_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}\)). For the remaining edges \((i^{\mathsf{L}},j^{\mathsf{R}})\) (\(j^{\mathsf{R}}\neq\pi^{\natural}(i^{\mathsf{L}})\)), we define the corresponding random variables for the edge weight and the message flow as \(\widehat{\Omega}\) and \(\widehat{H}\), respectively. Then, we can rewrite (6) as
\[\widehat{H}^{(t+1)}=\min\left(\Omega-H^{(t)},H^{{}^{\prime}(t)}\right),\ \ \ \ H^{(t+1)}=\min_{1\leqslant i\leqslant n-1}\widehat{\Omega}_{i}-\widehat{H}_{i}^{(t)}, \tag{7}\]
where \((\cdot)^{(t)}\) denotes the update in the \(t\)-th iteration, \(H^{{}^{\prime}}\) is an independent copy of \(H\), and \(\{\widehat{H}_{i}^{(t)}\}_{1\leqslant i\leqslant n-1}\) and \(\{\widehat{\Omega}_{i}\}_{1\leqslant i\leqslant n-1}\) denote i.i.d. copies of the random variables \(\widehat{H}^{(t)}\) and \(\widehat{\Omega}\). Equation (7) can be viewed as an analogue of _density evolution_ and _state evolution_, which are used to analyze the convergence of the message passing and approximate message passing algorithms, respectively (Chung, 2000; Richardson and Urbanke, 2001, 2008; Donoho et al., 2009; Maleki, 2010; Bayati and Montanari, 2011; Rangan, 2011).
**Remark 1**.: _We conjecture that the distributional difference in the edge weights is a necessary component for capturing the phase transition. On the one hand, according to Mezard and Parisi (1986, 1987); Parisi and Ratieville (2002); Linusson and Wastlund (2004); Mezard and Montanari (2009); Talagrand (2010), there is no phase transition phenomenon in the LAP if the edge weights \(\mathbf{E}_{ij}\) are assumed to be i.i.d. uniformly distributed in \([0,1]\). On the other hand, Semerjian et al. (2020) show a phase transition phenomenon when the weights \(\mathbf{E}_{ij}\) follow different distributions on the edges \(\left(i^{\mathsf{L}},\pi^{\natural}(i^{\mathsf{L}})\right)\) associated with the ground-truth correspondence and on the remaining edges._
**Relation to _branching random walk_ (BRW) process.** Conditional on the event that the permutation can be perfectly reconstructed, i.e., \(H+H^{{}^{\prime}}>\Omega\) as in (5), we can simplify (7) as
\[H^{(t+1)}=\min_{1\leqslant i\leqslant n-1}H_{i}^{(t)}+\Xi_{i}, \tag{8}\]
where \(\Xi\triangleq\widehat{\Omega}-\Omega\), and \(\{H_{i}^{(t)}\}_{1\leqslant i\leqslant n-1}\) and \(\{\Xi_{i}\}_{1\leqslant i\leqslant n-1}\) denote i.i.d. copies of the random variables \(H^{(t)}\) and \(\Xi\).
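A standard way to track the distribution of \(H\) under recursions such as (8) is population dynamics: maintain a finite pool of samples and apply the recursion to it. A minimal sketch (all names and default values are ours):

```
import numpy as np

def population_dynamics(sample_xi, n, pop=10_000, iters=50, seed=0):
    """Track the distribution of H under (8): H <- min_i (H_i + Xi_i).

    sample_xi(shape, rng) must return i.i.d. samples of Xi of that shape.
    """
    rng = np.random.default_rng(seed)
    H = np.zeros(pop)
    for _ in range(iters):
        idx = rng.integers(pop, size=(pop, n - 1))  # resample copies of H
        H = (H[idx] + sample_xi((pop, n - 1), rng)).min(axis=1)
    return H
```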
Adopting the same viewpoint as Semerjian et al. (2020), we treat (8) as a _branching random walk_ (BRW) process, which enjoys the following property.
**Theorem 1** (Hammersley (1974); Kingman (1975); Semerjian et al. (2020)).: _Consider the recursive distributional equation \(K^{(t+1)}=\min_{1\leqslant i\leqslant n}K_{i}^{(t)}+\Xi_{i}\), where \(K_{i}^{(t)}\) and \(\Xi_{i}\) are i.i.d. copies of the random variables \(K^{(t)}\) and \(\Xi\). Conditional on the event that \(\lim_{t\to\infty}K^{(t)}\neq\infty\), we have \(\frac{K^{(t+1)}}{t}\xrightarrow{\text{a.s.}}-\inf_{\theta>0}\frac{1}{\theta}\log\left[\sum_{i=1}^{n}\mathbb{E}e^{-\theta\Xi_{i}}\right]\)._
With Theorem 1, we can compute the phase transition point for correct (full) permutation recovery, i.e., \(H+H^{{}^{\prime}}>\Omega\), by setting \(\inf_{\theta>0}\frac{1}{\theta}\log\left[\sum_{i=1}^{n}\mathbb{E}e^{-\theta\Xi_{i}}\right]=0\), since otherwise the condition in (5) would be violated. In practice, directly computing \(\inf_{\theta>0}\frac{1}{\theta}\log\left[\sum_{i=1}^{n}\mathbb{E}e^{-\theta\Xi_{i}}\right]\) is only possible in limited scenarios. In the next section, we propose an approximate computation method for the phase transition points, which covers a broader class of scenarios.
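When \(\mathbb{E}e^{-\theta\Xi}\) has no closed form but \(\Xi\) can be sampled, the quantity in Theorem 1 can also be estimated by Monte Carlo. A minimal sketch (the function name is ours):

```
import numpy as np
from scipy.optimize import minimize_scalar

def brw_velocity(xi_samples, n):
    """Estimate inf_{theta>0} (1/theta) * log(n * E[exp(-theta * Xi)])."""
    def obj(theta):
        # log-mean-exp of -theta * Xi for numerical stability
        lme = np.logaddexp.reduce(-theta * xi_samples) - np.log(len(xi_samples))
        return (np.log(n) + lme) / theta
    return minimize_scalar(obj, bounds=(1e-6, 10.0), method="bounded").fun
```

The zero crossing of this estimate, as the model parameters vary, locates the predicted transition.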
**Remark 2**.: _This paper considers the phase transition phenomenon w.r.t. the full permutation recovery. Informally speaking, this can be partly deduced from (7) and (8)._
_Here, we regard the message flows \(h_{i^{\mathsf{L}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}\) and \(h_{j^{\mathsf{R}}\to(i^{\mathsf{L}},j^{\mathsf{R}})}\) as i.i.d. samples from certain distributions (represented by the random variable \(H\)). When studying the evolution of the random variable \(H\), we track the behavior of all message flows. Hence, if an arbitrary sample of \(H\) yields correct recovery, the correspondence between all pairs is correct. Conversely, there exist some pairs with wrong correspondence if \(H\) leads to incorrect recovery. This explains why the phase transition phenomenon exists._
## 3 Analysis of the Phase Transition Points
Recall that, in this paper, we consider the following linear regression problem with permuted labels
\[\mathbf{Y}=\boldsymbol{\Pi}^{\natural}\mathbf{X}\mathbf{B}^{\natural}+\sigma \mathbf{W},\]
where \(\mathbf{Y}\in\mathbb{R}^{n\times m}\) represents the matrix of observations, \(\boldsymbol{\Pi}^{\natural}\in\mathcal{P}_{n}\) denotes the permutation matrix to be reconstructed, \(\mathbf{X}\in\mathbb{R}^{n\times p}\) is the sensing matrix whose entries \(\mathbf{X}_{ij}\) are i.i.d. standard normal random variables, \(\mathbf{B}^{\natural}\in\mathbb{R}^{p\times m}\) is the matrix of signals, and \(\mathbf{W}\in\mathbb{R}^{n\times m}\) represents the additive noise matrix whose entries \(\mathbf{W}_{ij}\) are i.i.d. standard normal random variables. In addition, we denote by \(h\) the number of permuted rows of the permutation matrix \(\boldsymbol{\Pi}^{\natural}\).
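For concreteness, a synthetic instance of this model can be generated as follows (a minimal sketch; all names are ours, and the construction permutes approximately \(h\) rows since a uniform shuffle of the selected indices may leave a few of them fixed):

```
import numpy as np

def make_instance(n, p, m, h, sigma, seed=0):
    """Sample (Y, X, B, Pi) from the model Y = Pi X B + sigma W."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    B = rng.standard_normal((p, m))
    W = rng.standard_normal((n, m))
    perm = np.arange(n)
    idx = rng.choice(n, size=h, replace=False)
    perm[idx] = rng.permutation(idx)  # shuffle only the chosen rows
    Pi = np.eye(n)[perm]              # row i of Y is row perm[i] of X B
    Y = Pi @ (X @ B) + sigma * W
    return Y, X, B, Pi
```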
In this work, we focus on studying the "phase transition" phenomenon in recovering \(\boldsymbol{\Pi}^{\natural}\) from the pair \((\mathbf{Y},\mathbf{X})\): the error rate for permutation recovery sharply drops to zero once certain parameters reach their thresholds. In particular, our analysis will identify the precise positions of the phase transition points in the large-system limit, i.e., \(n\), \(m\), \(p\), and \(h\) all approach infinity with \(m/n\rightarrow\tau_{m}\), \(p/n\rightarrow\tau_{p}\), \(h/n\rightarrow\tau_{h}\). We will separately study the phase transition phenomenon in 1) the oracle case where \(\mathbf{B}^{\natural}\) is given as a prior, and 2) the non-oracle case where \(\mathbf{B}^{\natural}\) is unknown.
In this section, we consider the oracle scenario as a warm-up example. To reconstruct the permutation matrix \(\boldsymbol{\Pi}^{\natural}\), we adopt the following _maximum-likelihood_ (ML) estimator:
\[\widehat{\boldsymbol{\Pi}}^{\mathsf{oracle}}=\operatorname{argmin}_{ \boldsymbol{\Pi}}\left\langle\boldsymbol{\Pi},-\mathbf{Y}\mathbf{B}^{\natural \top}\mathbf{X}^{\top}\right\rangle,\ \ \text{s.t.}\ \ \sum_{i}\boldsymbol{\Pi}_{ij}=1,\sum_{j}\boldsymbol{\Pi}_{ij}=1, \boldsymbol{\Pi}\in\left\{0,1\right\}^{n\times n}. \tag{9}\]
Denoting \(\mathbf{E}_{ij}^{\mathsf{oracle}}\triangleq-\mathbf{X}_{\pi^{\natural}(i)}^{\top}\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\mathbf{X}_{j}-\sigma\mathbf{W}_{i}^{\top}\mathbf{B}^{\natural\top}\mathbf{X}_{j}\) (\(1\leqslant i,j\leqslant n\)), we can transform the objective function in (9) into the canonical form of the LAP, i.e., \(\sum_{i,j}\boldsymbol{\Pi}_{ij}\mathbf{E}_{ij}^{\mathsf{oracle}}\).
### The phase transition threshold for the oracle case
In the oracle case where \(\mathbf{B}^{\natural}\) is known, we define the following random variable \(\Xi\):
\[\Xi=\boldsymbol{x}^{\top}\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\left(\boldsymbol{x}-\boldsymbol{y}\right)+\sigma\boldsymbol{w}^{\top}\mathbf{B}^{\natural\top}\left(\boldsymbol{x}-\boldsymbol{y}\right), \tag{10}\]
where \(\boldsymbol{x}\), \(\boldsymbol{y}\), and \(\boldsymbol{w}\) are independent, with \(\boldsymbol{x},\boldsymbol{y}\sim\mathsf{N}(\mathbf{0},\mathbf{I}_{p\times p})\) and \(\boldsymbol{w}\sim\mathsf{N}(\mathbf{0},\mathbf{I}_{m\times m})\).
Recalling Theorem 1, we predict the phase transition point by letting
\[\inf_{\theta>0}\nicefrac{{1}}{{\theta}}\cdot\log\left(\sum_{i=1}^{n}\mathbb{E}e^{- \theta\Xi_{i}}\right)=\inf_{\theta>0}\nicefrac{{1}}{{\theta}}\cdot\left(\log n+ \log\mathbb{E}e^{-\theta\Xi}\right)=0. \tag{11}\]
The computation procedure consists of two stages:
* **Stage I.** We compute the optimal \(\theta_{*}\), which is written as \(\theta_{*}=\operatorname*{argmin}_{\theta>0}\nicefrac{{1}}{{\theta}}\cdot\left(\log n+\log\mathbb{E}e^{-\theta\Xi}\right).\)
* **Stage II.** We plug the optimal \(\theta_{*}\) into (11) and obtain the phase transition \(\mathsf{snr}\) accordingly.
The computation details are as follows.
Stage I: Determine \(\theta_{*}\). The key to determining \(\theta_{*}\) lies in the computation of \(\mathbb{E}e^{-\theta\Xi}\), which is summarized in the following proposition.
**Proposition 1**.: _For the random variable \(\Xi\) defined in (10), we can write its expectation as_
\[\mathbb{E}e^{-\theta\Xi}=\prod_{i=1}^{\operatorname*{rank}(\mathbf{B}^{ \natural})}\left[1+2\theta\lambda_{i}^{2}-\theta^{2}\lambda_{i}^{2}\left( \lambda_{i}^{2}+2\sigma^{2}\right)\right]^{-\frac{1}{2}}, \tag{12}\]
_provided that_
\[\theta^{2}\sigma^{2}\lambda_{i}^{2}<1\quad\text{and}\quad\theta^{2}\lambda_{i}^{2}\left(\lambda_{i}^{2}+2\sigma^{2}\right)\leq 1+2\theta\lambda_{i}^{2} \tag{13}\]
_hold for all singular values \(\lambda_{i}\) of \(\mathbf{B}^{\natural}\), \(1\leq i\leq\operatorname*{rank}(\mathbf{B}^{\natural})\)._
Proof.: Denote the singular values of \(\mathbf{B}^{\natural}\) as \(\left\{\lambda_{i}\right\}_{i=1}^{\operatorname*{rank}(\mathbf{B}^{\natural})}\). Exploiting the rotation invariance of Gaussian random variables, \(\Xi\) is identically distributed as
\[\Xi=\sum_{i=1}^{\operatorname*{rank}(\mathbf{B}^{\natural})}\lambda_{i}^{2}x_ {i}\left(x_{i}-y_{i}\right)+\sigma\sum_{i=1}^{\operatorname*{rank}(\mathbf{B}^ {\natural})}\lambda_{i}w_{i}\left(x_{i}-y_{i}\right).\]
Due to the independence across \(\boldsymbol{w},\ \boldsymbol{x}\), and \(\boldsymbol{y}\), we have
\[\mathbb{E}e^{-\theta\Xi} = \prod_{i=1}^{\operatorname*{rank}(\mathbf{B}^{\natural})}\mathbb{E}_{x,y,w}\exp\left[-\theta\lambda_{i}^{2}x\left(x-y\right)-\theta\sigma\lambda_{i}w\left(x-y\right)\right]\] \[= \prod_{i=1}^{\operatorname*{rank}(\mathbf{B}^{\natural})}\mathbb{E}_{x,y}\exp\left(\frac{\theta\lambda_{i}^{2}(x-y)\left(\theta\sigma^{2}(x-y)-2x\right)}{2}\right)\] \[= \prod_{i=1}^{\operatorname*{rank}(\mathbf{B}^{\natural})}\left[1+2\theta\lambda_{i}^{2}-\theta^{2}\lambda_{i}^{2}\left(\lambda_{i}^{2}+2\sigma^{2}\right)\right]^{-\frac{1}{2}},\]

where the last equality follows from integrating over \(y\) and then over \(x\), and is valid under the conditions in (13).
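As a sanity check, (12) can be compared against a Monte Carlo estimate. A minimal sketch (the parameter values are arbitrary and chosen to satisfy (13)):

```
import numpy as np

rng = np.random.default_rng(1)
p, m, sigma, theta, N = 4, 3, 0.5, 0.05, 200_000
B = rng.standard_normal((p, m))
lam = np.linalg.svd(B, compute_uv=False)
closed_form = np.prod(
    1 + 2 * theta * lam**2 - theta**2 * lam**2 * (lam**2 + 2 * sigma**2)
) ** -0.5
x, y = rng.standard_normal((N, p)), rng.standard_normal((N, p))
w = rng.standard_normal((N, m))
Xi = np.einsum("ki,ij,kj->k", x, B @ B.T, x - y) \
    + sigma * np.einsum("ka,ia,ki->k", w, B, x - y)
print(closed_form, np.exp(-theta * Xi).mean())  # the two should roughly agree
```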
**Remark 3**.: _When the conditions in (13) are violated, the expectation \(\mathbb{E}e^{-\theta\Xi}\) diverges to infinity, which implies that the infimum \(\inf_{\theta>0}\nicefrac{{\log\left(n\cdot\mathbb{E}e^{-\theta\Xi}\right)}}{{\theta}}\) cannot be attained at such \(\theta\)._
With (12), we can compute the optimal \(\theta_{*}\) by setting the gradient \(\frac{\partial\left[\nicefrac{{\log\left(n\cdot\mathbb{E}e^{-\theta\Xi}\right)}}{{\theta}}\right]}{{\partial\theta}}=0\). However, a closed-form expression of the exact solution for \(\theta_{*}\) is out of reach. As a mitigation, we resort to approximating \(\log\mathbb{E}e^{-\theta\Xi}\) by its lower bound, which reads as

\[\log\mathbb{E}e^{-\theta\Xi}\geq-\theta\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+\frac{\theta^{2}}{2}\left(\left|\!\left|\!\left|\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+2\sigma^{2}\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}\right).\]

The corresponding minimizer \(\widetilde{\theta}_{*}\) of the resulting objective is \(\widetilde{\theta}_{*}=\sqrt{2\log n\big{/}\left(\left|\!\left|\!\left|\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+2\sigma^{2}\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}\right)}\).
Stage II: Compute the phase transition \(\mathsf{snr}\). We predict the phase transition point \(\mathsf{snr}_{\mathrm{oracle}}\) by setting the lower bound to zero, which can be written as

\[\frac{\log n}{\widetilde{\theta}_{*}}-\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+\frac{\widetilde{\theta}_{*}}{2}\left(\left|\!\left|\!\left|\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+2\sigma^{2}\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}\right)=0.\]
With standard algebraic manipulations, we obtain the equation
\[2(\log n)\,\mathsf{snr}_{\mathrm{oracle}}\cdot\frac{\left|\!\left|\!\left|\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\right|\!\right|\!\right|_{\mathrm{F}}^{2}}{\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{4}}+\nicefrac{{4\log n}}{{m}}=\mathsf{snr}_{\mathrm{oracle}}. \tag{14}\]
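Given \(\mathbf{B}^{\natural}\) and \(n\), (14) is linear in \(\mathsf{snr}_{\mathrm{oracle}}\) and can be solved directly. A minimal sketch (the function name is ours; the formula simply rearranges (14) and is valid when the resulting denominator is positive):

```
import numpy as np

def snr_oracle_prediction(B, n):
    """Solve (14) for snr_oracle."""
    m = B.shape[1]
    rho = np.linalg.norm(B @ B.T, "fro") ** 2 / np.linalg.norm(B, "fro") ** 4
    return (4 * np.log(n) / m) / (1 - 2 * np.log(n) * rho)
```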
To evaluate the accuracy of our predicted phase transition threshold, we compare the predicted values with the numerical values. The results are shown in Table 1, from which we conclude that the phase transition threshold \(\mathsf{snr}\) can be predicted to a good extent. In addition, we observe that the gap between the theoretical values and the numerical values keeps shrinking as \(m\) increases.
### Gaussian approximation of the phase transition threshold
From the above analysis, we can see that deriving a closed-form expression for the minimizer \(\theta_{*}\) of \(\nicefrac{{\log\left(n\mathbb{E}e^{-\theta\Xi}\right)}}{{\theta}}\) can be difficult. In fact, in certain scenarios, even obtaining a closed-form expression
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline \(m\) & 20 & 30 & 40 & 50 & 60 & 70 \\ \hline \(\mathbf{P}\) & 3.283 & 1.415 & 0.902 & 0.662 & 0.523 & 0.432 \\ \(\mathbf{S}\) & \(2.529\pm 0.079\) & \(1.290\pm 0.054\) & \(0.872\pm 0.034\) & \(0.649\pm 0.012\) & \(0.515\pm 0.016\) & \(0.429\pm 0.015\) \\ \hline \hline \(m\) & 100 & 110 & 120 & 130 & 140 & 150 \\ \hline \(\mathbf{P}\) & 0.284 & 0.255 & 0.231 & 0.211 & 0.195 & 0.181 \\ \(\mathbf{S}\) & \(0.282\pm 0.008\) & \(0.256\pm 0.006\) & \(0.232\pm 0.006\) & \(0.212\pm 0.004\) & \(0.196\pm 0.006\) & \(0.183\pm 0.005\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between the predicted value of the phase transition threshold \(\mathsf{snr}_{\mathsf{oracle}}\) and its simulated value when \(n=500\). \(\mathbf{P}\) denotes the predicted value while \(\mathbf{S}\) denotes the simulated value (i.e., \(\mathrm{mean}\pm\mathrm{std}\)). \(\mathbf{S}\) corresponds to the \(\mathsf{snr}\) when the error rate drops below \(0.05\). A detailed description of the numerical method can be found in the Appendix.
of \(\mathbb{E}e^{-\theta\Xi}\) is difficult. To handle this challenge, we propose to approximate the random variable \(\Xi\) by a Gaussian \(\mathsf{N}(\mathbb{E}\Xi,\mathrm{Var}\Xi)\), namely,
\[\mathbb{E}e^{-\theta\Xi}\approx\exp\left(-\theta\mathbb{E}\Xi+\frac{\theta^{2} }{2}\mathrm{Var}\Xi\right). \tag{15}\]
With this approximation, we can express \(\theta_{*}\triangleq\operatorname*{argmin}_{\theta>0}\nicefrac{{\log\left(n\cdot\mathbb{E}e^{-\theta\Xi}\right)}}{{\theta}}\) in closed form as \(\sqrt{\nicefrac{{2\log n}}{{\mathrm{Var}\Xi}}}\). Thus, the critical point corresponding to the phase transition in (11) is written as
\[2(\log n)\cdot\mathrm{Var}\Xi=\left(\mathbb{E}\Xi\right)^{2}. \tag{16}\]
Comparison with (14). To verify that this approximation yields meaningful results, we revisit the oracle case and have
\[\mathbb{E}\Xi=\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2},\qquad\mathrm{Var}\Xi=3\left|\!\left|\!\left|\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+2\sigma^{2}\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}.\]

Plugging these into (16) yields a prediction of the same form as (14), differing only in the constant in front of the \(\left|\!\left|\!\left|\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\right|\!\right|\!\right|_{\mathrm{F}}^{2}\) term. Hence, the Gaussian approximation preserves the structure of the exact computation in the oracle case.
## 4 Extension to Non-Oracle Case
Having analyzed the oracle case in the previous section, we now extend the analysis to the non-oracle case, where the value of \(\mathbf{B}^{\natural}\) is not given. Different from the oracle case, the ML estimator now corresponds to a _quadratic assignment problem_ (QAP) as opposed to an LAP. As a mitigation, we adopt the estimator in Zhang and Li (2020), which reconstructs the permutation matrix within the LAP framework, i.e.,
\[\widehat{\boldsymbol{\Pi}}^{\text{non-oracle}}=\operatorname*{argmin}_{\boldsymbol{\Pi}}\left\langle\boldsymbol{\Pi},-\mathbf{Y}\mathbf{Y}^{\top}\mathbf{X}\mathbf{X}^{\top}\right\rangle,\ \ \text{s.t.}\ \ \ \sum_{i}\boldsymbol{\Pi}_{ij}=1,\sum_{j}\boldsymbol{\Pi}_{ij}=1,\boldsymbol{\Pi}\in\left\{0,1\right\}^{n\times n}. \tag{19}\]
We expect this estimator to yield good insights into permuted linear regression since
* this estimator reaches statistical optimality in a broad range of parameter regimes;
* this estimator exhibits a phase transition phenomenon, which follows a similar pattern to that in the oracle case.
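The estimator (19) is again an LAP and can be solved exactly with the Hungarian algorithm. A minimal sketch (names are ours):

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def estimate_permutation(Y, X):
    """Solve (19): minimize <Pi, -Y Y^T X X^T> over permutation matrices."""
    E = -(Y @ Y.T) @ (X @ X.T)  # LAP cost matrix
    rows, cols = linear_sum_assignment(E)
    Pi = np.zeros((Y.shape[0], Y.shape[0]))
    Pi[rows, cols] = 1.0
    return Pi
```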
Following the same procedure as in Section 3, we identify the phase transition threshold \(\mathsf{snr}\) with Theorem 1. To begin with, we write the random variable \(\Xi\) as
\[\Xi\simeq\mathbf{Y}_{i}\mathbf{Y}^{\top}\mathbf{X}\left(\mathbf{X}_{\pi^{ \natural}(i)}-\mathbf{X}_{j}\right)^{\top},\]
where \(i\) and \(j\) are uniformly distributed over the set \(\left\{1,2,\cdots,n\right\}\). Afterwards, we adopt the Gaussian approximation scheme illustrated in Subsection 3.2 and determine the phase transition points by computing \(\mathbb{E}\Xi\) and \(\mathrm{Var}\Xi\).
**Theorem 2**.: _For the random variable \(\Xi\) defined in (20), its mean \(\mathbb{E}\Xi\) and variance \(\operatorname{Var}\Xi\) are_
\[\mathbb{E}\Xi \simeq\ n\left(1-\tau_{h}\right)\left[\left(1+\tau_{p}\right)\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+m\tau_{p}\sigma^{2}\right],\] \[\mathrm{Var}\Xi \simeq\ n^{2}\tau_{h}\left(1-\tau_{h}\right)\tau_{p}^{2}\left[\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}+m\sigma^{2}\right]^{2}+n^{2}\left[2\tau_{p}+3\left(1-\tau_{h}\right)^{2}\right]\left|\!\left|\!\left|\mathbf{B}^{\natural\top}\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}\] \[\qquad+\ n^{2}\left[6\tau_{p}\left(1-\tau_{h}\right)^{2}+\left(3-\tau_{h}\right)\tau_{p}^{2}\right]\left|\!\left|\!\left|\mathbf{B}^{\natural\top}\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2},\]
_respectively, where the definitions of \(\tau_{p}\) and \(\tau_{h}\) can be found in Section 3._
The proof of Theorem 2 is quite involved, combining Wick's theorem, Stein's lemma, a conditioning technique, and the leave-one-out technique. For conciseness, we only give an outline of the proof and defer the technical details to the Appendix.
### Proof outline
To begin with, we decompose the random variable \(\Xi\) as
\[\Xi=\ \Xi_{1}+\sigma\left(\Xi_{2}+\Xi_{3}\right)+\sigma^{2}\Xi_{4}, \tag{20}\]
where \(\Xi_{i}\) (\(1\leq i\leq 4\)) are respectively defined as
\[\Xi_{1} \triangleq\ \mathbf{X}_{\pi^{\natural}(i)}^{\top}\mathbf{B}^{ \natural}\mathbf{B}^{\natural\top}\mathbf{X}^{\top}\boldsymbol{\Pi}^{\natural \top}\mathbf{X}(\mathbf{X}_{\pi^{\natural}(i)}-\mathbf{X}_{j}),\] \[\Xi_{2} \triangleq\ \mathbf{X}_{\pi^{\natural}(i)}^{\top}\mathbf{B}^{ \natural}\mathbf{W}^{\top}\mathbf{X}(\mathbf{X}_{\pi^{\natural}(i)}-\mathbf{ X}_{j}),\] \[\Xi_{3} \triangleq\ \mathbf{W}_{i}^{\top}\mathbf{B}^{\natural\top}\mathbf{X}^{ \top}\boldsymbol{\Pi}^{\natural\top}\mathbf{X}(\mathbf{X}_{\pi^{\natural}(i)}- \mathbf{X}_{j}),\] \[\Xi_{4} \triangleq\ \mathbf{W}_{i}^{\top}\mathbf{W}^{\top}\mathbf{X}( \mathbf{X}_{\pi^{\natural}(i)}-\mathbf{X}_{j}).\]
Unlike in the oracle case, obtaining a closed-form expression of \(\mathbb{E}e^{-\theta\Xi}\) would be too difficult. Hence, we adopt the Gaussian approximation method presented in Section 3.2. The task then reduces to computing the expectation and variance of \(\Xi\).
Computation of the mean \(\mathbb{E}\Xi\). For the computation of the mean \(\mathbb{E}\Xi\), we can verify that \(\mathbb{E}\Xi_{2}\) and \(\mathbb{E}\Xi_{3}\) are both zero, due to the independence between \(\mathbf{X}\) and \(\mathbf{W}\). For \(\mathbb{E}\Xi_{1}\) and \(\mathbb{E}\Xi_{4}\), we apply Wick's theorem (Janson, 1997) to obtain
\[\mathbb{E}\Xi_{1}= n\left(1-\tau_{h}\right)\left(1+\tau_{p}\right)\left[1+o_{\mathrm{P}}\left(1\right)\right]\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2},\] \[\mathbb{E}\Xi_{4}= nm\tau_{p}\left(1-\tau_{h}\right)\left[1+o_{\mathrm{P}}\left(1\right)\right].\]
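Here, Wick's theorem reduces higher-order moments of zero-mean jointly Gaussian variables to sums over pairings of second moments; for instance, for a fourth-order moment,

\[\mathbb{E}[x_{a}x_{b}x_{c}x_{d}]=\mathbb{E}[x_{a}x_{b}]\,\mathbb{E}[x_{c}x_{d}]+\mathbb{E}[x_{a}x_{c}]\,\mathbb{E}[x_{b}x_{d}]+\mathbb{E}[x_{a}x_{d}]\,\mathbb{E}[x_{b}x_{c}],\]

and the expressions above follow by applying such expansions entrywise and collecting the resulting traces.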
Computation of the variance \(\mathrm{Var}\Xi\). Since \(\mathrm{Var}\Xi=\mathbb{E}\Xi^{2}-\left(\mathbb{E}\Xi\right)^{2}\), we just need to compute \(\mathbb{E}\Xi^{2}\), which can be expanded into the following six terms
\[\mathbb{E}\Xi^{2}= \mathbb{E}\Xi_{1}^{2}+\sigma^{2}\mathbb{E}\Xi_{2}^{2}+\sigma^{2} \mathbb{E}\Xi_{3}^{2}+\sigma^{4}\mathbb{E}\Xi_{4}^{2}+2\sigma^{2}\mathbb{E} \Xi_{1}\Xi_{4}+2\sigma^{2}\mathbb{E}\Xi_{2}\Xi_{3}. \tag{21}\]
The computation of the above terms turns out to be quite complex due to the high-order Gaussian moments involved. For example, the term \(\mathbb{E}\Xi_{1}^{2}\) involves eighth-order Gaussian moments, while the terms \(\mathbb{E}\Xi_{2}^{2},\mathbb{E}\Xi_{3}^{2},\mathbb{E}\Xi_{1}\Xi_{4}\), and \(\mathbb{E}\Xi_{2}\Xi_{3}\) all involve sixth-order Gaussian moments. To handle the difficulties in computing \(\mathbb{E}\Xi^{2}\), we propose the following computation procedure, which can be roughly divided into 3 phases.
* **Phase I: Leave-one-out decomposition.** The major technical difficulty comes from the correlation between the product \(\mathbf{X}^{\top}\mathbf{\Pi}^{\natural}\mathbf{X}\) and the difference \(\mathbf{X}_{\pi^{\natural}(i)}-\mathbf{X}_{j}\). We decouple this correlation by first rewriting the matrix \(\mathbf{X}^{\top}\mathbf{\Pi}^{\natural}\mathbf{X}\) as the sum \(\sum_{\ell}\mathbf{X}_{\ell}\mathbf{X}_{\pi^{\natural}(\ell)}^{\top}\). Then we collect all terms \(\mathbf{X}_{\ell}\mathbf{X}_{\pi^{\natural}(\ell)}^{\top}\) independent of \(\mathbf{X}_{\pi^{\natural}(i)}\) and \(\mathbf{X}_{j}\) in the matrix \(\mathbf{\Sigma}\) and leave the remaining terms to the matrix \(\mathbf{\Delta}\), i.e., \(\mathbf{\Delta}\triangleq\mathbf{X}^{\top}\mathbf{\Pi}^{\natural}\mathbf{X}- \mathbf{\Sigma}\). This decomposition shares the same spirit as the leave-one-out technique (Karoui, 2013; Bai and Silverstein, 2010; Karoui, 2018; Sur et al., 2019). Then, we divide all terms in \(\mathbb{E}\Xi^{2}\) into 3 categories: 1) terms only containing matrix \(\mathbf{\Sigma}\); 2) terms containing both \(\mathbf{\Sigma}\) and \(\mathbf{\Delta}\); and 3) terms only containing \(\mathbf{\Delta}\).
* **Phase II: Conditional technique.** Concerning the terms in the first two categories, which cover the majority of terms, we can exploit the independence among the rows of the sensing matrix \(\mathbf{X}\). With the conditional technique, we can reduce the order of the Gaussian moments by separately taking the expectation w.r.t. \(\mathbf{\Sigma}\) and w.r.t. the vectors \(\mathbf{X}_{\pi^{\natural}(i)}\) and \(\mathbf{X}_{j}\).
* **Phase III: Direct computation.** For the few terms in the third category (i.e., terms only containing \(\mathbf{\Delta}\)), we compute the high-order Gaussian moments by exhausting all terms and iteratively applying Wick's theorem and Stein's lemma, which reduce higher-order Gaussian moments to lower-order ones.
Adopting the above proof strategy, we obtain the following results for each term, where we denote \(\mathbf{M}\triangleq\mathbf{B}^{\natural}\mathbf{B}^{\natural\top}\):
\[\mathbb{E}\Xi_{1}^{2} \approx (n-h)^{2}\left(1+\frac{2p}{n}+\frac{p^{2}}{n(n-h)}\right)[\operatorname {Tr}(\mathbf{M})]^{2}\] \[+ n^{2}\left[\frac{2p}{n}+3\left(1-\frac{h}{n}\right)^{2}+\frac{6( n-h)^{2}p}{n^{3}}+\frac{(3n-h)p^{2}}{n^{3}}\right]\operatorname{Tr}(\mathbf{M} \mathbf{M}),\] \[\mathbb{E}\Xi_{2}^{2} \approx 2np\left(1+p/n\right)\operatorname{Tr}(\mathbf{M}),\] \[\mathbb{E}\Xi_{3}^{2} \approx 2n^{2}\left(\frac{p}{n}+\left(1-\frac{h}{n}\right)^{2}+\frac{p^ {2}}{n^{2}}+\frac{4p(n-h)^{2}}{n^{3}}\right)\operatorname{Tr}(\mathbf{M}),\] \[\mathbb{E}\Xi_{4}^{2} \approx \frac{(n-h)m^{2}p^{2}}{n},\] \[\mathbb{E}\Xi_{1}\Xi_{4} \approx \frac{mp(n-h)(n+p-h)}{n}\operatorname{Tr}(\mathbf{M}),\] \[\mathbb{E}\Xi_{2}\Xi_{3} \approx \frac{p(n-h)(n+p-h)}{n}\operatorname{Tr}(\mathbf{M}).\]
Plugging these calculation results into (21) and exploiting the relation \(\mathrm{Var}\Xi=\mathbb{E}\Xi^{2}-(\mathbb{E}\Xi)^{2}\), we complete the proof of Theorem 2.
### An illustrative example
This subsection predicts the phase transition points with Theorem 2. Unlike in the oracle case, we notice that the edge weights \(\mathbf{E}_{ij}\) are strongly correlated, especially for \(j=\pi^{\natural}(j)\), which corresponds to the non-permuted rows. To factor out these dependencies, we only take the permuted rows into account and correct the sample size from \(n\) to \(\tau_{h}n\). The prediction \(\mathsf{snr}_{\text{non-oracle}}\) is then computed by solving \(2\log(n\tau_{h})\mathrm{Var}\Xi=\left(\mathbb{E}\Xi\right)^{2}\), where \(\mathbb{E}\Xi\) and \(\mathrm{Var}\Xi\) are given in Theorem 2.
To illustrate the prediction accuracy, we consider the case where \(\mathbf{B}^{\natural}\)'s singular values are of the same order, i.e., \(\frac{\lambda_{i}(\mathbf{B}^{\natural})}{\lambda_{j}(\mathbf{B}^{\natural})} =O(1),\ 1\leqslant i,j\leqslant m\), where \(\lambda_{i}(\cdot)\) denotes the \(i\)-th singular value. Then, we obtain the \(\mathsf{snr}_{\text{non-oracle}}\), which is written as
\[\mathsf{snr}_{\text{non-oracle}}\approx\eta_{1}/\eta_{2}. \tag{22}\]
Here, \(\eta_{1}\) and \(\eta_{2}\) are defined as
\[\eta_{1} \doteq 2\tau_{h}\tau_{p}^{2}\log\left(n\tau_{h}\right)-\tau_{p}(\tau_{p} +1)\left(1-\tau_{h}\right)+\tau_{p}\sqrt{2(1-\tau_{h})\tau_{h}\cdot\log\left(n \tau_{h}\right)},\] \[\eta_{2} \doteq \left(1-\tau_{h}\right)\left(\tau_{p}+1\right){}^{2}-2\tau_{h} \tau_{p}^{2}\log(n\tau_{h}).\]
Note that the predicted \(\mathsf{snr}_{\text{non-oracle}}\) varies with \(\tau_{h}\) and \(\tau_{p}\). Viewing \(\mathsf{snr}_{\text{non-oracle}}\) as a function of \(\tau_{h}\), we observe a singularity point in \(\tau_{h}\), i.e., \(\mathsf{snr}_{\text{non-oracle}}(\tau_{h})=\infty\), or equivalently, \(\eta_{2}(\tau_{h})=0\). This suggests a potential phase transition phenomenon w.r.t. \(\tau_{h}\). This prediction is confirmed by numerical experiments, in which we vary the proportion of permuted rows and study the change in the reconstruction error rate.
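Numerically, the singularity can be located by a root search on \(\eta_{2}\). A minimal sketch (the function name is ours; a sign change of \(\eta_{2}\) on the search bracket is assumed):

```
import numpy as np
from scipy.optimize import brentq

def tau_h_singularity(n, tau_p):
    """Solve eta_2(tau_h) = 0 for the singularity of (22)."""
    def eta2(th):
        return (1 - th) * (tau_p + 1) ** 2 - 2 * th * tau_p**2 * np.log(n * th)
    return brentq(eta2, 1e-6, 1 - 1e-6)
```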
**Remark 4**.: _To isolate the reconstruction performance from the impact of \(\mathsf{snr}\), we adopt the noiseless setting, which corresponds to infinite \(\mathsf{snr}\). Hence, the change in the error rate comes solely from the increasing number of permuted rows rather than the insufficient \(\mathsf{snr}\)._
**Remark 5**.: _The capability to predict the precise phase transition point of \(\tau_{h}\) is a novel feature of our method. In contrast, the previous proof in Zhang and Li (2020) only establishes that \(\tau_{h}\) is of order \(O(1)\), without specifying its exact value, which our method can now predict._
Figure 2: **Left panel**: Predicted phase transition points \(\mathsf{snr}_{\text{non-oracle}}\). **Right panel**: Plot of the recovery rate under the noiseless setting, i.e., \(\mathsf{snr}=\infty\). **Gaussian**: \(\mathbf{B}_{ij}^{\natural}\overset{\text{i.i.d}}{\sim}\mathsf{N}(0,1)\); **Identity**: \(\mathbf{B}^{\natural}=\mathbf{I}_{p\times p}\); **Block-diagonal**: \(\mathbf{B}^{\natural}=\operatorname{diag}\left\{1,\cdots,1,0.5,\cdots,0.5\right\}\). We observe that the correct recovery rates drop sharply within the regions of our predicted value.
#### 4.2.1 Impact of \(n\) on the phase transition point
We study the impact of \(n\) on \(\tau_{h}\). The numerical experiment is shown in Figure 2, where we study the dependence of \(\mathsf{snr}_{\text{non-oracle}}\) on \(\tau_{h}\). We can see that the predicted phase transition \(\tau_{h}\) matches the numerical experiments to a good extent. Then, we fix \(p\) and study the impact of \(n\) on \(\tau_{h}\). We observe that the phase transition \(\tau_{h}\) increases together with the sample size \(n\), which is also captured by our formula in (22).
#### 4.2.2 Limits of \(\tau_{h}\)
In addition, we consider the limiting behavior of \(\tau_{h}\) when \(\tau_{p}\) approaches \(0\), or equivalently, \(p=o_{\mathrm{P}}\left(n\right)\). Under the noiseless setting of Remark 4, we can simplify \(\mathbb{E}\Xi\) and \(\mathrm{Var}\Xi\) in Theorem 2 as
\[\mathbb{E}\Xi\simeq n\left(1-\tau_{h}\right)\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2},\qquad\mathrm{Var}\Xi\simeq 3n^{2}\left(1-\tau_{h}\right)^{2}\left|\!\left|\!\left|\mathbf{B}^{\natural\top}\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}.\]

The phase transition condition \(2\log(n\tau_{h})\mathrm{Var}\Xi=(\mathbb{E}\Xi)^{2}\) then reduces to \(6\log(n\tau_{h})\left|\!\left|\!\left|\mathbf{B}^{\natural\top}\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{2}=\left|\!\left|\!\left|\mathbf{B}^{\natural}\right|\!\right|\!\right|_{\mathrm{F}}^{4}\), so in this limit the threshold \(\tau_{h}\) no longer depends on \(\tau_{p}\).
## 5 Conclusion
The shuffled (permuted) regression problem is a well-known challenging task with practical applications in databases, machine learning, and privacy. This is the first work that identifies the precise locations of the phase transition thresholds of permuted linear regression. For the oracle case where the signal \(\mathbf{B}^{\natural}\) is given as a prior, our analysis predicts the phase transition threshold \(\mathsf{snr}_{\mathsf{oracle}}\) to a good extent. For the non-oracle case where \(\mathbf{B}^{\natural}\) is not given, we have modified the leave-one-out technique to approximately compute the critical \(\mathsf{snr}_{\text{non-oracle}}\) value for the phase transition, since the precise computation becomes significantly more complicated when high-order interactions between Gaussian random variables are involved. Moreover, we have associated the singularity point of \(\mathsf{snr}_{\text{non-oracle}}\) with a phase transition point w.r.t. the maximum allowed number of permuted rows. Finally, we have presented extensive numerical experiments to corroborate the accuracy of our theoretical predictions.
|
2309.10498 | A Configurable Library for Generating and Manipulating Maze Datasets | Understanding how machine learning models respond to distributional shifts is
a key research challenge. Mazes serve as an excellent testbed due to varied
generation algorithms offering a nuanced platform to simulate both subtle and
pronounced distributional shifts. To enable systematic investigations of model
behavior on out-of-distribution data, we present $\texttt{maze-dataset}$, a
comprehensive library for generating, processing, and visualizing datasets
consisting of maze-solving tasks. With this library, researchers can easily
create datasets, having extensive control over the generation algorithm used,
the parameters fed to the algorithm of choice, and the filters that generated
mazes must satisfy. Furthermore, it supports multiple output formats, including
rasterized and text-based, catering to convolutional neural networks and
autoregressive transformer models. These formats, along with tools for
visualizing and converting between them, ensure versatility and adaptability in
research applications. | Michael Igorevich Ivanitskiy, Rusheb Shah, Alex F. Spies, Tilman Räuker, Dan Valentine, Can Rager, Lucia Quirke, Chris Mathwin, Guillaume Corlouer, Cecilia Diniz Behn, Samy Wu Fung | 2023-09-19T10:20:11Z | http://arxiv.org/abs/2309.10498v2 | # A Configurable Library for Generating and Manipulating Maze Datasets
###### Abstract
Understanding how machine learning models respond to distributional shifts is a key research challenge. Mazes serve as an excellent testbed due to varied generation algorithms offering a nuanced platform to simulate both subtle and pronounced distributional shifts. To enable systematic investigations of model behavior on out-of-distribution data, we present maze-dataset, a comprehensive library for generating, processing, and visualizing datasets consisting of maze-solving tasks. With this library, researchers can easily create datasets, having extensive control over the generation algorithm used, the parameters fed to the algorithm of choice, and the filters that generated mazes must satisfy. Furthermore, it supports multiple output formats, including rasterized and text-based, catering to convolutional neural networks and autoregressive transformer models. These formats, along with tools for visualizing and converting between them, ensure versatility and adaptability in research applications.
## 1 Introduction
Out-of-distribution generalization is a critical challenge in modern machine learning (ML) research. For interpretability and behavioral research in this area, training on algorithmic tasks offers benefits by allowing systematic data generation and task decomposition, as well as simplifying the process of circuit discovery [19]. Although mazes are well suited for these investigations, we have found that existing maze generation packages [1, 7, 4, 16, 21] do not offer flexible maze generation algorithms with fine-grained control of generation parameters, nor the ability to easily transform between multiple representations of the mazes (image, textual, tokenized) for training and testing models.
This work aims to facilitate deeper research into generalization and interpretability by addressing these limitations. We introduce maze-dataset, an accessible Python package [10]. This package offers flexible configuration options for maze dataset generation, allowing users to select from a range of algorithms and adjust corresponding parameters (Section 2). Furthermore, it supports various output formats tailored to different ML architectures (Section 3).
Figure 1: Example mazes from various algorithms. Left to right: randomized depth-first search (RDFS), RDFS without forks, constrained RDFS, Wilson’s [23], RDFS with percolation (\(p=0.1\)), RDFS with percolation (\(p=0.4\)), random stack RDFS. Further examples available in the appendix of this work (Section 8).
## 2 Maze Generation and Usage
Our package can be installed from PyPI via pip install maze-dataset, or directly from the git repository [10].
To create a dataset, we first create a MazeDatasetConfig configuration object, which specifies the seed, number, and size of mazes, as well as the generation algorithm and its corresponding parameters. This object is passed to a MazeDataset class to create a dataset. Crucially, this MazeDataset inherits from a PyTorch [17] Dataset, and can thus be easily incorporated into existing data pre-processing and training pipelines, e.g., through the use of a Dataloader class.
```
from maze_dataset import MazeDataset, MazeDatasetConfig, LatticeMazeGenerators

cfg: MazeDatasetConfig = MazeDatasetConfig(
    name="example",
    grid_n=3,
    n_mazes=32,
    maze_ctor=LatticeMazeGenerators.gen_dfs,
)
dataset: MazeDataset = MazeDataset.from_config(cfg)
```
When initializing mazes, further configuration options can be specified through the from_config() factory method as necessary. Options include 1) whether to generate the dataset during runtime or load an existing dataset, 2) if and how to parallelize generation, and 3) where to store the generated dataset. Full documentation of this is available in our repository [10]. Available maze generation algorithms are static methods of the LatticeMazeGenerators class and include the following:
* gen_dfs (**randomized depth-first search**): Parameters can be passed to constrain the number of accessible cells, the number of forks in the maze, and the maximum tree depth. Creates a spanning tree by default or a partially spanning tree if constrained.
* gen_wilson (**Wilson's algorithm**): Generates a random spanning tree via loop-erased random walk [23].
* gen_percolation (**percolation**): Starting with no connections, every possible lattice connection is set to either true or false with some probability \(p\), independently of all other connections. For the kinds of graphs that this process generates, we refer to existing work [3, 5].
* gen_dfs_percolation (**randomized depth-first search with percolation**): A connection exists if it exists in a maze generated via gen_dfs OR gen_percolation. Useful for generating mazes that are not acyclic graphs.
Furthermore, a dataset of mazes can be filtered to satisfy certain properties:
```
dataset_filtered: MazeDataset = dataset.filter_by.path_length(min_length=3)
```
Custom filters can be specified, and several filters are included:
* path_length(min_length:int): shortest length from the origin to target should be at least min_length.
* start_end_distance(min_distance:int): Manhattan distance between start and end should be at least min_distance, ignoring walls.
* remove_duplicates(...): remove mazes which are similar to others in the dataset, measured via Hamming distance.
* remove_duplicates_fast(): remove mazes which are exactly identical to others in the dataset.
All implemented maze generation algorithms are stochastic by nature. For reproducibility, the seed parameter of MazeDatasetConfig may be set. In practice, we do not find that exact duplicates of mazes are generated with any meaningful frequency, even when generating large datasets.
## 3 Output Formats
Internally, mazes are SolvedMaze objects, which contain path information and a connection list optimized for storing sub-graphs of a lattice. These objects can be converted to and from several formats.
### Training and Evaluation
There are examples in the literature for training Recurrent Convolutional Neural Network (RCNN) derived architectures on maze tasks [20]. To this end, we replicate the format of [21] and provide the RasterizedMazeDataset class, which returns rasterized pairs of (input, target) mazes as shown in Figure 3.
Figure 2: Various output formats. Top row (left to right): ASCII diagram, rasterized pixel grid, and advanced display. Bottom row: text format for autoregressive networks.
To train autoregressive text models such as transformers, we use the full sequences provided by as_tokens() shown in Figure 2. During deployment we provide only the prompt up to the <PATH_START> token. To map the vocabulary onto indices, we first allocate a portion of the indices for the "special" tokens which these do not represent coordinates. Next, we add each coordinate as a unique token. Coordinates are ordered in the vocabulary such that a maze of size \(m\) will be processed the same way as the top \(m\times m\) cells of a size-\(n\) maze, where \(n>m\). This is done so that models can be deployed on mazes smaller than the training size without destroying the structure of the vocabulary. Examples of usage of this dataset to train autoregressive transformers can be found in our maze-transformer library [11]. Other tokenization and vocabulary schemes are also included, such as representing each coordinate as a pair of \(i,j\) index tokens.
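For instance, splitting a token sequence into the deployment prompt and the generation target is a simple operation on the token list (a minimal sketch; this helper is ours, not part of the library API):

```
def split_at_path_start(tokens: list[str]) -> tuple[list[str], list[str]]:
    """Split a maze token sequence at <PATH_START>: the prompt (including
    <PATH_START>) is fed to the model, which must generate the path tokens."""
    idx = tokens.index("<PATH_START>")
    return tokens[: idx + 1], tokens[idx + 1 :]
```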
## 4 Benchmarks of Generation Speed
We provide approximate benchmarks for relative generation time across various algorithms, parameter choices, maze sizes, and dataset sizes.
| Generation algorithm | Generation parameters | all sizes | small (\(g\leq 10\)) | medium (\(10<g\leq 32\)) | large (\(g>32\)) |
| --- | --- | --- | --- | --- | --- |
| gen_dfs | accessible_cells=20 | 2.4 | 2.4 | 2.6 | 2.4 |
| gen_dfs | do_forks=False | 3.0 | 2.4 | 3.7 | 3.8 |
| gen_dfs | max_tree_depth=0.5 | 4.5 | 2.2 | 4.9 | 11.6 |
| gen_dfs | – | 31.1 | 2.8 | 28.0 | 136.5 |
| gen_dfs_percolation | p=0.1 | 53.9 | 3.6 | 42.5 | 252.9 |
| gen_dfs_percolation | p=0.4 | 58.8 | 3.7 | 44.7 | 280.2 |
| gen_percolation | – | 59.1 | 3.3 | 43.6 | 285.2 |
| gen_wilson | – | 767.9 | 10.1 | 212.9 | 4530.4 |
| **median (all runs)** | | 10.8 | 6.0 | 44.4 | 367.7 |
| **mean (all runs)** | | 490.0 | 11.7 | 187.2 | 2769.6 |

Table 1: Average time (in ms) to generate a single maze, averaged across multiple runs and dataset sizes. All benchmarks were run with parallelization disabled on an Intel i9-8950HK CPU.
Figure 4: **Left:** maze prompt up to <PATH_START>. **Right:** relative ordering of the cells in the vocabulary. Note that the top-left square of size \(n\times n\) can be described using only the first \(n^{2}\) tokens in the vocabulary.
Figure 3: The input is the rasterized maze without the path marked (left), and the target is the maze with all but the correct path removed (right). Configuration options exist to adjust whether endpoints are included and whether empty cells should be filled in.
## 5 Implementation
We refer to our GitHub repository [10] for documentation and up-to-date implementation details.
This package utilizes a simple, efficient representation of mazes. Using an adjacency list to represent mazes would lead to a poor lookup time of whether any given connection exists, whilst using a dense adjacency matrix would waste memory by failing to exploit the structure (e.g., only 4 of the diagonals would be filled in). Instead, we describe mazes with the following simple representation: for a \(d\)-dimensional lattice with \(r\) rows and \(c\) columns, we initialize a boolean array \(A\in\{0,1\}^{d\times r\times c}\), which we refer to in the code as a connection_list. The value at \(A[0,i,j]\) determines whether a downward connection exists from node \([i,j]\) to \([i+1,j]\). Likewise, the value at \(A[1,i,j]\) determines whether a rightwards connection to \([i,j+1]\) exists. Thus, we avoid duplication of data about the existence of connections, at the cost of requiring additional care with indexing when looking for a connection upwards or to the left. Note that this setup allows for a periodic lattice.1
Footnote 1: That is, rather than a sub-graph of \(\mathbb{Z}^{2}\), we are working on the lattice \(\mathbb{Z}/r\mathbb{Z}\times\mathbb{Z}/c\mathbb{Z}\). This is achieved by using modular arithmetic for indexing. Specifically, when considering connections from a node at position \([i,j]\), the downward connection leads to the node at position \([(i+1)\%r,j]\), and the rightward connection leads to the node at position \([i,(j+1)\%c]\). However, although our data structure supports this in principle, our algorithms for solving and visualizing the mazes do not. In practice, the last elements of \(A\) are always set to \(0\) to remove the possibility of periodic connections.
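The following sketch, assuming numpy and ignoring periodicity, illustrates the indexing convention just described, including the extra care needed for lookups upwards or to the left:

```
import numpy as np

def has_connection(A: np.ndarray, i: int, j: int, direction: str) -> bool:
    # A[0, i, j]: downward connection (i, j) -> (i+1, j)
    # A[1, i, j]: rightward connection (i, j) -> (i, j+1)
    # Connections up/left are looked up from the neighboring cell's entry.
    if direction == "down":
        return bool(A[0, i, j])
    if direction == "right":
        return bool(A[1, i, j])
    if direction == "up":
        return i > 0 and bool(A[0, i - 1, j])
    if direction == "left":
        return j > 0 and bool(A[1, i, j - 1])
    raise ValueError(direction)

A = np.zeros((2, 3, 3), dtype=bool)
A[0, 0, 0] = True                     # connect (0,0) downward to (1,0)
assert has_connection(A, 1, 0, "up")  # same connection, seen from (1,0)
```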
To produce solutions to mazes, two points are selected uniformly at random without replacement from the connected component of the maze, and the \(A^{*}\) algorithm [8] is applied to find the shortest path between them.
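For reference, a compact \(A^{*}\) search over such a lattice might look as follows. This is an illustrative sketch with a Manhattan-distance heuristic, not the package's exact code; the neighbors callable is assumed to be built from the connection_list convention above.

```
import heapq

def astar(neighbors, start, goal):
    # A* shortest path on a 4-connected lattice with unit edge costs.
    # `neighbors(node)` is assumed to yield adjacent, connected cells.
    def h(n):  # Manhattan distance: admissible and consistent here
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbors(node):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal not reachable from start
```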
Parallelization is implemented via the multiprocessing module in the Python standard library, and parallel generation can be controlled via keyword arguments to the MazeDataset.from_config() function.
### Relation to Existing Works
As mentioned in the introduction, a multitude of public and open-source software packages exist for generating mazes [21; 4; 16]. However, our package provides more flexibility and efficiency in the following ways:
* For rigorous investigations of the response of a model to various distributional shifts, preserving metadata about the generation algorithm with the dataset itself is essential. To this end, our package efficiently stores the dataset along with its metadata in a single human-readable file [9]. This metadata is loaded when the dataset is retrieved from disk and reduces the complexity of discerning the parameters under which a dataset was created.
* Prior works provide maze datasets in only a rasterized format, which is not suitable for training autoregressive text-based transformer models. As discussed in Section 3, our package provides these different formats natively.
* Our package provides a selection of maze generation algorithms, which all write to a single unified format. All output formats are reversible, and operate to and from this unified format.
Figure 5: Plots of maze generation time. Generation time scales exponentially with maze size for all algorithms (left). Generation time does not depend on the number of mazes being generated, and there is minimal overhead to initializing the generation process for a small dataset (right). Wilson’s algorithm is notably less efficient than others and has high variance. Note that for both plots, values are averaged across all parameter sets for that algorithm, and parallelization is disabled.
As mentioned in Section 3.1, we also include the RasterizedMazeDataset class in our codebase, which can exactly mimic the outputs provided in easy-to-hard-data [21]. Our as_ascii() method provides a format similar to that used in [22]. The text format provided by as_tokens() is similar to that of [14], but provides a custom tokenization scheme.
### Limitations of maze-dataset
For simplicity, the package primarily supports mazes that are sub-graphs of a 2-dimensional rectangular lattice. Some support for higher-dimensional lattices is present, but not all output formats are adapted for higher dimensional mazes. As mentioned in Section 5, our codebase does not fully support utilizing the periodic structure allowed by the data structure representing the maze. Since the use of \(A^{*}\) described in Section 5 does not have a preference between two paths of equal length, solutions to mazes which are not acyclic may not always be unique.
## 6 Conclusion
The maze-dataset library [10] introduced in this paper provides a flexible and extensible toolkit for generating, processing, and analyzing maze datasets. By supporting various procedural generation algorithms and conversion utilities, it enables the creation of mazes with customizable properties to suit diverse research needs. Planned improvements to the maze-dataset include adding more generation algorithms (such as Prim's algorithm [12; 18; 2] and Kruskal's algorithm [13], among others [6]), adding the ability to augment a maze with an adjacency list to add "shortcuts" to the maze, and resolving certain limitations detailed in Section 5.2. Future work will make extensive use of this library to study interpretability and out-of-distribution generalization in autoregressive transformers [11], recurrent convolutional neural networks [20], and implicit networks [15].
## 7 Acknowledgements
First and foremost, the authors would like to thank each other for the good times had in developing this library and the subsequent research which was carried out. We are also indebted to AI Safety Camp and AI Safety Support for supporting this project and bringing many of the authors together. This work was partially funded by National Science Foundation award DMS-2309810. We thank the Mines Optimization and Deep Learning group (MODL) for fruitful discussions. |
2309.09877 | Not Enough Labeled Data? Just Add Semantics: A Data-Efficient Method for
Inferring Online Health Texts | User-generated texts available on the web and social platforms are often long
and semantically challenging, making them difficult to annotate. Obtaining
human annotation becomes increasingly difficult as problem domains become more
specialized. For example, many health NLP problems require domain experts to be
a part of the annotation pipeline. Thus, it is crucial that we develop
low-resource NLP solutions able to work with this set of limited-data problems.
In this study, we employ Abstract Meaning Representation (AMR) graphs as a
means to model low-resource Health NLP tasks sourced from various online health
resources and communities. AMRs are well suited to model online health texts as
they can represent multi-sentence inputs, abstract away from complex
terminology, and model long-distance relationships between co-referring tokens.
AMRs thus improve the ability of pre-trained language models to reason about
high-complexity texts. Our experiments show that we can improve performance on
6 low-resource health NLP tasks by augmenting text embeddings with semantic
graph embeddings. Our approach is task agnostic and easy to merge into any
standard text classification pipeline. We experimentally validate that AMRs are
useful in the modeling of complex texts by analyzing performance through the
lens of two textual complexity measures: the Flesch Kincaid Reading Level and
Syntactic Complexity. Our error analysis shows that AMR-infused language models
perform better on complex texts and generally show less predictive variance in
the presence of changing complexity. | Joseph Gatto, Sarah M. Preum | 2023-09-18T15:37:30Z | http://arxiv.org/abs/2309.09877v1 | Not Enough Labeled Data? Just Add Semantics: A Data-Efficient Method for Inferring Online Health Texts
###### Abstract
User-generated texts available on the web and social platforms are often long and semantically challenging, making them difficult to annotate. Obtaining human annotation becomes increasingly difficult as problem domains become more specialized. For example, many health NLP problems require domain experts to be a part of the annotation pipeline. Thus, it is crucial that we develop **low-resource NLP solutions** able to work with this set of limited-data problems.
In this study, we employ Abstract Meaning Representation (AMR) graphs as a means to model low-resource Health NLP tasks sourced from various online health resources and communities. AMRs are well suited to model online health texts as they can represent multi-sentence inputs, abstract away from complex terminology, and model long-distance relationships between co-referring tokens. AMRs thus improve the ability of pre-trained language models to reason about high-complexity texts. Our experiments show that we can improve performance on 6 low-resource health NLP tasks by augmenting text embeddings with semantic graph embeddings. Our approach is task agnostic and easy to merge into any standard text classification pipeline. We experimentally validate that AMRs are useful in the modeling of complex texts by analyzing performance through the lens of two textual complexity measures: the Flesch Kincaid Reading Level and Syntactic Complexity. Our error analysis shows that AMR-infused language models perform better on complex texts and generally show less predictive variance in the presence of changing complexity.
## Introduction
In recent years, fine-tuning pre-trained language models (PLMs) has become a standard approach to text classification [11, 12]. However, it is becoming clear that many tasks are too complex to be modeled using the standard fine-tuning pipeline and require a more intricate, task-specific solution. This is particularly true in the domain of low-resource NLP -- the set of tasks where large-scale human-annotated data is unavailable. An exact definition of what makes a problem low-resource varies throughout the literature [1], and is often both problem and domain-specific. However, it is clear that when a certain language is underrepresented [13], or a specific type of problem is very difficult to obtain annotation for [14], a low-resource solution can improve performance on a given problem.
Limited annotation is common amongst datasets grounded in web and social media data [15, 16, 17]. This is due to the complex nature of the data as they are often heterogeneous, multi-sentence texts, making them challenging to annotate. Additional degrees of complexity may be added for specific domains. For example, web and social texts regarding _health_ bring about more significant annotation challenges, as domain expertise is required for many health annotation tasks. Works supporting low-resource health NLP tasks have become increasingly common in recent years. However, many of these works focus on clinical health texts [16, 17]. Related works also lie in the field of health data augmentation [15, 16]. However, task-agnostic augmentation methods have been shown to have limited scope for boosting PLM performance [13] with specific examples of augmentation challenges for complex online health texts being discussed in [10].
Limited work has been done about generalizable solutions to modeling _online health texts_, i.e., healthcare texts specific to online health resources, communities, and social media, which we argue require explicit NLP solutions as they contain their unique linguistic traits. For example, the context within which various health terms are discussed on public platforms constantly evolves, making online textual health data semantically challenging to model. Additionally, most transformers are pre-trained with formal or grammatically correct language and are not inherently well-suited to user-generated texts. Finally, web and social health platforms encourage texts which are often multi-sentence, which pose unique NLP modeling challenges for understanding long-range dependency information between co-referring tokens. To address these challenges, in this study, we propose using Abstract Meaning Representation (AMR) graphs to model complex web and social health texts.
Abstract Meaning Representation (AMR) graphs have become a popular semantic structure used in a variety of NLP tasks [14, 15, 16]. Unlike other linguistic modeling tools such as Semantic Role Labeling (SRL) or Dependency Trees (DTs), AMRs abstract _away_ from the text, representing meaning using only high-level semantic concepts from a fixed vocabulary. Figure 1 provides an example AMR graph. The graph contains mostly abstract concepts instead of actual words from the original sentence.
In this study, we show how we can leverage AMR graphs to improve performance on various low-resource health classification tasks by augmenting text embeddings with AMR information. We define a low-resource health task as one with \(<6,000\) human-annotated samples. This is a conservative definition of low-resource for health texts consistent with related literature in NLP and web mining [1, 13, 14]. AMRs are an intuitive choice for the modeling of low-resource online health texts as AMRs provide a compact representation of the input space while explicitly modeling co-reference of multi-sentence texts [15]. This makes AMRs well-suited to model the nuances of health texts. Additionally, AMRs can abstract away from complex health terminology. Consider the example in Figure 1, where "Epidemics" is abstracted away to "Emergency". Such abstraction can make BERT-based models more generalizable for domain-specific tasks. AMRs are also easy to integrate into existing transformer-based NLP pipelines. Finally, AMRs can be adapted to other languages, making them more inclusive than other English-only semantic structures.
We validate our claim that AMRs aid in the modeling of complex health texts by investigating how performance changes with respect to textual complexity. We do this through the lens of two different complexity metrics 1) Estimated Reading Level [10], a statistical measure of textual reading difficulty and 2) Syntactic Complexity [12], a measure of the average difference between co-referring tokens in a given text. Our results show that AMR-infused PLMs exhibit better performance on texts with higher degrees of textual complexity when compared to an off-the-shelf text-only model.
A summary of our contributions are as follows:
1. We show that AMR-augmented classifiers can increase performance across various low-resource health NLP domains including medical advice modeling, telemedicine, and the modeling of medical research literature. Augmenting text classifiers with AMR embeddings is shown to, on average, provide a 3-pt increase in F1 score to frozen text encoders and 1-pt increase in F1 score to dynamic end-to-end text encoders across all tasks.
2. We provide a thorough error analysis of our solution through the lens of textual complexity. We verify our claim that AMRs improve the modeling of complex texts by analyzing model performance on samples at varying reading levels and degrees of syntactic complexity. Our analysis concludes that AMR-based models outperform text-only models on high-complexity health texts.
## 2 Background and Related Works
### Low-Resource Health NLP
In recent years, the machine learning community has seen an emergence of important Health NLP problems. Unfortunately, many tasks in this space are difficult to obtain large-scale annotation for and live in the domain of low-resource NLP. For example, the onset of the COVID-19 pandemic inspired many researchers to construct COVID-specific misinformation datasets from web articles and social media posts [20]. However, given the difficulty of annotating health texts, it often appears that the best-case scenario for many of these problems is to obtain a few thousand ground truth, human-annotated samples, with many containing less than 1,000 annotations [23, 24]. Such little annotation hinders the use of various deep NLP architectures to solve these problems. Many other examples of low-resource health datasets can be found in the domain of public health entity extraction [12], predicting suicide risk [10], and detecting adverse drug reactions on Twitter [13]. In this work, we aim to study a diverse set of low-resource text classification tasks in the domain of Health NLP and how we can reduce the need for large numbers of human-annotated examples.
### Semantically Grounded Transformer Models
The explicit modeling of semantics in transformer-based architectures has become a popular way to increase their modeling capacity. For example, in [20] they show that augmenting BERT with Semantic Role Label information can improve performance on a variety of tasks within the GLUE benchmark. In [22], the authors show how explicit modeling of predicate-argument structures can increase a transformer's capacity to detect paraphrases. In [1] it is shown how leveraging semantic structures during pre-training can improve a transformer's ability to understand complex dialogues. Such works motivate our desire to embed semantic graphs to alleviate the need for large-scale data annotation.
Figure 1: Example AMR graph and corresponding linearization for a given text. The AMR graph abstracts away from the text into high-level semantic concepts connected by semantic relations. The linearized version of the graph is a depth-first traversal of the AMR that allows the AMR to be read by standard transformer models.
Unlike previous works, we specifically aim to leverage semantics in a way that can be easily plugged into any text classification pipeline.
### Abstract Meaning Representation Graphs
Generally speaking, the goal of Abstract Meaning Representation (AMR) graphs is to capture "who did what to whom" for a given text [1]. More specifically, AMRs provide a semantic representation which _abstracts away_ from the text, producing a semantic graph with no 1-to-1 mapping to the original text, but rather a high-level encoding of the semantics necessary to preserve meaning. An example AMR graph can be found in Figure 1. Numerous studies in recent years have shown various use cases for AMRs in the domains of data augmentation [14], dialogue modeling [1], sentiment classification [15], and semantic textual similarity [16].
Many AMR-based models involve complex architecture designs which enable the inclusion of AMRs in the text classification pipeline. This study differs in that we encode the AMR separately and simply concatenate the embedding to a text embedding. We aim to show that this simple model can significantly improve performance on low-resource health tasks with little engineering effort.
In [16] they explore the retrofitting of AMR embeddings to text embeddings similar to our concatenate-then-predict format. However, their study does not explore end-to-end training of the AMR encoder and exclusively studies the impact of AMRs on multi-lingual tasks. We base our AMR encoder on the model introduced in this work.
### Textual Complexity
The notion of textual complexity is inherently subjective and requires further grounding to perform analysis. We choose to look at two statistical measures: 1) _The Flesch Kincaid Reading Level_[17] is an estimate of what US grade level is required to read a given text. This metric is a function of the number of words per sentence and number of syllables per word. We find this metric to do a good job of separating short, simple texts from long, verbose texts and thus useful for this analysis. Most reading levels roughly fall within a range of 0-16 to represent kindergarten (least complex) through university level (most complex) texts. 2) _Syntactic Complexity_: Texts which are syntactically complex will have large distances between words which refer to one another in a sentence [10]. We obtain syntactic complexity by parsing each text into a dependency tree and computing the mean distance between parent and child nodes in the sentence. Most mean dependency scores fall in the range of 0-5, with higher numbers indicating samples with many co-referring tokens that are far from one-another in a given text. This metric is of interest as AMRs do a good job at capturing long-distance co-reference between nodes in multi-sentence texts. We compute both metrics using the TextDescriptives library 1. Examples of complexity statistics are displayed in Table 1.
Footnote 1: [https://github.com/HLasse/TextDescriptives](https://github.com/HLasse/TextDescriptives)
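To make the second metric concrete, mean dependency distance can be computed from a spaCy parse as in the sketch below. This follows the definition above rather than TextDescriptives' exact implementation, and assumes the en_core_web_sm model is installed.

```
import spacy

nlp = spacy.load("en_core_web_sm")

def mean_dependency_distance(text: str) -> float:
    # Mean distance (in tokens) between each word and its syntactic head,
    # excluding roots (whose head is the token itself in spaCy).
    doc = nlp(text)
    dists = [abs(tok.i - tok.head.i) for tok in doc if tok.head is not tok]
    return sum(dists) / len(dists) if dists else 0.0

print(mean_dependency_distance("You shouldn't open your windows"))
```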
## Methods
In this section, we describe the end-to-end process to obtain AMR representations of text. We then discuss the contrastive learning framework for embedding AMR graphs. Finally, we discuss how AMR embeddings are incorporated into a text classifier.
**Parsing & Linearizing AMR graphs:** To obtain the AMR representations, we use the amrlib 2 library to parse each text into an AMR graph. Specifically, we use the 'parse_xfm_bart_base' model, which is a sequence-to-sequence parser based on the BART transformer [10]. In order to leverage the knowledge of pre-trained language models, we must convert the graph into a format which can be used by a transformer encoder. We do this by _linearizing_ the AMR graphs. This method is employed as opposed to operating on the graph directly, as it has been extensively shown to be the method of choice for many AMR-related tasks [12, 13]. To linearize a given graph, we employ a depth-first search-based linearization as done in [16]. An example AMR graph and its corresponding linearization can be found in Figure 1. The resulting linearized graph can now be treated as a textual input in our classification pipeline.
Footnote 2: [https://github.com/bjascob/amrlib](https://github.com/bjascob/amrlib)
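A minimal parsing sketch following amrlib's documented usage is shown below; note that it assumes the default downloadable model, whereas the experiments above use the 'parse_xfm_bart_base' parser.

```
import amrlib

stog = amrlib.load_stog_model()  # sentence-to-graph (parsing) model
graphs = stog.parse_sents(["Epidemics happen every 100 years."])
print(graphs[0])  # PENMAN-format AMR string, ready to be linearized
```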
**Embedding Linearized AMRs:** Once the AMRs are linearized, one can employ traditional sentence embedding strategies such as _contrastive learning_ to build meaningful AMR representation vectors. Contrastive learning works by constructing a dataset of triplets with the format (anchor, positive, negative), where the goal is for the model to push the embeddings of (anchor, positive) closer together and push (anchor, negative) further apart. This process produces meaningful semantic text embeddings that can be analyzed in high-dimensional space. Contrastive learning has been shown to be successful for many text embedding models Reimers and Gurevych (2019); Gao et al. (2021) as well as for multi-lingual AMR representations Cai et al. (2022).

| **Text** | **S-Complex** | **R-Complex** |
| --- | --- | --- |
| Is it safe to soak my earrings in rubbing alcohol (ethyl 70%) everyday before putting them in & putting them in right after? | 4.22 | 9.08 |
| Aspirin allergy - is it worth getting a bracelet? | 1.7 | 6.70 |
| Authorities have identified that the international chemical-warfare terrorist 'Samuel Whitcomb Hyde' is behind the deadly China 'coronavirus.' | 5.48 | 14.37 |
| You shouldn't open your windows | 1.5 | 0.51 |

Table 1: Example syntactic complexity (S-Complex) and reading level (R-Complex) from four samples on various ends of the complexity spectrum.
We construct a contrastive learning dataset using Natural Language Inference (NLI) data Williams et al. (2018); Bowman et al. (2015). NLI is a pairwise inference task whose goal is to detect if Sentence B _entails_ or _contradicts_ Sentence A. This annotation serves as a surrogate contrastive triplet of the form (anchor, entailment, contradiction). The use of NLI for contrastive learning has become standard in the domain of sentence embeddings Gao et al. (2021). Our general framework for AMR embedding is inspired by Cai et al. (2022). To train our contrastive AMR embedding model, we first convert all 275,601 NLI training triplets into linearized AMR representations. We then fine-tune a PLM using the Sentence-Transformers library 3 with contrastive learning using the Multiple Negatives Rankings loss Henderson et al. (2017). Our AMR encoder uses the MiniLM architecture Wang et al. (2020) as its backbone. We use the MiniLM model provided by the Sentence-Transformers library as it is a parameter-efficient model (and thus more inclusive of future works with hardware restrictions) which has been pre-trained for sentence embedding tasks -- an initialization we found empirically useful in our experimentation 4. We fine-tune our model using linearized AMRs for 1 epoch with a learning rate of \(2e-5\) and early stopping using the STS-B development set Cer et al. (2017). Additionally, we use mean pooling and the native PLM tokenizer.
Footnote 3: [https://www.sbert.net/](https://www.sbert.net/)
Footnote 4: Huggingface Model: ‘sentence-transformers/all-MiniLM-L6-v2’
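Concretely, this fine-tuning step can be expressed with the Sentence-Transformers training loop roughly as follows, where nli_amr_triplets is an assumed iterable of already-linearized (anchor, entailment, contradiction) AMR strings:

```
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_examples = [
    InputExample(texts=[anchor, entail, contradict])
    for anchor, entail, contradict in nli_amr_triplets  # assumed iterable
]
loader = DataLoader(train_examples, shuffle=True, batch_size=64)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1)
```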
**Merging Text & AMR Embeddings at Inference Time**
In all experiments where we combine text and AMR information, our final representation is a simple concatenation of text and AMR embedding. Figure 2 shows an end-to-end example of how texts become augmented by AMRs at inference time. We choose this approach as the purpose of this paper is to show that AMRs can be easily integrated into any NLP inference pipeline. We leave the application of more complicated AMR architectures [e.g. Grover et al. (2022); Li et al. (2022)] to future problem-specific applications.
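In code, the concatenate-then-predict model reduces to a few lines; a minimal PyTorch sketch is given below, with illustrative dimensions (384 is MiniLM's embedding size):

```
import torch
import torch.nn as nn

class ConcatClassifier(nn.Module):
    # Concatenate a text embedding and an AMR embedding, then classify
    # with a single linear head, as described above.
    def __init__(self, text_dim: int = 384, amr_dim: int = 384, n_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(text_dim + amr_dim, n_classes)

    def forward(self, text_emb: torch.Tensor, amr_emb: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([text_emb, amr_emb], dim=-1))
```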
## Evaluation Tasks
In this section, we provide details about each of our 7 evaluation tasks. An example from each dataset can be found in Table 2. We additionally introduce the metric used to evaluate each dataset as well as provide intuition on why AMRs are useful to the modeling of the given task.
### Health Advice Detection (HAD)
We explore Health Advice Detection (HAD) using the HAD dataset Li et al. (2021), which contains 5982 sentences from structured abstracts of biomedical research articles. The task is to detect if a sentence contains strong, weak, or no advice. We believe the HAD task will benefit from AMR's ability to model action polarities as well as common advice modifiers, such as conditionality and temporality. We evaluate performance on HAD by reporting the mean macro-F1 score over a stratified 5-fold cross-validation.
### Health Conflict Detection (HCD)
HCD was introduced in Preum et al. (2017), where the task is to take two pieces of health advice from a public health resource and detect if and how they are conflicting. This task aims at protecting users with multiple pre-existing conditions from following advice that is true in the context of one of their diagnoses but dangerous or conflicting in the context of another. In Gatto et al. (2022), they show that PLMs perform poorly on this task in part due to the complex multi-sentence nature of health advice data.
In this study, we explore two HCD sub-tasks: Conditional Conflict Detection (**HCD-C**) and Temporal Conflict Detection (**HCD-T**). A _conditional conflict_ occurs when two pieces of health advice only disagree under a certain condition. A _temporal conflict_ occurs only when two pieces of advice only differ in terms of temporality. Examples of each conflict type can be found in Table 2. We believe AMRs are well suited to aid in the modeling of such texts as the AMR vocabulary explicitly models conditional and temporal relationships between texts. A good example of AMRs' ability to model temporality can be found in Figure 1. Here we see the AMR identify that epidemics happen with some "frequency" and that "every 100 years" is abstracted into a "temporal quantity".

| **Tasks** | **Sample** | **Label** | **Evaluation Metric** |
| --- | --- | --- | --- |
| **Health Advice Detection** | Interventions to reduce self-harm in adolescents are needed. | Strong Advice | Macro F1 |
| **COVID Rumor** | Weed kills coronavirus | False | Macro F1 |
| **Medical Severity Detection** | I have pure pressures head aches and coughing persistent? | Severe | Macro F1 |
| **Medical Question Pairs** | Q1: Is hypo-therapy dangerous? Q2: Are effects of hypo-therapy permanent? I heard they can be dangerous. Is it true? | Paraphrase | Macro F1 |
| **Conditional Conflict Detection** | Advice 1: Limit dairy foods. Advice 2: If stomach upset occurs while you are taking this medication, you may take it with food or milk. | Conflicting | Positive F1 |
| **Temporal Conflict Detection** | Advice 1: Limit liquids before bed. Advice 2: Be sure to drink enough fluids to prevent dehydration unless your doctor directs you otherwise. | Conflicting | Positive F1 |
| **BIOSSES** | Sentence 1: The oncogenic activity of mutant Kras appears dependent on functional Craf | 2/4 | Spearman's Rank |

Table 2: Sample data from each of our 7 evaluation datasets. Health Advice Detection, COVID Rumor, and Medical Severity Detection are all single-text datasets. Medical Question Pairs, Conditional/Temporal Conflict Detection, and BIOSSES are pairwise-inference tasks. The label for each sample and evaluation metric for each dataset are shown on the right.
Each HCD task is a binary classification task identifying whether a pair of advice texts does or does not contain a conditional or temporal conflict, respectively. We evaluate using HCD's 2825 train and 470 synthetic test samples. Due to reasons outlined in [10], we are unable to perform cross-validation on HCD as many pieces of advice are used in multiple pairs, which could lead to data leakage. Thus, we report the mean F1 score of the positive class over 5 experimental trials.
### COVID Rumor Detection
The onset of the COVID-19 pandemic sparked a significant spike in online textual health misinformation [13]. Detection of misinformation requires a complex understanding of world knowledge and semantic reasoning. Given AMRs' use in argument modeling in existing literature [14], we feel they may help identify the argument structure of erroneous claims. We evaluate AMRs' impact on misinformation detection through the **COVID Rumors** dataset [1]. COVID Rumors contains a sub-task where the goal is to detect if a rumor is True, False, or Unverified. The dataset consists of 4129 claims from online news articles. An example False, or misinformative, rumor can be found in Table 2. Samples were collected from various fact-checking websites where veracity was explicitly mentioned for each claim. We evaluate performance on COVID Rumors by reporting the mean F1 score over a stratified 5-fold cross-validation.
### Detecting Medical Question Duplicates
The Medical Question Pairs (**MQP**) dataset [15] sources questions from an online health community, HealthTap.com, to identify duplicate user queries. This is an important task in telemedical triage pipelines. MQP contains 3048 pairs of questions annotated for duplicate queries. For this task, doctors were presented with 1524 questions and asked to write similar and dissimilar pairs to produce a duplicate questions dataset. We argue that AMRs are valuable in the detection of questions with similar semantics and thus relevant to tasks like predicting question duplicates. We evaluate performance on MQP by reporting the mean macro F1-score over a stratified 5-fold cross-validation.
### Classifying Severity of Telemedical Queries (Med-Severity)
The Medical Severity Classification (**Med-Severity**) task [10] contains 573 telemedical queries from online health communities such as HealthTap, HealthcareMagic, and iCliniq annotated for perceived severity. Similar to MQP, this is a crucial task in the telemedical triage pipeline. Med-Severity is a binary classification task where samples are annotated as either "Severe" or "Non-Severe". We evaluate performance on this dataset via the mean macro F1 over a 5-fold stratified cross-validation.
### Health-Specific Semantic Textual Similarity
We also evaluate how AMRs aid in health-specific semantic textual similarity (STS) through the lens of the **BIOSSES** dataset [12]. BIOSSES is a biomedical STS dataset containing 100 sentence pairs from biomedical research literature. Annotation is done on a scale of [0-4], with 0 meaning sentences have no similarity and 4 meaning they are semantically equivalent. AMRs have been shown to aid related STS tasks in the literature [14]. As mentioned in our discussion of MQP, AMRs, being a semantic structure, should be able to aid in the modeling of semantic relatedness.
Unlike our other experiments, BIOSSES is evaluated in a completely unsupervised setting. We first map the labels to be between [0-1] and then compute the cosine similarity between each pair of vectors. This is the standard approach for STS evaluation in the literature [10]. We report the Spearman's Rank correlation between all the cosine similarities and ground truth annotations as our BIOSSES evaluation metric.
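This unsupervised evaluation amounts to a cosine similarity per embedding pair followed by a rank correlation; a sketch using numpy and scipy:

```
import numpy as np
from scipy.stats import spearmanr

def sts_eval(emb_a: np.ndarray, emb_b: np.ndarray, gold: np.ndarray) -> float:
    # Cosine similarity of each pair, correlated with gold scores via
    # Spearman's rank, as described above. emb_a, emb_b: (n_pairs, dim).
    sims = np.sum(emb_a * emb_b, axis=1) / (
        np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
    )
    return spearmanr(sims, gold).correlation
```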
Figure 2: End-to-end pipeline displaying how each text gets \(\rightarrow\) Parsed into an AMR graph \(\rightarrow\) Linearized into a flattened string representation \(\rightarrow\) Embedded by our AMR Embedding Module \(\rightarrow\) AMR Embeddings are then concatenated with Text Embeddings and fed into a linear classifier. Note this example describes the process for a single-text input task. When the task is pairwise, additional steps in the pipeline occur to accommodate a second text.
## Evaluation Setting
We perform each experiment with both _static_ and _dynamic_ embeddings. We define a static embedding experiment as one where learning occurs _only_ in the classification head (i.e. text and AMR encoders are frozen). A dynamic embedding experiment is where the text and AMR encoders can be updated during training, i.e., they are _not frozen_. Evaluating in both contexts is important for low-resource learning as small datasets can overfit in end-to-end fine-tuning settings. So it is useful to analyze performance in the static setting where the pipeline is more regularized.
In each experiment we use the MiniLM-based SBERT [22] architecture provided by the SentenceTransformers library [19] as it is one of their top performing, most popular sentence encoders which has been pre-trained on over 1-billion sentence pairs 5. We choose this PLM as it is extremely accessible in terms of both availability and number of parameters. We use the same encoder backbone for both text and AMR models, as well as static and dynamic experiments to maintain fair and consistent evaluation across different evaluation settings.
Footnote 5: Huggingface model string: sentence-transformers/all-MiniLM-L6-v2
All static embedding experiments are performed by taking a fixed embedding and feeding it into a linear classification head. We train the linear layers for 5 epochs with a learning rate of 0.001 using the AdamW optimizer with weight decay = 0.01 to reduce overfitting. All classification experiments use the standard Cross Entropy loss with a balanced class weight.
Dynamic embedding experiments are similarly performed by passing embeddings into a linear classification head. However, since we allow gradient updates to occur in the encoders, we fine-tune for 5 epochs with a learning rate of \(5e-5\). All other training parameters are the same as the static experiments.
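For concreteness, the static setting boils down to training a single linear head on frozen, precomputed embeddings with the stated hyperparameters. A PyTorch sketch follows, with illustrative dimensions and train_loader assumed to yield (embedding, label) batches:

```
import torch
import torch.nn as nn

n_features, n_classes = 768, 2           # illustrative sizes
class_weights = torch.ones(n_classes)    # balanced class weights

head = nn.Linear(n_features, n_classes)  # the only trainable module
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3, weight_decay=0.01)
criterion = nn.CrossEntropyLoss(weight=class_weights)

for epoch in range(5):
    for emb, labels in train_loader:     # frozen, precomputed embeddings
        optimizer.zero_grad()
        loss = criterion(head(emb), labels)
        loss.backward()
        optimizer.step()
```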
## Results
### Static Embeddings
We find that all of the advice datasets (i.e. HCD-C, HCD-T, HAD) find static AMR embeddings useful for text classification. This validates our claim that advice texts have explicit semantic structures which can be exploited by AMR graphs in text classification pipelines. A particularly interesting result is that an AMR-only model with no access to the text shows higher performance than text-only models on both the HCD-C and HCD-T tasks. This is likely due to AMRs capacity to model explicit conditional and temporal relationships in the data. Patterns amongst labels and AMR relations such as _:condition_ and _:time_ may have provided a useful signal towards the detection of a conditional or temporal conflict. This result motivates future work which aims to connect AMRs to HCD in a more task-specific manner.
We find that AMRs do not help the MQP dataset in the static setting. This result is surprising as duplicate questions should have similar AMRs. However, it may be too challenging for the AMRs to be compared in the linear classification head of the static classifier, causing poor results on this task. On average, AMR-infused models aided in predictions of the COVID Rumor dataset. However, each run had relatively high variability as shown in Table 3, thus limited conclusions can be drawn from such experiments. On the Med-Severity dataset, AMRs were shown to provide a performance boost of 2 F1-pts. Given the extremely low-resource nature of Med-Severity, this task benefits mainly from the additional modality, as severity detection has less overt connections with semantics and greater ties to world/medical knowledge.
Finally, we find that the improved performance on the BIOSSES task follows other results in the literature where AMR improves results on STS [13]. That is, simple concatenation of text and semantic embeddings can improve performance on unsupervised cosine-similarity tasks such as BIOSSES, even without any domain-specific pretraining.
_Static Embeddings_

| Model | HCD-C | HCD-T | MQP | COVID Rumor | HAD | Med-Severity | BIOSSES | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SBERT | 0.20 ±.13 | 0.69 ±.003 | **0.69 ±.01** | 0.64 ±.04 | 0.68 ±.01 | 0.86 ±.03 | 0.81 | 0.64 |
| AMR-Only | **0.41 ±.02** | 0.72 ±.004 | 0.59 ±.03 | 0.62 ±.04 | 0.73 ±.01 | 0.85 ±.03 | 0.76 | 0.65 |
| SBERT+AMR | 0.34 ±.006 | **0.74 ±.003** | 0.67 ±.03 | **0.66 ±.04** | **0.76 ±.01** | **0.88 ±.01** | **0.83** | **0.67** |

_Dynamic Embeddings_

| Model | HCD-C | HCD-T | MQP | COVID Rumor | HAD | Med-Severity | BIOSSES | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SBERT | 0.79 ±.01 | 0.83 ±.02 | **0.77 ±.04** | **0.71 ±.05** | **0.89 ±.01** | 0.88 ±.03 | – | 0.81 |
| AMR-Only | 0.67 ±.01 | 0.78 ±.02 | 0.67 ±.05 | 0.64 ±.03 | 0.84 ±.01 | 0.87 ±.03 | – | 0.74 |
| SBERT+AMR | **0.82 ±.01** | **0.85 ±.02** | **0.77 ±.06** | **0.71 ±.04** | **0.89 ±.01** | **0.89 ±.02** | – | 0.82 |

Table 3: Each column describes performance of Text-Only, AMR-Only, and Text+AMR models for each evaluation task. In the static evaluation setting, we find AMRs improve performance on 6/7 evaluation tasks. In the dynamic evaluation setting, AMRs only improve performance on 3/6 tasks. Note that BIOSSES is not applicable in the dynamic setting as STS is evaluated completely unsupervised.
### Dynamic Embeddings
Our experimental results show that the only tasks with a conclusive boost in performance from AMR-infused PLMs are the HCD tasks. Again, AMRs are particularly well-suited for such tasks given the overlap between the HCD label space and the AMR relation vocabulary. On other inference tasks such as MQP, COVID Rumor, and HAD, we do not find AMR-infused models to be effective in the dynamic setting. We are again surprised by the result that AMRs do not aid in the detection of medical question duplicates. Future works should explore if domain-specific pre-training or deep learning architectures which are built to _compare_ AMR graphs for text classification may make them useful for this task. Both COVID Rumor and HAD saw the same performance with and without AMR embeddings. Since AMRs were extremely helpful in the static setting for HAD, it may be the case that AMRs are only useful when advice modeling is in either a lower-resource or more regularized state (e.g. static embedding evaluation). We test this suspicion by re-running the dynamic experiments in an artificially lower-resource setting, randomly sampling 500 examples from each of our three largest datasets. Our results in Table 4 confirm that, in the extremely low-resource setting, AMRs do in fact help the dynamic HAD experiment. They do not, however, improve performance on extremely low-resource variants of MQP or COVID Rumor. In general, low performance on COVID Rumor is likely due to the nature of veracity prediction as it is extremely dependent on world knowledge. Future works may wish to explore incorporating AMRs into knowledge-infused PLM architectures.
## Why Do AMRs Improve Performance on Low-Resource Health Datasets?
In this section, we provide an error analysis of the four static experiments where we saw the greatest improvement from the inclusion of AMR embeddings -- HAD, HCD-C, HCD-T, and Med-Severity. We are specifically interested in investigating our claim that AMRs improve performance on complex health texts, which can be both multi-sentence and filled with challenging vocabulary.
**Why Analyze Complexity?** We choose to perform error analysis through the lens of complexity as AMRs are by design abstracting away from individual words and into a smaller set of high-level concepts, producing a simpler representation of textual semantics. Our intuition is that if AMRs are doing a good job at representing a sentence using only relations between semantic concepts, it should make the modeling of difficult texts more efficient. However, it is important to mention that these two metrics are not a perfect measure of _sample difficulty_. For example, the Flesch Kincaid Reading Level considers sentences with many high-syllable words as being more complex. However, one could argue that the high-syllable adjective "significant" is easier to understand in context than the noun "bank", as one could be referring to a river bank, a bank shot, or the Bank of America. However, our empirical analysis found this to be a reasonably good way to separate extremely easy texts from the extremely difficult ones. In other words, we believe there is a functional difference in sample complexity when we look at two samples that are very far away on our defined complexity spectrum and thus they serve as a useful measure of text complexity. In summary, this analysis doesn't necessarily mean we are improving performance on more _difficult_ samples (i.e. difficult for transformers to understand), but it does show how performance changes through the lens of different linguistic attributes often associated with what humans understand to be complex texts.
**Experimental Setup:** In order to evaluate how Reading Level and Syntactic Complexity affect performance, we conduct the following experiment: First, we take the predictions from our SBERT Sentence-Only static experiment and compare it to the Sentence + AMR variant. For HAD and Medical Severity Detection, we look at all predictions from all 5 test sets from each fold in our 5-fold cross-validation. For the HCD tasks, we look at the mean prediction from each of our 5 experimental trials. For each text, we compute each measure of complexity. We then split the predictions into bins based on complexity. For each sample bin, we recompute the macro F1 score for samples in that bin and plot them in Figure 3.
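The per-bin recomputation of F1 can be sketched as follows, where bins holds the right edges of the complexity bins:

```
import numpy as np
from sklearn.metrics import f1_score

def f1_by_complexity(y_true, y_pred, complexity, bins):
    # Assign each sample to a complexity bin, then recompute macro F1
    # within each bin, as plotted in Figure 3.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    idx = np.digitize(complexity, bins)
    return [
        f1_score(y_true[idx == b], y_pred[idx == b], average="macro")
        for b in np.unique(idx)
    ]
```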
**The Relationship Between F1 Score and Syntactic Complexity:** Figure 3 shows that AMR-infused models perform better across the board on texts with the highest syntactic complexity. We specifically find an interesting pattern of performance divergence between text-only and AMR-infused models as the complexity of texts increases. For example, on the HCD-T task, both models perform equally well on easy texts, but start to diverge as texts get more and more complex. A similar pattern is found on the Med-Severity and HCD-C tasks, where there is a clear divergence in F1 between the last two complexity bins. On the HAD task, while we don't see the same divergence pattern, we do notice that the AMR-infused model has much less variability with respect to changes in syntactic complexity when compared to text-only models. We can thus conclude that AMRs make PLMs more robust to syntactically complex health texts in low-resource settings.
**The Relationship Between F1 Score and Reading Level:** Our results show that, for HCD-C, HCD-T, and HAD, Reading Levels for both Sentence-Only and Sentence+AMR models follow similar trends. That is, changes in complexity seem to affect performance for Sentence-Only and Sentence+AMR models similarly. However, AMR-infused PLMs perform better on texts with higher reading levels in general. On the Med-Severity dataset, we find a significant drop in F1 between middle school and high school level texts for the text-only model, whereas the AMR-infused model remains relatively stable. From this experiment, we can only conclude that AMRs do in fact perform better on texts with higher reading levels. However, we do not observe the same divergence trend found in the syntactic complexity experiment, as AMR-infused models lose performance at a similar rate to text-only models as the reading level increases.

| Model | MQP | COVID Rumor | HAD |
| --- | --- | --- | --- |
| SBERT | **0.68 ±.06** | 0.60 ±.05 | 0.75 ±.04 |
| AMR-Only | 0.54 ±.04 | 0.55 ±.08 | 0.75 ±.02 |
| SBERT+AMR | 0.66 ±.06 | **0.61 ±.04** | **0.80 ±.01** |

Table 4: Results of the dynamic experiments on a random subset of 500 samples from each of our three largest datasets. This experiment aims to investigate if AMRs are more useful in an extremely low-resource setting. Our results find that HAD shows significant improvement from AMR embeddings when only presented with 500 samples. The other two tasks, however, still show no conclusive evidence that they benefit from semantic graph embedding.
## Broader Impacts
**Seamless Integration of AMR Embeddings For Any Downstream Health NLP Task:** This study improves performance on low-resource health tasks while maintaining an easy-to-implement classification technique. The success found using a simple concatenate-then-predict infrastructure means this work can be easily extended to any future low-resource Health NLP tasks with little engineering effort. Additionally, we release our model using the popular Hugging-face / Sentence-Transformer libraries which allow users to produce AMR embeddings with very few lines of code.
**Reducing The Need for Large-Scale Annotation of Health Web Texts:** Our work shows that with semantic modeling, we may be able to reduce the need for large-scale annotation of training samples for Health NLP tasks. This result extends the problem space solvable by encoder-based PLMs, specifically augmenting the scope of PLMs for web and social media texts.
**Complex Texts Become More Accessible to PLMs:** Our work shows that AMR embeddings make complex health texts easier for PLMs to process. This is important as social media datasets from sources like Twitter, Facebook, and Reddit often produce multi-sentence samples. Thus, explicit modeling of multi-sentence texts is crucial for the advancement of social media-based text classification pipelines.
## Ethical Impact
This research involved no human subjects as experiments were all run on publicly available data. The selection of datasets chosen aimed to represent a diverse and relevant set of Health NLP tasks. However, it may be the case that the datasets chosen contain sample distributions that underrepresent certain population groups. For example, the MQP dataset contains 1524 unique questions, which is small compared to the list of possible medical ailments one may inquire about. Future works which leverage AMR embeddings for medical duplicate detection should ensure performance stays consistent across different types of medical conditions not reflected in the MQP dataset. Similarly, samples in the conflict detection dataset are from official public health sources which target the general population and may not reflect health conflicts common among smaller underrepresented subgroups. Any deployment of an AMR-based solution to low-resource health tasks should be aware of such data biases.
## Limitations & Future Work
**Limitations:** Any AMR-based model which depends on silver-labeled AMR parsings of texts is inherently limited by the performance of the AMR parser. In this paper, the parser we employed achieves an 82.3 SMATCH [12] score on the AMR LDC 2020 dataset. This parser shows high performance, and we may see AMR embedding results improve as AMR parsing algorithms become more accurate. However, given that there are potential errors in the AMR parsing process, we may find that difficult samples produce bad semantic embeddings, which may introduce errors into the system. This problem could potentially become more prevalent as dataset complexity increases.
Figure 3: Analyzing how the F1 score relates to reading level and syntactic complexity on four datasets using the static evaluation setup. The x-axis denotes the textual complexity bins (higher = more complex). Specifically, the x-axes of the reading level experiments are binned by US grade level, and the x-axes of the syntactic complexity experiments are binned by mean dependency distance. Our results show that AMR-based models perform better and show less predictive variance on the most complex health texts.
Our evaluation framework has additional limitations as some tasks (e.g., HCD-T, HCD-C) are not constructed to use a hold-out evaluation set for training without encountering data leakage. Similarly, other tasks, such as Med-Severity, are extremely low-resource (\(<1000\) samples) and are thus too small to permit a hold-out validation set. These challenges motivate using a standardized set of training parameters, but this may not produce the optimally performing model for each experiment. Similar discussions on low-resource validation sets can be found in the literature [13].
**Future Works:** Future work on AMR embeddings for health tasks may explore domain-specific pre-training strategies. In this work, we only leverage NLI data for AMR embeddings, but improvements may be found via in-domain pre-training for any of our evaluation datasets. For example, unsupervised pre-training of our AMR encoder on COVID-19 tweets may have improved performance in the static embedding experiments for COVID Rumor.
Additionally, future works may explore more complicated social media sources such as Reddit, where posts can often be multiple paragraphs long. Gold-labeled AMRs have annotations for many samples with multiple sentences, making them potentially valuable for modeling paragraphs. AMRs' usefulness for longer, more complicated texts is evident in dialogue modeling [1] and thus may carry over to longer Reddit posts.
## Conclusion
The challenges associated with coordinating large-scale human annotation of health texts, such as time constraints, access to resources, and access to domain experts, are unlikely to go away soon. We must take explicit steps towards crafting low-resource solutions for texts in the health space as they are crucial to implementing many safety-critical public resources. In this work, we introduce AMR embeddings in the context of low-resource health tasks and show how they can help increase performance without large-scale datasets. We additionally show that AMRs are helpful in modeling complex health texts found on the web and online health communities, which are often complicated multi-sentence texts with varying degrees of nuance.
|
2305.19769 | Attention-Based Methods For Audio Question Answering | Audio question answering (AQA) is the task of producing natural language
answers when a system is provided with audio and natural language questions. In
this paper, we propose neural network architectures based on self-attention and
cross-attention for the AQA task. The self-attention layers extract powerful
audio and textual representations. The cross-attention maps audio features that
are relevant to the textual features to produce answers. All our models are
trained on the recently proposed Clotho-AQA dataset for both binary yes/no
questions and single-word answer questions. Our results clearly show
improvement over the reference method reported in the original paper. On the
yes/no binary classification task, our proposed model achieves an accuracy of
68.3% compared to 62.7% in the reference model. For the single-word answers
multiclass classifier, our model produces a top-1 and top-5 accuracy of 57.9%
and 99.8% compared to 54.2% and 93.7% in the reference model respectively. We
further discuss some of the challenges in the Clotho-AQA dataset such as the
presence of the same answer word in multiple tenses, singular and plural forms,
and the presence of specific and generic answers to the same question. We
address these issues and present a revised version of the dataset. | Parthasaarathy Sudarsanam, Tuomas Virtanen | 2023-05-31T12:00:51Z | http://arxiv.org/abs/2305.19769v1 | # Attention-Based Methods For Audio Question Answering
###### Abstract
Audio question answering (AQA) is the task of producing natural language answers when a system is provided with audio and natural language questions. In this paper, we propose neural network architectures based on self-attention and cross-attention for the AQA task. The self-attention layers extract powerful audio and textual representations. The cross-attention maps audio features that are relevant to the textual features to produce answers. All our models are trained on the recently proposed Clotho-AQA dataset for both binary yes/no questions and single-word answer questions. Our results clearly show improvement over the reference method reported in the original paper. On the yes/no binary classification task, our proposed model achieves an accuracy of 68.3% compared to 62.7% in the reference model. For the single-word answers multiclass classifier, our model produces a top-1 and top-5 accuracy of 57.9% and 99.8% compared to 54.2% and 93.7% in the reference model respectively. We further discuss some of the challenges in the Clotho-AQA dataset such as the presence of the same answer word in multiple tenses, singular and plural forms, and the presence of specific and generic answers to the same question. We address these issues and present a revised version of the dataset.
Audio question answering, attention mechanism, Clotho-AQA
## I Introduction
Question answering (QA) is the task of producing natural language answers when posed with questions in natural language. Often, these questions are accompanied by a natural signal such as an image or audio and the questions posed are about the contents of these signals. If the auxiliary input is an image, the task is referred to as visual question answering (VQA) and if it is an audio signal, it is called audio question answering (AQA). Although the question answering framework is somewhat well-studied for image [1, 2, 3, 4, 5] and textual modalities [6, 7, 8], audio question answering is comparatively less explored. Audio question answering unlocks new possibilities in areas such as monitoring and surveillance, machine listening, human-technology interaction, acoustical scene understanding, etc.
One of the challenging aspects of any multimodal machine learning system is how the information from different modalities is fused to achieve a given task. Traditionally, in question answering systems, the multimodal features are fused using point-wise multiplication [9] or they are concatenated [10] to generate a multimodal representation. This may not be an efficient strategy as these features are learned independently without any context from the other modality. Our hypothesis is that using an attention mechanism to learn a multimodal representation helps the model to learn specific features in the audio representation that are closely related to the natural language words in the question and hence improves the performance of the system.
Recently, attention-based architectures [11] have achieved state-of-the-art performances in various tasks ranging from natural language processing [11] to image classification [12], sound event detection [13], and sound event localization and detection [14]. The ability of transformers to learn powerful and meaningful representations is due to the self-attention and cross-attention layers. Traditional self-attention layers learn bidirectional temporal characteristics of their inputs efficiently. Cross-attention layers are useful to learn or combine features in multimodal translation tasks. For example, in visual question answering, [15, 1] used cross-attention layers to improve the ability of the model to find relevant visual cues depending on the question. However, the effect of attention layers on the audio question answering task is unexplored.
In this work, we propose neural network architectures based on attention mechanisms and study their effectiveness for the audio question answering task. Our results show improvement over the reference method described for the Clotho-AQA dataset in [10].
The remainder of this paper is organized as follows. In Section II, our proposed method for the AQA task is explained. Then in Section III, the dataset, reference methods, and experimental setup are described in detail. Subsequently, in Section IV, the results of all our experiments are presented. Finally, in Section V, the conclusion of this work and possible future works are discussed.
## II Methods
### _Proposed model_
An AQA system processes an audio signal and an associated natural language question to produce a natural language answer. The answers can be produced either using a generative model or the answers can be chosen from a list of possible answers using a discriminative classification model.
In this work, the AQA task is tackled as a classification problem. Our proposed model architecture is shown in Figure 1. It consists of two input branches, one for processing the audio features and the other for textual features. The audio branch takes in the mel-spectrogram of the audio signal with 128 mel bands and \(T\) time frames and uses a pre-trained OpenL3 [16] to extract audio features. The OpenL3 model is based on L\({}^{3}\)-Net [17] trained on videos from the Audioset [18] dataset for the audio-visual correspondence task. The output from the pre-trained OpenL3 audio sub-network is \(\mathbf{X}_{a}\in\mathbb{R}^{T^{\prime}\times 512}\), where \(T^{\prime}\) is the number of output time frames in the OpenL3 model and 512 is the audio embedding size.
Similarly, the text branch produces the word vectors of the input textual question with \(z\) words using the pre-trained Fasttext [19]. The output representation from the Fasttext model is \(\mathbf{P}_{t}\in\mathbb{R}^{z\times 300}\), where 300 is the dimension of the learned word vectors.
These extracted features are passed individually through a series of self-attention (SA) layers for both modalities. In the SA mechanism, each time step in the input feature attends to all other time steps to learn temporal relationships. Since all the time steps are fed simultaneously to the self-attention layer, the order of input features is not known. Hence, we add sinusoidal positional embeddings described in [11] to each of the time steps of the audio and textual features which aids the model in learning the relative positions of these features.
The SA layers calculate the dot-product attention of the input features with themselves. For any input \(\mathbf{H}\in\mathbb{R}^{t\times i}\), where \(t\) is the number of time steps and \(i\) is the input dimension, the output of the SA layer is calculated as
\[\text{SA}(\mathbf{H})=\text{softmax}(\mathbf{H}\mathbf{W_{q}}\mathbf{W_{k}^{ T}}\mathbf{H^{T}})\mathbf{H}\mathbf{W_{v}} \tag{1}\]
where, \(\mathbf{W_{q}},\mathbf{W_{k}}\in\mathbb{R}^{i\times k}\) and \(\mathbf{W_{v}}\in\mathbb{R}^{i\times o}\) are learnable query, key and value matrices respectively. Here, \(k\) is the key dimension in the attention layer and \(o\) is the output dimension. The softmax operation is performed over the rows. Note that in a self-attention layer, the query, key, and value are calculated from the same input.
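To make Eq. (1) concrete, the following PyTorch sketch implements one such self-attention layer. It is our own illustration rather than the authors' released code; the class and argument names are ours, and we omit the positional embeddings and any normalization the full model may apply.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head dot-product self-attention, following Eq. (1)."""
    def __init__(self, in_dim: int, key_dim: int, out_dim: int):
        super().__init__()
        self.W_q = nn.Linear(in_dim, key_dim, bias=False)  # learnable query matrix
        self.W_k = nn.Linear(in_dim, key_dim, bias=False)  # learnable key matrix
        self.W_v = nn.Linear(in_dim, out_dim, bias=False)  # learnable value matrix

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        # H: (t, in_dim); scores: (t, t); softmax over rows, as in Eq. (1)
        scores = self.W_q(H) @ self.W_k(H).transpose(-2, -1)
        return torch.softmax(scores, dim=-1) @ self.W_v(H)

# Audio branch example: T' = 180 OpenL3 frames of size 512 -> attention size 512
sa = SelfAttention(in_dim=512, key_dim=512, out_dim=512)
print(sa(torch.randn(180, 512)).shape)  # torch.Size([180, 512])
```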
For the audio features \(\mathbf{X_{a}}\) obtained from the OpenL3 model, the output of the \(n^{th}\) audio SA layer is \(\mathbf{X}_{n}\in\mathbb{R}^{T^{\prime}\times N}\), where \(N\) is the output attention size. Similarly, for the word vectors \(\mathbf{P}_{t}\) from the Fasttext model, the output of the \(n^{th}\) text SA layer is \(\mathbf{P}_{n}\in\mathbb{R}^{z\times M}\), where \(M\) is the output attention size. In our experiments, the output attention size of the SA layers in the audio branch is fixed to 512, while that of the text branch is fixed to 300. We used two SA layers for both branches.
To perform a fusion of the audio and textual features, we use multi-head cross-attention (MHCA) layers to learn relevant multimodal features. Attention is applied from the textual features to the audio features to determine which audio features are important to each of the question words. Hence, the output of the final text self-attention layer \(\mathbf{P}_{2}\in\mathbb{R}^{z\times 300}\) is used to compute the query and the output of the final audio self-attention layer \(\mathbf{X}_{2}\in\mathbb{R}^{T^{\prime}\times 512}\) is used to calculate the key and value inputs to the MHCA layers. The output of a cross-attention (CA) layer is computed as
\[\text{CA}(\mathbf{P_{2}},\mathbf{X_{2}})=\text{softmax}(\mathbf{P_{2}}\mathbf{ W_{q}}\mathbf{W_{k}^{T}}\mathbf{X_{2}^{T}})\mathbf{X_{2}}\mathbf{W_{v}} \tag{2}\]
where, \(\mathbf{W_{q}}\in\mathbb{R}^{300\times 512}\), \(\mathbf{W_{k}}\in\mathbb{R}^{512\times 512}\) and \(\mathbf{W_{v}}\in\mathbb{R}^{512\times O}\) are learnable query, key and value matrices respectively and the softmax operation is performed over the rows. Here, \(O\) is the output dimension. For an MHCA layer with M attention heads, the outputs from all the heads are concatenated along the rows and \(\mathbf{W_{p}}\in\mathbb{R}^{MO\times O}\), a learned projection matrix projects it into the desired output dimension. The output of the MHCA layer is given by
\[\text{MHCA}(\mathbf{P_{2}},\mathbf{X_{2}})=\underset{m=1\dots M}{\text{Concat}} [\text{CA}_{m}(\mathbf{P_{2}},\mathbf{X_{2}})]\mathbf{W_{p}} \tag{3}\]
In all our experiments, the output attention size is fixed at 1024 with 8 attention heads. All the hyperparameters were tuned based on the model's performance on the validation data. The output of the MHCA layer is \(\mathbf{D}\in\mathbb{R}^{z\times O}\). To obtain a fixed size representation, the mean is taken over the words axis of the output of the attention layer to produce \(\mathbf{D^{\prime}}\in\mathbb{R}^{O}\). This is then passed through two dense layers for combining the learned features and then to the classification layer.
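The sketch below extends the previous one to the multi-head cross-attention of Eqs. (2)-(3). Again this is our hedged reconstruction, not the original code: the paper's dimensions (queries from 300-dimensional text features, keys and values from 512-dimensional audio features, output size 1024 with 8 heads) are used as defaults, and we assume the per-head outputs are concatenated along the feature axis before the \(\mathbf{W_{p}}\) projection.

```python
import torch
import torch.nn as nn

class MultiHeadCrossAttention(nn.Module):
    """Text queries attend to audio keys/values, following Eqs. (2)-(3)."""
    def __init__(self, q_dim=300, kv_dim=512, out_dim=1024, n_heads=8):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.ModuleDict({
                "W_q": nn.Linear(q_dim, kv_dim, bias=False),
                "W_k": nn.Linear(kv_dim, kv_dim, bias=False),
                "W_v": nn.Linear(kv_dim, out_dim, bias=False),
            })
            for _ in range(n_heads)
        )
        self.W_p = nn.Linear(n_heads * out_dim, out_dim, bias=False)  # projection

    def forward(self, P2: torch.Tensor, X2: torch.Tensor) -> torch.Tensor:
        # P2: (z, 300) text features; X2: (T', 512) audio features
        outs = []
        for h in self.heads:
            scores = h["W_q"](P2) @ h["W_k"](X2).transpose(-2, -1)  # (z, T')
            outs.append(torch.softmax(scores, dim=-1) @ h["W_v"](X2))
        D = self.W_p(torch.cat(outs, dim=-1))  # (z, out_dim)
        return D.mean(dim=0)                   # mean over the words axis
```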
We developed two classifiers using this architecture. A binary classifier for questions that have 'yes' or 'no' as answers and a multiclass classifier for questions that have other single-word answers in the Clotho-AQA dataset. The classification layer is a logistic regressor with one neuron for the binary classifier. In the case of the multiclass classifier, the final classification layer contains as many neurons as the number of unique answer words in the dataset followed by a softmax activation to predict the probabilities.
Fig. 1: Proposed attention model architecture
## III Evaluation
### _Dataset_
We trained and evaluated our models on the recently proposed Clotho-AQA dataset [10]. The dataset contains 1991 audio files randomly selected from the Clotho dataset [20]. The Clotho dataset is an audio captioning dataset that contains 4981 audio files that are 15-30s in duration. It contains audio files of day-to-day sounds occurring in the environment such as water, nature, birds, noise, rain, city, wind, etc. In the Clotho-AQA dataset, for each of these audio files, there are four 'yes' or 'no' questions and two single-word answer questions. For each question, answers were collected from three independent annotators. Hence, each audio file is associated with 18 question-answer pairs. In the Clotho-AQA dataset, there are 828 unique single-word answers excluding 'yes' and 'no'. The complete process of data collection, cleaning, and creating the splits is detailed in [10]. Henceforth, this dataset is referred to as Clotho-AQA_v1 in this paper.
The Clotho-AQA_v1 dataset has a few limitations in single-word answers due to crowdsourcing. The dataset contains issues relating to specificity, tense, singular and plural words, etc. in the answers. For example, to questions like 'What is making the chirping sound?', some annotators provided 'bird' as the answer while some provided 'seagull' as a more specific answer. Although both these answers can be correct, they are considered different answer classes which creates confusion in the system. Tense issues generally occur when the same question is posed in different tenses for different audio files. For example, for questions like 'What is the person doing?' and 'What does the person do?' the answers are 'running' and 'run' respectively. An example of singular-plural answers is, for the question 'Which object is making the metallic sound?', some annotators answered with the singular form 'key' while some used the plural form 'keys'. These are considered different answer classes in the Clotho-AQA dataset and thus affect the performance of the classifier.
Since all the answers in the dataset are single words, the AQA system is trained as a classifier where each unique answer word is denoted by a class index for the ground truth labels. Hence, the system does not learn any language modeling from the answer words. Therefore, we addressed these three issues by replacing specific answer words with their parent classes (for example,'seagull' to 'bird'), all plural words to singular, and all answer tenses to simple present. After this cleaning process, we ended up with 650 unique single-word answers compared to 828 in Clotho-AQA_v1. This new version of the dataset is referred to as Clotho-AQA_v2 in this paper. The distribution of unique answer words in Clotho-AQA_v2 is shown in Figure 2.
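The cleaning just described can be approximated with a short script. The sketch below is only illustrative: the actual curation of Clotho-AQA_v2 was presumably done with a hand-built mapping, so the parent-class table here is a made-up fragment, and we use NLTK's WordNet lemmatizer (which requires the 'wordnet' data to be downloaded) merely as one plausible way to normalize tense and number.

```python
from nltk.stem import WordNetLemmatizer  # pip install nltk; nltk.download('wordnet')

lemmatizer = WordNetLemmatizer()

# Illustrative fragment only; the real specific-to-generic mapping is larger.
PARENT_CLASS = {"seagull": "bird", "sparrow": "bird"}

def normalize_answer(word: str) -> str:
    word = word.lower().strip()
    word = lemmatizer.lemmatize(word, pos="v")  # tense -> base form ('running' -> 'run')
    word = lemmatizer.lemmatize(word, pos="n")  # plural -> singular ('keys' -> 'key')
    return PARENT_CLASS.get(word, word)         # specific -> generic parent class

print(normalize_answer("running"), normalize_answer("keys"), normalize_answer("seagull"))
# run key bird
```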
Since each question is answered by three independent annotators, it is also important to analyze if the answers are the same or different from each other. For example in the Clotho-AQA_v1 test split for single-word answers, out of 946 unique questions, 203 questions have unanimous answers provided by all the annotators and 381 questions have two out of the three annotators providing the same answer. This means that the maximum possible top-1 accuracy of a system without modeling the characteristics of the annotator will be 61%. In the Clotho-AQA_v2, the maximum achievable accuracy of the multiclass classifier increased to 65%.
### _Reference methods_
In order to study the effects of cross-attention and self-attention mechanisms on the AQA task, two reference architectures are used. As the first reference model, we used the architecture proposed in [10]. This model architecture does not have any attention blocks. Similarly to the proposed model, the mel-spectrogram of the audio input is processed using the pre-trained OpenL3 model and the pre-trained word vectors are obtained using the Fasttext model. Compared to the proposed model, where the pre-trained audio and textual features are passed separately through SA layers, in this model these features are passed separately through a series of Bi-LSTM layers to learn the temporal features and create fixed-size representations. The hidden state of the final time step of the last Bi-LSTM layer is used as the fixed-size representation. These fixed-size representations from the audio and text branches are concatenated to produce the multimodal representation. The multimodal features are then processed by the dense and classification layers similarly to the proposed model.
The second reference model examines the effect of the cross-attention mechanism on the fusion of multimodal features. MHCA layers are introduced after the Bi-LSTM layers of the first reference system for multimodal representation instead of feature concatenation. The output of the audio Bi-LSTM units is \(\mathbf{X}_{audio}\in\mathbb{R}^{T^{\prime}\times 2h}\), where \(h\) is the number of hidden units in the Bi-LSTM layer and \(T^{\prime}\) is the number of output frames from the OpenL3 model. Similarly, the output of the textual branch Bi-LSTM is \(\mathbf{X}_{text}\in\mathbb{R}^{z\times 2h^{\prime}}\), where \(h^{\prime}\) is the number of hidden units in the Bi-LSTM layer and \(z\) is the number of words in the natural language question. The cross-attention is calculated on these features as explained in Section II. Similarly to the proposed method, the mean is taken over the words axis to produce a fixed-size representation and it is passed through dense and classification layers for predicting the answer class.

Fig. 2: Count of unique answers in each of the splits in Clotho-AQA_v2
### _Network training_
All the models were trained and evaluated on both the Clotho-AQA_v1 and Clotho-AQA_v2 datasets. The data split for the binary classifier and the multiclass classifier is obtained by selecting 'yes' or 'no' answers and single-word answers respectively from the Clotho-AQA dataset splits as described in [10]. As a result, there are 1174, 344, and 473 audio files for training, validation, and test split respectively each associated with 12 yes/no and six single-word question-answer pairs.
The performance of the binary yes/no classifier is also analyzed on contradicting answers provided by different annotators to the same question similar to the approach proposed in [10]. In this regard, three cases are considered. In the first case, all the question-answer pairs are considered valid even if they contain contradicting answers. In the second case, only those question-answer pairs for which all three annotators have responded unanimously are considered valid. In the third case, a majority voting scheme is used, where for each question, the label is chosen as the answer provided by at least two out of the three annotators. These three cases are denoted as 'Unfiltered data', 'Unanimous', and 'Majority votes' respectively. Note that the binary classifier data set is the same in both Clotho-AQA_v1 and Clotho-AQA_v2.
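The three evaluation cases can be expressed as a small label-selection routine. The sketch below is our own paraphrase of the description above, not code from the Clotho-AQA release; the function and scheme names are ours.

```python
from collections import Counter

def select_labels(answers, scheme="unfiltered"):
    """answers: the three annotator answers for one question.
    Returns the list of labels kept for that question under the given scheme."""
    top, freq = Counter(answers).most_common(1)[0]
    if scheme == "unfiltered":   # keep every question-answer pair, contradictions included
        return list(answers)
    if scheme == "unanimous":    # keep only questions all three annotators agree on
        return [top] if freq == 3 else []
    if scheme == "majority":     # label = answer given by at least two of three annotators
        return [top] if freq >= 2 else []
    raise ValueError(f"unknown scheme: {scheme}")

print(select_labels(["yes", "yes", "no"], "majority"))   # ['yes']
print(select_labels(["yes", "yes", "no"], "unanimous"))  # []
```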
All the models were trained for 100 epochs with cross-entropy loss and the model with the best validation score is used for evaluation on the test set.
## IV Results
The results of all the experiments on the Clotho-AQA data set for binary classification of 'yes' or 'no' answers are presented in Table I. In this table, 'LSTM' represents the first reference model that uses Bi-LSTM layers for feature extraction and feature concatenation for multimodal fusion. 'LSTM-CA' represents the model which uses Bi-LSTM layers for feature extraction and cross-attention layers for multimodal fusion. Finally, 'SA-CA' represents our proposed model which used self-attention layers for feature extraction and cross-attention layers for multimodal fusion.
Firstly, it is clear that using cross-attention layers significantly improves the performance compared to feature concatenation. This means that the cross-attention layer helps the model to attend to audio features that are useful to answer the question. Secondly, using self-attention layers with positional embeddings to learn temporal relationships outperforms using Bi-LSTM layers.
The results of single-word answer multiclass classifier experiments on Clotho-AQA_v1 and Clotho-AQA_v2 are summarized in Table II and Table III respectively. Since the number of unique answer classes is large (828 in Clotho-AQA_v1 and 650 in Clotho-AQA_v2), top-5 and top-10 accuracy scores are also reported. These results indicate that the model is starting to learn relationships between the multimodal data.
It is again evident from the results that the self-attention and cross-attention mechanisms significantly improve the evaluation metrics in the case of the multi-class classifier as well. It is also noticeable that the model performs better on the Clotho-AQA_v2 dataset after resolving some issues present in the Clotho-AQA_v1 dataset. The proposed multi-class classifier also comes close to the maximum possible accuracy of an Oracle model on both the Clotho-AQA_v1 and Clotho-AQA_v2 data sets.
## V Conclusion
In this paper, we proposed self-attention and cross-attention based architectures for the AQA task. We trained and evaluated our models on the recently proposed Clotho-AQA dataset, referred to as Clotho-AQA_v1 in this work. We also discussed some of the challenges of this dataset such as the presence of the same answer word in multiple tenses, the presence of specific and generic answers to the same question, and the presence of singular and plural forms of the same word. These challenges were addressed and a revised version of this dataset, Clotho-AQA_v2, was created. The results of our proposed attention models on both these datasets clearly show that the cross-attention mechanism helps the model to learn better relationships between the input question and the audio compared to the reference methods. It is also evident that using self-attention layers with positional embeddings is more powerful in learning useful audio and textual features than the Bi-LSTM layers used in the reference methods.
## Acknowledgment
The authors wish to acknowledge CSC-IT Center for Science, Finland, for the computational resources used in this research.
TABLE II: Accuracies (%) of single-word answers classifier on Clotho-AQA_v1 data set.

| **Model** | **Top-1** | **Top-5** | **Top-10** |
| --- | --- | --- | --- |
| LSTM | 54.2 | 93.7 | 98.0 |
| LSTM-CA | 57.5 | 99.8 | 99.9 |
| SA-CA | 57.9 | 99.8 | 99.9 |
| Oracle model | 61 | 100 | 100 |
TABLE I: Accuracies (%) of binary 'yes' or 'no' classifier on Clotho-AQA data set.

| **Model** | **Unfiltered** | **Unanimous** | **Majority votes** |
| --- | --- | --- | --- |
| LSTM | 62.7 | 73.1 | 63.2 |
| LSTM-CA | 66.2 | 75.4 | 66.3 |
| SA-CA | 68.3 | 77.1 | 68.3 |
| Oracle model | 86.2 | 100 | 100 |
TABLE III: Accuracies (%) of single-word answers classifier on Clotho-AQA_v2 data set.

| **Model** | **Top-1** | **Top-5** | **Top-10** |
| --- | --- | --- | --- |
| LSTM | 59.8 | 96.6 | 99.3 |
| LSTM-CA | 61.3 | 99.6 | 99.9 |
| SA-CA | 61.9 | 99.8 | 99.9 |
| Oracle model | 65 | 100 | 100 |
2309.15286 | Composable Coresets for Determinant Maximization: Greedy is Almost
Optimal | Given a set of $n$ vectors in $\mathbb{R}^d$, the goal of the
\emph{determinant maximization} problem is to pick $k$ vectors with the maximum
volume. Determinant maximization is the MAP-inference task for determinantal
point processes (DPP) and has recently received considerable attention for
modeling diversity. As most applications for the problem use large amounts of
data, this problem has been studied in the relevant \textit{composable coreset}
setting. In particular, [Indyk-Mahabadi-OveisGharan-Rezaei--SODA'20, ICML'19]
showed that one can get composable coresets with optimal approximation factor
of $\tilde O(k)^k$ for the problem, and that a local search algorithm achieves
an almost optimal approximation guarantee of $O(k)^{2k}$. In this work, we show
that the widely-used Greedy algorithm also provides composable coresets with an
almost optimal approximation factor of $O(k)^{3k}$, which improves over the
previously known guarantee of $C^{k^2}$, and supports the prior experimental
results showing the practicality of the greedy algorithm as a coreset. Our main
result follows by showing a local optimality property for Greedy: swapping a
single point from the greedy solution with a vector that was not picked by the
greedy algorithm can increase the volume by a factor of at most $(1+\sqrt{k})$.
This is tight up to the additive constant $1$. Finally, our experiments show
that the local optimality of the greedy algorithm is even lower than the
theoretical bound on real data sets. | Siddharth Gollapudi, Sepideh Mahabadi, Varun Sivashankar | 2023-09-26T21:46:44Z | http://arxiv.org/abs/2309.15286v1 | # Composable Coresets for Determinant Maximization: Greedy is Almost Optimal+
###### Abstract
Given a set of \(n\) vectors in \(\mathbb{R}^{d}\), the goal of the _determinant maximization_ problem is to pick \(k\) vectors with the maximum volume. Determinant maximization is the MAP-inference task for determinantal point processes (DPP) and has recently received considerable attention for modeling diversity. As most applications for the problem use large amounts of data, this problem has been studied in the relevant _composable coreset_ setting. In particular, [8] showed that one can get composable coresets with optimal approximation factor of \(\tilde{O}(k)^{k}\) for the problem, and that a local search algorithm achieves an almost optimal approximation guarantee of \(O(k)^{2k}\). In this work, we show that the widely-used Greedy algorithm also provides composable coresets with an almost optimal approximation factor of \(O(k)^{3k}\), which improves over the previously known guarantee of \(C^{k^{2}}\), and supports the prior experimental results showing the practicality of the greedy algorithm as a coreset. Our main result follows by showing a local optimality property for Greedy: swapping a single point from the greedy solution with a vector that was not picked by the greedy algorithm can increase the volume by a factor of at most \((1+\sqrt{k})\). This is tight up to the additive constant \(1\). Finally, our experiments show that the local optimality of the greedy algorithm is even lower than the theoretical bound on real data sets.
## 1 Introduction
In the _determinant maximization_ problem, we are given a set \(P\) of \(n\) vectors in \(\mathbb{R}^{d}\), and a parameter \(k\leq d\). The objective is to find a subset \(S=\{v_{1},\ldots,v_{k}\}\subseteq P\) consisting of \(k\) vectors such that the volume squared of the parallelepiped spanned by the points in the subset \(S\) is maximized. Equivalently, the volume squared of a set \(S\), denoted by \(\mathsf{vol}(S)^{2}\), is equal to the determinant of the Gram matrix of the vectors in \(S\). Determinant maximization is the MAP-inference of determinantal point processes, and both of these problems as well as their variants have found numerous applications in data summarization, machine learning, experimental design, and computational geometry. In particular, the determinant of a subset of points is one way to measure the _diversity_ of the subset, and thus they have been studied extensively over the last decade in this context [17, 7, 11, 3, 10, 23, 13].
The best approximation factor for the problem in this regime is due to the work of [20], which shows a factor of \(e^{k}\), and it is known that an exponential dependence on \(k\) is necessary [5] unless P = NP. However, the most common algorithm used for this problem in practical applications is a natural _greedy_ algorithm. In this setting, the algorithm first picks the vector with the largest norm, and then greedily picks the vector with the largest perpendicular component to the subspace spanned by the current set of picked vectors, thus maximizing the volume greedily in each iteration. This algorithm is known to have an approximation factor of \((k!)^{2}\) [4].
As in most applications of determinant maximization one needs to work with large amounts of data, there has been an increasing interest in studying determinant maximization in large data models of computation [17, 22, 21, 19, 18, 16, 1]. One such model that we focus on in this work is the _composable coreset_ setting [9]. Intuitively, composable coresets are small "summaries" of a data set with the composability property: for the summaries of multiple datasets, the union of the summaries should make a good summary for the union of the datasets. More precisely, in this setting, instead of a single set of vectors \(P\), there are \(m\) sets \(P_{1},\ldots,P_{m}\subseteq\mathbb{R}^{d}\). In this context, a mapping function \(c\) that maps a point set to one of its subsets is called \(\alpha\)_-composable coreset_ for determinant maximization, if for any collection of point sets \(P_{1},\ldots,P_{m}\),
\[\mathsf{MAXDET}_{k}\left(\cup_{i=1}^{m}c(P_{i})\right)\geq\frac{1}{\alpha} \cdot\mathsf{MAXDET}_{k}\left(\cup_{i=1}^{m}P_{i}\right) \tag{1}\]
where \(\mathsf{MAXDET}_{k}\) is used to denote the maximum achievable determinant with parameter \(k\). (Similarly, \(\mathsf{MAXVOL}_{k}\) is used to denote the maximum volume, with \(\mathsf{MAXVOL}_{k}^{2}=\mathsf{MAXDET}_{k}\).) For clarity, we note that the mapping function \(c\) can only view its input data set \(P_{i}\) and has no knowledge of other data sets while constructing \(c(P_{i})\). [9] showed that a composable coreset for a task automatically gives an efficient distributed and an efficient streaming algorithm for the same task.
Indeed, composable coresets have been used for determinant maximization. In particular, [8, 15] presented a composable coreset of size \(O(k\log k)\) with approximation factor of \(\tilde{O}(k)^{k}\) using spectral spanners, which they showed to be almost tight: the best approximation factor one can get is \(\Omega(k^{k-o(k)})\) (Theorem 1.4). As the above algorithm is LP-based and does not provide the best performance in practice, they proposed to use the greedy algorithm followed by a local search procedure, and showed that this simple algorithm also yields a coreset of size \(k\) with an almost optimal approximation guarantee of \(O(k)^{2k}\). They also proved that the greedy algorithm alone yields a \(C^{k^{2}}\) guarantee for composable coresets, which is far larger than the optimal approximation of \(\tilde{O}(k)^{k}\) for this problem.
Since the greedy algorithm provides a very good performance in practice [15, 19], an improved analysis of the greedy algorithm in the coreset setting is very desirable. Furthermore, both of these prior work implied that greedy performs well in practice in the context of distributed and composable coreset settings [19], and in particular its performance is comparable to that of the local search algorithm for the problem [15].
Our contribution. In this paper, we close this theoretical gap: we prove that the greedy algorithm provides a \(O(k)^{3k}\)-composable coreset for the determinant maximization problem (Theorem 4). This explains the very good performance of this algorithm on real data previously shown in [15, 19]. We achieve this by proving an elegant linear algebra result on the local optimality of the greedy algorithm: swapping a single point from the greedy solution with a vector that was not picked by the greedy algorithm can increase the volume by a factor of at most \((1+\sqrt{k})\). We further show that this is tight up to the additive constant \(1\). As an application of our result, we give a proof that the locality property can recover and in fact marginally improve the \(k!\) guarantee of the greedy algorithm of [4] for the offline volume maximization problem.
Finally, in Section 4, we run experiments to measure the local optimality of the greedy algorithm on real data, and show that this number is much smaller in practice than the worst case theoretically guaranteed bound. In fact, in our experiments this number is always less than \(1.5\) even for \(k\) as large as \(300\). Again this explains the practical efficiency of the greedy algorithm as a coreset shown in [15, 19].
### 1.1 Preliminaries
#### 1.1.1 The Greedy Algorithm
Recall the standard offline setting for determinant maximization, where one is required to pick \(k\) vectors out of the \(n\) vectors in \(P\) of maximum volume. Here, [4] showed that greedily picking the vector with the largest perpendicular distance to the subspace spanned by the current solution (or equivalently, greedily picking the vector that maximizes the volume as in Algorithm 1) outputs a set of vectors that is within \(k!\) of the optimal volume. Formally, if \(\mathsf{Greedy}(P)\) is the output of Algorithm 1, then
\[\mathsf{vol}(\mathsf{Greedy}(P))\geq\frac{\mathsf{MAXVOL}_{k}(P)}{k!} \tag{2}\]
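The pseudocode of Algorithm 1 is not reproduced in this version of the text, so the following numpy sketch reconstructs the greedy procedure from the description above; it is our reading of the algorithm, assuming \(\operatorname{rank}(P)\geq k\), not the authors' implementation.

```python
import numpy as np

def greedy(P: np.ndarray, k: int) -> list:
    """Greedy volume maximization: P is an (n, d) array of row vectors;
    returns the indices of the k picked vectors (assumes rank(P) >= k)."""
    residual = P.astype(float).copy()  # components orthogonal to the span picked so far
    picked = []
    for _ in range(k):
        norms = np.linalg.norm(residual, axis=1)
        if picked:
            norms[picked] = -1.0       # never re-pick a chosen vector
        j = int(np.argmax(norms))      # largest perpendicular component
        picked.append(j)
        u = residual[j] / norms[j]     # unit vector of the newly added direction
        residual = residual - np.outer(residual @ u, u)  # Gram-Schmidt projection step
    return picked
```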
#### 1.1.2 Local Search for Composable Coresets
In [15], the authors show that the greedy algorithm followed by the local search procedure with parameter \(\epsilon\) (as described in Algorithm 2) provides a \((2k(1+\epsilon))^{2k}\)-composable coreset for determinant maximization. A locally optimal solution can thus be naturally defined as follows:
**Definition 1** (\((1+\epsilon)\)-Locally Optimal Solution).: _Given a point set \(P\subseteq\mathbb{R}^{d}\) and \(c(P)\subseteq P\) with \(|c(P)|=k\), we say \(c(P)\) is a \((1+\epsilon)\)-locally optimal solution for volume maximization if for any \(v\in c(P)\) and any \(w\in P\setminus c(P)\),_
\[\mathsf{vol}(c(P)-v+w)\leq(1+\epsilon)\,\mathsf{vol}(c(P)) \tag{3}\]
Given the output of the greedy algorithm \(\mathsf{Greedy}(P)\), one can obtain a locally optimal solution using a series of swaps: if the volume of the solution can be increased by a factor of \((1+\epsilon)\) by swapping a vector in the current solution with a vector in the point set \(P\) that has not been included, we make the swap. Since \(\mathsf{vol}(\mathsf{Greedy}(P))\) is within a factor of \(k!\) of the optimal, we will make at most \(\frac{k\log k}{\log(1+\epsilon)}\) swaps. This is precisely the local search algorithm (Algorithm 2). For any point set \(P\), we denote the output of Algorithm 2 by \(\mathsf{LS}(P)\).
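Since Algorithm 2 is likewise only described in prose here, the following sketch spells out the local search as we understand it; `vol` computes the volume via the Gram determinant, and for \(\epsilon>0\) the loop terminates after at most \(\frac{k\log k}{\log(1+\epsilon)}\) accepted swaps, as argued above.

```python
import numpy as np

def vol(P: np.ndarray, idx) -> float:
    """Volume of the parallelepiped spanned by the rows P[idx]."""
    S = P[list(idx)]
    return float(np.sqrt(max(np.linalg.det(S @ S.T), 0.0)))

def local_search(P: np.ndarray, idx, eps: float) -> list:
    """Swap one chosen vector for an unchosen one while doing so
    improves the volume by more than a (1 + eps) factor."""
    idx = list(idx)
    improved = True
    while improved:
        improved = False
        for a in range(len(idx)):
            for w in range(len(P)):
                if w in idx:
                    continue
                cand = idx[:a] + [w] + idx[a + 1:]
                if vol(P, cand) > (1.0 + eps) * vol(P, idx):
                    idx, improved = cand, True
                    break
            if improved:
                break
    return idx
```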
In [15], the authors prove that local search yields a \(O(k)^{2k}\)-composable coreset for determinant maximization. Formally, they prove the following.
**Theorem 2**.: _Let \(P_{1},\ldots,P_{m}\subseteq\mathbb{R}^{d}\). For each \(i=1,\ldots,m\), let \(\mathsf{LS}(P_{i})\) be the output of the local search algorithm (Algorithm 2) with parameter \(\epsilon\). Then_
\[\mathsf{MAXDET}_{k}\left(\cup_{i=1}^{m}P_{i}\right)\leq(2k(1+\epsilon))^{2k} \,\mathsf{MAXDET}_{k}\left(\cup_{i=1}^{m}\mathsf{LS}(P_{i})\right) \tag{4}\]
**Remark 3**.: _Even though [15] treats \(\epsilon\) as a small constant in \([0,1]\), the proof for Theorem 2 above holds for any non-negative \(\epsilon\)._
### 1.2 Outline of our approach
In [15], the authors prove Theorem 2 for local search using a reduction to a related problem called \(k\)-directional height. The authors then use similar ideas to prove that the output of the greedy algorithm is also a composable coreset for determinant maximization. However, since we do not know a priori whether greedy is \((1+\epsilon)\)-locally optimal, the guarantee they obtain is significantly weaker: they only prove that the greedy algorithm yields a \(((2k)\cdot 3^{k})^{2k}=C^{k^{2}}\)-composable coreset for determinant maximization. This is clearly far from the desired bound of \(k^{O(k)}\).
To improve the analysis of the greedy algorithm in the coreset setting, we ask the following natural question:
_Can we prove that the output of the greedy algorithm is already locally optimal?_
We answer this question positively. Our main result is Theorem 5, where we show that for any point set \(P\), \(\mathsf{Greedy}(P)\) is a \((1+\sqrt{k})\)-locally optimal solution. In other words, the greedy algorithm has the same guarantee as local search with the parameter \(\epsilon=\sqrt{k}\). This circumvents the loose reduction from greedy to the \(k\)-directional height problem and directly implies the following improved guarantee for the greedy algorithm in the coreset setting:
**Theorem 4**.: \[\mathsf{MAXDET}_{k}\left(\cup_{i=1}^{m}P_{i}\right)\leq(2k(1+\sqrt{k}))^{2k} \,\mathsf{MAXDET}_{k}\left(\cup_{i=1}^{m}\mathsf{Greedy}(P_{i})\right)\] (5)
Thus, the greedy algorithm also provides a \((2k(1+\sqrt{k}))^{2k}=k^{O(k)}\)-composable coreset for determinant maximization, which is near the optimal \(\Omega(k^{k-o(k)})\).
Section 2 is dedicated to proving that greedy is \((1+\sqrt{k})\)-locally optimal (Theorem 5). We also show that this local optimality result of \((1+\sqrt{k})\) for the greedy algorithm is tight up to the additive constant \(1\). In Section 4 we show that on real and random datasets, the local optimality constant \(\epsilon\) is much smaller than the bound of \(1+\sqrt{k}\), which serves as an empirical explanation for why greedy performs much better in practice than what the theoretical analysis suggests.
## 2 Greedy is Locally Optimal
**Theorem 5** (Local Optimality).: _Let \(V:=\mathsf{Greedy}(P)=\{v_{1},\ldots,v_{k}\}\subseteq P\) be the output of the greedy algorithm. Let \(v_{k+1}\in P\setminus V\) be a vector not chosen by the greedy algorithm. Then for all \(i=1,\ldots,k\),_
\[\mathsf{vol}(V-v_{i}+v_{k+1})\leq(1+\sqrt{k})\,\mathsf{vol}(V) \tag{6}\]
Proof.: If \(\operatorname{rank}(P)<k\), then the result is trivial. So we may assume \(\operatorname{rank}(P)\geq k\) and \(V\) is linearly independent. Fix any \(v_{i}\in V\). Our goal is to show that \(\mathsf{vol}(V-v_{i}+v_{k+1})\leq(1+\sqrt{k})\,\mathsf{vol}(V)\). This trivially holds when \(i=k\) by the property of the greedy algorithm, so assume \(1\leq i\leq k-1\).
Let \(\{v^{\prime}_{1},\ldots,v^{\prime}_{k},v^{\prime}_{k+1}\}\) be the set of orthogonal vectors constructed by performing the Gram-Schmidt algorithm on \(\{v_{1},\ldots,v_{k},v_{k+1}\}\). Formally, let \(\mathcal{G}_{t}=\operatorname{span}\{v_{1},\ldots,v_{t}\}\). Define \(v^{\prime}_{1}=v_{1}\) and \(v^{\prime}_{t}=v_{t}-\Pi(\mathcal{G}_{t-1})(v_{t})\) for \(t=2,\ldots,k,k+1\), where \(\Pi(\mathcal{G})(v)\) denotes the projection of the vector \(v\) onto the subspace \(\mathcal{G}\). Note that
\[\mathsf{vol}(V)=\prod_{j=1}^{k}\|v^{\prime}_{j}\|_{2}\]
For each \(j=i+1,\ldots,k,k+1\), write
\[v_{j} =\Pi(\mathcal{G}_{i-1})(v_{j})+\sum_{l=i}^{j}\alpha_{l}^{j}v^{ \prime}_{l}\] \[:=\Pi(\mathcal{G}_{i-1})(v_{j})+w_{j}\]
We must have that \(|\alpha_{l}^{j}|\leq 1\) by the greedy algorithm because if \(|\alpha_{l}^{j}|>1\), the vector \(v_{j}\) would have been chosen before \(v_{l}\). Further, \(\alpha_{j}^{j}=1\) by definition of Gram-Schmidt. The vector \(w_{j}\) is what remains of \(v_{j}\) once we subtract its projection onto the first \(i-1\) vectors.
We are interested in bounding the following quantity:
\[\mathsf{vol}(V-v_{i}+v_{k+1}) =\mathsf{vol}(v_{1},\dots,v_{i-1},v_{i+1},\dots,v_{k},v_{k+1})\] \[=\mathsf{vol}(v^{\prime}_{1},\dots,v^{\prime}_{i-1},v_{i+1},\dots, v_{k},v_{k+1})\] \[=\mathsf{vol}(v^{\prime}_{1},\dots,v^{\prime}_{i-1},w_{i+1},\dots,w_{k},w_{k+1})\] \[=\mathsf{vol}(v^{\prime}_{1},\dots,v^{\prime}_{i-1})\cdot\mathsf{ vol}(w_{i+1},\dots,w_{k},w_{k+1})\] \[=\left(\prod_{j=1}^{i-1}\|v^{\prime}_{j}\|_{2}\right)\cdot\mathsf{ vol}(w_{i+1},\dots,w_{k},w_{k+1})\]
Therefore, it suffices to prove the following:
\[\mathsf{vol}(w_{i+1},\dots,w_{k},w_{k+1})\leq(1+\sqrt{k})\prod_{j=i}^{k}\|v^{ \prime}_{j}\|_{2} \tag{7}\]
To establish this, we consider two cases. Recall that \(v^{\prime}_{k+1}=v_{k+1}-\Pi(\mathcal{G}_{k})(v_{k+1})\). We analyze the cases where \(v^{\prime}_{k+1}\neq 0\) and \(v^{\prime}_{k+1}=0\) separately, although the ideas are similar. In Claim 7 and Claim 8 below, we establish the desired bound stated in Eq. (7) for \(v^{\prime}_{k+1}\neq 0\) and \(v^{\prime}_{k+1}=0\) respectively. Theorem 5 then follows immediately.
To prove Claim 7 and Claim 8, the following well-known lemma will be useful. A proof can be found in [6].
**Lemma 6** (Matrix Determinant Lemma).: _Suppose \(M\) is an invertible matrix. Then_
\[\det\bigl{(}M+uv^{T}\bigr{)}=(1+v^{T}M^{-1}u)\det(M) \tag{8}\]
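As a quick numerical sanity check of Lemma 6 (our addition, not part of the original proof):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))  # a random matrix is invertible almost surely
u, v = rng.standard_normal(5), rng.standard_normal(5)
lhs = np.linalg.det(M + np.outer(u, v))
rhs = (1.0 + v @ np.linalg.inv(M) @ u) * np.linalg.det(M)
assert np.isclose(lhs, rhs)  # det(M + u v^T) = (1 + v^T M^{-1} u) det(M)
```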
**Claim 7**.: _Suppose \(v^{\prime}_{k+1}\neq 0\). Then_
\[\mathsf{vol}(w_{i+1},\dots,w_{k},w_{k+1})\leq\Bigl{(}\sqrt{k+1}\Bigr{)}\prod_ {j=i}^{k}\|v^{\prime}_{j}\|_{2} \tag{9}\]
Proof.: Define the matrix \(B=[w_{i+1}|\cdots|w_{k}|w_{k+1}]\). Note that \(\det\bigl{(}B^{T}B\bigr{)}=\mathsf{vol}(w_{i+1},\dots,w_{k},w_{k+1})^{2}\) is the quantity we are interested in bounding. For clarity,
\[B^{T}=\begin{bmatrix}\alpha_{i}^{i+1}v^{\prime}_{i}+v^{\prime}_{i+1}\\ \alpha_{i}^{i+2}v^{\prime}_{i}+\alpha_{i+1}^{i+2}v^{\prime}_{i+1}+v^{\prime}_{ i+2}\\ \vdots\\ \alpha_{i}^{k+1}v^{\prime}_{i}+\cdots+\alpha_{k}^{k+1}v^{\prime}_{k}+v^{\prime} _{k+1}\end{bmatrix}\]
We define the matrix \(A\) by just removing the \(v^{\prime}_{i}\) terms from \(B\) as follows:
\[A^{T}=\begin{bmatrix}v^{\prime}_{i+1}\\ \alpha_{i+1}^{i+2}v^{\prime}_{i+1}+v^{\prime}_{i+2}\\ \vdots\\ \alpha_{i+1}^{k+1}v^{\prime}_{i+1}+\cdots+\alpha_{k}^{k+1}v^{\prime}_{k}+v^{ \prime}_{k+1}\end{bmatrix}\]
Since \(\langle v^{\prime}_{i},v^{\prime}_{j}\rangle=0\) for all \(j\neq i\), we have that
\[B^{T}B=A^{T}A+uu^{T}\]
where the column vector \(u\) is given by
\[u=\|v^{\prime}_{i}\|_{2}\cdot\left(\alpha^{i+1}_{i}\ \ \alpha^{i+2}_{i}\ \ \cdots\ \ \alpha^{k+1}_{i}\right)^{T}\]
Since \(|\alpha^{j}_{i}|\leq 1\) for \(j=i+1,\ldots,k+1\) by the nature of the greedy algorithm, we have that
\[\|u\|_{2}^{2}\leq(k-i+1)\|v^{\prime}_{i}\|_{2}^{2}\leq k\|v^{\prime}_{i}\|_{2} ^{2} \tag{10}\]
We now bound the desired volume quantity. Let \(M=A^{T}A\). \(M\) is clearly a positive semi-definite matrix. In fact, because we assumed that \(v^{\prime}_{k+1}\neq 0\), it will turn out that \(M\) is positive definite and thus invertible. For now, assume that \(M^{-1}\) exists. We will compute the inverse explicitly later.
\[\mathsf{vol}(w_{i+1},\ldots,w_{k},w_{k+1})^{2} =\det\bigl{(}B^{T}B\bigr{)}\] \[=\det\bigl{(}A^{T}A+uu^{T}\bigr{)}\] \[=(1+u^{T}M^{-1}u)\det(M)\] [by Lemma 6 ] \[\leq\bigl{(}1+k\|v^{\prime}_{i}\|^{2}\lambda_{\max}(M^{-1}) \bigr{)}\det(M)\]
where \(\lambda_{\max}(M^{-1})\) is the largest eigenvalue of \(M^{-1}\). We will now show that \(M^{-1}\) does in fact exist and bound \(\lambda_{\max}(M^{-1})\). Consider the matrix \(E\) and \(W\) defined as follows:
\[E=\begin{bmatrix}1&0&&\cdots&0\\ \alpha^{i+2}_{i+1}&1&0&\cdots&0\\ \vdots&&&\\ \alpha^{k+1}_{i+1}&&\cdots&\alpha^{k+1}_{k}&1\end{bmatrix}\qquad\qquad W^{T}= \begin{bmatrix}v^{\prime}_{i+1}\\ v^{\prime}_{i+2}\\ \vdots\\ v^{\prime}_{k+1}\end{bmatrix}\]
It is easy to check that \(EW^{T}=A^{T}\). Therefore,
\[M=A^{T}A=EW^{T}WE^{T}=EDE^{T}\]
where \(D\) is the diagonal matrix given by
\[D=\operatorname{diag}\left(\|v^{\prime}_{i+1}\|_{2}^{2},\ldots,\|v^{\prime}_{ k+1}\|_{2}^{2}\right)\]
It is easy to see that \(E\) has all eigenvalues equal to \(1\), and so must be invertible with determinant \(1\). The same is obviously true for \(E^{T}\), \(E^{-1}\) and \((E^{T})^{-1}\) as well. It follows that
\[\det(M)=\det(D)=\|v^{\prime}_{i+1}\|^{2}\cdots\|v^{\prime}_{k+1}\|^{2}\]
Since \(v^{\prime}_{k+1}\neq 0\), we have that \(\|v^{\prime}_{j}\|_{2}>0\) for all \(j\), so \(D^{-1}\) clearly exists. It follows that
\[M^{-1}=(A^{T}A)^{-1}=(E^{T})^{-1}D^{-1}E^{-1}\]
\[\lambda_{\max}(M^{-1})\leq\lambda_{\max}(D^{-1})=\frac{1}{\|v^{\prime}_{k+1}\|^{2}}\]
Therefore,
\[\mathsf{vol}(w_{i+1},\ldots,w_{k},w_{k+1})^{2} \leq(1+k\|v^{\prime}_{i}\|^{2}\lambda_{\max}(M^{-1}))\det(M)\] \[\leq\left(1+\frac{k\|v^{\prime}_{i}\|^{2}}{\|v^{\prime}_{k+1}\|^ {2}}\right)\det(M)\] \[=\det(M)+k\prod_{j=i}^{k}\|v^{\prime}_{j}\|_{2}^{2}\] \[\leq(1+k)\prod_{j=i}^{k}\|v^{\prime}_{j}\|_{2}^{2}\]
**Claim 8**.: _Suppose \(v^{\prime}_{k+1}=0\). Then_
\[\mathsf{vol}(w_{i+1},\ldots,w_{k},w_{k+1})\leq\left(1+\sqrt{k}\right)\prod_{j =i}^{k}\|v^{\prime}_{j}\|_{2} \tag{11}\]
Proof.: The idea for this proof is similar to the previous claim. However, the main catch is that decomposing \(B^{T}B\) into \(A^{T}A+uu^{T}\) (as defined in the proof of Claim 7) is no longer helpful because \(v^{\prime}_{k+1}=0\) implies that \(A^{T}A\) is not invertible. However, there is a simple workaround.
Define the matrix \(B^{\prime}=[w_{k+1}|w_{i+1}|\cdots|w_{k}]\). Note that \(\det\bigl{(}(B^{\prime})^{T}B^{\prime}\bigr{)}=\mathsf{vol}(w_{i+1},\ldots,w_ {k},w_{k+1})^{2}\) is the quantity we are interested in bounding. Recall that \(v^{\prime}_{k+1}=0\) by assumption. For clarity,
\[(B^{\prime})^{T}=\left[\begin{array}{cccc}\alpha^{k+1}_{i}v^{\prime}_{i}+ \cdots+\alpha^{k+1}_{k}v^{\prime}_{k}\\ \alpha^{i+1}_{i}v^{\prime}_{i}+v^{\prime}_{i+1}\\ \alpha^{i+2}_{i}v^{\prime}_{i}+\alpha^{i+2}_{i+1}v^{\prime}_{i+1}+v^{\prime}_ {i+2}\\ \vdots\\ \alpha^{k}_{i}v^{\prime}_{i}+\cdots+v^{\prime}_{k}\end{array}\right]\]
Note that \((B^{\prime})^{T}\) is the same as \(B^{T}\) from the proof of Claim 7 except for moving the last row to the position of the first row. This change is just for convenience in this proof.
Define the following coefficients matrix \(C\in\mathbb{R}^{(k-i+1)\times(k-i+1)}\), whose rows collect the coefficients of the rows of \((B^{\prime})^{T}\) with respect to \(v^{\prime}_{i},\ldots,v^{\prime}_{k}\):

\[C=\begin{bmatrix}\alpha_{i}^{k+1}&\alpha_{i+1}^{k+1}&\cdots&\alpha_{k-1}^{k+1}&\alpha_{k}^{k+1}\\ \alpha_{i}^{i+1}&1&0&\cdots&0\\ \alpha_{i}^{i+2}&\alpha_{i+1}^{i+2}&1&\cdots&0\\ \vdots&&&\ddots&\\ \alpha_{i}^{k}&\alpha_{i+1}^{k}&\cdots&\alpha_{k-1}^{k}&1\end{bmatrix}\]
Define \(W^{\prime}=[v^{\prime}_{i}|\cdots|v^{\prime}_{k}]\). By construction, \((B^{\prime})^{T}=C(W^{\prime})^{T}\). Therefore
\[(W^{\prime})^{T}W^{\prime}:=D^{\prime}=\operatorname{diag}\left(\|v^{\prime}_{i}\|_{2}^{2},\ldots,\|v^{\prime}_{k}\|_{2}^{2}\right)\]
It follows that
\[\det\bigl{(}(B^{\prime})^{T}B^{\prime}\bigr{)} =\det\bigl{(}C(W^{\prime})^{T}W^{\prime}C^{T}\bigr{)}\] \[=\det(C)^{2}\det(D^{\prime})\] \[=\det(C)^{2}\prod_{j=i}^{k}\|v_{j}^{\prime}\|_{2}^{2}\]
It remains to show that \(|\det(C)|\leq(1+\sqrt{k})\). We may assume that \(\alpha_{i}^{k+1}\geq 0\) by taking the negative of the first column if necessary; this does not affect the magnitude of the determinant. Let \(C^{\prime}\) be the unit lower-triangular matrix obtained from \(C\) by replacing its first row with \(e_{1}^{T}=(1,0,\ldots,0)\), and let \(x\in\mathbb{R}^{k-i+1}\) be the first row of \(C\) minus \(e_{1}\), so that \(C=C^{\prime}+e_{1}x^{T}\). Note that all eigenvalues of \(C^{\prime}\) and \((C^{\prime})^{-1}\) are \(1\). Further, since each entry of \(x\) has absolute value at most \(1\),
\[\|x\|_{2}^{2}\leq k-i+1\leq k \tag{12}\]
\[|\det(C)| =|(1+x^{T}(C^{\prime})^{-1}e_{1})|\cdot|\det(C^{\prime})|\qquad\quad\text{[by Lemma 6]}\] \[=|1+x^{T}(C^{\prime})^{-1}e_{1}|\] \[\leq 1+\sqrt{k}\lambda_{\max}((C^{\prime})^{-1})\] \[=1+\sqrt{k}\]
We now provide a simple example where the output of the greedy algorithm is at best \(\sqrt{k}\)-locally optimal, thus demonstrating that the locality result for greedy is optimal up to the constant \(1\).
**Theorem 9** (Tightness of Local Optimality).: _There exists a point set \(P=\{v_{1},\ldots,v_{k},v_{k+1}\}\) from which the greedy algorithm picks \(V=\{v_{1},\ldots,v_{k}\}\), and_
\[\frac{\mathsf{vol}(V-v_{1}+v_{k+1})}{\mathsf{vol}(V)}=\sqrt{k} \tag{13}\]
Proof.: Let \(P=\{v_{1},\ldots,v_{k},v_{k+1}\}\) where \(v_{1}\in\mathbb{R}^{k}\) is the vector of all ones and \(v_{i}=\sqrt{k}e_{i-1}\) for \(i=2,\ldots,k+1\). Since the magnitude of every vector in \(P\) is \(\sqrt{k}\), the greedy algorithm could start by picking \(v_{1}\). The greedy algorithm will then pick any \(k-1\) of the remaining \(k\) vectors. Without loss in generality, assume that the algorithm picks \(V=\{v_{1},\ldots,v_{k}\}\). Then \(\mathsf{vol}(V)=(\sqrt{k})^{k-1}\). On the other hand, \(\mathsf{vol}(V-v_{1}+v_{k+1})=(\sqrt{k})^{k}\). The result follows.
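This construction is easy to verify numerically; the short check below is our addition and reproduces the ratio \(\sqrt{k}\) for \(k=6\).

```python
import numpy as np

k = 6
P = np.vstack([np.ones(k), np.sqrt(k) * np.eye(k)])  # rows v_1, ..., v_{k+1}

def vol(S: np.ndarray) -> float:
    return float(np.sqrt(np.linalg.det(S @ S.T)))

V = P[:k]                      # one valid greedy output: v_1, ..., v_k
swapped = P[1:]                # V - v_1 + v_{k+1}
print(vol(swapped) / vol(V))   # ~2.4495, i.e. sqrt(6)
```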
## 3 Application to Standard Determinant Maximization
The greedy algorithm for volume maximization was shown to have an approximation factor of \(k!\) in [4]. We provide a completely new proof for this result with a slightly improved approximation factor.
**Theorem 10**.: _Let \(P\) be a point set, \(\mathsf{Greedy}(P)=\{v_{1},\ldots,v_{k}\}\) the output of the greedy algorithm, and \(\mathsf{MAXVOL}_{k}(P)\) the maximum volume of any subset of \(k\) vectors from \(P\). Then_
\[\mathsf{vol}(\mathsf{Greedy}(P))\geq\frac{\mathsf{MAXVOL}_{k}(P)}{\prod_{i=2}^ {k}(1+\sqrt{i})} \tag{14}\]
Proof.: Let \(S\subseteq P\) be the set of \(k\) vectors with maximum volume. Without loss of generality and for simplicity of exposition, we assume that \(\mathsf{Greedy}(P)\cap S=\varnothing\) (the proof still goes through if this is not the case). We will order \(S\) in a convenient manner.
Consider the set \(W_{1}=\{v_{1}\}\cup S\) with \(k+1\) elements. Perform the greedy algorithm on \(W_{1}\) with \(k\) steps. Clearly, greedy will choose \(v_{1}\) first and then some \(k-1\) of the remaining vectors. Label the left out vector \(w_{1}\).
Inductively define \(W_{i+1}=\{v_{1},\ldots,v_{i},v_{i+1}\}\cup(S-\{w_{1},\ldots,w_{i}\})\), which has size \(k+1\). Perform greedy on \(W_{i+1}\) with \(k\) steps. The first \(i+1\) vectors chosen will be \(v_{1},\ldots,v_{i},v_{i+1}\) by definition. Call the left out vector \(w_{i+1}\). We now have an ordering for \(S=\{w_{1},\ldots,w_{k}\}\).
Starting with the greedy solution, we will now perform \(k\) swaps to obtain the optimal solution. Each swap will increase the volume by a factor of at most \(1+\sqrt{k}\). Initially, our solution starts with \(\mathsf{Greedy}(P)=\{v_{1},\ldots,v_{k}\}\). Note that this is also the output of greedy when applied to the set \(\mathsf{Greedy}(P)\cup\{w_{k}\}=W_{k}\). Swapping in \(w_{k}\) in place of \(v_{k}\) increases our volume by a factor of at most \(1+\sqrt{k}\).
Our current set of vectors is now \(\{v_{1},\ldots,v_{k-1},w_{k}\}\). By the ordering on \(S\), this is also the greedy output on the set \(W_{k-1}=\{v_{1},\ldots,v_{k-1},w_{k-1},w_{k}\}\). Therefore, we may swap in \(w_{k-1}\) in place of \(v_{k-1}\) in our current set of vectors by increasing the volume by at most a factor of \((1+\sqrt{k})\). Proceeding in this manner, we can perform \(k\) swaps to obtain the optimal solution from the greedy solution by increasing our volume by a factor of at most \((1+\sqrt{k})^{k}\).
To obtain the slightly better approximation factor in the theorem statement, we observe that in the proof of Theorem 5, swapping out the \(i^{\text{th}}\) vector from the greedy solution for a vector that was not chosen increases the volume only by a factor of \((1+\sqrt{k+1-i})\leq 1+\sqrt{k}\) (Eq. (10), Eq. (12)), and that swapping out the \(k^{\text{th}}\) vector does not increase the volume at all. Therefore, the approximation factor of greedy is at most
\[\prod_{i=1}^{k-1}(1+\sqrt{k+1-i})=\prod_{i=2}^{k}(1+\sqrt{i})\]
**Remark 11**.: _Note that \(\prod_{i=2}^{k}(1+\sqrt{i})<2^{k}\sqrt{k!}\) for \(k\geq 7\), which is \((k!)^{\frac{1}{2}+o(1)}\). While the improvement in the approximation factor is quite small, we emphasize that the proof idea is very different from the \(k!\) guarantee obtained in [4]._
## 4 Experiments
In this section, we measure the local optimality parameter for the greedy algorithm empirically. We use two real world datasets, both of which were used as benchmarks for determinant maximization in closely related work [15, 14]:
* **MNIST** [12], which has 60000 elements, each representing a 28-by-28 bitmap image of a hand-drawn digit;
* **GENES** [2], which has 10000 elements, each representing the feature vector of a gene. The data set was initially used in identifying a diverse set of genes to predict breast cancer tumors. After removing the elements with some unknown values, we have around 8000 points.
We measure the local optimality parameter both as a function of \(k\), and as a function of the data set size as explained in the next two subsections.
### Local Optimality for Real/Random Datasets as a Function of \(k\)
Experiment Setup: For both MNIST and GENES, we consider a collection of \(m=10\) data sets, each with \(n=3000\) points chosen uniformly at random from the full dataset. We ran the greedy algorithm for \(k\) from \(1\) to \(20\) and measured the local optimality value \((1+\epsilon)\) as a function of \(k=2,4,\ldots,20\) for each of the \(10\) data sets in the collection. More precisely, for each such \(k\), we took the maximum value of \((1+\epsilon)\) over every data set in the collection. The reason we take the worst value of \((1+\epsilon)\) is that, in the context of composable coresets, the guarantee requires each individual data set's summary to be \((1+\epsilon)\)-locally optimal. We repeated this process for \(5\) iterations and took the average. We plot this value as a function of \(k\).
Further, to compare against a random data set, for both MNIST and GENES, we repeated the above experiment against a set of random points of the same dimension sampled uniformly at random from the unit sphere.
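For reference, the quantity plotted in this section can be computed by brute force over all single swaps; the sketch below is our own formulation of that measurement, not the authors' experiment code.

```python
import numpy as np

def vol(P: np.ndarray, idx) -> float:
    S = P[list(idx)]
    return float(np.sqrt(max(np.linalg.det(S @ S.T), 0.0)))

def local_optimality(P: np.ndarray, solution) -> float:
    """Smallest (1 + eps) for which `solution` is (1 + eps)-locally optimal:
    the largest volume ratio achievable by a single swap."""
    base = vol(P, solution)
    best = 1.0
    for a in range(len(solution)):
        for w in range(len(P)):
            if w in solution:
                continue
            cand = list(solution)
            cand[a] = w
            best = max(best, vol(P, cand) / base)
    return best
```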
Results: As shown in Fig. 1, while the real-world data sets have a higher local optimality value \((1+\epsilon)\) than the random data sets, both are significantly lower (less than \(1.4\)) than the theoretical bound of \((1+\sqrt{k})\). This suggests that real-world data sets behave much more nicely and are closer to random than the worst-case analysis would suggest, which explains why greedy does so well in practice.
For the purpose of diversity maximization, the regime of interest is when \(k\ll n\). However, we wanted to verify that the local optimality value does not increase much even when \(k\) is much larger and closer to \(n\). Since measuring local optimality is expensive when both \(k\) and \(n\) are large, we ran the same experiment again, except with \(n=300\) points per point set, this time measuring the local optimality at \(k=1,50,100,\ldots,300\) in steps of \(50\). Again, as seen in Fig. 2, local optimality stays much below \(1+\sqrt{k}\) (in fact less than \(1.5\)) for larger values of \(k\) as well.
### Local Optimality as a Function of the Size of Point Sets
Experiment Setup: Here, we fix the value of \(k\in\{5,10,15,20\}\) and compute the local optimality value \((1+\epsilon)\) while increasing the size of the point sets. The point set size is varied from \(500\) to \(4000\) in intervals of \(500\). For each point set size, we chose a stream of \(10\) random point sets from the dataset and took the maximum value over \(10\) iterations. Once again, we did this on MNIST and GENES and took the average of \(5\) iterations.
Results: As shown in Fig. 3, the local optimality parameter remains very low (lower than \(1.2\)) regardless of the number of points in the data set, far below \((1+\sqrt{k})\).
Figure 1: Local Optimality \((1+\epsilon)\) against \(k\) for GENES and MNIST datasets, and random datasets of the same dimension. Each stream had \(10\) point sets of size \(3000\), with \(k\) ranging from \(1\) to \(20\).
Figure 2: Local Optimality \((1+\epsilon)\) against \(k\) for GENES and MNIST datasets, and random datasets of the same dimension. Each stream had \(10\) point sets of size \(300\), with \(k\) from \(1\) to \(300\) in steps of \(50\). Note that when \(k\in\{1,n\}\), we trivially have that \((1+\epsilon)=1\).
## 5 Conclusion
In this work, we provided an almost tight analysis of the greedy algorithm for determinant maximization in the composable coreset setting: we improve upon the previously known bound of \(C^{k^{2}}\) to \(O(k)^{3k}\), which is optimal up to the factor \(3\) in the exponent. We do this by proving a result on the local optimality of the greedy algorithm for volume maximization. We also support our theoretical analysis by measuring the local optimality of greedy over real world data sets. It remains an interesting question to tighten the constant in the exponent or otherwise provide a lower bound showing that \(3\) is in fact optimal.
|
2309.05095 | MaskRenderer: 3D-Infused Multi-Mask Realistic Face Reenactment | We present a novel end-to-end identity-agnostic face reenactment system,
MaskRenderer, that can generate realistic, high fidelity frames in real-time.
Although recent face reenactment works have shown promising results, there are
still significant challenges such as identity leakage and imitating mouth
movements, especially for large pose changes and occluded faces. MaskRenderer
tackles these problems by using (i) a 3DMM to model 3D face structure to better
handle pose changes, occlusion, and mouth movements compared to 2D
representations; (ii) a triplet loss function to embed the cross-reenactment
during training for better identity preservation; and (iii) multi-scale
occlusion, improving inpainting and restoring missing areas. Comprehensive
quantitative and qualitative experiments conducted on the VoxCeleb1 test set,
demonstrate that MaskRenderer outperforms state-of-the-art models on unseen
faces, especially when the Source and Driving identities are very different. | Tina Behrouzi, Atefeh Shahroudnejad, Payam Mousavi | 2023-09-10T17:41:46Z | http://arxiv.org/abs/2309.05095v1 | # MaskRenderer: 3D-Infused Multi-Mask Realistic Face Reenactment
###### Abstract
We present a novel end-to-end identity-agnostic face reenactment system, MaskRenderer, that can generate realistic, high fidelity frames in real-time. Although recent face reenactment works have shown promising results, there are still significant challenges such as identity leakage and imitating mouth movements, especially for large pose changes and occluded faces. MaskRenderer tackles these problems by using (i) a 3DMM to model 3D face structure to better handle pose changes, occlusion, and mouth movements compared to 2D representations; (ii) a triplet loss function to embed the cross-reenactment during training for better identity preservation; and (iii) multi-scale occlusion, improving inpainting and restoring missing areas. Comprehensive quantitative and qualitative experiments conducted on the VoxCeleb1 test set, demonstrate that MaskRenderer outperforms state-of-the-art models on unseen faces, especially when the Source and Driving identities are very different.
Face Reenactment Deepfake Generative Models 3D Morphable Model Multi-scale Occlusion Masks Triplet Loss
## 1 Introduction
Identity-agnostic face reenactment is the process of generating sequential face images of a target person (i.e., Source), where the talking (i.e., Driving) person's pose and facial expression are transferred to the target person's image. In other words, the talking person controls the Source image and its pose and expression like a puppet (See Fig. 1).
This well-established computer vision problem has a wide range of application areas, such as the film industry, teleconferencing, and virtual reality [1]. However, high-fidelity and identity-agnostic face reenactment still remains challenging, particularly when the model should (i) preserve the Source's identity, (ii) generate a photo-realistic face, and (iii) ideally require only a single image of the Source. More precisely, the algorithm is required to preserve the face texture and shape of the Source, even in the presence of occlusion or any changes in the expression or pose of the Driving. We assume that only one photo is available from the Source, and the model is suitable for real-time applications.
### Background
In contrast to identity-agnostic face reenactment, for which a single Source image is used, two other face reenactment categories use multiple frames (i.e., few-shot) or video footage (i.e., identity-specific). However, few-shot methods generally only work well when all Source images are extracted from the same video, because photos of the same person at different times typically have different backgrounds, lighting conditions, and facial details (e.g., hair). Such variation across images can make identity preservation harder and worsen the cross-reenactment quality compared to the single-image methods. Moreover, some few-shot methods such as Vid2Vid [2] and MarioNETte [3] are not real-time and have high computational costs.
Most face reenactment methods use one or more facial representations to accurately understand and capture face structure (i.e., identity), pose, and expression. Facial landmarks, a common representation, have been used to determine the pose and expression changes in Driving videos [4, 5]. Sparse landmarks, trained in a supervised manner, are fixed points that cannot fully capture facial structure and features. Moreover, a heavy dependence on supervised landmarks makes the model more sensitive to occlusion and inaccurate prediction. Recent works [6, 7, 8] have shown that unsupervised facial features can better represent face motion for reenactment. Moreover, unsupervised facial features better depict pose changes than supervised key points.
The vast majority of existing works [8, 9] find facial representation and motion based only on a person's 2D image. However, these models do not capture 3D geometry from other views with different poses and expressions. To tackle this issue, a 3D Morphable Model (3DMM) [10] can be used as a 3D representation to enhance the robustness of face reconstruction and improve mouth movements, particularly in the presence of occlusion and large pose changes. Furthermore, 3DMM models extract person-specific disentangled parameters such as pose, lighting, and identity. The original 3DMM employs Principal Component Analysis (PCA) to find statistical properties of face shape. More recent 3DMMs [11, 12] have included a light Neural Network or a GAN model to generate a full 3D representation of a face.
Most 3DMM models [11] are trained on 3D face scans that are extremely expensive to collect and that, due to annotation costs, offer less variety in expression. To tackle this problem, some recent works, such as DECA [12], can estimate accurate 3D faces from in-the-wild images. However, these models usually require multiple views of the face to construct a detailed texture map that is robust to occlusion and pose changes. Although 3DMM models have shown impressive results in generating a 3D face, they still struggle to recover facial components such as eyes, neck, teeth, facial detail (e.g., beard and makeup), and the image background. To address these limitations, image inpainting methods such as DE-GAN [13] can help recover a face image.
### Contributions
This article introduces the MaskRenderer framework that reenacts the face of an unseen Source based on a sequence of Driving frames while preserving the Source identity. The proposed model can generate images in real-time. Our main contributions are as follows:
* Inspired by recent advances in 3D vision models, we incorporated the Deng 3DMM [10] into our model. Combining 3D head parameters and 2D motion transformations results in more accurate pose changes and iris and mouth movements.
* Current face reenactment models do not perform cross-reenactment during training. Not considering different identities for Source and Driving during training causes under-performance at inference time when the Source and Driving images are from two different people. We designed a triplet loss to incorporate cross-reenactment loss during training, which improves identity preservation where the Source and Driving facial shapes are completely different.

Figure 1: This example face reenactment result is generated by MaskRenderer.
* We used multi-scale occlusion in the generator for better inpainting of unwanted or missing areas.
* MaskRenderer outperformed state-of-the-art methods on the VoxCeleb1 benchmark in terms of identity preservation, transferring pose and expression, and mouth movements.
The remainder of the article is organized as follows: In Section 2, recent related works on face reenactment are discussed. The framework of our proposed method and its detailed structure are explained in Section 3. Section 4 summarizes the datasets, inference requirements, evaluation metrics, and training paradigm. In Section 5, MaskRenderer is compared with baselines and other state-of-the-art methods. Section 6 contains empirical ablation studies to evaluate and justify our algorithmic design decisions. Finally, Section 7 concludes and discusses future directions.
## 2 Related Works
In this section, we review existing face reenactment methods and categorize them based on the three types of face representation.
### 3D-based models
3D-based models leverage 3D facial geometry to improve upon 2D techniques, better controlling different expressions and poses. The notable challenges in 3D-based models are that (i) achieving high-fidelity identity preservation and looking realistic instead of like a 3D avatar is difficult, (ii) some tasks, such as hair generation, are more problematic in 3D, and (iii) the quality of 3D animatable head reconstruction typically depends on the extracted geometry, texture rendering, model generalization, and disentanglement between identity, pose, and expression. Earlier works [14, 15, 16] transfer 3D facial parameters from Driving to Source and generate the reenactment video by rendering the input. The primary limitation of these works is that they all need to be trained on a multi-view video of a specific person. Many recent agnostic works [6, 17, 18, 19, 20, 21] incorporate a pre-trained 3DMM into their structure to model the shape, texture, expression, and pose of faces. PIRenderer [17] first maps 3DMM parameters to a latent space and then uses it to guide the motion network and the generator. Yao et al. [22] predict 3DMM parameters to reconstruct 3D face meshes of Source and Driving, and then apply graph convolutional operations on these meshes to extract motion features that guide the motion estimation network.
### Landmark-based models
In this class of models, facial landmarks are used as a prior condition to guide the generator by representing pose and expression. The main challenge in landmark-based models is preserving identity, due to the person-specific identity information embedded in facial landmarks. Zakharov et al. [23] proposed NeuralHead, a few-shot face reenactment method that generates the reenactment directly from landmarks by adopting a meta-learning strategy and fine-tuning for unseen identities. A personalized pose-invariant embedding vector is created based on different sets of frames and is used to predict AdaIN parameters [24] in the generator for appearance transfer. NeuralHead was later extended [25] for better cross-identity reenactment by using latent pose vectors instead of directly employing landmarks.
Another few-shot work addressing the identity preservation problem is MarioNETte [3]. It introduces a landmark transformer to adapt facial points between Source and Driving and uses an image attention block for better appearance style transfer. The idea of a landmark transformer was later used in multiple works [4, 26, 27]. To improve the visual quality of results, others [28, 29] consider facial component landmarks separately, in addition to the full facial landmarks, in the reenactment process. Hsu et al. [5] employ a 3D landmark detector, rather than a 2D one, to cope with large pose changes.
### Motion-based models
Motion-based models are composed of two sub-networks: (i) a motion estimation network to predict a dense motion field from Source to Driving that is used for warping the Source and (ii) a generative network to generate the final reenactment result using the warped Source. Siarohin et al. [30] first proposed MonkeyNet, which encodes motion by learning self-supervised keypoints. In their follow-up work, named First-Order Motion Model (FOMM) [8], they significantly improved the appearance of the results, particularly for more complex motions, by considering local affine transformations instead of rigid motions. Many subsequent works, including ours, have drawn inspiration from FOMM's performance and structure. Zhao et al. [9] reformulated FOMM to better transfer motion when there is a large pose difference between Source and Driving, employing Thin-Plate Spline transformations alongside the affine transformation for dense motion estimation during training.
Another work tackling the large pose changes between Source and Driving is proposed by Liu et al. [31]. They adopted Neural-ODE to differentially refine the initial keypoints and model the dynamics of motion deformation. Tripathy et al. [32] use a keypoint transformer to stabilize keypoints in FOMM for better cross-identity reenactment. To address the limitations of 2D representations in motion changes and cross-identity issues, Hong et al. [33] proposed estimating facial depth maps from Source and Driving and using them as 3D facial geometries to guide the keypoint estimator in FOMM. These depth maps are also used to refine the motion field for more detailed and accurate reenactment. Wang et al. [34] also extended FOMM by predicting canonical 3D keypoints instead of 2D to support large pose changes.
In a recent work, LIA [7] uses StyleGAN2 [35] and sparse coding to estimate the motion from Source to Driving without keypoint prediction; therefore, the quality of the reenactment results does not depend on the initial Driving frame. Other methods [36, 37] extract regions or masks instead of keypoints to separate the foreground from the background and improve shape capture. Hence, these methods are more suitable for articulated objects like human bodies. Our proposed model combines motion-based and 3D-based approaches, resulting in high-quality, high-fidelity, and photo-realistic face reenactment.
## 3 MaskRenderer: Methodology and Approach
In this section, we explain the detailed structure of MaskRenderer. Fig. 2 shows a high-level schematic. The network is divided into four parts: (i) the 3DMM module generates the disentangled Source facial shape and the Driving pose and expression; (ii) the Facial Features Detector (FFD) finds representative facial characteristics; both the FFD features and 3DMM parameters are then passed to (iii), the Dense Motion network, to find the optical flow between the Source and Driving images; in (iv), the Multi-scale Occlusion Masks are constructed to represent the image's missing parts. The generator takes the occlusion masks and the dense motion matrix to create the reenacted result. MaskRenderer is built on a Generative Adversarial Network (GAN) architecture to generate realistic reenacted images from a given Source image. The final part of this section is devoted to discussing our novel use of the triplet loss among other loss terms.
### 3Dmm
Given the Source image \(I_{S}\), the 3DMM objective is to reproduce the 3D shape of \(I_{S}\) consistently across various image viewpoints. The original 3DMM is defined in Eq. (1), where \(S\) represents the 3D face shape, \(\bar{S}\) is the mean face shape, \(\alpha_{id}\) and \(\alpha_{exp}\) are identity and expression coefficients, and \(A_{id}\) and \(A_{exp}\) are the identity and expression PCA bases, respectively. Further explanation can be found in the supplementary material and the 3DMM paper [38].
\[S=\bar{S}+A_{id}\alpha_{id}+A_{exp}\alpha_{exp} \tag{1}\]
\[T=\bar{T}+A_{tex}\alpha_{tex} \tag{2}\]
Figure 2: The architecture of the proposed model

The skin texture \(T\) is combined with the face shape \(S\), as defined in Eq. (2), where the texture coefficient \(\alpha_{tex}\) adjusts the mean texture value \(\bar{T}\) and texture PCA base \(A_{tex}\). The input image only changes the coefficients (\(\alpha_{id}\in\mathbb{R}^{80}\), \(\alpha_{exp}\in\mathbb{R}^{64}\), \(\alpha_{tex}\in\mathbb{R}^{80}\)). The PCA bases and mean values (\(A_{id}\), \(A_{exp}\), \(A_{tex}\), \(\bar{S}\), \(\bar{T}\)) are set to be consistent with prior work [38].
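For illustration, the linear reconstruction of Eqs. (1) and (2) can be sketched in a few lines of NumPy. The vertex count and the random placeholder bases below are assumptions made purely for the sketch; the actual mean values and PCA bases are the fixed ones from [38].

```python
import numpy as np

N = 1000  # placeholder vertex count (the real mesh has tens of thousands)
S_bar = np.zeros(3 * N)              # mean face shape, flattened (x, y, z)
T_bar = np.zeros(3 * N)              # mean per-vertex texture (RGB)
A_id = np.random.randn(3 * N, 80)    # identity PCA basis (placeholder)
A_exp = np.random.randn(3 * N, 64)   # expression PCA basis (placeholder)
A_tex = np.random.randn(3 * N, 80)   # texture PCA basis (placeholder)

def reconstruct(alpha_id, alpha_exp, alpha_tex):
    """Eq. (1) and Eq. (2): linear 3DMM shape and texture reconstruction."""
    S = S_bar + A_id @ alpha_id + A_exp @ alpha_exp   # Eq. (1)
    T = T_bar + A_tex @ alpha_tex                     # Eq. (2)
    return S.reshape(N, 3), T.reshape(N, 3)

S, T = reconstruct(np.zeros(80), np.zeros(64), np.zeros(80))  # mean face
```

Because only the coefficient vectors depend on the input image, swapping the Driving's \(\alpha_{exp}\) into the Source's reconstruction amounts to a single substitution in `reconstruct`.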
We employ the pre-trained Deep3DFace model [10] as the 3DMM, since it recovers an explicit textured mesh structure from a single-view image and can capture major changes in the head and mouth area. Since the 3DMM's features are disentangled, the pose and expression can be taken from the Driving frame \(I_{D}\), while the other parameters are derived from the Source image \(I_{S}\). The 3D reenacted face and the motion field between the Source and Driving face vertices are passed to the Dense Motion network.
Although the reenacted 3D face shape is robust to different poses, expressions, and lighting, the final texture is not detailed and does not include regions such as teeth, hair, and the inner eye parts. We address this issue in later steps (bottom right of Fig. 2), where the skin texture and facial details are reconstructed in 2D space.
### Facial Feature Detection
The Facial Feature Detection (FFD) structure is inspired by FOMM's keypoint detector network [8]. The FFD structure and the corresponding equivariance losses are designed to ensure that facial features1 are presentable on an input face. As shown in Fig. 3, the input image \(I\in\mathbb{R}^{256\times 256\times 3}\) (from either the Source or the Driving) is passed through the hourglass network to create a heatmap of shape \((K\times 64\times 64)\), where \(K\) is the number of facial features. The heatmap is passed through two parallel convolution layers to create the feature locations in 2D coordinates \(F\in\mathbb{R}^{K\times 2}\) and the motion matrix \(J\in\mathbb{R}^{K\times 2\times 2}\).
Footnote 1: In this article, we use the term “feature” instead of “keypoint” because the unsupervised features do not directly represent facial landmarks.
The facial features are used to identify the sparse and dense motion (Subsection 3.3). The sparse motion matrix \(M_{k}(z)\) between the Source and Driving images is defined as:
\[M_{k}(z)=F_{S_{k}}+(J_{S_{k}}J_{D_{k}}^{-1})(z-F_{D_{k}}), \tag{3}\]
where \(M_{k}(z)\) is calculated at coordinate positions \(z\) near each feature point, for each feature \(k\in\{1,\dots,K\}\). The subscripts \(S\) and \(D\) denote whether features belong to the Source or Driving image. The learnable \(J_{S}J_{D}^{-1}\) matrix controls the impact of changes in \(F_{D}\) on \(F_{S}\).
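A minimal PyTorch sketch of Eq. (3) is given below. The tensor shapes are our assumptions about how the quantities above would be batched; they are not taken from the released implementation.

```python
import torch

def sparse_motion(z, F_S, F_D, J_S, J_D):
    """Eq. (3): per-feature back-warp field from Driving to Source coordinates.

    z:        (H, W, 2) grid of coordinates
    F_S, F_D: (K, 2)    feature locations in Source / Driving
    J_S, J_D: (K, 2, 2) local affine (Jacobian) matrices
    Returns:  (K, H, W, 2) sparse motion M_k(z) for each feature k.
    """
    A = J_S @ torch.inverse(J_D)                      # (K, 2, 2)
    diff = z[None] - F_D[:, None, None, :]            # (K, H, W, 2), z - F_D_k
    warped = torch.einsum('kij,khwj->khwi', A, diff)  # apply J_S J_D^{-1}
    return F_S[:, None, None, :] + warped             # add F_S_k
```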
We have included the attention mechanism in the hourglass model by adding Convolutional Block Attention Module (CBAM) [39] blocks. The new hourglass can capture the gradual changes between \(F_{S}\) and \(F_{D}\) using channel and spatial attention. The attention blocks also help FFD to be robust against occlusion, face rotation, and changes in illumination by considering the positional relationship between features and only influential information.
### Dense Motion Network
The objective of the Dense Motion network is to find the optical flow between \(I_{S}\) and \(I_{D}\), transferring the Driving's pose and expression onto the Source image. However, deriving the dense motion between \(I_{S}\) and \(I_{D}\) is very challenging due to the small number \(K\) of unsupervised features \(F\). Hence, we incorporated 3DMM-based parameters to better address motions such as eye blinking, head movement, and mouth opening. The sparse motion \(M_{k}(z)\) is concatenated with the motion field \(M_{3D}(z)\) derived from the 3DMM module to better represent the flow changes from the Driving's to the Source's face.
Figure 3: In the FFD module, the attention module is applied to the skip connections’ features
To estimate the final dense motion \(\hat{M}(z)\), \(K+2\) dense heatmap masks \(H\) are required to combine static points, 3D motions, and the \(K\) local features (Eq. 4). To generate the \(H_{0}\), \(H_{3D}\), and \(H_{k}\) matrices, another attention hourglass network is applied to the sparse transformation of the Source image, the reenacted 3D face, and the features, respectively.
\[\hat{M}(z)=H_{0}z+H_{3D}M_{3D}(z)+\sum_{k=1}^{K}H_{k}M_{k}(z) \tag{4}\]
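Concretely, Eq. (4) is a per-pixel weighted combination of the candidate motions. The sketch below assumes the \(K+2\) masks have already been normalized across candidates (e.g., by a softmax, consistent with FOMM [8]); the exact normalization in MaskRenderer may differ.

```python
import torch

def dense_motion(z, H0, H3D, Hk, M3D, Mk):
    """Eq. (4): combine static grid, 3DMM motion, and K sparse motions.

    z:   (H, W, 2)   identity grid (static part)
    H0:  (H, W)      mask for the static part
    H3D: (H, W)      mask for the 3DMM-derived motion M3D of shape (H, W, 2)
    Hk:  (K, H, W)   masks for the K sparse motions Mk of shape (K, H, W, 2)
    """
    out = H0[..., None] * z + H3D[..., None] * M3D
    return out + (Hk[..., None] * Mk).sum(dim=0)
```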
### Multi-Scale Occlusion Masks & Generator
Occlusion masks address the pixel locations that are missing or occluded in the Source image, so that they can be inpainted during generation. However, different feature maps in the Dense Motion module's decoder have different relative importance: a high-level feature map represents fine facial details such as color, while a low-level feature map focuses on coarse features such as pose and face shape. Accordingly, we define an occlusion mask \(O_{i}\in[0,1]\) for the feature map in layer \(i\), as shown in the bottom-right part of Fig. 2. Each \(O_{i}\) is estimated by a parallel convolutional layer with a sigmoid activation function applied to the output of decoder layer \(i\) of the attention hourglass in the Dense Motion module.
To generate a photo-realistic reenacted 2D image, a generator with an encoder-decoder structure is designed. The encoder extracts Source features for different scales, \(F_{Enc_{S,i}}\), where \(i\) is the \(i^{th}\) down-sampled layer. The decoder employs per-pixel mapping derived from the dense motion module to warp the Source features and up-sample \(F_{Enc_{S}}\) to the original image shape.
After warping \(F_{Enc_{S,i}}\) based on the Driving's pose and expression, the missing pixels should be inpainted. For this complex inpainting, we apply the occlusion mask \(O_{i}\) to the warped features, \(\mathcal{T}(F_{Enc_{S,i}},\hat{M}(z))\), to locate the hidden regions that require reconstruction. The decoder's output at layer \(i\) is concatenated with \(O_{i}\times\mathcal{T}(F_{Enc_{S,i}},\hat{M}(z))\) and passed to the next up-sample layer to generate the final reenacted image \(I_{F}\).
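A sketch of one decoder step of the generator follows. The bilinear `grid_sample` warping and the `up` module are illustrative assumptions; only the mask-multiply-then-concatenate structure is taken from the description above.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """T(feat, M_hat(z)): bilinear back-warping of Source features.
    feat: (B, C, H, W); flow: (B, H, W, 2) sampling grid in [-1, 1]."""
    return F.grid_sample(feat, flow, align_corners=True)

def decoder_step(up, dec_feat, enc_feat_S, flow_i, occ_i):
    """One generator decoder layer: concatenate the decoder state with the
    occlusion-masked, warped Source encoder features, then upsample.
    occ_i: (B, 1, H, W) occlusion mask O_i; up: the layer-i upsampling module."""
    masked = occ_i * warp(enc_feat_S, flow_i)  # O_i x T(F_Enc_{S,i}, M_hat(z))
    return up(torch.cat([dec_feat, masked], dim=1))
```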
### Training Losses
The model is trained in a self-supervised fashion. All the training losses, except the triplet loss, are defined based on self-reenactment, where \(I_{S}\) and \(I_{D}\) are two frames of the same person in a video. The triplet loss is based on cross-reenactment and uses identity soft labels derived from the VGGFace2 [40]. MaskRenderer is trained end-to-end, and all the losses listed below are summed and minimized.
_Triplet loss \(L_{T}\):_ To encourage the network to generate identity-consistent images robust to changes in pose and expression, we employed the triplet loss. The triplet loss, Eq. (5), forces the network to preserve the identity of the Source image.
\[L_{T}=\sum_{a,p,n}\left[D_{x_{a},x_{p}}-D_{x_{a},x_{n}}+m\right]_{+};\quad y_{a}=y_{p}\neq y_{n} \tag{5}\]
Figure 4: Triplet loss illustration.
where \(x_{a}\) is an anchor identity feature and \(x_{p}\) and \(x_{n}\) are positive and negative features, respectively. The input and reenacted images are passed through the pre-trained VGGFace2 [40] to extract the identity features \(x\). The pair \((x_{a},x_{p})\) belongs to the same identity class \(y_{a}\), whereas negative samples have a different identity \(y_{n}\). The objective of the triplet loss is to bring the reconstructed image's identity closer to the Source image while increasing its distance \(D\) from a negative sample by at least the margin \(m\). We expanded the triplet loss into Eq. (6) to represent the cross-reenactment loss illustrated in Fig. 4. The second part of the equation captures difficult positive cases where Source and Driving have different facial shapes and poses.
\[L_{T}=\frac{1}{|B|}\sum_{i=1}^{|B|}\left[\alpha||x_{a}-x_{p_{1}}||_{2}+(1-\alpha)||x_{a}-x_{p_{2}}||_{2}-||x_{a}-x_{n}||_{2}+m\right]_{+} \tag{6}\]
A drawback of adding a triplet loss term to the network is that it requires additional computation. Moreover, the triplet loss can introduce a negative bias and force the model to compromise pose and expression in order to preserve identity. We address these issues by setting the triplet loss to zero during the initial epochs and applying it only in later epochs. Moreover, the weight between the first and second terms in Eq. (6) is tuned to reduce the negative bias.
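A sketch of Eq. (6) in PyTorch is shown below, using the values \(\alpha=0.8\) and \(m=1\) reported in the implementation details; the embedding extraction itself is assumed to happen outside this function.

```python
import torch

def cross_reenactment_triplet(x_a, x_p1, x_p2, x_n, alpha=0.8, m=1.0):
    """Eq. (6): batched triplet loss with a self-reenactment positive (x_p1)
    and a cross-reenactment positive (x_p2). Inputs are (B, d) identity
    embeddings extracted with the pre-trained VGGFace2 network."""
    d_p1 = (x_a - x_p1).norm(dim=1)   # distance to self-reenactment positive
    d_p2 = (x_a - x_p2).norm(dim=1)   # distance to cross-reenactment positive
    d_n = (x_a - x_n).norm(dim=1)     # distance to the negative identity
    loss = alpha * d_p1 + (1 - alpha) * d_p2 - d_n + m
    return loss.clamp(min=0).mean()   # hinge [.]_+ and mean over the batch
```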
_Warping loss \(L_{W}\):_ The warping loss attempts to bring the warped Source features closer to the Driving features \(F_{Enc_{D}}\) to improve the warping accuracy and dense flow estimation. The warping loss is defined as:
\[L_{W}=\sum_{i}mean(|\mathcal{T}(F_{Enc_{S,i}},\hat{M}(z))-F_{Enc_{D,i}}|), \tag{7}\]
where the index \(i\) represents the \(i^{th}\) down-sampled layer in the generator. The following losses are defined as in FOMM [8].
_Equivariance losses \(L_{E}\) & \(L_{J}\):_ The equivariance losses enforce consistency of the features \(F\) under non-linear Thin Plate Spline (TPS) warping \(\mathcal{T}_{TPS}\)[8]. The \(L_{E}\) and \(L_{J}\) losses encourage the warped Driving features \(\mathcal{T}_{TPS}(F_{D})\) and motion \(\mathcal{T}_{TPS}(J_{D})\) to become closer to their corresponding values extracted from the warped Driving frame \(\mathcal{T}_{TPS}(I_{D})\).
_Perceptual loss \(L_{p}\):_ The perceptual loss brings the reenacted image \(I_{F}\) and the Driving frame closer to each other using the pre-trained VGG-19 network's features. In the following equations, the index \(i\) represents the feature scale.
\[L_{P}=\sum_{i}mean(|VGG_{i}(I_{D})-VGG_{i}(I_{F})|) \tag{8}\]
_GAN loss \(L_{G}\):_ MaskRenderer is trained similarly to a GAN, by alternating between training the generator and the discriminator. The discriminator consists of four layers producing four feature maps, \(F_{M}\), and a final discriminator map prediction, \(D_{M}\). In the generator, the GAN loss \(L_{G}\), defined in Eq. (9), pushes generated images to look like real images.
\[L_{G}=\sum_{i}mean((1-D_{M_{i}}(I_{F}))^{2}) \tag{9}\]
The _Feature Matching loss \(L_{F}\)_, Eq. (10), brings the Driving and reenacted frames' discriminator feature maps closer to each other.
\[L_{F}=\sum_{i}\sum_{j}mean(|F_{M_{i},j}(I_{D})-F_{M_{i},j}(I_{F})|) \tag{10}\]
The index \(j\) indicates the \(F_{M}\) layer. At each iteration, the above losses are first combined as shown in Eq. (11) and passed to the optimizer.
\[L_{generator}=\alpha L_{T}+\beta(L_{W}+L_{E}+L_{J}+L_{P}+L_{G}+L_{F}) \tag{11}\]
Then, the discriminator is trained to distinguish between the real \(I_{D}\) and generated \(I_{F}\) images, making the model sensitive to the features that separate real from fake images. The discriminator loss \(L_{D}\) is defined as follows:
\[L_{D}=\sum_{i}mean[(1-D_{M_{i}}(I_{D}))^{2}+D_{M_{i}}(I_{F})^{2}] \tag{12}\]
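For concreteness, the least-squares adversarial terms of Eqs. (9) and (12) can be sketched as follows; the multi-scale discriminator outputs are assumed to arrive as a list of maps, one per scale \(i\).

```python
import torch

def generator_gan_loss(D_maps_fake):
    """Eq. (9): push each discriminator map D_M_i(I_F) toward 1."""
    return sum(((1 - d) ** 2).mean() for d in D_maps_fake)

def discriminator_loss(D_maps_real, D_maps_fake):
    """Eq. (12): push D_M_i(I_D) toward 1 and D_M_i(I_F) toward 0."""
    return sum(((1 - dr) ** 2).mean() + (df ** 2).mean()
               for dr, df in zip(D_maps_real, D_maps_fake))
```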
## 4 Experimental Setup
In this section, we discuss how we train and evaluate MaskRenderer: the dataset, testing setup, and evaluation metrics. We also include implementation details for reproducibility.
### Dataset
We used VoxCeleb1 dataset2[41] to train and evaluate our model. It contains 22,496 videos from 1,251 identities that have been extracted from interview videos on YouTube. After preprocessing and cleaning the data (based on prior work [8]), we used 17,765 videos for training and 464 videos for evaluation. There is no overlap between the identities in the training and test sets.
Footnote 2: [https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html)
### Inference stage
Self-reenactment evaluations were performed on all 464 testing videos, where the first frame is set as the Source image and the remaining frames are the Driving images. For cross-reenactment evaluation, we randomly selected 61 test videos of 12 individuals. For each video, the first frame is assigned as the initial Driving frame. This allows us to consider the relative pose and expression distance between the initial frame and any other frame used as a Driving image, instead of the absolute pose values. Because of the difference in facial structure between Source and Driving, considering relative changes in the Driving's face coordinates helps preserve the Source's identity. Additionally, the Source image is randomly selected from videos with different identities.
### Evaluation Metrics
In the self-reenactment scenario, the final generated frame must replicate the Driving frame. Therefore, the Driving frame is the ground truth for the pose, expression, and identity of the corresponding generated image. We considered standard metrics for comparing the SOTA models' self-reenactment performance [3, 6, 8], as follows. **L1 distance** calculates the mean absolute difference between the Driving and the generated frame. **Absolute Keypoint Difference (AKD)** measures the keypoint contrast between the Driving and the generated frame. **Absolute Identity Difference (AID)** measures the distance between the Source and generated images' identity vectors, estimated as in prior work [12]. Moreover, we computed the **Euclidean Pose Distance (EPD)** to compare the Driving and the generated frame's pose and expression using the pose embedding vectors [10].
For evaluating cross-reenactment performance, since there is no ground truth available when the Source and Driving people are different, we calculated cosine similarity between two features instead of calculating absolute distance. The cross-reenactment metrics are as follows. **Identity Similarity (ISIM)** determines how well the generated frame has preserved the Source's identity and facial structure. **Pose Similarity (PSIM)** and **Keypoint Similarity (KSIM)** quantify how close the poses and positional facial landmarks are, respectively, between the Driving and generated frames.
The **Frechet Inception Distance (FID)** estimates the difference between the feature distributions of the fake and real images by measuring the distance between the generated (fake) and Source (real) images, indicating how photo-realistic the generated frames are. We used the pre-trained InceptionV3 to extract the image features compared by FID, as described in [6].
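The three cross-reenactment similarity metrics (ISIM, PSIM, KSIM) all reduce to the cosine similarity between feature vectors extracted from the respective frames; what distinguishes them is which embedding network produces the vectors (identity, pose, or landmark features). A minimal sketch:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors, as used for ISIM,
    PSIM, and KSIM (on identity, pose, and landmark embeddings)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```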
### Implementation Details
Our MaskRenderer was trained on the training data with 421 different identities. Both Driving and Source frames are randomly selected from a random patch of each person's videos. To cover various combinations of each identity's frames, the process is repeated 150 times with a batch size of 8 for 100 epochs. Both the generator and discriminator learning rates are set to \(2\times 10^{-4}\), with \(\beta_{1}=0.5\) and \(\beta_{2}=0.99\). We used three NVIDIA 3090 GPUs with 24GB memory to train the model. The loss hyper-parameters \(\alpha\) and \(\beta\) in Eq. (11) are both set to 10, and the triplet loss margin \(m\) is set to 1. The number of facial features \(K\) is set to 10 after performing experiments on the set \(K=\{5,10,15,32,64\}\): fewer points could not fully capture the face and body movements, while more points made the system overly complicated (the model failed to accurately map and detect changes in the facial features). After testing the range \([0.4,1]\) for \(\alpha\) in Eq. (6), the higher weight, \(\alpha=0.8\), was selected to emphasize the first, self-reenactment term and reduce the likelihood of any negative bias. We replaced the nvdiffrast renderer in Deep3DFace [10] with the PyTorch3D renderer3, as it is computationally more efficient.
Footnote 3: [https://pytorch3d.readthedocs.io/en/latest/modules/renderer/index.html](https://pytorch3d.readthedocs.io/en/latest/modules/renderer/index.html)
## 5 Results
In this section, we compare the MaskRenderer with state-of-the-art (SOTA) methods: FOMM [8], SAFA [6], DAGAN [33], TPS [9], and DualGen [5]. The metrics for SOTA are calculated for the released pre-trained models. All
evaluations are performed on the same device. MaskRenderer performs cross-reenactment in real time, at roughly 40 fps.
### Cross-Reenactment
The main objective of this work is to improve the cross-reenactment performance when Driving and Source images have different facial structures and identities.
#### 5.1.1 Quantitative Results
Table 1 shows that MaskRenderer has the highest ISIM and KSIM scores. Higher identity and landmark similarity indicate that MaskRenderer better preserves the Source identity during pose and expression changes compared to the SOTA models. The visual qualitative comparison verifies this performance, especially for faces with very different structures. Moreover, MaskRenderer has the lowest FID score, confirming that our model generates the most photo-realistic images.
Although the pose similarity scores of SAFA and TPS are higher than MaskRenderer's, these methods perform much worse at preserving the identity of the Source image. The lower PSIM arises because our method tries to make smooth transitions between frames, and in some cases the generated result does not fully follow the Driving's pose in the area of the iris. However, this pose adjustment does not affect indicative features such as mouth movements, and the final result is highly realistic. All in all, MaskRenderer is smoother and more consistent in generating photo-realistic, identity-preserving reenacted results than the SOTA models.
#### 5.1.2 Qualitative Results
Fig. 5 shows the qualitative comparison of MaskRenderer with the baseline (FOMM) and other SOTA models. The first and second columns show sample Source images and Driving frames, respectively. The remaining columns show the face reenactment results of each model corresponding to the Source and Driving images in the first two columns. As shown in the last column, our method generates high-quality, high-fidelity, and photo-realistic face reenactment results. MaskRenderer visually outperforms SOTA methods in several aspects. First, MaskRenderer preserves the Source identity even when it is very different from the Driving face, while the other models show varying degrees of identity leakage (most apparent in the first row). Second, MaskRenderer handles hand occlusion better than the other models (see the third row). Third, MaskRenderer is capable of generating reenactments at different head pose angles. Fourth, our method can generate fine-grained details, such as wrinkles, facial hair, and teeth. Fifth, it produces reasonable mouth movements, which is critical when the Driving person is speaking.
### Self-Reenactment
We also compared MaskRenderer with SOTA methods in self-reenactment cases, where the Source and Driving frames come from the same person in a video. Table 2 shows that our proposed method has the lowest AID, with the identity error improved by at least 13% compared to SOTA methods. This improvement indicates that our method elevates identity preservation in self-reenactment scenarios and accurately reconstructs the facial details even for large pose changes, with more than 45\({}^{\circ}\) head rotation.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **ISIM\(\uparrow\)** & **KSIM\(\uparrow\)** & **PSIM\(\uparrow\)** & **FID\(\downarrow\)** \\ \hline FOMM[8] & 0.873 & 0.899 & 0.512 & 59.522 \\ \hline SAFA [6] & 0.856 & 0.902 & **0.575** & 55.456 \\ \hline DaGAN [33] & 0.872 & 0.91 & 0.507 & 61.643 \\ \hline TPS [9] & 0.851 & 0.91 & 0.571 & 69.144 \\ \hline MaskRenderer & **0.891** & **0.914** & 0.527 & **49.469** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Cross-reenactment quantitative results. Bold values correspond to the best result among SOTA and our methods.
However, the AKD, EPD, and \(L_{1}\) losses of MaskRenderer are higher than those of the previous methods. Adding the triplet loss helps reduce the bias toward same-identity reenactment during training. This modification comes with the trade-off of slightly lower self-reenactment pose and landmark accuracy relative to SOTA methods. However, we believe the slight decrease in AKD and EPD is justified by the improved cross-identity image generation.
## 6 Ablation Studies
In this section, we conduct ablation studies to test the impact of adding the main components of our network to the baseline FOMM [8] in cross-reenactment scenarios. Table 3 quantitatively shows that adding the 3D parameters to the baseline (w/ 3DMM) helps with mouth and body movements and pose changes; however, it does poorly at preserving the identity with high quality. On the other hand, including an attention unit in the hourglass network (w/ 3DMM+attention) improves the ISIM and FID scores but leads to less accurate pose and expression generation. The w/ 3DMM+\(L_{T}\) results validate the role of the triplet loss in maintaining the Source identity under changes in pose and expression, indicating the importance of considering identity change during training. The multi-scale occlusion masks and warping loss (w/ 3DMM+Masks+\(L_{W}\)) help with image background blending and inpainting unseen facial details.

Figure 5: Qualitative comparison of our method with the state of the art for cross-reenactment
From these quantitative results, we conclude that all the components in MaskRenderer contribute to generating highly realistic reenacted faces with strong identity preservation. The final model performs well on identity, pose, and landmarks, with the best FID score among all compared models. Furthermore, MaskRenderer replicates facial landmark changes accurately and has the highest KSIM. Fig. 6 also qualitatively supports all the aforementioned advantages of the proposed components.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **AKD** & **AID** & \(L_{1}\) & **EPD** \\ \hline FOMM [8] & 1.378 & 0.12 & 0.045 & 0.917 \\ \hline SAFA [6] & **1.21** & 0.118 & 0.041 & **0.831** \\ \hline DaGAN [33] & 1.27 & 0.123 & 0.042 & 0.88 \\ \hline TPS [9] & 1.322 & 0.145 & **0.039** & 0.93 \\ \hline MaskRenderer & 1.395 & **0.103** & 0.044 & 1.023 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison of the proposed method with state-of-the-art methods for self-reenactment.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **ISIM\(\uparrow\)** & **KSIM\(\uparrow\)** & **PSIM\(\uparrow\)** & **FID\(\downarrow\)** \\ \hline FOMM (baseline) [8] & 0.873 & 0.899 & 0.512 & 59.522 \\ \hline \hline w/ 3DMM & 0.871 & 0.902 & **0.559** & 52.339 \\ \hline w/ 3DMM+attention & **0.926** & 0.9 & 0.505 & 57.164 \\ \hline w/ 3DMM+Masks+\(L_{W}\) & 0.873 & 0.912 & 0.533 & 50.919 \\ \hline w/ 3DMM+\(L_{T}\) & 0.875 & 0.905 & 0.508 & 50.181 \\ \hline MaskRenderer & 0.891 & **0.914** & 0.527 & **49.469** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation on adding the main components of our network to the baseline for cross-reenactment
## 7 Conclusion
We have introduced MaskRenderer, a real-time identity-agnostic face reenactment framework that is robust to occlusion and large mismatches between Source and Driving facial structures. We incorporate the 3DMM model into the network to find a reliable 3D facial pose. The 3D and 2D motions are combined to precisely follow changes in the eyes and lips. By adopting the triplet loss, the impact of identity alteration is considered during training, thereby reducing the bias toward self-reenactment.
We further designed multi-scale occlusion masks and the corresponding warping loss to improve the separation of a face's foreground and background and the blending at the borders. Ablation studies demonstrate the importance of each network component in maintaining the facial structure and inpainting missing sections.
Both qualitative and quantitative results illustrate that MaskRenderer improves identity preservation of a Source image, even for large poses, compared to the SOTA models. The cross-reenacted results of the proposed method are photo-realistic, and the FID score is more than 10% better (lower) than the best SOTA model. In the future, we plan to improve the feature fusion and normalization in the generator to further enhance the inpainting of hair and teeth.
## 8 Acknowledgment
We wish to thank Mara Cairo, Deborah Akaniru, and Talat Iqbal Syed for their valuable expertise and support during this project. We would also like to thank Dr. Matt Taylor for his great advice and constructive feedback, which helped us to improve the quality of this work.
|
2305.03731 | Working Memory Capacity of ChatGPT: An Empirical Study | Working memory is a critical aspect of both human intelligence and artificial
intelligence, serving as a workspace for the temporary storage and manipulation
of information. In this paper, we systematically assess the working memory
capacity of ChatGPT, a large language model developed by OpenAI, by examining
its performance in verbal and spatial n-back tasks under various conditions.
Our experiments reveal that ChatGPT has a working memory capacity limit
strikingly similar to that of humans. Furthermore, we investigate the impact of
different instruction strategies on ChatGPT's performance and observe that the
fundamental patterns of a capacity limit persist. From our empirical findings,
we propose that n-back tasks may serve as tools for benchmarking the working
memory capacity of large language models and hold potential for informing
future efforts aimed at enhancing AI working memory. | Dongyu Gong, Xingchen Wan, Dingmin Wang | 2023-04-30T11:54:40Z | http://arxiv.org/abs/2305.03731v4 | # Working Memory Capacity of ChatGPT: An Empirical Study
###### Abstract
Working memory is a critical aspect of both human intelligence and artificial intelligence, serving as a workspace for the temporary storage and manipulation of information. In this paper, we systematically assess the working memory capacity of ChatGPT (gpt-3.5-turbo), a large language model developed by OpenAI, by examining its performance in verbal and spatial \(n\)-back tasks under various conditions. Our experiments reveal that ChatGPT experiences significant declines in performance as \(n\) increases (which necessitates more information to be stored in working memory), suggesting a limit to the working memory capacity strikingly similar to that of humans. Furthermore, we investigate the impact of different instruction strategies on ChatGPT's performance and observe that the fundamental patterns of a capacity limit persist. From our empirical findings, we propose that \(n\)-back tasks may serve as tools for benchmarking the working memory capacity of large language models and hold potential for informing future efforts aimed at enhancing AI working memory and deepening our understanding of human working memory through AI models.
## 1 Introduction
The advent of large language models (LLMs) like ChatGPT and GPT-4 [34] has propelled the pursuit of artificial general intelligence [6] and unveiled human-level abilities that warrant further exploration [42; 25]. Among these abilities is the capacity to retain contextual information while engaging in multi-turn conversations, suggesting the presence of working memory in these LLMs.
In cognitive science, working memory is usually defined as the ability to temporarily store and manipulate information in mind [2]. It is widely regarded as a critical element of human intelligence, as it underlies various higher-order cognitive processes such as reasoning, problem-solving, and language comprehension [11].
Studies on human participants have revealed a fundamental capacity limit in working memory [12]. However, there has not been a consensus on why and how working memory capacity is limited [33; 44]. Among many theories, the executive attention hypothesis [18; 17] suggests that working memory depends on utilizing attention to maintain or suppress information, and the restriction on working memory capacity is not specifically about memory storage per se, but more about the capacity for sustained, regulated attention in the presence of interference.
Supporting evidence of the executive attention hypothesis includes results from the \(n\)-back task, which is arguably the gold-standard measure of working memory capacity in cognitive neuroscience
(for a review, see [23]). The \(n\)-back task, initially developed by Kirchner [24], requires participants to monitor a continuous stream of stimuli, and to decide for each stimulus whether it matches the one \(n\) steps back in the stream (see Figure 1 for illustrations of basic verbal and spatial \(n\)-back tasks). The participants in this task must, therefore, continuously update their mental representation of the target items while also dropping now irrelevant items from consideration. So, some executive attention processes are required in addition to storage. Typical human performance in this task (measured by accuracy) as a function of \(n\) is shown in Figure 2, where we plot the data presented in [22].
In humans, working memory capacity has proved to be closely related to fluid intelligence (_Gf_) [9; 37], which refers to the ability to reason and to solve new problems independently of previously acquired knowledge. Training on working memory capacity using the \(n\)-back task has been shown to be effective in improving fluid intelligence [1; 21], highlighting the special role of working memory capacity in human intelligence [20]. However, in artificial intelligence, there has not been a consensus as to which metrics should be accepted as an intelligence index when evaluating and comparing cognitive abilities of LLMs. In the current study, we define working memory of LLMs as an emergent ability to selectively maintain and manipulate information for ongoing cognitive processes, echoing the executive attention hypothesis in cognitive science. We propose that the performance of LLMs on \(n\)-back tasks can be a reliable metric for assessing their working memory capacity, which in turn might reflect the general intelligence of reasoning and problem solving emerged from these models.
To demonstrate this, we used ChatGPT (gpt-3.5-turbo) as a representative of LLMs, and designed two categories of \(n\)-back tasks to evaluate its working memory capacity. Our results revealed strikingly consistent patterns of a capacity limit across multiple experimental conditions, hinting at possibly similar mechanisms of working memory in humans and LLMs. We believe this finding is important for both cognitive scientists and LLM researchers, and hope that this could guide future endeavors of better understanding why human working memory capacity is limited and building more intelligent LLMs with better working memory capacity.
## 2 Related Works
Working memory has long been a subject of study in human cognition [13]. Unlike long-term memory, which is stored in long-term synaptic weights in the neural system, working memory is believed to be maintained by activation of neurons in distributed brain networks [29]. However, the investigation of working memory in LLMs remains largely unexplored.
Figure 1: Illustrations of **verbal** (top row) and **spatial** (bottom row) \(n\)-back tasks with \(n=\{1,2,3\}\). Participants are instructed to give a response (“m”) when the current stimulus (e.g., a letter or a spatial location) is the same as the stimulus \(n\) trials ago), and not respond (“-”) on nonmatch trials.
Figure 2: Typical human performance in \(n\)-back tasks for \(n=\{1,2,3\}\). We plot the mean \(\pm 1\) standard deviation of the data collected in [22].
A few recent studies in this line have shown that studying and improving the working memory of LLMs holds great interest and significance, as it can contribute to better performance of these models [19; 26].
LLMs have played a crucial role in achieving impressive performance across a wide range of downstream tasks. While fine-tuning has emerged as a popular approach for adapting a pre-trained model to new tasks [15; 41; 3], it can be impractical to apply this method to extremely large models and/or scarce data. As an alternative, a method called in-context learning was proposed in a study by [5], showcasing the remarkable few-shot learning capabilities of large language models without requiring weight updates through gradient descent. This method, which demonstrates the ability of LLMs to retrieve long-term (pre-trained) knowledge and integrate the correct knowledge with the context, bears striking resemblance to how human working memory works. Since its introduction, research on in-context learning in language models has garnered significant attention from both academia and industry. Previous studies have presented various approaches to leverage the in-context learning ability of language models, including selecting labeled examples for demonstrations [36; 28; 27], meta-training with an explicit in-context learning objective [7; 30], and exploring the variant of in-context learning that involves learning to follow instructions [43; 41; 16; 31; 32].
However, to the best of our knowledge, this paper is the first that provides an empirical analysis of the working memory ability of LLMs from a cognitive science perspective.
## 3 Methods
We devised two categories of \(n\)-back tasks involving verbal and spatial working memory [39] respectively, and prompted ChatGPT (using the OpenAI API, model = "gpt-3.5-turbo", with default parameters) to complete the tasks in a trial-by-trial manner. For both categories, we have a base version task, and several variants derived from the base version to further test the model's performance under different conditions.
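As a sketch of this trial-by-trial setup, the loop below sends one stimulus per user message and keeps the full chat history, so that the model can in principle look back \(n\) trials. It is written against the 2023-era `openai` Python client, and the exact message bookkeeping used in the experiments is an assumption.

```python
import openai  # pre-v1 openai-python client, as available in 2023
               # assumes OPENAI_API_KEY is set in the environment

def run_block(system_prompt, stimuli):
    """Present one block trial by trial and collect the model's responses."""
    messages = [{"role": "system", "content": system_prompt}]
    responses = []
    for stim in stimuli:
        messages.append({"role": "user", "content": stim})
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",   # default parameters otherwise
            messages=messages,
        )["choices"][0]["message"]["content"].strip()
        messages.append({"role": "assistant", "content": reply})
        responses.append(reply)
    return responses
```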
### Verbal _n_-back experiments
In the base version of the verbal _n_-back task (see Figure 3(a)), for \(n=\{1,2,3\}\), respectively, we generated 50 blocks of letter sequences using an alphabet commonly found in the literature ("bcdfghjklnpqrstvwxyz"). Each block contained a sequence of 24 letters, presented one at a time as user input to the API. We included 8 match trials and 16 nonmatch trials in each block. The LLM was instructed to respond with "m" on match trials and "-" on nonmatch trials. Apart from this base version, we further explored the behaviour of ChatGPT on the following three variants of the task (see Table 1 for detailed prompts):
* We added \(3\) to \(6\) noise symbols to the input on every trial to examine the LLM's behaviour when it is impossible to get the correct answer by simply doing string matching between stimulus inputs (see Figure 3(b)).
* In human behavioural studies, a common strategy to improve participants' performance is to provide feedback after each trial [38]. In this variant, after the LLM gave a response for the current trial, we provided feedback on whether its response was correct or wrong alongside the stimulus input of the following trial (see Figure 3(c)).
* Chain-of-thought (CoT) prompting has proved helpful in eliciting reasoning in LLMs [43]. In this variant, we instructed the LLM to think step by step when giving a response (see Figure 3(d)).
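A minimal sketch of how a block with exactly 8 match and 16 nonmatch trials could be generated is shown below; the published stimulus generator may differ in detail.

```python
import random

ALPHABET = "bcdfghjklnpqrstvwxyz"

def generate_block(n, length=24, n_match=8, seed=None):
    """One verbal n-back block with exactly n_match match trials."""
    rng = random.Random(seed)
    match_at = set(rng.sample(range(n, length), n_match))  # matches need t >= n
    seq = []
    for t in range(length):
        if t in match_at:
            seq.append(seq[t - n])                    # force a match
        else:
            avoid = seq[t - n] if t >= n else None    # avoid accidental matches
            seq.append(rng.choice([c for c in ALPHABET if c != avoid]))
    ground_truth = ["m" if t in match_at else "-" for t in range(length)]
    return seq, ground_truth
```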
### Spatial _n_-back experiments
Although LLMs are by their very nature text-based, at least one study has demonstrated that they have spatial reasoning abilities [6]. To build on this promising line of work and further examine the spatial working memory of ChatGPT, in the base version of the spatial _n_-back task (Figure 4(a)) we constructed a \(3\times 3\) grid using ASCII characters. For \(n=\{1,2,3\}\), respectively, we generated 50 blocks of grid sequences, each grid featuring a letter **X** in one of the nine positions. Note that the letter **X** here was arbitrarily chosen to represent an occupied spatial location textually and could be substituted by any other letter or symbol. Each block contains 24 grids, including 8 match trials and 16 nonmatch trials. As in the verbal _n_-back tasks, the LLM was instructed to respond with "m" on
match trials and "-" on nonmatch trials. We further explored the spatial working memory capacity of ChatGPT with the following modifications of the task (see Table 2 for detailed prompts):
* Similar to the variants of the verbal _n_-back tasks, we also had "spatial-with-noise", "spatial-with-feedback", and "spatial-with-CoT-reasoning" versions of the task. The with-feedback and with-CoT-reasoning variants were essentially the same as those for the corresponding verbal tasks. For the spatial-with-noise version, we added a noise character (chosen from "#$%&0^"") to \(1\) to \(3\) unoccupied locations in the 3 \(\times\) 3 grid on every trial, so that we could examine the LLM's spatial working memory when it is not able to get the correct answer by simply doing string matching.
* To further confirm that the LLM can _really_ reason in a spatial way, rather than trivially performing some kind of string matching under the hood, we introduced two variants that specifically require abstract spatial reasoning; a hypothetical model that simply matches strings would not succeed in these variants. In the first variant (see Figure 4c), a match occurs when the location of the letter **X** is in the same row **and/or** column (i.e., including identical locations) as the **X**\(n\) trials ago. In the second variant (see Figure 4d), a match occurs when the letter **X** appears in the same row **or** column, but not both (i.e., excluding identical locations). This constraint further forces the LLM to use abstract reasoning and instruction-following abilities to perform the task. Given the increased complexity of the second variant, we expected it to be harder for the LLM than the first variant.
* We also explored whether the size of the grid (\(3\times 3\), \(4\times 4\), \(5\times 5\) or \(7\times 7\)) would influence the LLM's performance (see Figure 4b). To the best of our knowledge, there has not been human studies exploring how the number of all possible spatial locations would impact behavioural performance in spatial _n_-back tasks. In light of this, we did not have specific assumptions for how the LLM would perform differently under these scenarios.
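For illustration, one way to render such a grid as plain text is sketched below; the exact glyphs and cell layout in the prompts are assumptions, since only the position of the **X** (and, in the with-noise variant, of the noise characters) matters for the task.

```python
def render_grid(row, col, size=3, noise_cells=None):
    """ASCII n-back grid with an X at (row, col); noise_cells optionally
    maps (r, c) -> a noise character for the with-noise variant."""
    noise_cells = noise_cells or {}
    lines = []
    for r in range(size):
        cells = ["X" if (r, c) == (row, col) else noise_cells.get((r, c), " ")
                 for c in range(size)]
        lines.append("|" + "|".join(cells) + "|")
    return "\n".join(lines)

print(render_grid(0, 0))  # X at the top-left corner of a 3x3 grid
```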
Figure 3: Illustrations of the different variants of \(\textbf{verbal}^{\text{*}}\)_n_-back tasks (we use \(n=2\) in the figure) considered in this paper. **(a)**: base version identical to the case presented in Figure 1 (top row); **(b)**: stimulus on each trial now contains 3-6 random noise characters (chosen from "#$%/&0^"") in addition to a single alphabetical letter that the LLM should compare across trials. The LLM is instructed to ignore these noise characters, and the alphabetical letter may appear in any position in the noise-corrupted stimulus; **(c)**: alongside the input for every trial, the LLM is also provided with feedback on whether it has performed the previous trial correctly; **(d)**: the LLM is prompted with a reasoning-eliciting instruction to output the final answer (“m” or “-”) _and_ the rationale. Refer to Table 1 for the detailed instructions the LLM is prompted with in each of the task variants.*Note: both verbal and spatial tasks are compatible with these variants; we illustrate using verbal tasks without loss of generality.
## 4 Results
To analyse the model's performance on our experiments, we used four widely accepted performance metrics reported in numerous human behavioral studies:
**Hit Rate**: it is the proportion of correct identifications of the target (the stimulus that was \(n\) steps back). It can be calculated as follows:
\[\text{Hit Rate}=\frac{\text{Number of Hits}}{\text{Total Number of Targets}} \tag{1}\]
where _Number of Hits_ is the number of times the target was correctly identified, and _Total Number of Targets_ is the total number of targets that were presented during the task.
**False Alarm Rate**: it is the proportion of incorrect identifications of the target. It is the rate at which non-targets are incorrectly identified as targets. It can be calculated as follows:
\[\text{False Alarm Rate}=\frac{\text{Number of False Alarms}}{\text{Total Number of Non-Targets}} \tag{2}\]
where _Number of False Alarms_ is the number of times a non-target was incorrectly identified as a target, and _Total Number of Non-Targets_ is the total number of non-targets that were presented during the task.
\begin{table}
\begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline
**Task type** & **Prompt** \\ \hline Verbal & You are asked to perform a {1,2,3}-back task. You will see a sequence of letters. The sequence will be presented one letter at a time, [For the with-noise variant only:] accompanied with random noise symbols chosen from ”#$/$/$/$/$0***”. Please ignore the noise symbols and focus on the letter only. Your task is to respond with ”m” (no quotation marks, just the letter m) whenever the current letter is the same as the previous {one/two/three} letter(s) ago, and ”-” (no quotation marks, just the dash sign) otherwise. [For the with-feedback variant only:] Feedback on whether your last response was correct or wrong will also be presented. Please take advantage of feedback information to improve your performance. Only ”m” and ”-” are allowed responses. No explanations needed: please don’t output any extra words!! The sequence will be presented one letter at a time. Now begins the task. \\ \hline Verbal with Reasoning (Figure 3d) & You are asked to perform a {1,2,3}-back task. You will see a sequence of letters. The sequence will be presented one letter at a time. Your task is to respond with ”m” (no quotation marks, just the letter m) whenever the current letter is the same as the letter {one, two, three} letter(s) ago, and ”-” (no quotation marks, just the dash sign) otherwise. Please think step by step and provide your thinking steps after responding with ”m” or ”-”. \\ \multicolumn{2}{p{142.3pt}}{Here are examples of how to format your response: 1.”-:this is the first trial, so my response is -”. 2.”m:the letter {one, two, three} trial(s) ago was a, the current letter is a, so my response is m”. 3.”-:the letter {one, two, three} letter(s) ago was a, the current letter is b, so my response is -”. Now begins the task. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Prompts used in different **verbal** task variants. Blue texts are to be selected as appropriate depending on the value of \(n\) in the \(n\)-back tasks. Other colored texts are inserted as appropriate, depending on the task variant.
**Accuracy**: it represents the overall correctness of responses, whether the stimulus is a target or a non-target. Accuracy can be calculated as follows:
\[\text{Accuracy}=\frac{\text{Number of Correct Hits}+\text{Number of Correct Rejections}}{\text{Total Number of Trials}} \tag{3}\]
where _Number of Correct Hits_ is the number of targets correctly identified, _Number of Correct Rejections_ is the number of non-targets correctly identified (i.e., they were not incorrectly identified as targets), and _Total Number of Trials_ is the total number of stimuli presented in a block, both target trials and non-target trials (i.e., 24, in our case).
**Detection Sensitivity (\(d^{\prime}\))**: it is commonly used in signal detection theory and is a measure of sensitivity to distinguish between signal (target) and noise (non-target).
Figure 4: Illustrations of the different variants of **spatial**\(n\)-back tasks (we use \(n=2\) in the figure) considered in this paper _in addition to the variants presented in Figure 3_, which are applicable to both spatial and verbal tasks. **(a)**: base version identical to the case presented in Figure 1 (bottom row); **(b)**: spatial tasks with larger grid sizes (\(4\times 4\) shown for illustration; we considered \(4\times 4\), \(5\times 5\), and \(7\times 7\)); **(c)** and **(d)**: two types of spatial reasoning tasks that additionally require _abstract reasoning_. In **(c)**, a match is expected whenever the letter **X** occurs in the same row and/or column as the location \(n\) trials ago (including identical locations); in **(d)**, a match is expected when the letter **X** appears in the same row or column (but not both) as the location \(n\) trials ago (excluding identical locations). Refer to Table 2 on the detailed instructions the LLM is prompted with for each of the variant.
Figure 5: Results of different variants of verbal _n_-back experiments. Error bars represent \(\pm 1\)_SEM_.
In the context of the \(n\)-back task, \(d^{\prime}\) can be calculated using the \(z\)-scores (the inverse of the cumulative distribution function of a standard normal distribution) of the hit rate and the false alarm rate. The formula is as follows:
\[d^{\prime}=Z_{\text{Hit Rate}}-Z_{\text{False Alarm Rate}} \tag{4}\]
where \(Z_{\text{Hit Rate}}\) and \(Z_{\text{False Alarm Rate}}\) represent the \(z\)-score of _Hit Rate_ and _False Alarm Rate_, respectively. In the case where _Hit Rate_ or _False Alarm Rate_ is equal to either 0 or 1, they will be adjusted by 0.01 to handle the problem of \(z\)-score being infinite.
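A sketch of the per-block metric computation described by Eqs. (1)-(4), including the 0.01 adjustment of extreme rates:

```python
from scipy.stats import norm

def block_metrics(hits, false_alarms, n_targets=8, n_nontargets=16):
    """Hit rate, false alarm rate, accuracy, and d' for one 24-trial block."""
    hit_rate = hits / n_targets
    fa_rate = false_alarms / n_nontargets
    accuracy = (hits + (n_nontargets - false_alarms)) / (n_targets + n_nontargets)
    # Nudge rates of exactly 0 or 1 by 0.01 so the z-scores stay finite.
    adjust = lambda r: 0.01 if r == 0 else 0.99 if r == 1 else r
    d_prime = norm.ppf(adjust(hit_rate)) - norm.ppf(adjust(fa_rate))
    return hit_rate, fa_rate, accuracy, d_prime

print(block_metrics(hits=7, false_alarms=2))  # d' ~ 2.30
```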
In the current study, we ran 50 blocks of tests for \(n=\{1,2,3\}\) in each experiment, which allows us to calculate the standard error of the mean (_SEM_) and draw error bars to visualise the reliability of our findings. Among the four metrics, the pattern of hit rates and false alarm rates can vary a lot depending on the specific task condition [8]. Accuracy, in turn, will also be biased by very high/low hit rates and false alarm rates. In contrast, detection sensitivity (\(d^{\prime}\)) is a much more robust performance metric. A higher \(d^{\prime}\) indicates better performance, suggesting that the individual more accurately distinguishes between targets and non-targets. Conversely, a \(d^{\prime}\) near 0 indicates performance no better than chance. Our analysis below mainly relies on \(d^{\prime}\) as the performance metric (see Appendix A for the statistical tests we conducted and Appendix B for performance distributions).
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline
**Task type** & **Prompt** \\ \hline Spatial & You are asked to perform a {1,2,3}-back task. You will see a sequence of 3*3 [For larger grids only:] {4*4, 5*5, 7*7} grids. Each grid has a letter X in one of the nine [For larger grids only:] {sixteen, twenty-five, forty-nine } positions. For example, a grid with X at top left corner would be \(\texttt{\^{\cdots}}\) [X]\_
### Verbal _n_-back experiments
In the verbal task variants, we observed a performance pattern strikingly consistent with human participants, with the LLM's performance declining significantly when \(n\) increased from 1 to 3 (Figure 5). While CoT prompting has significantly improved the performance of the LLM, feedback on whether the model has performed correctly on the previous trial failed to meaningfully improve performance. On the other hand, adding noise made the model perform worse, as expected - these noises may be interpreted as analogous to distractors in human behavioural tasks.
### Spatial _n_-back experiments
In the four versions of spatial tasks corresponding to the above verbal tasks, the same patterns of performance decline were largely replicated (Figure 6). Interestingly, CoT prompting again significantly improved the LLM's performance; this further confirms the hypothesis that the spatial _n_-back task presented to the LLM cannot be solved trivially with string similarity, as previous work on LLMs shows that strong gains from CoT prompting are usually only present in tasks requiring advanced reasoning [43].
We further evaluated whether the LLM could conduct abstract spatial reasoning. Although the \(d^{\prime}\) for both types of abstract reasoning variants was significantly lower than for the base version, a closer look into the results shows that this was mainly driven by the disproportionately high false alarm rates in these two variants. If we focus on the hit rates, then clearly the LLM was able to conduct some abstract reasoning (Figure 7). Furthermore, in line with our prediction, the LLM performed worse when identical locations did not count as a match, the scenario in which more abstract spatial reasoning is required.
Our exploration of the effect of grid size on model performance yielded interesting results, too. The LLM performed better when the grid size was larger, as seen especially from the hit rate and \(d^{\prime}\) results in Figure 8. One possibility is that with a larger grid there is less interference between stimulus inputs across trials, so that the LLM can better keep track of the information flow without being confused. Future studies should try to explain this phenomenon in more detail, and analogous tasks should be run with human participants to test the generalisability of this finding.
## 5 Discussion
Our consistent finding across nearly all tasks is that _ChatGPT suffers from significant declines in performance as \(n\) increases_. We argue that our experimental results firmly point to the conclusion that ChatGPT has limited working memory capacity similar to humans. Although various prompting techniques (such as the use of state-of-the-art CoT prompting [43]) may be used to improve the model's performance, the trend of performance declines as a function of increasing \(n\) still bears striking resemblance to humans. This consistent pattern thus might reflect a fundamental constraint that emerges from the architecture of the model, suggesting the possibility that the low-level mechanisms of working memory in LLMs might be similar to human working memory in at least some aspects.

Figure 6: Results of the variants of spatial _n_-back tasks corresponding to those in verbal tasks. Error bars represent \(\pm 1\) _SEM_.
In human neuroscience, numerous unresolved challenges persist pertaining to working memory. We propose that, in light of the above observation, ChatGPT and other large language models of similar calibre could potentially be used and tested as a modelling platform for studying human working memory, just as neuroscientists have done in recent years using other artificial neural networks [35]. Furthermore, future efforts aimed at interpreting the activity of artificial neurons in LLMs [4] hold potential for informing the mechanisms of human working memory. If we could visualise the activity of artificial neurons across different layers of the model during working memory tasks, that could shed some light on the neural representations of human working memory as well.
Our work also has some limitations. It would be important to test other LLMs on the same tasks used here, to confirm whether they exhibit similar performance patterns and whether they differ in working memory capacity. It would also be helpful to test ChatGPT on other working memory span tasks used in cognitive science [10, 14] to address the generalisability of \(n\)-back tasks as measurement tools. Furthermore, given that other non-verbal/spatial \(n\)-back tasks (e.g. auditory) have previously been used in human experiments, it would be interesting to test LLMs on these novel task types, especially as LLMs become increasingly multi-modal and support a wide range of input and/or output formats.
Last but not least, the current work opens a brand new topic in probing the cognitive abilities of LLMs: if the working memory capacity of LLMs is fundamentally limited, then why? How is their architecture related to the capacity limit? One possible explanation lies in the self-attention mechanism used in the Transformer architecture [40]. The self-attention mechanism computes a weighted sum of input elements, where each element's weight is determined by its relevance to other elements in the sequence. While this approach allows the model to focus on relevant information, it may also lead to a diffusion of information across multiple input elements, making it challenging to maintain and access specific pieces of information as \(n\) increases in \(n\)-back tasks.
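For reference, the mechanism in question is the scaled dot-product attention of the Transformer [40]; the following is the standard textbook form (our restatement, not a formula taken from the cited works):

\[\operatorname{Attention}(Q,K,V)=\operatorname{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_{k}}}\right)V,\]

where each row of the softmax factor is precisely the weight vector of the weighted sum described above, so information about the item \(n\) trials back must repeatedly survive this mixing.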
Figure 8: Results of spatial task variants with different grid sizes. Error bars represent \(\pm 1\)_SEM_.
Figure 7: Results of abstract reasoning variants of spatial \(n\)-back tasks. Error bars represent \(\pm 1\)_SEM_.
# (Predictable) Performance Bias in Unsupervised Anomaly Detection

Felix Meissen, Svenja Breuer, Moritz Knolle, Alena Buyx, Ruth Müller, Georgios Kaissis, Benedikt Wiestler, Daniel Rückert
###### Abstract
**Background** With the ever-increasing amount of medical imaging data, the demand for algorithms to assist clinicians has amplified. Unsupervised anomaly detection (UAD) models promise to aid in the crucial first step of disease detection. While previous studies have thoroughly explored fairness in supervised models in healthcare, for UAD, this has so far been unexplored.
**Methods** In this study, we evaluated how dataset composition regarding subgroups manifests in disparate performance of UAD models along multiple protected variables on three large-scale publicly available chest X-ray datasets. Our experiments were validated using two state-of-the-art UAD models for medical images. Finally, we introduced a novel subgroup-AUROC (sAUROC) metric, which aids in quantifying fairness in machine learning.
**Findings** Our experiments revealed empirical "fairness laws" (similar to "scaling laws" for Transformers) for training-dataset composition: Linear relationships between anomaly detection performance within a subpopulation and its representation in the training data. Our study further revealed performance disparities, even in the case of balanced training data, and compound effects that exacerbate the drop in performance for subjects associated with multiple adversely affected groups.
**Interpretation** Our study quantified the disparate performance of UAD models against certain demographic subgroups. Importantly, we showed that this unfairness cannot be mitigated by balanced representation alone. Instead, the representation of some subgroups seems harder to learn by UAD models than that of others. The empirical fairness laws discovered in our study make disparate performance in UAD models easier to estimate and aid in determining the most desirable dataset composition.

**Funding** European Research Council Deep4MI
_Keywords:_ Artificial intelligence · Machine learning · Algorithmic bias · Subgroup disparities · Anomaly detection
#### Research in context

#### Evidence before this study
We searched PubMed, Scopus and Google Scholar for Machine Learning and Deep Learning studies related to "unsupervised anomaly detection", "fairness", "model bias", and "intersectionality" before August 2023. The list of publications was complemented by the authors' knowledge of the body of literature and suggestions from colleagues. Several of these prior works have investigated the fairness of supervised classification models regarding protected attributes, such as gender, age, or race, and how this negatively affects patient care. Dataset composition, fairness in population-wide studies, and the effect of intersectionality (considering multiple protected attributes) have been thoroughly investigated for this class of models. These studies have found lower performances in under-represented or socio-economically disadvantaged patient groups, as well as disproportionate impacts in intersectional subgroups (i.e. individuals who share more than one sensitive trait).
#### Added value of this study
Unsupervised anomaly detection (UAD) differs from supervised classification not only algorithmically but also regarding the context in which it is applied: the main application of UAD is pre-screening or triaging to support clinicians in handling increasing amounts of imaging data, and therefore has a distinct role. Since UAD models learn the training data-generating distribution instead of an association between input and output labels, the effects for under-represented subpopulations are likely to be different from supervised models. To this end, our work is the first to investigate and quantify fairness in UAD models. Our experiments reveal empirical "fairness laws" - linear relationships between dataset composition and subgroup performance that facilitate the _a priori_ estimation of subgroup performance. The presented empirical results further show severe performance differences for subgroups even under balanced training data, suggesting that performance disparities in UAD can not be eliminated by equal data representation alone. Lastly, we introduce a novel metric, subgroup-AUROC (sAUROC), to better quantify performance discrepancies for subgroups in machine learning models.
#### Implications of all the available evidence
This study shows that performance bias is not limited to supervised classification models and further suggests additional care and rigour will be necessary in designing and deploying UAD algorithms to minimize excessive risk for misdiagnosis of different subpopulations. This also implies that new UAD models generally need to be evaluated regarding their fairness. We have further shown that performance bias behaves predictably in UAD. The above-mentioned "fairness laws" render subgroup performance in UAD more predictable and can help to guide data collection towards more fair models.
## 1 Introduction
In unsupervised anomaly detection (UAD), a machine learning (ML) model is trained to capture a distribution of training samples with the aim of being able to identify outliers/anomalies that do not stem from the distribution underlying the training data-generating process. Within the medical domain, UAD serves as a valuable tool for detecting pathological samples while only requiring data derived from healthy patients without any disease-specific labels.[1, 2, 3] This approach allows for making use of the vast amounts of clinically unremarkable data acquired in hospitals on a daily basis. In contrast to supervised methods, UAD, by construction, avoids the problem of class/label imbalance, making it well-suited to identify even rare anomalies, for which the collection of sufficient training data would otherwise present a challenge. A UAD model \(\Phi\) produces an anomaly score \(a\) for a data point \(x\). Formally: \(\Phi(x)=a\). The model assigns higher anomaly scores to samples far from its learned distribution.
Recently, a lack of dataset diversity defined through, e.g. age, gender, or ethnicity, has been recognised as an important concern in medical imaging[4] and beyond,[5] as ML models trained on such data often provide biased predictions that result in poor performance on under-represented subgroups. Furthermore, there is unequivocal evidence that this bias is harmful to patients: In a diverse dermatology dataset, Daneshjou _et al._ found that most state-of-the-art ML models used for skin cancer detection exhibited significantly
lower performance than previously reported, particularly on dark skin tones. In a clinical deployment setting, this can lead to delayed or missed diagnoses for patients with darker skin, potentially resulting in inadequate treatment and poorer outcomes.[6] This issue becomes even more pronounced when taking intersectionality into account: Both Seyyed-Kalantari _et al._ and Stanley and colleagues independently reported that performance disproportionately worsens for patients belonging to multiple subgroups already adversely affected.[7, 8]
For UAD models, which are trained to learn a "normal" distribution against which to contrast "anomalies" (see Fig. 1), imbalanced training data is particularly challenging. This is because subgroups observed less frequently in the training data lack sufficient representation for the UAD model to accurately learn their normal patterns, resulting in higher anomaly scores and, consequently, higher false-positive rates. These can be detrimental in numerous ways, from unnecessary diagnostic tests and interventions to the potential for misdiagnosis or delayed diagnosis of true findings. However, while the fairness of supervised ML models for clinical application has been increasingly studied,[4, 7, 9] and the problem has been occasionally discussed in the UAD literature,[10, 11] a thorough empirical investigation of the matter for UAD models does not yet exist.
The main contribution of our study is a thorough investigation into the fairness of UAD models under different dataset compositions related to a protected attribute, including balanced scenarios. Notably, our work is the first to measure how the proportion of a subgroup in the training data of UAD models affects the resulting performance. We operationalise our findings and introduce empirical "fairness laws" (i.e., a linear relationship between subgroup representation and performance), which help to identify the optimal dataset compositions for training fairer UAD models. Finally, we introduce a new subgroup-AUROC (sAUROC) metric to better evaluate fairness in ML models.
## 2 Materials and Methods
### Datasets
We utilized three large public chest X-ray datasets to measure the fairness of UAD models regarding the protected attributes gender, age, and race: MIMIC-CXR-JPG (MIMIC-CXR),[13] ChestX-ray14 (CXR14),[14] and CheXpert.[15] MIMIC-CXR contains 371 110 chest X-rays from a cohort of 65 079 patients acquired at Beth Israel Deaconess Medical Center in Boston, Massachusetts, USA, between 2011 and 2016. The CXR14 dataset, collected from the NIH Clinical Center in Bethesda, Maryland, USA, between 1992 and 2015, includes 112 120 frontal-view chest radiographs from 30 805 distinct patients. The CheXpert database contains 224 316 chest radiographs of 65 240 patients acquired at Stanford Hospital between October 2002 and July 2017.
All three datasets contain structured diagnostic labels (12 labels for MIMIC-CXR and CheXpert, 13 for CXR14, excluding the "support devices" label) and a "no finding" or "normal" label marking the absence of any other identified diagnostic labels. These labels were automatically derived from the associated radiology reports using natural language processing techniques. In addition, demographic metadata about the patients' age (MIMIC-CXR: \(60\pm 18\), CXR14: \(47\pm 17\), CheXpert: \(60\pm 18\) years) and their gender (MIMIC-CXR: 47\(\cdot\)7 %, CXR14: 43\(\cdot\)5 %, CheXpert: 40\(\cdot\)6 % images of female and 59\(\cdot\)4 % of male patients) was available. For MIMIC-CXR, additionally, the self-reported race was available from Johnson _et al._[16]. Here, 17\(\cdot\)3 % of the patients identified as "Black" and 62\(\cdot\)8 % as "White". While information about ethnicity was also available for CheXpert, the dataset sizes for the experiments described in Section 2 were insufficient to adequately train anomaly detection models (\(\sim 500\) images for training).

Figure 1: Supervised classification models (a) learn a mapping between input (samples) and (disease) labels and thus require annotated data. On the other hand, UAD models (b) learn to capture the distribution of the (healthy) training samples, here visualised as cats for exemplary purposes. If a subgroup is not adequately represented in the training data, the UAD model will assign data samples from that group higher anomaly scores, resulting in more false positives at test time.
### Dataset Construction and Inclusion Criteria
To ensure consistency in the inclusion criteria, distinct selection strategies were employed for all datasets. We only considered frontal-view images without support devices and further excluded those with all labels marked as uncertain. All images were center-cropped and resized to \(128\times 128\) pixels.
We constructed the training datasets using only the "no finding" and "normal" labels. All other diagnostic labels were consolidated into a "diseased" label. First, validation and test sets were randomly generated with equal representation of normal and abnormal classes and equal distribution of the protected attributes. From the remaining normal data, we created training sets with varying proportions of the protected attribute subgroups (from 0 % to 100 %). In this process, the total number of training samples was held constant (e.g., the training set with 50 % _male_ and 50 % _female_ samples contains the same number of samples as the one with 10 % _males_ and 90 % _females_). The dataset construction for CXR14 is illustrated in Fig. 2. For MIMIC-CXR and CheXpert, the procedure was analogous. Importantly, there was no patient overlap between the training, validation, and test sets.
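As a minimal sketch of this construction (our own illustration: `normal_df`, the column names, and the function are placeholders, not taken from the released code):

```python
import pandas as pd

def make_training_set(normal_df, attr, group_a, n_total, frac_a, seed=0):
    """Sample a fixed-size training set in which a fraction frac_a of the
    normal images belongs to subgroup group_a of the protected attribute attr."""
    n_a = round(n_total * frac_a)
    in_a = normal_df[attr] == group_a
    part_a = normal_df[in_a].sample(n=n_a, random_state=seed)
    part_b = normal_df[~in_a].sample(n=n_total - n_a, random_state=seed)
    return pd.concat([part_a, part_b]).sample(frac=1.0, random_state=seed)

# e.g. 10 % male / 90 % female at the same total size as the 50/50 split
# train_df = make_training_set(normal_df, "gender", "M", n_total=10_000, frac_a=0.1)
```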
We selected two variables for the protected attribute race: _white_ and _black_. The group _white_ consists of the variable "WHITE". We accumulated four variations of self-reported race into the group _black_: "BLACK/AFRICAN AMERICAN", "BLACK/CAPE VERDEAN", "BLACK/AFRICAN", and "BLACK/CARIBBEAN ISLAND". Further relevant subgroups, such as Asian or Latin, were excluded due to the small number of available images. The categorization along the gender dimension was taken from the binary _male_ and _female_ values provided in the datasets. For the separation between _young_ and _old_ patients, we opted for a data-driven approach, dividing the patient pool into three age groups based on the maximum age within the MIMIC-CXR dataset and removing the centre group to ensure a sufficient age gap between the patients in the two remaining groups. This stratification resulted in individuals up to 31 years being classified as _young_, while individuals aged 61 years and above were categorized as _old_ within our analysis.
We conducted additional experiments to examine fairness in intersectional groups, controlling for multiple protected attributes (gender, age, and race) in the MIMIC-CXR dataset. We constructed random test sets for each possible combination of two protected attributes (i.e. _male_ and _white_, _male_ and _black_, _female_ and _white_, etc.), each balanced in terms of both positive and negative samples and all considered protected attributes. The remaining normal data was used to form the training set. While this training set was unbalanced regarding the protected attributes, it approximates the population of the originating hospital. Disease prevalence is unequal in the subgroups defined above. This prevalence shift is known to cause disparities in many metrics, such as the receiver operating characteristic (ROC) curve.[17] Since we intend to measure model performance bias in this study, we corrected for unequal prevalence while constructing our validation and test sets. Since the prevalences in the datasets considered suffer from selection bias and, thus, likely do not represent the true prevalence in many real-world scenarios, we chose an arbitrary but consistent prevalence of 0\(\cdot\)5 in the test sets.

Figure 2: Exemplary flowchart of the procedure to generate training, validation, and test sets for CXR14.
### Statistics
Since samples of under-represented subgroups are likely to yield higher anomaly scores and, consequently, are more often falsely flagged as positive (c.f. Section 1), we considered predictive equality (or false positive error rate balance) as the most relevant measure to assess (group-)fairness in the context of UAD.
While the effect of generally higher or lower anomaly scores for a subgroup could potentially be partially mitigated _post hoc_ by selecting a unique threshold for each group, this solution cannot be used in many cases where the association of a sample with a particular subgroup is unavailable, for example in retrospective cases. The number of required thresholds further grows combinatorially with the number of protected variables in intersectional subgroups, while the number of available samples to estimate each threshold shrinks simultaneously.[7] Thus, when evaluating an anomaly detection model's fairness regarding multiple subgroups, metrics such as the false positive rate at a minimally required true positive rate (FPR@x%TPR) cannot be calculated separately for each group. Instead, the threshold necessary to achieve the minimum TPR must be computed across the entire dataset, while the resulting FPRs should be determined individually for each subgroup.[17] Similarly, the area under the receiver operating characteristic curve (AUROC) can not be computed for each group separately, as the minimum and maximum anomaly scores for both groups are likely significantly apart. To alleviate this issue, we propose the subgroup-AUROC (sAUROC), which can be viewed as a threshold-free extension of the FPR@x%TPR metric.
For a subgroup \(s\in\mathcal{S}\) in the overall population \(\mathcal{P}\), the sAUROC is calculated as follows: For every decision threshold \(t\), the TPR is computed over the whole population, but the FPR only over the specific subgroup. Mathematically, sAUROC can be described as:
\[\mathrm{TPR}_{\mathcal{P}}(t)=\frac{\mathrm{TP}_{\mathcal{P}}(t)}{\mathrm{ TP}_{\mathcal{P}}(t)+\mathrm{FN}_{\mathcal{P}}(t)} \tag{1}\]
\[\mathrm{FPR}_{s}(t)=\frac{\mathrm{FP}_{s}(t)}{\mathrm{FP}_{s}(t)+\mathrm{TN}_ {s}(t)}, \tag{2}\]
where \(\mathrm{TP}(t),\mathrm{FN}(t),\mathrm{FP}(t),\mathrm{TN}(t)\) are the numbers of true positives, false negatives, false positives, true negatives at threshold \(t\), respectively, and the subscripts \(\mathcal{P}\) and \(s\) denote if all samples are considered, or only the ones from the subgroup \(s\), respectively. Finally, the sAUROC is computed as
\[\mathrm{sAUROC}(s)=\int_{0}^{1}\mathrm{TPR}_{\mathcal{P}}(\mathrm{FPR}_{s}^{ -1}(x))\mathrm{d}x. \tag{3}\]
sAUROC paints a thorough picture of performance differences between subgroups of a population.
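A direct, unoptimised reference implementation of Equations (1)-(3) could look as follows (our own sketch, not the repository code; it assumes both classes are present and `scores` are anomaly scores with higher meaning more anomalous):

```python
import numpy as np

def s_auroc(scores, labels, subgroup_mask):
    """Subgroup-AUROC: TPR computed over the whole population,
    FPR only over the subgroup (Eqs. 1-3)."""
    pos = labels == 1                      # diseased, whole population
    neg_s = (labels == 0) & subgroup_mask  # normal samples of subgroup s
    tpr, fpr = [0.0], [0.0]
    for t in np.unique(scores)[::-1]:      # sweep thresholds, strict to lenient
        pred = scores >= t                 # flagged as anomalous
        tpr.append((pred & pos).sum() / pos.sum())
        fpr.append((pred & neg_s).sum() / neg_s.sum())
    return np.trapz(tpr, fpr)              # area under the (FPR_s, TPR_P) curve
```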
### Anomaly Detection Model
In our experiments, we employed the Structural Feature Autoencoder (FAE) by Meissen _et al._,[18] the best-performing method among contemporary state-of-the-art techniques in a recent comparative analysis by Lagogiannis _et al._[3] Model optimization was performed using the Adam optimizer, with a learning rate of 0\(\cdot\)0002, for 10 000 iterations. Each experiment was run with ten different random seeds to compute confidence intervals using a Gaussian-based approximation and to perform statistical significance tests. To validate that our findings are not specific to the chosen model, we validated our results with the Reverse Distillation model (RD) by Deng _et al._[19] - the second-best model in Lagogiannis _et al._[3] The settings for both models were preserved at their default parameters. These can be found in the Appendix, along with a short description of both models and the results for the RD model.
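The Gaussian-based interval over the ten seeded runs can be sketched as (the `results` list, one score per seed, is a placeholder of ours):

```python
import numpy as np

run_scores = np.asarray(results)  # one sAUROC per random seed (10 runs)
mean = run_scores.mean()
half = 1.96 * run_scores.std(ddof=1) / np.sqrt(len(run_scores))
ci_95 = (mean - half, mean + half)  # Gaussian-based 95 % confidence interval
```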
### Ethics
This research is exempt from ethical approval as the analysis is based on secondary data which is publicly available, and no permission is required to access the data.
### Role of the funding source
The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The authors had full access to all the data in the study and had final responsibility for the decision to submit for publication.
## 3 Results
Figure 3: a) A linear relationship between the representation of a subgroup in the training dataset and its performance was observed across all datasets and subgroups. Equal representation of subgroups did not produce the most group-fair results. Experimental results for the FAE on the MIMIC-CXR, CXR14, and CheXpert datasets trained under different gender, age, or race imbalance ratios. Each box extends from the lower to upper quartile values of ten runs with different random seeds with a line at the median. Regression lines along the different imbalance ratios are additionally plotted. The exact numbers can be found in the Appendix. b) The mean absolute errors (MAE) between the real subgroup performances and those estimated using the “fairness laws” for each dataset and protected variable. Each box again shows the results over ten runs with different random seeds.
### Subgroup representation and performance are linearly correlated
Figure 3 shows a significant correlation between the representation of a subgroup in the training dataset and the subsequent performance for that subgroup (Pearson correlation coefficients of 0\(\cdot\)979\(\pm\)0\(\cdot\)011, 0\(\cdot\)971\(\pm\)0\(\cdot\)016, and 0\(\cdot\)682\(\pm\)0\(\cdot\)331 in the gender, age, and race experiments respectively, details in the Appendix). This relationship was linear in all experiments, which allowed us to accurately estimate the sAUROC performance for training with any composition of patients regarding a protected attribute, using linear interpolation from the extreme values (0 % and 100 %) - with a mean absolute error of 0\(\cdot\)0061\(\pm\)0\(\cdot\)0029 for age, 0\(\cdot\)0063\(\pm\)0\(\cdot\)0015 for gender, and 0\(\cdot\)0045\(\pm\)0\(\cdot\)0025 for race. Only in the race-controlled experiments on MIMIC-CXR was the influence of the dataset composition on the performance outcome for patients from the _white_ group much weaker (Pearson correlation coefficient 0\(\cdot\)398\(\pm\)0\(\cdot\)241). The results of the RD model in the Appendix showed the same linear behaviour.
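In code, the resulting estimator is just linear interpolation between the two endpoint measurements; a sketch:

```python
def predict_sauroc(frac, sauroc_at_0, sauroc_at_1):
    """'Fairness law': estimate a subgroup's sAUROC when it makes up
    a fraction frac of the training data, from the 0 % and 100 % endpoints."""
    return (1.0 - frac) * sauroc_at_0 + frac * sauroc_at_1
```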
### Unfairness exists with balanced training data
The experiments in Figure 3 also revealed that _male_ subjects consistently received significantly higher scores across all datasets, even under balanced conditions, where both subgroups are equally represented in the training data (Welch's t-test, \(N=10\) different runs, \(p<0\)\(\cdot\)01). This pattern also held true for _old_ patients, except for the MIMIC-CXR dataset, where _young_ individuals obtained significantly higher scores (\(p<0\)\(\cdot\)01). Only when the protected variable was the patients' self-reported race did balanced training data not lead to significantly unequal performance in the FAE model (\(p\geq 0\)\(\cdot\)01).
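In sketch form, the reported test is Welch's t-test over the per-seed scores, e.g. via SciPy (`male_scores` and `female_scores` are hypothetical arrays of ten sAUROC values each):

```python
from scipy.stats import ttest_ind

# Welch's t-test: equal_var=False, N = 10 runs per group
t_stat, p_value = ttest_ind(male_scores, female_scores, equal_var=False)
significant = p_value < 0.01
```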
### Unfairness is amplified in intersectional subgroups
Our intersectional experiments' outcomes are illustrated in Figure 4. Given that the training dataset was not controlled for any protected variable, the findings presented here revealed potential unfairness within a population representative of the Beth Israel Deaconess Medical Center. Here, _male_ patients received higher scores than _female_ patients, and _young_ patients achieved higher scores than their _old_ counterparts. The performance of _white_ and _black_ patients appears to be equivalent. This pattern is also consistently observable in the intersectional subgroups featured in the lower row of the Figure, although _black_ patients scored slightly lower in these cases. Moreover, the score disparity (\(\Delta\)) between _male_ and _female_ patients was more pronounced among _old_ individuals compared to their _young_ counterparts (\(\Delta\)0\(\cdot\)085 and \(\Delta\)0\(\cdot\)042, respectively). Similarly, the score disparity between _old_ and _young_ patients was larger among _female_ patients in comparison to _male_ patients (\(\Delta\)0\(\cdot\)112 and \(\Delta\)0\(\cdot\)068, respectively).
### Naive AUROC fails to capture "fairness laws"
To put the sAUROC results in perspective, we additionally show the anomaly scores, FPR@0\(\cdot\)95TPR, and naive AUROC for different dataset compositions on CXR14 in Figure 5. The anomaly scores for a subgroup increased as their representation in the training data shrank. FPR@0\(\cdot\)95TPR exhibited analogous behavior. For AUROC, an increase in samples from one subpopulation did not improve scores for that group. Instead, an increase in _male_ or _old_ patients resulted in similar or worse scores for all groups.
## 4 Discussion
It has been shown that an embedded ethics and social science approach is helpful when analyzing complex, interdisciplinary problems like the one discussed in this work.[20, 21] We, therefore, drew on the interdisciplinary expertise of the authors in the discussion of the results, combining technical, social, and medical perspectives.
The experiments in Section 3 have unveiled "fairness laws" for UAD models: linear relationships between the representation of a subpopulation in the training data and the performance of that group (c.f. Figure 3). These relationships enable practitioners to accurately predict a subgroup's performance using only two points of measurement and linear inter- or extrapolation. This implies that the optimal dataset composition under any fairness constraints can be easily estimated beforehand based on the above-described linear relationship.
However, like supervised methods,[8] UAD models suffered from compounding adverse effects in intersectional subgroups (c.f. Figure 4). For example, the difference we found in model performance between _male_
and _female_ patients was larger in _old_ patients than in _young_ ones, resulting in _old female_ patients being at a particular disadvantage.
Our experiments demonstrated substantial performance disparities among subgroups, even when they were equally represented in the training data. For instance, _male_ patients on CXR14 consistently received significantly higher scores than their _female_ counterparts (_male_: 0\(\cdot\)71, _female_: 0\(\cdot\)60), and _young_ patients outperformed _old_ ones on MIMIC-CXR (_young_: 0\(\cdot\)73, _old_: 0\(\cdot\)63). Notably, the dataset compositions that yield the most group-fair results - as measured by predictive equality - were often situated towards the extremes of the dataset composition spectra. In CheXpert and MIMIC-CXR, optimal sAUROC parity was achieved with a 70 % and 80 % _female_ patient representation, respectively, whereas for CXR14, the composition that led to the most group-fair outcome did not include any _male_ samples. This leads us to the hypothesis that there might be subgroup-specific differences that cause some subgroups to be easier to represent by the UAD model than others, as discussed by Nalisnick _et al._[22]. Further potential reasons for this performance gap are summarized in a recent review by Petersen _et al._[23]. Among them are systematic labelling errors, as well as potential inherent task difficulty differences between groups. Our findings, therefore, highlight the need for medical expertise in evaluating these models and their potential performance biases.
Disease detection is the central first step in the diagnostic process.[24] Coupled with the increasing demand for medical imaging, UAD models fill a relevant clinical need. In this pivotal role, unfairness in UAD has significant potential to negatively affect patients, perhaps even more so than in "downstream", more specialized models. Our experiments revealed that UAD models produce elevated false-positive rates for some subgroups (c.f. Figure 5). False positives can cause serious harm to patients, such as unnecessary follow-up tests (and costs)[25], harm from unnecessary treatment, and psychological distress[26], and can generally cause distrust in the model or in ML techniques in general.

Figure 4: In the MIMIC-CXR dataset, representative of the Beth Israel Deaconess Medical Center, Boston, USA, diseases were detected better in _male_ than _female_ patients and in _young_ than _old_ patients. When considering a second demographic variable, these differences were amplified, e.g. the difference between _male_ and _female_ subjects is larger among older patients than younger ones. Top row: _male_ vs. _female_, _old_ vs. _young_, and _white_ vs. _black_. Bottom row: intersectional subgroups. Each bar shows the mean and standard deviation over ten runs with different random seeds.
The results in Figure 5 show why sAUROC is a more suitable metric to measure the fairness of ML models. The figure displays a linear relationship between anomaly scores and dataset composition, indicating that, as groups get less represented in the training data, they are flagged as "more anomalous" by the model. These relationships were also reflected in the FPR@0\(\cdot\)95TPR and our newly introduced sAUROC. The naive, individual calculation of AUROC for each subpopulation, however, did not exhibit this expected pattern. While the anomaly scores clearly showed disparities, they lacked information about the resulting classification performance differences, and FPR@0\(\cdot\)95TPR is only a point measure considering a single decision threshold; sAUROC painted a more comprehensive picture by considering all possible thresholds while capturing disparate performances.
The findings of our work need to be viewed with an awareness that the categories of human differences we are working with are complex and historically formed. We could not find comprehensive information about how subjects were assigned labels of gender and race in the datasets. This leaves us unclear about what social and biological aspects were included in these categories, important information for nuanced analyses that take into account how human bodies are shaped by both social and biological factors[27; 28]. Further, the labels of the datasets used for evaluation in this study are at risk of being biased. Reasons for this range from biases of medical professionals in the creation of the radiology reports[29] to the automatic label extraction from these reports using a rule-based natural language processing system, which is known to generally contain high levels of label noise, especially in the oldest patient group[30]. Although UAD models only require minimal labels during training (healthy vs. diseased) and thus are presumably less susceptible to systematic labelling errors during training than supervised models, such errors might have an effect on our experimental results when they occur in the evaluation data. Due to the small sample sizes of many ethnic subgroups in the available databases, our analysis of the race dimension was restricted to the categories _white_ and _black_ to guarantee meaningful results. While we assume analogous effects on other racial categories like _asian_, _latin_, etc., the shortage of available data prevented us from empirically substantiating this hypothesis. This limitation reflects the bias against under-represented racial groups that exists in current public medical data sets.
In conclusion, this study represents an effort to quantify fairness in UAD on a large scale, including the results of 1560 trained models. Our extensive experiments on various large-scale datasets and protected attributes confirmed that a demographic subpopulation's anomaly detection performance strongly depends on its representation in the training data and can be efficiently estimated, simplifying the task of identifying the fairest composition. Our experiments further showed that disparate performance between two subgroups can not solely be explained by the under-representation of one subgroup. Instead, some subgroups seemed to be harder to learn by the UAD models than others and, thus, were generally flagged as "more anomalous". While this study has found performance disparities along the three considered variables, more of them likely exist (for example, for people with disabilities). Thus, enhancing model fairness is a significant yet unresolved requirement for the safe implementation of UAD models. Towards this end, the sAUROC metric presented here is a relevant contribution, as it facilitates the quantification of performance bias in UAD models. We emphasize that sAUROC is also relevant for supervised classification models where disparities between subgroups also manifest in TPR/FPR shifts [7, 17]. Considering the severe implications that over-diagnosis can have on both society and individual patients, we believe the quantification of existing bias mechanisms in this work presents a vital step towards a fairer future in healthcare.

Figure 5: The representation of a subgroup in the training dataset had a strong influence on its anomaly scores, the false positive rate at a minimally required true positive rate, and our proposed sAUROC (c.f. Fig. 3). Naive computation of AUROC did not capture this relationship. Anomaly scores (left), FPR@0\(\cdot\)95TPR (middle), and naive AUROC (right) for different compositions of gender (top) and age (bottom) on the CXR14 dataset.
## Contributors
F.M. conducted the experiments. F.M., S.B., M.K., and B.W. conceptualized the study, performed literature search, and validated and interpreted the experimental results. G.K., R.M., and D.R. acquired funding, provided resources, and supervised the study with regular feedback. All authors contributed to writing and critical revision of the manuscript.
## Declaration of interests
B.W. has received speaker honoraria from Novartis, Bayer and Philips. He holds a patent related to Apogenix APG101 for glioblastoma treatment and is a stockholder of the company "Need".
## Acknowledgments
This work was supported by the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research. Daniel Rueckert has been supported by ERC grant Deep4MI (884622). Svenja Breuer, Ruth Müller and Alena Buyx have been supported via the project MedAICine by the Center for Responsible AI Technologies of the Technical University of Munich, the University of Augsburg, and the Munich School of Philosophy.
## Data sharing
All data used in this work is publicly available. The CXR14 dataset together with the patient demographic information can be downloaded from [https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345](https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345). For the CheXpert dataset, analogous information is available from [https://stanfordmlgroup.github.io/competitions/chexpert/](https://stanfordmlgroup.github.io/competitions/chexpert/). The MIMIC-CXR dataset can be downloaded from [https://physionet.org/content/mimic-cxr-jpg/2.0.0/](https://physionet.org/content/mimic-cxr-jpg/2.0.0/) with the corresponding patient demographic information available from [https://physionet.org/content/mimiciv/2.2/](https://physionet.org/content/mimiciv/2.2/). All information to reproduce the exact results in this paper is available under an open source Apache 2.0 license in our dedicated GitHub repository [https://github.com/FeliMe/unsupervised_fairness](https://github.com/FeliMe/unsupervised_fairness).
# Classification strength of Polish groups I: involving \(S_{\infty}\)

Shaun Allison
###### Abstract.
In recent years, much work has been done to measure and compare the complexity of orbit equivalence relations, especially for certain classes of Polish groups. We start by introducing some language to organize this previous work, namely the notion of **classification strength** of Polish groups. Broadly speaking, a Polish group \(G\) has stronger classification strength than \(H\) if every orbit equivalence relation induced by a continuous action of \(H\) on a Polish space can be "emulated" by such an action of \(G\) in the sense of Borel reduction.
Among the non-Archimedean Polish groups, the groups with the highest classification strength are those that involve \(S_{\infty}\), the Polish group of permutations of a countably-infinite set. We prove that several properties, including a weakening of the disjoint amalgamation in Fraïssé theory, a weakening of the existence of an absolute set of generating indiscernibles, and not having ordinal rank in a particular coanalytic rank function, are all equivalent to a non-Archimedean Polish group involving \(S_{\infty}\). Furthermore, we show the equivalence relation \(=^{+}\), which is a relatively simple benchmark equivalence relation in the theory of Borel reducibility, can only be classified by such groups that involve \(S_{\infty}\).
Key words and phrases: Polish group, Borel reduction, Knight's model, countable model, Fraïssé limit.
2020 Mathematics Subject Classification: Primary 54H05, 37B02, 54H11; Secondary 03E15, 03C15.
## 1. Introduction
Invariant descriptive set theory is concerned with definable equivalence relations and definable reductions between them. In particular, we usually consider equivalence relations living on Polish spaces, where the equivalence relations are _analytic_, i.e. analytic as a subset of the product space. The definable reductions that we consider are usually the _Borel_ ones. To be specific, given analytic equivalence relations \(E\) and \(F\) living on Polish spaces \(X\) and \(Y\) respectively, a **Borel reduction** from \(E\) to \(F\) is a Borel function \(f:X\to Y\) satisfying \(x\ E\ x^{\prime}\) iff \(f(x)\ F\ f(x^{\prime})\) for any \(x,x^{\prime}\in X\). When such a reduction exists, we say that \(E\) is _Borel-reducible_ to \(F\), i.e. \(E\leq_{B}F\). If \(E\) and \(F\) represent classification problems in mathematics, then from \(E\leq_{B}F\) we can conclude that the problem \(E\) is no harder than \(F\).
One common way to show that \(E\) does _not_ Borel reduce to \(F\) is by showing that \(E\) is _generically-ergodic_ with respect to \(F\). A function \(f:X\to Y\) is a homomorphism from \(E\) to \(F\) iff for every \(x,y\in X\), if \(x\ E\ y\) then \(f(x)\ F\ f(y)\). Then we say that \(E\) is generically-ergodic with respect to \(F\) iff for every Borel homomorphism \(f:X\to Y\) from \(E\) to \(F\), there is a comeager \(C\subseteq X\) such that for any \(x,y\in C\) we have \(f(x)\ F\ f(y)\). In the case that \(E\) does not have a comeager class, this precludes the existence of a Borel reduction.
While comparing equivalence relations has value, it is perhaps more valuable to give an absolute notion of complexity of a given equivalence relation, which is invariant under Borel reducibility. One very successful program along these lines has been, in the case of Borel
equivalence relations, to consider their place in the Borel hierarchy via a notion of _potential Borel complexity_. This was essentially initiated in the seminal [10], where a connection was made between potential Borel complexity and the set-theoretic complexity of any definable assignment of invariants. In particular, in the large class of equivalence relations that they consider, an equivalence relation is potentially \(\mathbf{\Pi}_{2}^{0}\) iff there is a Borel assignment of reals as invariants, potentially \(\mathbf{\Pi}_{3}^{0}\) iff there is a Borel assignment of sets of reals as invariants, potentially \(\mathbf{\Pi}_{4}^{0}\) iff there is a Borel assignment of sets of sets of reals as invariants, and so on.
The case of **orbit equivalence relations** gives another opportunity to produce a meaningful hierarchy of complexity, this time by studying the acting ("classifying") groups. Many definable equivalence relations that arise in practice are orbit equivalence relations. Given a Polish group \(G\), a _Polish \(G\)-space_ is a Polish space \(X\) along with a continuous action \(G\curvearrowright X\). We use \(E_{X}^{G}\) to denote the orbit equivalence relation, which is analytic, and moreover every orbit is Borel.
Formally, given an equivalence relation \(E\) on a Polish space \(X\), we say that a Polish group \(G\)**classifies**\(E\) iff there is a Polish \(G\)-space \(Y\) such that \(E\leq_{B}E_{Y}^{G}\). This notion can be used to separate equivalence relations up to Borel reduction. We identify three examples of this phenomenon of particular importance.
1. We say that an equivalence relation \(E\) is _classifiable by countable structures_ iff it is classifiable by \(S_{\infty}\), the Polish group of automorphisms of a countably-infinite set with the natural topology. An important benchmark equivalence relation is \(=^{+}\), the Friedman-Stanley jump of equality, which has particular importance in the study of potential Borel complexity in [10]. It is defined on \(\mathbb{R}^{\omega}\) by \[(x_{n})=^{+}(y_{n})\quad\text{iff}\quad\{x_{n}\mid n\in\omega\}=\{y_{n}\mid n \in\omega\}.\] This is easily seen to be classifiable by countable structures as witnessed by the natural Bernoulli-shift action \(S_{\infty}\curvearrowright\mathbb{R}^{\omega}\). Hjorth showed that any orbit equivalence relation that is generically-ergodic with respect to \(=^{+}\) is also generically-ergodic with respect to any action of \(S_{\infty}\). This implies, in any case where there isn't a comeager orbit, that the equivalence relation is not classifiable by countable structures. Hjorth also isolated a dynamical property of some orbit equivalence relations called _turbulence_, which precludes them from being classifiable by countable structures (see [11] and [11]).
2. Among the orbit equivalence relations classifiable by countable structures, further separations can be found. Recall that a Polish group is _complete left-invariant_ (or cli) iff it has a complete metric \(d_{G}\) which is compatible in the sense that it generates the topology, and furthermore is left-invariant, i.e. \[d_{G}(g,g^{\prime})=d_{G}(hg,hg^{\prime})\] for every group elements \(g,g^{\prime},h\). In [11], Hjorth identifies a metamathematical property of equivalence relations that precludes them from being classifiable by cli Polish groups, and shows that \(=^{+}\) exhibits this property. This property was further explored and dubbed being "unpinned" in [12].
3. A stronger invariance notion than a Polish group being cli is being _two-sided invariant_ (or tsi). A Polish group is tsi iff there is a compatible complete metric \(d_{G}\) satisfying \[d_{G}(g,g^{\prime})=d_{G}(hg,hg^{\prime})=d_{G}(gh,g^{\prime}h)\] for every group elements \(g,g^{\prime},h\). A natural example of an equivalence relation classifiable by a tsi Polish group is \(E_{\infty}\), defined as the orbit equivalence relation induced by the Bernoulli-shift action of \(F_{2}\curvearrowright\mathbb{R}^{F_{2}}\), where \(F_{2}\) is the free group on two generators with the discrete topology. Since \(F_{2}\) is a countable discrete group, it is tsi for trivial reasons. In [All], it was shown that an orbit equivalence relation that is generically ergodic with respect to \(E_{\infty}\) is generically ergodic with respect to any orbit equivalence relation induced by a tsi non-Archimedean Polish group. This strongly parallels Hjorth's result relating generic ergodicity with respect to \(=^{+}\) and orbit equivalence relations induced by any non-Archimedean Polish group. In [CC22] Clemens-Coskey introduced an equivalence relation \(E_{0}^{[\mathbb{Z}]}\), called the \(\mathbb{Z}\)-jump of \(E_{0}\), which is classifiable by a cli Polish group. They asked if it is in fact classifiable by a tsi Polish group. However they showed that it is generically ergodic with respect to \(E_{\infty}\) and furthermore has all meager classes, thus by [All] it is not classifiable by a tsi non-Archimedean Polish group. This was generalized by the author and Panagiotopoulos in [AP21] to general tsi Polish groups, and a purely dynamical property was identified similar to Hjorth's notion of turbulence which serves as an obstruction to classifiability by tsi Polish groups. In particular, \(E_{0}^{[\mathbb{Z}]}\) is not classifiable by _any_ tsi Polish group. We will not need the definition of \(E_{0}^{[\mathbb{Z}]}\) in this work, but its definition, as well as an exploration of an interesting hierarchy of similar equivalence relations, can be found in [CC22].
We can observe that considering properties of the classifying group produces a meaningful hierarchy of equivalence relations which are classifiable by countable structures:
**classifiable by non-Archimedean TSI**
\(\subsetneq\)**classifiable by non-Archimedean CLI**
\(\subsetneq\)**classifiable by \(\mathbf{S}_{\infty}\)**.
In an upcoming paper with Panagiotopoulos, we show that there is a finer hierarchy below classifiability by CLI, and in this paper we expose another hierarchy which exists above classifiability by CLI. With this picture in mind, the following definition seems natural:
**Definition 1.1**.: Say that \(G\) is **stronger in classification strength** than \(H\) iff for every Polish \(H\)-space \(X\), the orbit equivalence relation \(E_{X}^{H}\) is classifiable by \(G\).
The following weak restatement of a result of Mackey and Hjorth further motivates this definition. Recall that \(G\)**involves**\(H\) iff there is a closed subgroup \(G_{0}\) of \(G\) and a continuous surjective homomorphism from \(G_{0}\) onto \(H\). Note that this is also sometimes said "\(H\) divides \(G\)".
**Lemma 1.2** (Mackey, Hjorth).: _If \(G\) involves \(H\), then \(G\) is stronger in classification strength than \(H\)._
The non-Archimedean Polish groups are exactly those that are isomorphic to closed subgroups of \(S_{\infty}\). Thus the non-Archimedean Polish groups which involve \(S_{\infty}\) have maximal classification strength among the non-Archimedean Polish groups. A result of Hjorth implies the converse [10] (see also the more recent [9] which recovers this result using a different strategy.)
Hjorth's result gives a metamathematical sufficient condition for a non-Archimedean Polish group to involve \(S_{\infty}\). On the other hand, a paper of Baldwin-Friedman-Koerwien-Laskowski contains an argument that the automorphism group of the limit of any Fraïssé class which satisfies disjoint amalgamation involves \(S_{\infty}\). However, not much else was known, and there had not yet been any comprehensive study of the division between the Polish groups which do and do not involve \(S_{\infty}\).
In this paper, we identify several seemingly unconnected properties of non-Archimedean Polish groups, which are equivalent to involving \(S_{\infty}\). This tells us, we believe, that such groups have an interesting and deep structure and are worthy of further study.
Our main result is the following, where the terms mentioned in equivalences (2), (3), (4), and (5) are yet to be defined.
**Theorem 1.3**.: _Let \(G=\text{Aut}(\mathcal{M})\) be a non-Archimedean Polish group. Then TFAE:_
1. \(G\) _does not involve_ \(S_{\infty}\)_;_
2. _Every disjointiying closure operator on_ \(G\) _is trivial;_
3. \(\text{Krk}(\mathcal{M})<\infty\)_;_
4. \(\text{Krk}(\mathcal{M})<\omega_{1}\)_;_
5. _There is no indiscernible support function on_ \(\mathcal{M}\)_;_
6. \(G\) _does not classify_ \(=^{+}\)_._
In the sequel, we will add several more equivalences onto the list, as well as initiate a finer study of the hierarchy of Polish groups not involving \(S_{\infty}\) which arises from the rank function Krk.
### Acknowledgements
The author would like to thank Omer Ben-Neria, Clinton Conley, Aristotelis Panagiotopoulos, and Spencer Unger for many valuable conversations. This work was partially funded by the Israel Science Foundation (grant no. 1832/19).
## 2. Preliminaries
### Countable model theory
We briefly review a few very basic concepts and notation from the model theory of countable structures. We will always use \(\mathcal{L}\) to refer to a countable relational language. Given an \(\mathcal{L}\)-structure \(\mathcal{M}\), we will write \(M\) to refer to the underlying set of \(\mathcal{M}\). If \(a,a^{\prime}\in M\) and \(B\subseteq M\), we write \(a\cong_{B}a^{\prime}\) iff there is an automorphism \(\pi\) of \(\mathcal{M}\) satisfying \(\pi(a)=a^{\prime}\) and \(\pi(b)=b\) for every \(b\in B\).
We write \(\mathcal{N}\prec_{\mathcal{L}}\mathcal{M}\) to denote that \(\mathcal{N}\) is an \(\mathcal{L}\)-substructure of \(\mathcal{M}\), and we write \(\mathcal{N}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) iff \(\mathcal{N}\) is an \(\mathcal{L}_{\omega_{1},\omega}\)-elementary substructure of \(\mathcal{M}\). Equivalently, \(\mathcal{N}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) iff for every finite \(B\subseteq N\) and \(a\in M\), there is some \(a^{\prime}\in N\) such that \(a^{\prime}\cong_{B}a\).
We say \(\mathcal{M}\) is _ultrahomogeneous_ iff for any two tuples \(\bar{a}\) and \(\bar{b}\) from \(M\), if they have the same quantifier-free type, then there is an automorphism sending one to the other.
### Non-Archimedean Polish groups
A Polish group is called _non-Archimedean_ if it has a countable local basis of the identity of open subgroups. The most important such group is \(S_{\infty}\), the Polish group of permutations of a countably-infinite set (with the discrete topology), equipped with the pointwise convergence topology. There are several useful equivalent formulations of a Polish group being non-Archimedean:
**Fact 2.1**.: _Let \(G\) be a Polish group. The following are equivalent:_
1. \(G\) _is non-Archimedean_
2. \(G\) _is isomorphic to_ \(\text{Aut}(\mathcal{M})\) _for a countable structure_ \(\mathcal{M}\) _in a countable language;_
3. \(G\) _is isomorphic to a closed subgroup of_ \(S_{\infty}\)_;_
4. \(G\) _has a compatible complete ultrametric._
See [1, Theorem 2.4.1, Theorem 2.4.4] for proofs.
We will make the most use of equivalence (2), as it allows the use of language and techniques from model theory, even though our use of model theory will not be particularly deep. An important point is that we can assume that \(\mathcal{M}\) is ultrahomogeneous by adding new relations without changing the set of automorphisms. This is done as follows: for each tuple \(\bar{b}\) from \(M\), add a new relation \(R_{\bar{b}}\) to the language and declare \(R^{\mathcal{M}}_{\bar{b}}(\bar{a})\) iff there is an automorphism of \(\mathcal{M}\) sending \(\bar{a}\) to \(\bar{b}\).
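In symbols, the relations added in this expansion are given by

\[R_{\bar{b}}^{\mathcal{M}}(\bar{a})\iff\exists\,\pi\in\operatorname{Aut}(\mathcal{M})\ \text{with}\ \pi(\bar{a})=\bar{b},\]

so the expanded structure is ultrahomogeneous by construction, while the set of automorphisms is unchanged.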
### Non-Archimedean Polish groups that are cli and not cli
Recall that a Polish group is cli iff it has a complete left-invariant metric which is compatible with the topology. In the case of a non-Archimedean Polish group, we have some nice characterizations due to Gao.
**Fact 2.2** (Gao).: _Let \(G=\text{Aut}(\mathcal{M})\) be a non-Archimedean Polish group. Then TFAE:_
1. \(G\) _is cli;_
2. \(\mathcal{M}\) _has no nontrivial_ \(\mathcal{L}_{\omega_{1},\omega}\)_-elementary substructure;_
3. _there is no uncountable model of the Scott sentence of_ \(\mathcal{M}\)_._
There is also a nice rank function due to Deissler which characterizes when \(G\) is cli, which we will define and discuss later.
A Polish group is tsi iff it has a compatible complete two-sided invariant metric. An example of a Polish group which is cli but not tsi is the automorphism group of the linear order with order-type \(\mathbb{Z}*\mathbb{Z}\), which is just the lex order on \(\mathbb{Z}\times\mathbb{Z}\). A natural action of this group induces the equivalence relation \(E_{0}^{[\mathbb{Z}]}\) mentioned in the introduction, discussed further in [1], [1].
The most important non-cli Polish group from our perspective is the automorphism group of Knight's model. It is significant in the sense that it was the simplest known Polish group which is not cli but does not involve \(S_{\infty}\). Knight's model \(\mathcal{K}\) is a countable structure in the language \(\mathcal{L}=\{<,f_{n}\}_{n\in\omega}\), where \(<\) is a binary relation and each \(f_{n}\) is a unary function, satisfying:
1. \(<\) is a linear order on \(K\);
2. for every \(a\in K\), \(\{b\in K\mid b<a\}=\{f_{n}(a)\mid n\in\omega\}\);
3. \(\mathcal{K}\) is \(1\)-transitive (any element can be sent to any other by an automorphism);
4. there is a non-trivial \(\mathcal{L}_{\omega_{1},\omega}\)-elementary substructure of \(\mathcal{K}\).
By Gao's characterization, (4) implies that \(\operatorname{Aut}(\mathcal{K})\) is not cli. Hjorth showed [10] that it does not classify \(=^{+}\) and thus does not involve \(S_{\infty}\) (it also follows from our results in this paper).
### Examples of non-Archimedean Polish groups involving \(S_{\infty}\)
The original purpose of Knight's model was to produce a countable structure whose Scott sentence has a model of cardinality \(\aleph_{1}\), but no larger. It is said that the Scott sentence of Knight's model "characterizes" \(\aleph_{1}\). In [10], Hjorth constructed, for each \(\alpha\), a countable structure characterizing \(\aleph_{\alpha}\). Rather than generalizing Knight's model, Hjorth's construction relies on a very different method, which results in structures whose automorphism groups involve \(S_{\infty}\).
Hjorth's model characterizing \(\aleph_{1}\), which we denote by \(\mathcal{H}\), is a countable ultrahomogeneous structure in a language with binary relations \(S_{n}\) for every \(n\), and \(k+2\)-ary relations \(R_{k}\) for every \(k\), satisfying
1. there is some function \(f:[H]^{2}\to\omega\) such that for every \(a\) and \(b\) in \(H\) and \(n\in\omega\), we have \(S_{n}^{\mathcal{H}}(a,b)\) iff \(f(\{a,b\})=n\);
2. for every \(a\) and \(b\), the set \(S(\{a,b\})\) of all \(c\) such that \(f(\{a,c\})=f(\{b,c\})\) is finite, and \(R_{k}^{\mathcal{H}}(a,b,\bar{c})\) iff \(\bar{c}\) enumerates \(S(\{a,b\})\).
It's straightforward to check that every proper \(\mathcal{L}_{\omega_{1},\omega}\)-elementary substructure of any model of the Scott sentence of \(\mathcal{H}\) must be countable. Thus there are no models of the Scott sentence of \(\mathcal{H}\) of cardinality \(\aleph_{2}\) or higher. On the other hand, Hjorth showed that the automorphism group of \(\mathcal{H}\) involves \(S_{\infty}\) (in the process of showing that its Scott sentence has an uncountable model). This is done by exploiting the fact that \(\mathcal{H}\) satisfies disjoint amalgamation. The fact that the automorphism group of the limit of a Fraisse class satisfying disjoint amalgamation involves \(S_{\infty}\) is stated and proved more explicitly in [1]. Another proof of this, in even greater detail, appeared recently in [11]. We briefly sketch this argument in Section 3, as we will build on this idea.
A classic example of an automorphism group that involves \(S_{\infty}\) is \(\text{Aut}(\mathbb{Q},<)\), the automorphism group of the rational linear order. To see that it involves \(S_{\infty}\), partition \(\mathbb{Q}\) into countably-many dense subsets \(A_{n}\). Let \(H\leq\text{Aut}(\mathbb{Q},<)\) be the closed subgroup of automorphisms \(\pi\) satisfying that for every \(n,m\in\omega\) and \(a,b\in A_{n}\), \(\pi(a)\in A_{m}\) iff \(\pi(b)\in A_{m}\). Then define a continuous homomorphism \(f:H\to S_{\infty}\) where \(f(\pi)\) is defined to be the
unique \(\sigma\in S_{\infty}\) such that \(a\in A_{m}\) iff \(\pi(a)\in A_{\sigma(m)}\) for every \(a\) and \(m\). To see that \(f\) is indeed surjective, apply a back-and-forth argument.
We will construct one more example, which we will actually make use of later. Let \(\Delta\) be some countably-infinite group, and consider a free action \(\Delta\curvearrowright I\) on a countably-infinite set \(I\) with infinitely-many \(\Delta\)-orbits. Now consider the closed subgroup \(P\leq S_{I}\) of permutations \(\pi\) of \(I\) satisfying \(\pi(\delta\cdot x)=\delta\cdot\pi(x)\) for every \(x\in I\).
This group involves \(S_{\infty}\). To see this, fix a transversal \(T\subseteq I\), i.e. a set which intersects every \(\Delta\)-orbit exactly once. Since \(T\) is a countably-infinite set, the Polish group \(S_{T}\) is isomorphic to \(S_{\infty}\). Moreover, the map \(f:P\to S_{T}\) sending each \(\pi\in P\) to the unique \(\sigma\in S_{T}\) satisfying \(\pi(x)\in\Delta\cdot\sigma(x)\) for every \(x\in T\), is easily seen to be a continuous surjective homomorphism.
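For a concrete instance, take \(\Delta=\mathbb{Z}\) acting on \(I=\mathbb{Z}\times\omega\) by \(n\cdot(m,k)=(n+m,k)\). This action is free with orbits \(\mathbb{Z}\times\{k\}\), and every \(\pi\in P\) has the form \(\pi(m,k)=(m+c_{k},\sigma(k))\) for some \(\sigma\in S_{\infty}\) and integers \(c_{k}\); taking the transversal \(T=\{0\}\times\omega\), the homomorphism \(f\) above sends \(\pi\) to \(\sigma\).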
### Baire-measurable homomorphisms and the orbit continuity lemma
Given a Polish group \(G\), recall that a Polish \(G\)-space is a Polish space \(X\) along with a continuous action \(G\curvearrowright X\). We write the induced orbit equivalence relation as \(E_{X}^{G}\). Given two such orbit equivalence relations \(E_{X}^{G}\) and \(E_{Y}^{H}\), a homomorphism is a function \(f:X\to Y\) such that \(f(x)\;E_{Y}^{H}\;f(y)\) whenever \(x\;E_{X}^{G}\;y\).
The following so-called "orbit continuity lemma", due to Hjorth and Lupini-Panagiotopoulos, is central. This is the main way to extract information from the existence of Baire-measurable homomorphisms that don't trivialize on a comeager set. Our statement differs slightly from the statement in [10], so we give an argument for how to recover this one from theirs.
**Lemma 2.3** (Hjorth, Lupini-Panagiotopoulos).: _Suppose \(f:X\to Y\) is a Baire-measurable homomorphism from \(E_{X}^{G}\) to \(E_{Y}^{H}\) and \(G_{0}\) is a countable dense subgroup of \(G\). Then there is a comeager subset \(C\subseteq X\) such that_
1. \(f\) _is continuous on_ \(C\)_;_
2. _for every_ \(x\in C\)_, the set of_ \(g\) _such that_ \(g\cdot x\in C\) _is comeager;_
3. _for every_ \(x\in C\) _and for every_ \(g\in G_{0}\)_,_ \(g\cdot x\in C\)_;_
4. _for every_ \(x_{0}\in C\) _and every nonempty open neighborhood_ \(V\subseteq H\) _of the identity, there is a neighborhood_ \(U_{0}\) _of_ \(x_{0}\) _and a nonempty open neighborhood_ \(W\subseteq G\) _of the identity such that for any_ \(x\in C\cap U_{0}\) _and for every_ \(w\in W\) _with_ \(w\cdot x\in C\cap U_{0}\)_, we have_ \(f(w\cdot x)\in V\cdot f(x)\)_; and_
5. _for every_ \(x_{0}\in C\) _and_ \(g\in G_{0}\) _and nonempty open_ \(W\subseteq H\) _such that_ \(f(g\cdot x_{0})\in W\cdot f(x_{0})\)_, there is an open neighborhood_ \(U_{0}\) _of_ \(x_{0}\) _such that for every_ \(x\in C\cap U_{0}\)_,_ \(g\cdot x\in C\) _and_ \(f(g\cdot x)\in W\cdot f(x)\)_._
Proof.: Let \(C_{0}\) be a comeager set satisfying the first three conditions as in [10, Lemma 2.5]. For every \(g\in G_{0}\) and basic open \(W\), let \(C_{g,W}\) be a comeager set on which the Baire-measurable function \(\chi_{g,W}:X\to 2\) defined by
\[\chi_{g,W}(x)=1\iff f(g\cdot x)\in W\cdot f(x)\]
is continuous. Let \(C^{\prime}\) be the intersection of \(C_{0}\) and each \(C_{g,W}\) and define
\[C=\{x\in C^{\prime}\mid\forall^{*}g,gx\in C^{\prime}\text{ and }\forall g\in G_{0},gx\in C^{\prime}\}.\]
We argue that this works.
The fact that (1)-(3) and (5) are satisfied is immediate, so we proceed to (4). Clause (4) is almost the same as clause (3) of [12, Lemma 2.5], the difference being that we quantify over all \(w\in W\), not just a set comeager in \(W\), under the additional assumption that \(w\cdot x\in U_{0}\). Let \(x_{0}\in C\) and \(V\subseteq H\) be an open neighborhood of the identity of \(H\). Let \(\hat{V}\subseteq H\) be a symmetric open neighborhood of the identity of \(H\) such that \(\hat{V}^{2}\subseteq V\). From our choice of \(C\), there is an open neighborhood \(U_{0}\ni x_{0}\) and an open neighborhood \(W\subseteq G\) of the identity of \(G\) such that for every \(x\in C\cap U_{0}\) there is a comeager set of \(w\in W\) such that \(f(w\cdot x)\in\hat{V}\cdot f(x)\).
Let \(x\in U_{0}\cap C\) and \(w\in W\) such that \(w\cdot x\in U_{0}\cap C\). Let \(D_{0}\) be the set of \(w^{\prime}\in W\) such that \(f(w^{\prime}\cdot x)\in\hat{V}\cdot f(x)\) and let \(D_{1}\) be the set of \(w^{\prime}\in W\) such that \(f(w^{\prime}w\cdot x)\in\hat{V}\cdot f(w\cdot x)\). The set \(D_{0}\) is comeager in \(W\) and \(D_{1}w\) is nonmeager in \(W\), thus we may fix some \(w^{\prime}\in D_{0}\cap D_{1}w\). Then we have that \(f(w^{\prime}\cdot x)\in\hat{V}\cdot f(x)\) and \(f(w^{\prime}\cdot x)=f(w^{\prime}w^{-1}\cdot(w\cdot x))\in\hat{V}\cdot f(w\cdot x)\). Thus \(f(w\cdot x)\in\hat{V}^{-1}\hat{V}\cdot f(x)=\hat{V}^{2}\cdot f(x)\subseteq V\cdot f(x)\) as desired.
## 3. A weakening of disjoint amalgamation
In [10], and more explicitly in [1], disjoint amalgamation is identified as a sufficient condition for the automorphism group of a structure to involve \(S_{\infty}\). In this section, we introduce the appropriate weakening of disjoint amalgamation. We first give a brief review of the Fraisse theory of classes of finite structures, and review the ideas of [1]. Then we introduce the weakening of disjoint amalgamation and prove that it is necessary and sufficient.
### Fraisse theory
We start with a brief presentation of the generalized Fraisse theory which we will later need. A general survey of generalized Fraisse theory can be found in [11].
Let \(\mathcal{L}\) be a countable relational language. Let \(\mathcal{F}\) be a class of finite \(\mathcal{L}\)-structures closed under isomorphism. Let \(\preceq_{\mathcal{F}}\) be a notion of "strong substructure" on \(\mathcal{F}\), i.e. a transitive and reflexive binary relation satisfying
1. if \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) then \(\mathcal{A}\preceq_{\mathcal{L}}\mathcal{B}\); and
2. if \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) and \(\pi:\mathcal{B}\cong\mathcal{C}\) is an isomorphism, then \(\pi[\mathcal{A}]\preceq_{\mathcal{F}}\mathcal{C}\).
We write \(\mathcal{B}\cong\mathcal{C}\) iff there is an isomorphism \(\pi:B\to C\) between \(\mathcal{B}\) and \(\mathcal{C}\). More generally, if \(A\subseteq B\cap C\), then we write \(\mathcal{B}\cong_{A}\mathcal{C}\) iff there is an isomorphism \(\pi:B\to C\) between \(\mathcal{B}\) and \(\mathcal{C}\) satisfying \(\pi(a)=a\) for every \(a\in A\).
The following is standard.
**Proposition 3.1**.: _Suppose \((\mathcal{F},\preceq_{\mathcal{F}})\) satisfies the following:_
1. _the empty structure_ \(\emptyset\) _is in_ \(\mathcal{F}\) _and_ \(\emptyset\preceq_{\mathcal{F}}\mathcal{A}\) _for every_ \(\mathcal{A}\in\mathcal{F}\)_; and_
2. _(amalgamation property) for any_ \(\mathcal{A},\mathcal{B},\mathcal{C}\in\mathcal{F}\)_, if_ \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) _and_ \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{C}\) _then there is some_ \(\mathcal{C}^{\prime}\in\mathcal{F}\) _satisfying_ \(\mathcal{C}^{\prime}\cong_{A}\mathcal{C}\) _and_ \(\mathcal{D}\in\mathcal{F}\) _such that_ \(\mathcal{B}\preceq_{\mathcal{F}}\mathcal{D}\) _and_ \(\mathcal{C}^{\prime}\preceq_{\mathcal{F}}\mathcal{D}\)_._
_Then there is a unique (up to isomorphism) countable \(\mathcal{L}\)-structure \(\mathcal{M}\) satisfying:_
1. _for any finite_ \(D\subseteq M\)_, there is some_ \(\mathcal{A}\in\mathcal{F}\) _with_ \(D\subseteq A\) _and_ \(\mathcal{A}\preceq_{\mathcal{L}}M\)_;_
2. _if_ \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) _and_ \(\mathcal{A}\preceq_{\mathcal{L}}M\)_, then there is some_ \(\mathcal{B}^{\prime}\in\mathcal{F}\) _with_ \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}^{\prime}\)_,_ \(\mathcal{B}^{\prime}\preceq_{\mathcal{L}}M\) _and_ \(\mathcal{B}^{\prime}\cong_{A}\mathcal{B}\)_; and_
3. _if_ \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) _and_ \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}^{\prime}\) _and_ \(\mathcal{A},\mathcal{B},\mathcal{B}^{\prime}\preceq_{\mathcal{L}}M\)_, if_ \(\mathcal{B}\cong_{A}\mathcal{B}^{\prime}\) _then there is an automorphism_ \(\pi\) _of_ \(\mathcal{M}\) _such that_ \(\pi[B]=B^{\prime}\) _and_ \(\pi(a)=a\) _for every_ \(a\in A\)_._
We note that any class \((\mathcal{F},\preceq_{\mathcal{F}})\) satisfying the assumptions of the proposition is a generalized Fraisse class as defined in [15]; however, the notion of generalized Fraisse class introduced there is much more general. Classically, the definition of Fraisse class might include other axioms. The joint embedding property axiom follows from the amalgamation property and the fact that we require the empty structure to be a strong substructure of every structure in \(\mathcal{F}\). The hereditary property is commonly required, but isn't necessary (or desired) in this context. Often it is required that \(\mathcal{F}\) satisfies the essential countability axiom, which already follows in our case from the fact that \(\mathcal{L}\) is countable and every structure in \(\mathcal{F}\) is finite. For the rest of this section, we will refer to any \((\mathcal{F},\preceq_{\mathcal{F}})\) satisfying the assumptions of Proposition 3.1 as a Fraisse class.
We call the structure \(\mathcal{M}\) in the conclusion of the proposition the **limit** of \((\mathcal{F},\preceq_{\mathcal{F}})\). The structure \(\mathcal{M}\) is ultrahomogeneous, which follows by a back-and-forth argument using property (3) of Proposition 3.1.
Conversely, if \(\mathcal{M}\) is an arbitrary ultrahomogeneous \(\mathcal{L}\)-structure, then \((\mathrm{Age}(\mathcal{M}),\preceq_{\mathcal{L}})\) is a Fraisse class and its limit is isomorphic to \(\mathcal{M}\). Recall that \(\mathrm{Age}(\mathcal{M})\) is the class of all finite substructures of \(\mathcal{M}\), closed under isomorphism.
### Disjoint amalgamation property
Let \(\mathcal{L}\) be a countable relational language and let \((\mathcal{F},\preceq_{\mathcal{F}})\) be a Fraisse class. Say that \((\mathcal{F},\preceq_{\mathcal{F}})\) satisfies the **disjoint amalgamation property** (also called the strong amalgamation property) iff for every \(\mathcal{A},\mathcal{B},\mathcal{C}\in\mathcal{F}\) with \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) and \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{C}\), there are some \(\mathcal{B}^{\prime},\mathcal{D}\in\mathcal{F}\) such that \(\mathcal{B}^{\prime}\cong_{A}\mathcal{B}\) and \(\mathcal{B}^{\prime}\preceq_{\mathcal{F}}\mathcal{D}\) and \(\mathcal{C}\preceq_{\mathcal{F}}\mathcal{D}\), and moreover \(B^{\prime}\cap C=A\). Notice that this is property (ii) in Proposition 3.1, with the additional requirement that \(\mathcal{B}^{\prime}\) and \(\mathcal{C}\) be as disjoint as possible.
**Proposition 3.2** (Baldwin-Friedman-Koerwien-Laskowski, [1]).: _If \((\mathcal{F},\preceq_{\mathcal{F}})\) is a Fraisse class satisfying the disjoint amalgamation property, and \(\mathcal{M}\) is the limit, then \(\text{Aut}(\mathcal{M})\) involves \(S_{\infty}\)._
The proof proceeds by considering the class \(\mathcal{F}^{*}\) of "colored" versions of structures in \(\mathcal{F}\). More precisely, we consider pairs \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\) where \(\mathcal{A}\in\mathcal{F}\) and \(\mathbf{c}_{\mathcal{A}}:A\to\omega\). We could view these formally as \(\mathcal{L}^{*}\)-structures, where \(\mathcal{L}^{*}\supseteq\mathcal{L}\) is the language adding countably-many new unary relations. We say \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\) is a strong substructure of \((\mathcal{B},\mathbf{c}_{\mathcal{B}})\), which we write as \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\preceq_{\mathcal{F}}^{*}(\mathcal{B},\mathbf{c}_{\mathcal{B}})\), iff \(\mathcal{A}\preceq_{\mathcal{F}}\mathcal{B}\) and \(\mathbf{c}_{\mathcal{B}}(a)=\mathbf{c}_{\mathcal{A}}(a)\) for every \(a\in A\). The class \((\mathcal{F}^{*},\preceq_{\mathcal{F}}^{*})\) need not satisfy the amalgamation property in general. However, with the additional assumption that \((\mathcal{F},\preceq_{\mathcal{F}})\) has the disjoint amalgamation property, it is easy to confirm that \((\mathcal{F}^{*},\preceq_{\mathcal{F}}^{*})\) satisfies the amalgamation property. Thus we can compute the limit of \((\mathcal{F}^{*},\preceq_{\mathcal{F}}^{*})\), which would be the pair \((\mathcal{M},\mathbf{c})\) for some coloring \(\mathbf{c}:M\to\omega\), by the uniqueness of the limit of \((\mathcal{F},\preceq_{\mathcal{F}})\). Next, consider the closed subgroup \(H\leq\text{Aut}(\mathcal{M})\) of automorphisms of \(\mathcal{M}\) which permute the colors consistently, i.e. \(h\in H\) iff there is some \(\sigma_{h}\in S_{\infty}\) such that \(\mathbf{c}(h(a))=\sigma_{h}(\mathbf{c}(a))\) for every \(a\in M\). There is a natural continuous homomorphism \(f:H\to S_{\infty}\) sending \(h\) to \(\sigma_{h}\), noting that \(\sigma_{h}\) is unique as every color appears somewhere in \(M\). The final step is to show that \(f\) is surjective, by fixing an arbitrary \(\sigma\) and constructing by a back-and-forth method (utilizing the homogeneity of the coloring \(\mathbf{c}\)) an automorphism \(h\in H\) such that \(\sigma=\sigma_{h}\).
The natural question: is the converse to Proposition 3.2 true? Evidently not, as one could consider the case where \(\mathcal{F}\) is the class of finite equivalence relations where each equivalence class has at most two elements, and \(\preceq_{\mathcal{F}}\) is the usual substructure relation. This does not satisfy disjoint amalgamation for trivial reasons, however the automorphism group of its limit involves \(S_{\infty}\), as the two-element classes can be permuted arbitrarily.
One could consider the "fix" of only considering the class of substructures of \(\mathcal{M}\) which are definably-closed. This may require us in general to consider classes of infinite structures, and indeed there exists a Fraisse theory of "finitely-generated" structures which would be useful if we were to consider substructures of \(\mathcal{M}\) which are the definable closures of finite sets. One would need to devise the appropriate notion of the disjoint amalgamation property in this context, but this seems to be a reasonable approach.
This does not, however, produce a necessary condition. One could just as well take the random graph and replace every vertex with some much more complicated structure which is not finitely-generated. The automorphism group of the resulting structure would still involve \(S_{\infty}\), but would not satisfy any reasonable notion of disjoint amalgamation relative to definable closure. The realization, then, is that we need to work with some notion of closure which is weaker (coarser) than definable closure.
One reasonable candidate is the notion of pseudo-algebraic closure. For a countable atomic \(\mathcal{L}\)-structure \(\mathcal{M}\), the pseudo-algebraic closure of a set \(A\subseteq M\) is defined to be the set of all \(b\in M\) such that \(b\in N\) for every \(\mathcal{N}\preceq_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) with \(A\subseteq N\). We will see later that this is not quite the correct closure notion, but it is reasonably close to the correct one, and indeed our theory will significantly parallel the theory of the pseudo-algebraic closure as developed in [10].
In this section we work backwards and reverse-engineer exactly what property we will want our notion of closure to exhibit in order to develop the "right" weakening of disjoint amalgamation. In Section 4, we will instead work from the bottom-up and define the right notion of closure that we want, which will be a very natural coarsening of pseudo-algebraic closure from the perspective of [10]. Here, we use "coarsening" to mean that our closure operator will include possibly more elements in the closure of any set than pseudo-algebraic closure.
In an attempt to keep the theory as simple as possible, we discard the idea of working with classes of finitely-generated infinite structures and restrict ourselves only to classes of finite structures. This will fortunately not be an obstacle to developing the right weakening of disjoint amalgamation. Rather than considering the class of substructures which are finitely-generated with respect to some closure operator and develop a notion of disjoint amalgamation for such structures, we will work with the class of finite substructures and instead use the closure notion to control just how "disjoint" we require the amalgamation to be.
### Closure operators and independence relations
We begin by considering definable closure and pseudo-algebraic closure in an abstract sense, as _closure operators_.
A **closure operator** on a set \(I\) is a function \(\operatorname{cl}:\mathcal{P}(I)\to\mathcal{P}(I)\) satisfying for every \(A,B\subseteq I\):
1. \(A\subseteq\operatorname{cl}(A)\);
2. \(\operatorname{cl}(A)\subseteq\operatorname{cl}(B)\) whenever \(A\subseteq B\); and
3. \(\operatorname{cl}(\operatorname{cl}(A))=\operatorname{cl}(A)\).
Note that sometimes the axioms of closure operators demand that \(\operatorname{cl}(\emptyset)=\emptyset\), though we will not (and should not) demand that here. In the case that \(\operatorname{cl}(A)=A\), we say that \(A\) is **closed**. The closure operators considered in these notes will have **finite character**, meaning that \(\operatorname{cl}(A)\) is the union of \(\operatorname{cl}(A_{0})\) where \(A_{0}\) ranges over finite subsets of \(A\); this will be reflected by the fact that we will only bother to define closure operators in terms of finite subsets. We say that \(\operatorname{cl}\) is **non-trivial** iff \(\operatorname{cl}(\emptyset)\neq I\). We will adopt the standard notation to sometimes write \(A\cup B\) as simply \(AB\) and \(A\cup\{b\}\) as simply \(Ab\), for \(A,B\subseteq I\) and \(b\in I\).
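Familiar examples of closure operators with finite character include the linear span operator on a vector space and the definable closure operator \(\operatorname{dcl}\) on a countable structure. The usual topological closure on \(\mathbb{R}\) is a closure operator which fails finite character, since the closure of a finite set is that set itself.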
Given \(B\supseteq C\) finite subsets of \(I\), say that \(b\in B\setminus\operatorname{cl}(C)\) is **minimal in \(B\) over \(C\)** iff for every \(b^{\prime}\in(B\cap\operatorname{cl}(bC))\setminus\operatorname{cl}(C)\), we have \(b\in\operatorname{cl}(b^{\prime}C)\).
**Lemma 3.3**.: _Suppose \(\operatorname{cl}\) is a closure operator on \(I\) and \(B\supseteq C\) are two finite subsets of \(I\) such that \(B\not\subseteq\operatorname{cl}(C)\). Then there exists some \(b\in B\setminus\operatorname{cl}(C)\) which is minimal in \(B\) over \(C\)._
Proof.: Define on \(B\setminus\operatorname{cl}(C)\) a binary relation \(\leq\) by \(b\leq b^{\prime}\) iff \(b\in\operatorname{cl}(b^{\prime}C)\). This is easily seen to be a preorder. As \(B\setminus\operatorname{cl}(C)\) is finite, we may choose a \(\leq\)-minimal element.
Given such a closure operator \(\operatorname{cl}\) on a set \(I\) equipped with an action \(P\curvearrowright I\), we say that \(\operatorname{cl}\) is **invariant** iff \(\operatorname{cl}(g\cdot A)=g\cdot\operatorname{cl}(A)\) for every \(g\in P\) and \(A\subseteq I\). For finite subsets \(A,B,C\subseteq I\), write \(A\perp_{C}B\) iff \(A\cap\operatorname{cl}(BC)\subseteq\operatorname{cl}(C)\) and \(B\cap\operatorname{cl}(AC)\subseteq\operatorname{cl}(C)\); we call \(\perp\) the independence relation derived from \(\operatorname{cl}\). We say that an invariant closure operator \(\operatorname{cl}\) is **disjointifying** iff it satisfies the equivalent conditions of the following proposition; we record two of the formulations, which we will use repeatedly below.

**Proposition 3.6**.: _Let \(\operatorname{cl}\) be an invariant closure operator on \(I\). Then the following are equivalent:_

(4*) _for any finite \(C\subseteq I\) and \(a,b\in I\), both of the following hold: (a) if \(a\in\operatorname{cl}(b^{\prime}C)\) for every \(b^{\prime}\cong_{C}b\), then \(a\in\operatorname{cl}(C)\); and (b) if for every \(a^{\prime}\cong_{C}a\), either \(a\in\operatorname{cl}(a^{\prime}C)\) or \(a^{\prime}\in\operatorname{cl}(aC)\), then \(a\in\operatorname{cl}(C)\);_

(4) _for any finite \(C\subseteq I\) and \(a,b\in I\) with \(a\not\in\operatorname{cl}(C)\), the following both hold: (a) there is some \(a^{\prime}\cong_{C}a\) such that \(a^{\prime}\perp_{C}a\); and (b) there is some \(b^{\prime}\cong_{C}b\) such that \(a\not\in\operatorname{cl}(b^{\prime}C)\)._

Proof.: We indicate the key steps, fixing a finite \(C\subseteq I\) and \(a,b\in I\) with \(a\not\in\operatorname{cl}(C)\). One first shows, for \(a^{\prime},a^{\prime\prime}\cong_{C}a\) and suitable \(b^{\prime}\cong_{C}b\), that \(a^{\prime\prime}\in\operatorname{cl}(b^{\prime\prime}C)\) for every \(b^{\prime\prime}\cong_{a^{\prime\prime}C}b^{\prime}\) and thus \(a^{\prime}\in\operatorname{cl}(b^{\prime\prime}C)\) for every \(b^{\prime\prime}\cong_{a^{\prime\prime}C}b^{\prime}\); by (4*).(a), we can conclude \(a^{\prime}\in\operatorname{cl}(a^{\prime\prime}C)\) as desired. Next, define on \(\{a^{\prime}\in I\mid a^{\prime}\cong_{C}a\}\) a relation \(\leq\) by declaring \(a_{0}\leq a_{1}\) iff for every \(b^{\prime}\cong_{C}b\), \(a_{0}\in\operatorname{cl}(b^{\prime}C)\) implies \(a_{1}\in\operatorname{cl}(b^{\prime}C)\).
All that remains is to show that \(\leq\) is a linear preorder. Reflexivity and transitivity of \(\leq\) are clear (and not immediately useful). To see linearity, suppose for contradiction there exist some \(a_{0},a_{1}\cong_{C}a\) and some \(b_{0},b_{1}\cong_{C}b\) with
\[a_{1}\in\operatorname{cl}(b_{0}C)\quad\text{and}\quad a_{0}\not\in \operatorname{cl}(b_{0}C)\]
and
\[a_{1}\not\in\operatorname{cl}(b_{1}C)\quad\text{and}\quad a_{0}\in \operatorname{cl}(b_{1}C).\]
By the second statement, we have \(b_{1}\in\operatorname{cl}(a_{1}C)\) and thus \(a_{0}\in\operatorname{cl}(a_{1}C)\). But that implies \(a_{0}\in\operatorname{cl}(b_{0}C)\) by the first clause of the first statement. This contradicts the second clause of the first statement.
If \(\operatorname{cl}\) is an invariant disjointifying closure operator, then \(\operatorname{cl}\) is non-trivial iff \(I\neq\operatorname{cl}(A)\) for any finite \(A\subseteq I\). Also, we will implicitly assume from now on that any disjointifying closure operator is invariant.
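As a simple illustration, consider \(P=\text{Aut}(\mathbb{Q},<)\) acting on \(I=\mathbb{Q}\). One can check that the operator \(\operatorname{cl}(A)=A\) is a nontrivial invariant disjointifying closure operator: given finite \(C\) and \(a,b\in I\) with \(a\not\in C\), the orbit of \(a\) over \(C\) is an open interval of rationals, so there is some \(a^{\prime}\cong_{C}a\) with \(a^{\prime}\neq a\), witnessing clause (4a) (here \(a^{\prime}\perp_{C}a\) amounts to \(a^{\prime}\neq a\)), and there is some \(b^{\prime}\cong_{C}b\) with \(b^{\prime}\neq a\), witnessing clause (4b). In light of Theorem 3.7 below, this recovers the fact, observed in Section 2, that \(\text{Aut}(\mathbb{Q},<)\) involves \(S_{\infty}\).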
### Disjointifying closure operators and involving \(S_{\infty}\)
We now finish this section by proving the following theorem:
**Theorem 3.7**.: _Let \(I\) be a countably-infinite set and \(P\leq S_{I}\) closed with the natural action \(P\curvearrowright I\). If \(I\) has a nontrivial invariant disjointifying closure operator \(\operatorname{cl}\), then \(P\) involves \(S_{\infty}\)._
Let \(\mathcal{L}\) be the countable relational language with an \(n\)-ary relation \(R_{\bar{a}}\) for each \(n\in\omega\) and \(\bar{a}\in I^{n}\). We define an \(\mathcal{L}\)-structure \(\mathcal{M}\) living on \(I\) where for every \(\bar{a},\bar{b}\in I^{<\omega}\) of the same length, we have that \(R_{\bar{a}}^{\mathcal{M}}(\bar{b})\) holds iff there exists some \(g\in P\) with \(g\cdot\bar{a}=\bar{b}\). It's easy to check that \(\mathcal{M}\) is ultrahomogeneous; thus, if we let \(\mathcal{F}\) be the age of \(\mathcal{M}\), we have that \((\mathcal{F},\preceq_{\mathcal{L}})\) is a Fraisse class. For each \(\mathcal{A}\in\mathcal{F}\), we have the invariant disjointifying closure operator \(\operatorname{cl}_{\mathcal{A}}\) on \(A\) that \(\mathcal{A}\) inherits as a substructure of \(\mathcal{M}\).
Let \(\mathcal{F}^{*}\) be the class of pairs \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\) where \(\mathcal{A}\in\mathcal{F}\) and \(\mathbf{c}_{\mathcal{A}}:A\to\omega\cup\{\text{null}\}\) satisfying that \(\mathbf{c}_{\mathcal{A}}(a)=\text{null}\) for every \(a\in\operatorname{cl}_{\mathcal{A}}(\emptyset)\). We say that \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\) is a strong substructure of \((\mathcal{B},\mathbf{c}_{\mathcal{B}})\), denoted \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\preceq^{*}(\mathcal{B},\mathbf{c}_{ \mathcal{B}})\), iff
1. \(\mathcal{A}\leq_{\mathcal{L}}\mathcal{B}\);
2. \(\mathbf{c}_{\mathcal{A}}(a)=\mathbf{c}_{\mathcal{B}}(a)\) for every \(a\in A\); and
3. \(\mathbf{c}_{\mathcal{B}}(b)=\text{null}\) for every \(b\in(B\cap\operatorname{cl}_{\mathcal{B}}(A))\setminus A\).
We remark that \(\preceq^{*}\) is a notion of strong substructure, and that \(\mathcal{F}^{*}\) consists of the pairs \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\) for which the empty structure \((\emptyset,\emptyset)\) is a strong substructure.
We wish to show that \((\mathcal{F}^{*},\preceq^{*})\) satisfies the assumptions of Proposition 3.1. The only tricky part is showing that it satisfies the amalgamation property. Suppose \((\mathcal{A},\mathbf{c}_{\mathcal{A}}),(\mathcal{B},\mathbf{c}_{\mathcal{B}}),(\mathcal{C},\mathbf{c}_{\mathcal{C}})\in\mathcal{F}^{*}\) satisfy \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\preceq^{*}(\mathcal{B},\mathbf{c}_{\mathcal{B}})\) and \((\mathcal{A},\mathbf{c}_{\mathcal{A}})\preceq^{*}(\mathcal{C},\mathbf{c}_{\mathcal{C}})\). Because \(\operatorname{cl}\) is disjointifying, after replacing \((\mathcal{C},\mathbf{c}_{\mathcal{C}})\) with some \((\mathcal{C}^{\prime},\mathbf{c}_{\mathcal{C}^{\prime}})\cong_{A}(\mathcal{C},\mathbf{c}_{\mathcal{C}})\), we may find \(\mathcal{D}\in\mathcal{F}\) such that \(\mathcal{B}\leq_{\mathcal{L}}\mathcal{D}\), \(\mathcal{C}\leq_{\mathcal{L}}\mathcal{D}\), \(\operatorname{cl}_{\mathcal{D}}(B)\cap C\subseteq\operatorname{cl}_{\mathcal{D}}(A)\) and \(\operatorname{cl}_{\mathcal{D}}(C)\cap B\subseteq\operatorname{cl}_{\mathcal{D}}(A)\). By shrinking \(\mathcal{D}\), we can assume that \(D=B\cup C\). Our task now is to find a coloring \(\mathbf{c}_{\mathcal{D}}\) of \(\mathcal{D}\) such that \((\mathcal{B},\mathbf{c}_{\mathcal{B}})\preceq^{*}(\mathcal{D},\mathbf{c}_{\mathcal{D}})\) and \((\mathcal{C},\mathbf{c}_{\mathcal{C}})\preceq^{*}(\mathcal{D},\mathbf{c}_{\mathcal{D}})\). This would also imply that \((\mathcal{D},\mathbf{c}_{\mathcal{D}})\in\mathcal{F}^{*}\).
The choice of coloring is natural. Define
\[\mathbf{c}_{\mathcal{D}}(d)=\begin{cases}\mathbf{c}_{\mathcal{B}}(d)&d\in B\\ \mathbf{c}_{\mathcal{C}}(d)&d\in C.\end{cases}\]
To see that it is well-defined, observe that if \(d\in B\cap C\), then \(d\in\operatorname{cl}_{\mathcal{D}}(A)\). If \(d\in A\), then of course we have \(\mathbf{c}_{\mathcal{B}}(d)=\mathbf{c}_{\mathcal{C}}(d)\); otherwise, if \(d\in\operatorname{cl}_{\mathcal{D}}(A)\setminus A\), then we also have \(\mathbf{c}_{\mathcal{B}}(d)=\operatorname{null}=\mathbf{c}_{\mathcal{C}}(d)\). Next, suppose \(d\in\operatorname{cl}_{\mathcal{D}}(B)\setminus B\), for which we need to check \(\mathbf{c}_{\mathcal{D}}(d)=\operatorname{null}\). By our choice of \(\mathcal{D}\), since \(d\not\in B\) we must have \(d\in C\), but since \(d\in\operatorname{cl}_{\mathcal{D}}(B)\cap C\), we must have \(d\in\operatorname{cl}_{\mathcal{D}}(A)\). Clearly \(d\not\in A\) (because \(d\not\in B\)), thus \(\mathbf{c}_{\mathcal{D}}(d)=\mathbf{c}_{\mathcal{C}}(d)=\operatorname{null}\). For any \(d\in\operatorname{cl}_{\mathcal{D}}(C)\setminus C\), we can confirm that \(\mathbf{c}_{\mathcal{D}}(d)=\operatorname{null}\) by the same argument.
Now we can compute (using Proposition 3.1) the limit of \((\mathcal{F}^{*},\preceq^{*}_{\mathcal{F}})\), which of course will be isomorphic to \((\mathcal{M},\mathbf{c})\) for some coloring \(\mathbf{c}\). Because \(\operatorname{cl}\) is nontrivial, we have that every color appears in \((\mathcal{M},\mathbf{c})\). Finally, we let \(H\leq P\) be the subgroup of all \(g\) such that for some \(\sigma\in S_{\infty}\), we have that for every \(a\in I\), \(\mathbf{c}(g\cdot a)=\sigma(\mathbf{c}(a))\). To see that \(H\) is closed, observe \(h\in H\) iff for every \(a,a^{\prime}\in I\), if \(\mathbf{c}(a)=\mathbf{c}(a^{\prime})\) then \(\mathbf{c}(h\cdot a)=\mathbf{c}(h\cdot a^{\prime})\), which is a closed condition. We consider the natural continuous homomorphism \(f:H\to S_{\infty}\) which sends each \(h\in H\) to the unique \(\sigma\in S_{\infty}\) such that \(\mathbf{c}(h\cdot a)=\sigma(\mathbf{c}(a))\) for every \(a\in I\). Our final task is to show that \(f\) is surjective.
Fix some arbitrary permutation \(\sigma\in S_{\infty}\). Let \(\{c_{n}\mid n\in\omega\}\) be an enumeration of \(I\). We will define finite substructures \(\mathcal{A}_{n}\leq\mathcal{M}\) and \(\mathcal{B}_{n}\leq\mathcal{M}\) and \(g_{n}\in P\) satisfying:
1. \(A_{n}\subseteq A_{n+1}\) and \(B_{n}\subseteq B_{n+1}\) for every \(n\);
2. \((\mathcal{A}_{n},\mathbf{c}\upharpoonright A_{n})\preceq^{*}(\mathcal{A}_{n+1 },\mathbf{c}\upharpoonright A_{n+1})\) and \((\mathcal{B}_{n},\mathbf{c}\upharpoonright B_{n})\preceq^{*}(\mathcal{B}_{n+1 },\mathbf{c}\upharpoonright B_{n+1})\) for every \(n\);
3. \(c_{n}\in A_{2n+1}\) and \(c_{n}\in B_{2n+2}\) for every \(n\);
4. \(g_{n}\cdot A_{n}=B_{n}\) for every \(n\);
5. \(\mathbf{c}(g_{n}\cdot a)=\sigma(\mathbf{c}(a))\) for every \(n\) and \(a\in A_{n}\);
6. \(g_{n+1}\cdot c=g_{n}\cdot c\) for every \(n\) and \(c\in A_{n}\).
Having done this, we can easily check that \(g_{n}\to g_{\infty}\) for some \(g_{\infty}\in P\) with \(g_{\infty}\in H\) and \(f(g_{\infty})=\sigma\). We proceed to the construction. Let \(A_{0}=B_{0}=\emptyset\). Having defined \(\mathcal{A}_{2n}\), by Proposition 3.1.(1) we choose \(\mathcal{A}_{2n+1}\) to be any substructure of \(\mathcal{M}\) satisfying \(c_{n}\in A_{2n+1}\) and
\[(\mathcal{A}_{2n},\mathbf{c}\upharpoonright A_{2n})\preceq^{*}(\mathcal{A}_{2n +1},\mathbf{c}\upharpoonright A_{2n+1}).\]
Now let \((\hat{\mathcal{B}},\hat{\mathbf{c}})\) be defined by
\[\hat{\mathcal{B}}=\mathcal{M}\upharpoonright(g_{2n}\cdot A_{2n+1})\quad\text{and }\quad\hat{\mathbf{c}}(b)=(\sigma\circ\mathbf{c})(g_{2n}^{-1}\cdot b).\]
Observe that
\[(\mathcal{B}_{2n},\mathbf{c}\upharpoonright B_{2n})\preceq^{*}(\hat{\mathcal{B}},\hat{\mathbf{c}}).\]
By Proposition 3.1.(2), there is some \(\mathcal{B}_{2n+1}\leq_{\mathcal{L}}M\) satisfying that
\[(\mathcal{B}_{2n},\mathbf{c}\upharpoonright B_{2n})\preceq^{*}(\mathcal{B}_{2n+ 1},\mathbf{c}\upharpoonright B_{2n+1})\]
and
\[(\mathcal{B}_{2n+1},\mathbf{c}\upharpoonright B_{2n+1})\cong_{B_{2n}}(\hat{\mathcal{B}},\hat{\mathbf{c}}).\]
By Proposition 3.1.(3) there is some \(h\in\operatorname{Stab}_{P}(B_{2n})\) with \(B_{2n+1}=h\cdot\hat{B}\) and \(\mathbf{c}(b)=\hat{\mathbf{c}}(h^{-1}\cdot b)\) for every \(b\in B_{2n+1}\). Let \(g_{2n+1}=hg_{2n}\).
The even case is symmetric, and thus we have shown that \(P\) involves \(S_{\infty}\).
## 4. A rank function and the minimal disjointifying closure operator
We now introduce a rank function which detects the existence of a disjointifying closure operator. It will also allow us to define a canonical closure operator which will happen to be the minimal disjointifying closure operator if one exists. The rank that we will define will be a natural extension of a rank function which is already in the literature. We will start by discussing this rank function and how it characterizes the pseudo-algebraic closure, and then proceed to defining our new rank function and the analogous way in which it characterizes the minimal disjointifying closure operator.
### Deissler rank
This subsection is a presentation of the theory developed by Deissler in [10] in somewhat different language.
Let \(I\) be a set with an action \(P\curvearrowright I\). Given a finite set \(B\subseteq I\) and \(a\in I\), we define an ordinal rank \(\operatorname{Drk}(a,B)\) as follows. We say \(\operatorname{Drk}(a,B)\leq 0\) iff for every \(g\in\operatorname{Stab}_{P}(B)\), \(g\cdot a=a\). In general, for \(\alpha>0\), we say \(\operatorname{Drk}(a,B)\leq\alpha\) iff there exists some \(c\in I\) such that for every \(c^{\prime}\cong_{B}c\), \(\operatorname{Drk}(a,Bc^{\prime})<\alpha\). We then define \(\operatorname{Drk}(a,B)\) to be the least \(\alpha\) such that \(\operatorname{Drk}(a,B)\leq\alpha\), if such \(\alpha\) exists, otherwise we write \(\operatorname{Drk}(a,B)=\infty\). If \(\operatorname{Drk}(a,B)\leq\alpha\) for some ordinal \(\alpha\), then we write \(\operatorname{Drk}(a,B)<\infty\).
The following is easy to check.
**Lemma 4.1**.: _Suppose for some finite \(B\subseteq I\) and \(a,c\in I\), we have \(\operatorname{Drk}(a,c^{\prime}B)<\infty\) for every \(c^{\prime}\cong_{B}c\). Then \(\operatorname{Drk}(a,B)<\infty\)._
We define an operator \(\operatorname{Dcl}\) on \(I\) as follows: given \(B\subseteq I\) and \(a\in I\), we declare \(a\in\operatorname{Dcl}(B)\) iff \(\operatorname{Drk}(a,B_{0})<\infty\) for some finite \(B_{0}\subseteq B\).
**Theorem 4.2** (ess. Deissler).: _Let \(\mathcal{L}\) be a countable relational language and \(\mathcal{M}\) a countable \(\mathcal{L}\)-structure with the natural action \(\text{Aut}(\mathcal{M})\curvearrowright\mathcal{M}\). Let \(B\subseteq M\) be finite and \(a\in M\). Then \(a\in\operatorname{Dcl}(B)\) iff for every \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) with \(B\subseteq M_{0}\), we have \(a\in M_{0}\)._
Proof.: Recall that for any set \(M_{0}\subseteq M\), we have \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) iff for every finite \(B\subseteq M_{0}\) and \(a\in M\), there is some \(a^{\prime}\in M_{0}\) with \(a^{\prime}\cong_{B}a\).
For the forward direction, suppose \(a\in\operatorname{Dcl}(B)\) and \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) with \(B\subseteq M_{0}\). We argue by induction on \(\operatorname{Drk}(a,B)\). If \(\operatorname{Drk}(a,B)\leq 0\), fix \(a^{\prime}\cong_{B}a\) with \(a^{\prime}\in M_{0}\) (such an \(a^{\prime}\) exists since \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\)); then we must have \(a^{\prime}=a\) and thus \(a\in M_{0}\). Otherwise, suppose \(\operatorname{Drk}(a,B)\leq\alpha\) with \(\alpha>0\) and the claim is true below \(\alpha\). Fix some \(c\in M\) such that \(\operatorname{Drk}(a,Bc^{\prime})<\alpha\) for every \(c^{\prime}\cong_{B}c\). Fix \(c^{\prime}\cong_{B}c\) with \(c^{\prime}\in M_{0}\); then by the induction hypothesis we have \(a\in M_{0}\).
For the backwards direction, suppose \(a\not\in\operatorname{Dcl}(B)\). We will construct an \(\mathcal{L}_{\omega_{1},\omega}\)-elementary submodel \(\mathcal{M}_{0}\) of \(\mathcal{M}\) with \(B\subseteq M_{0}\) and \(a\not\in M_{0}\).
Given \(C\subseteq C^{\prime}\subseteq M\), say that \(C^{\prime}\) has closure property \((*)\) over \(C\) iff for every finite \(D\subseteq C\) and \(e\in M\), there exists some \(e^{\prime}\cong_{D}e\) with \(e^{\prime}\in C^{\prime}\). We argue that for any \(C\subseteq M\) with \(a\not\in\operatorname{Dcl}(C)\), there is a set \(C^{\prime}\subseteq M\) containing \(C\) which has closure property \((*)\) over \(C\) and satisfies \(a\not\in\operatorname{Dcl}(C^{\prime})\). Having done this, we can define a sequence
\[B=C_{0}\subseteq C_{1}\subseteq C_{2}\subseteq...\]
such that \(a\not\in\operatorname{Dcl}(C_{n})\) and \(C_{n+1}\) has property \((*)\) over \(C_{n}\), and define \(M_{0}=\bigcup_{n\in\omega}C_{n}\).
Let \(C\subseteq M\). Write \(C\) as an increasing union \(\bigcup_{n\in\omega}B_{n}\) of finite sets. Fix an enumeration \(\{a_{n}\mid n\in\omega\}\) of \(M\). Let \(h:\omega\to\omega\) be a function such that the preimage of every \(n\in\omega\) is infinite. We recursively define using Lemma 4.1 a sequence \(d_{0},d_{1},...\) such that for every \(n\in\omega\), \(d_{n}\cong_{B_{n}}a_{h(n)}\) and \(a\not\in\operatorname{Dcl}(B_{n}d_{0}...d_{n})\). Then let \(C^{\prime}=C\cup\{d_{n}\mid n\in\omega\}\). It is easy to check that this works.
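As an illustration of Theorem 4.2, consider \(\mathcal{M}=(\mathbb{Q},<)\). For any finite \(B\subseteq\mathbb{Q}\) and \(a\not\in B\), the set \(\mathbb{Q}\setminus\{a\}\) carries an \(\mathcal{L}_{\omega_{1},\omega}\)-elementary substructure containing \(B\): for finite \(D\subseteq\mathbb{Q}\setminus\{a\}\) and \(e\in\mathbb{Q}\), the orbit of \(e\) over \(D\) is either \(\{e\}\) with \(e\in D\), or an infinite interval, which meets \(\mathbb{Q}\setminus\{a\}\). Hence \(\operatorname{Dcl}(B)=B\) for every finite \(B\), and \(\operatorname{Drk}(a,B)=\infty\) whenever \(a\not\in B\).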
**Proposition 4.3**.: _The operator \(\operatorname{Dcl}\) is a locally-finite invariant closure operator, and moreover, for any finite \(B\subseteq I\) and \(a,c\in I\), if \(a\in\operatorname{Dcl}(Bc^{\prime})\) for every \(c^{\prime}\cong_{B}c\), then \(a\in\operatorname{Dcl}(B)\)._
Proof.: Local-finiteness and invariance are immediate. The fact that \(\operatorname{Dcl}\) is a closure operator follows from Theorem 4.2. The additional property (or more accurately, its contrapositive) is also easily proved from Theorem 4.2. Suppose \(a\not\in\operatorname{Dcl}(B)\). Then there is some \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\) with \(B\subseteq M_{0}\) and \(a\not\in M_{0}\). Then for any \(c\), there must be some \(c^{\prime}\cong_{B}c\) with \(c^{\prime}\in M_{0}\). Since \(a\not\in M_{0}\), we must have \(a\not\in\operatorname{Dcl}(Bc^{\prime})\).
In [1], Gao relates the non-existence of nontrivial \(\mathcal{L}_{\omega_{1},\omega}\)-elementary substructures with dynamical properties of the automorphism group. Recall that a Polish group \(G\) is \(\operatorname{cli}\) iff there is a complete metric \(d\) on \(G\), compatible with the topology on \(G\), such that \(d\) is left-invariant, i.e. \(d(g,g^{\prime})=d(hg,hg^{\prime})\) for every \(g,g^{\prime},h\in G\).
**Theorem 4.4** (Gao, [1]).: _Let \(\mathcal{L}\) be a countable relational language and \(\mathcal{M}\) a countable \(\mathcal{L}\)-structure. Then the Polish group \(\text{Aut}(\mathcal{M})\) is \(\operatorname{cli}\) iff there is no \(M_{0}\subsetneq M\) such that \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\)._
Putting these equivalences together, we get the following "master" list of equivalences.
**Corollary 4.5** (Deissler, Gao).: _Let \(\mathcal{L}\) be a countable relational language and \(\mathcal{M}\) a countable \(\mathcal{L}\)-structure. The following are equivalent:_
1. \(\text{Aut}(\mathcal{M})\) _is_ \(\operatorname{cli}\)_;_
2. _there is no_ \(M_{0}\subsetneq M\) _such that_ \(\mathcal{M}_{0}\prec_{\mathcal{L}_{\omega_{1},\omega}}\mathcal{M}\)_;_
3. _there is no uncountable_ \(\mathcal{L}\)_-structure satisfying the Scott sentence of_ \(\mathcal{M}\)_;_
4. \(\operatorname{Dcl}(\emptyset)=M\)_;_
5. _for every_ \(a\in M\)_,_ \(\operatorname{Drk}(a,\emptyset)<\omega_{1}\)_._
### Disjointifying rank
With the Deissler rank as motivation, we move on to the disjointifying rank.
Let \(I\) be a set with a group action \(P\curvearrowright I\). Given a finite set \(B\subseteq I\) and \(a\in I\), we define an ordinal rank \(\operatorname{Krk}(a,B)\) as follows.
We say \(\operatorname{Krk}(a,B)\leq 0\) iff for every \(\pi\in P\) with \(\pi\upharpoonright B=\operatorname{id}_{B}\), \(\pi(a)=a\). In general, for \(\alpha>0\), we say \(\operatorname{Krk}(a,B)\leq\alpha\) iff at least one of the following holds:
1. there exists some \(c\) such that for every \(c^{\prime}\cong_{B}c\), \(\operatorname{Krk}(a,Bc^{\prime})<\alpha\); or
2. for every \(a^{\prime}\cong_{B}a\), either \(\operatorname{Krk}(a,Ba^{\prime})<\alpha\) or \(\operatorname{Krk}(a^{\prime},Ba)<\alpha\).
We write \(\operatorname{Krk}(a,B)<\infty\) to mean that \(\operatorname{Krk}(a,B)\leq\alpha\) for some ordinal \(\alpha\), and \(\operatorname{Krk}(a,B)=\infty\) otherwise. Ultimately, we define \(\operatorname{Krk}(a,B)\) to be the least ordinal \(\alpha\) such that \(\operatorname{Krk}(a,B)\leq\alpha\), or \(\infty\) if no such \(\alpha\) exists.
Note that if we were to remove condition (2) in the recursive case of the definition of disjointifying rank, we will just get the Deissler rank.
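To see condition (2) at work, consider Knight's model \(\mathcal{K}\). For \(a<b\) in \(K\), we have \(\operatorname{Krk}(a,\{b\})=0\): indeed \(a=f_{n}(b)\) for some \(n\), so every automorphism fixing \(b\) fixes \(a\). Hence for any \(a,a^{\prime}\in K\), either \(\operatorname{Krk}(a,\{a^{\prime}\})=0\) or \(\operatorname{Krk}(a^{\prime},\{a\})=0\), and condition (2) gives \(\operatorname{Krk}(a,\emptyset)\leq 1\) for every \(a\in K\), even though \(\operatorname{Drk}(a,\emptyset)=\infty\) for some \(a\) by Gao's characterization.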
We list some basic properties of the rank which are easily confirmed.
**Lemma 4.6**.: _The following all hold:_
1. _For finite subsets_ \(B,C\) _of_ \(I\) _satisfying_ \(C\supseteq B\)_, and_ \(a\in I\)_, we have_ \(\operatorname{Krk}(a,C)\leq\operatorname{Krk}(a,B)\)_;_
2. _For finite_ \(B\subseteq I\) _and_ \(a\in I\)_, if there exists some_ \(c\in I\) _such that_ \(\operatorname{Krk}(a,c^{\prime}B)<\infty\) _for every_ \(c^{\prime}\cong_{B}c\)_, then_ \(\operatorname{Krk}(a,B)<\infty\)_;_
3. _For finite_ \(B\subseteq I\) _and_ \(a\in I\)_, if either_ \(\operatorname{Krk}(a,a^{\prime}B)<\infty\) _or_ \(\operatorname{Krk}(a^{\prime},aB)<\infty\) _for every_ \(a^{\prime}\cong_{B}a\)_, then_ \(\operatorname{Krk}(a,B)<\infty\)_._
We define a closure operator \(\operatorname{cl}^{\min}\) on \(I\) by saying for \(a\in I\) and finite \(B\subseteq I\) that \(a\in\operatorname{cl}^{\min}(B)\) iff \(\operatorname{Krk}(a,B)<\infty\).
**Lemma 4.7**.: \(\operatorname{cl}^{\min}\) _is a closure operator._
Proof.: We show by induction on \(\alpha\) that for any finite \(C\subseteq I\) and \(a,b\in I\), if \(\operatorname{Krk}(a,C)\leq\alpha\) and \(\operatorname{Krk}(b,aC)<\infty\), then \(\operatorname{Krk}(b,C)<\infty\).
For the base case \(\alpha=0\), we now induct on \(\beta:=\operatorname{Krk}(b,aC)\). For \(\beta=0\), the claim is immediate, so let \(\beta>0\) and assume the claim is true below \(\beta\). For the first case, we assume there is some \(d\in I\) such that for every \(d^{\prime}\cong_{aC}d\), we have \(\operatorname{Krk}(b,d^{\prime}aC)<\beta\). Whenever \(d^{\prime}\in I\) satisfies \(d^{\prime}\cong_{C}d\), we have that \(d^{\prime}\cong_{aC}d\) by the fact that \(\operatorname{Krk}(a,C)\leq 0\). Therefore we have \(\operatorname{Krk}(b,d^{\prime}aC)<\beta\) for every \(d^{\prime}\cong_{C}d\), so by the induction hypothesis \(\operatorname{Krk}(b,d^{\prime}C)<\beta\) for every \(d^{\prime}\cong_{C}d\), and thus \(\operatorname{Krk}(b,C)\leq\beta\). In the second case, we have for every \(b^{\prime}\cong_{aC}b\), either \(\operatorname{Krk}(b^{\prime},baC)<\beta\) or \(\operatorname{Krk}(b,b^{\prime}aC)<\beta\). By the same argument as in the first case, \(\operatorname{Krk}(b,C)\leq\beta\) follows.
Now suppose \(\alpha>0\) and the claim is true below \(\alpha\). Suppose \(\operatorname{Krk}(a,C)\leq\alpha\) and \(\operatorname{Krk}(b,aC)<\infty\).
For the first case, we assume there is some \(d\in I\) such that for every \(d^{\prime}\cong_{C}d\), we have \(\operatorname{Krk}(a,d^{\prime}C)<\alpha\). We have by the induction hypothesis and Lemma 4.6.(1) that for every \(d^{\prime}\cong_{C}d\), \(\operatorname{Krk}(b,d^{\prime}C)<\infty\). Then by Lemma 4.6.(2), we have \(\operatorname{Krk}(b,C)<\infty\).
For the second case, we assume that for every \(a^{\prime}\cong_{C}a\), either \(\operatorname{Krk}(a^{\prime},aC)<\alpha\) or \(\operatorname{Krk}(a,a^{\prime}C)<\alpha\). For \(b^{\prime}\cong_{C}b\) and \(b^{\prime\prime}\cong_{C}b\), write \(b^{\prime}\leq b^{\prime\prime}\) iff for every \(a^{\prime}\cong_{C}a\), if \(\operatorname{Krk}(b^{\prime},a^{\prime}C)<\infty\) then \(\operatorname{Krk}(b^{\prime\prime},a^{\prime}C)<\infty\). Easily, \(\leq\) is reflexive and transitive on \(\{b^{\prime}\in I\mid b^{\prime}\cong_{C}b\}\). We argue it is also linear, similar to the proof of Proposition 3.6. Suppose we have such \(b^{\prime},b^{\prime\prime}\) such that \(b^{\prime}\not\leq b^{\prime\prime}\) and \(b^{\prime\prime}\not\leq b^{\prime}\). Then there are \(a^{\prime},a^{\prime\prime}\cong_{C}a\) such that
\[\operatorname{Krk}(b^{\prime},a^{\prime}C)<\infty\quad\text{and}\quad \operatorname{Krk}(b^{\prime\prime},a^{\prime}C)=\infty \tag{1}\]
\[\operatorname{Krk}(b^{\prime},a^{\prime\prime}C)=\infty\quad\text{and}\quad \operatorname{Krk}(b^{\prime\prime},a^{\prime\prime}C)<\infty. \tag{2}\]
We have either \(\operatorname{Krk}(a^{\prime},a^{\prime\prime}C)<\alpha\) or \(\operatorname{Krk}(a^{\prime\prime},a^{\prime}C)<\alpha\). In the first case, we have \(\operatorname{Krk}(b^{\prime},a^{\prime\prime}C)<\infty\) by the induction hypothesis, which contradicts Equation 2. In the second case, we have \(\operatorname{Krk}(b^{\prime\prime},a^{\prime}C)<\infty\) by the induction hypothesis, which contradicts Equation 1. Thus \(\leq\) is a prelinear order.
Now we argue that if \(b^{\prime}\leq b^{\prime\prime}\), then \(\operatorname{Krk}(b^{\prime\prime},b^{\prime}C)<\infty\). To see this, fix some \(a^{\prime}\cong_{C}a\) such that \(\operatorname{Krk}(b^{\prime},a^{\prime}C)<\infty\), which must exist because of \(\operatorname{Krk}(b,aC)<\infty\) and invariance. Then for any \(a^{\prime\prime}\cong_{Cb^{\prime}}a^{\prime}\), we have \(\operatorname{Krk}(b^{\prime},a^{\prime\prime}C)<\infty\). Because \(b^{\prime}\leq b^{\prime\prime}\), this means that for any \(a^{\prime\prime}\cong_{Cb^{\prime}}a^{\prime}\), we have \(\operatorname{Krk}(b^{\prime\prime},a^{\prime\prime}b^{\prime}C)<\infty\). By Lemma 4.6.(2), we have \(\operatorname{Krk}(b^{\prime\prime},b^{\prime}C)<\infty\).
Finally, by Lemma 4.6.(3) applied to what we have just proved, \(\operatorname{Krk}(b,C)<\infty\) follows.
**Proposition 4.8**.: \(\operatorname{cl}^{\min}\) _is the minimum disjointifying closure operator._
Proof.: We first check that \(\operatorname{cl}^{\min}\) is disjointifying, using equivalence (4) from Proposition 3.6. Let \(C\subseteq I\) be finite and \(a,b\in I\) with \(a\not\in\operatorname{cl}^{\min}(C)\). First we check clause (4a) from Proposition 3.6. Suppose for contradiction that for every \(a^{\prime}\cong_{C}a\), \(a^{\prime}\not\perp_{C}a\) where \(\perp\) is the independence relation derived from \(\operatorname{cl}^{\min}\). This means for every \(a^{\prime}\cong_{C}a\), \(a\in\operatorname{cl}^{\min}(a^{\prime}C)\) or \(a^{\prime}\in\operatorname{cl}^{\min}(aC)\). By definition of \(\operatorname{cl}^{\min}\) and Lemma 4.6.(3), we have \(a\in\operatorname{cl}^{\min}(C)\), a contradiction. By a similar argument using Lemma 4.6.(2), we see that clause (4b) holds as well.
Now we see that \(\operatorname{cl}^{\min}\) is the minimum. Let \(\operatorname{cl}\) be some other disjointifying closure operator. We show by induction on \(\alpha\) that for any finite \(C\subseteq I\) and \(a\in I\), if \(\operatorname{Krk}(a,C)\leq\alpha\) then \(a\in\operatorname{cl}(C)\).
If \(\alpha=0\) and for contradiction we assume \(a\not\in\operatorname{cl}(C)\), then we apply clause (4a) from Proposition 3.6 to find some \(a^{\prime}\cong_{C}a\) such that \(a^{\prime}\not\in\operatorname{cl}(aC)\). However, as \(\operatorname{Krk}(a,C)=0\), it must be the case that \(a^{\prime}=a\), a contradiction.
Now let \(\alpha>0\) and assume the claim is true below \(\alpha\). Suppose \(\operatorname{Krk}(a,C)\leq\alpha\) and, for contradiction, that \(a\not\in\operatorname{cl}(C)\). There are two cases in the definition of Krk, and we consider the first one first. Suppose there is some \(b\in I\) such that for every \(b^{\prime}\cong_{C}b\), \(\operatorname{Krk}(a,b^{\prime}C)<\alpha\). By the induction hypothesis, we then have \(a\in\operatorname{cl}(b^{\prime}C)\) for every \(b^{\prime}\cong_{C}b\). This directly contradicts clause (4b) of Proposition 3.6. The second case of the definition is handled in the same way using clause (4a) of Proposition 3.6.
## 5. Indiscernible support functions
We define another technical notion which, as we will see, is closely tied to automorphism groups involving \(S_{\infty}\). We do so with two goals in mind: to provide a motivating example of where a nontrivial disjointifying closure operator arises, and to obtain a technical tool which we will make use of later.
As usual let \(I\) be a set with an action \(P\curvearrowright I\). We write \([I]^{<\omega}\) to represent the set of finite subsets of \(I\). A function \(\operatorname{supp}:[I]^{<\omega}\to[\omega]^{<\omega}\) is a **support function** iff
1. for every finite \(A,B\subseteq I\) with \(A\subseteq B\), we have \(\operatorname{supp}(A)\subseteq\operatorname{supp}(B)\).
We say \(\operatorname{supp}\) is **nontrivial** iff furthermore
2. \(\operatorname{supp}(A)\neq\emptyset\) for some finite \(A\subseteq I\);
and finally we say \(\operatorname{supp}\) is **indiscernible** iff
3. for every finite \(A,B\subseteq I\) with \(A\subseteq B\), and for every finite \(u,v\subseteq\omega\) with \(\operatorname{supp}(A)=u\) and \(\operatorname{supp}(B)=v\), and for every \(v^{\prime}\cong_{u}v\), there exists some \(B^{\prime}\cong_{A}B\) with \(\operatorname{supp}(B^{\prime})=v^{\prime}\).
We write \(v^{\prime}\cong_{u}v\) to indicate that \(|v^{\prime}\setminus u|=|v\setminus u|\), a convention which leaves open the possibility of putting extra structure on the space of supports.
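To illustrate the definitions (a toy example of ours, not needed in the sequel; here we read \(v^{\prime}\cong_{u}v\) as also requiring \(u\subseteq v^{\prime}\)): let \(P=S_{\omega}\) act naturally on \(I=\omega\) and set \(\operatorname{supp}(A)=A\) for every finite \(A\subseteq\omega\). Monotonicity and nontriviality are immediate, and for indiscernibility, given \(A\subseteq B\) with \(u=A\), \(v=B\) and \(v^{\prime}\cong_{u}v\), the set \(B^{\prime}=v^{\prime}\) works: any permutation fixing \(A\) pointwise and carrying \(B\setminus A\) onto \(v^{\prime}\setminus A\) witnesses \(B^{\prime}\cong_{A}B\), and \(\operatorname{supp}(B^{\prime})=v^{\prime}\).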
Note that we do not make any demands that \(\operatorname{supp}\) is invariant. One could view this as meaning that a support function captures local information. From the existence of such a function \(\operatorname{supp}\), we will derive the existence of a nontrivial disjointifying closure operator. An invariant closure operator, on the other hand, is a global object, as it describes relationships between sets which are invariant under automorphisms.
We now assume that \(\operatorname{supp}\) is a nontrivial indiscernible support function, and our objective is to show that \(\operatorname{cl}^{\min}\) is nontrivial.
**Lemma 5.1**.: _Suppose \(a\in\operatorname{cl}^{\min}(B)\). Then \(\operatorname{supp}(aB)=\operatorname{supp}(B)\)._
Proof.: We prove by induction on \(\alpha\) that if \(\operatorname{Krk}(a,B)\leq\alpha\) then \(\operatorname{supp}(aB)=\operatorname{supp}(B)\).
Consider the case where \(\alpha=0\). By indiscernibility of \(\operatorname{supp}\), there is some \(a^{\prime}\cong_{B}a\) such that \(\operatorname{supp}(a^{\prime}B)\cap\operatorname{supp}(aB)=\operatorname{ supp}(B)\). By the definition of \(\operatorname{Krk}(a,B)\leq 0\), we know that \(a^{\prime}=a\), which means \(\operatorname{supp}(a^{\prime}B)=\operatorname{supp}(aB)=\operatorname{supp}(B)\) as desired.
Otherwise let \(\alpha>0\) and suppose the claim is true below \(\alpha\).
In the first case of the definition of \(\operatorname{Krk}(a,B)\leq\alpha\), there is some \(c\in I\) such that \(\operatorname{Krk}(a,c^{\prime}B)<\alpha\) for every \(c^{\prime}\cong_{B}c\). Fix some \(c^{\prime}\cong_{B}c\) such that \(\operatorname{supp}(c^{\prime}B)\cap\operatorname{supp}(aB)=\operatorname{supp}(B)\). By the induction hypothesis we have \(\operatorname{supp}(ac^{\prime}B)=\operatorname{supp}(c^{\prime}B)\) and thus \(\operatorname{supp}(aB)\subseteq\operatorname{supp}(c^{\prime}B)\), so \(\operatorname{supp}(aB)\subseteq\operatorname{supp}(c^{\prime}B)\cap\operatorname{supp}(aB)=\operatorname{supp}(B)\). Again we have \(\operatorname{supp}(aB)=\operatorname{supp}(B)\).
The second case of the definition of \(\operatorname{Krk}(a,B)\leq\alpha\) is handled in a similar way. Suppose for every \(a^{\prime}\cong_{B}a\), either \(\operatorname{Krk}(a,a^{\prime}B)<\alpha\) or \(\operatorname{Krk}(a^{\prime},aB)<\alpha\). Fix some \(a^{\prime}\cong_{B}a\) such that \(\operatorname{supp}(a^{\prime}B)\cap\operatorname{supp}(aB)=\operatorname{supp}(B)\) and \(|\operatorname{supp}(a^{\prime}B)|=|\operatorname{supp}(aB)|\). In the case that \(\operatorname{Krk}(a,a^{\prime}B)<\alpha\) we have \(\operatorname{supp}(aB)\subseteq\operatorname{supp}(aa^{\prime}B)=\operatorname{supp}(a^{\prime}B)\) and thus we can conclude \(\operatorname{supp}(aB)=\operatorname{supp}(B)\). Finally, if \(\operatorname{Krk}(a^{\prime},aB)<\alpha\) we have \(\operatorname{supp}(a^{\prime}B)\subseteq\operatorname{supp}(a^{\prime}aB)=\operatorname{supp}(aB)\). Now, since \(|\operatorname{supp}(a^{\prime}B)|=|\operatorname{supp}(aB)|\), it follows that \(\operatorname{supp}(a^{\prime}B)=\operatorname{supp}(aB)\), and since their intersection is \(\operatorname{supp}(B)\), we must have \(\operatorname{supp}(aB)=\operatorname{supp}(B)\).
We remark that if \(\operatorname{supp}\) is nontrivial, then there must be some finite \(B\) and \(a\) such that \(\operatorname{supp}(B)=\emptyset\) and \(\operatorname{supp}(aB)\supsetneq\operatorname{supp}(B)\). This is easy to check: let \(C\) be a set of minimal size with nonempty support, choose \(a\) to be an arbitrary element of \(C\), and define \(B=C\setminus\{a\}\). Thus we can conclude:
**Proposition 5.2**.: _If \(P\curvearrowright I\) has a nontrivial indiscernible support function, then it has a nontrivial disjointifying closure operator._
Proof.: This follows from Lemma 5.1 together with the preceding remark.
### Deriving an indiscernible support function from a Baire-measurable homomorphism
Our next goal is to show that if \(P\leq S_{I}\) classifies \(=^{+}\), then there is a nontrivial indiscernible support function on \(P\curvearrowright I\). We start by finding a presentation of \(=^{+}\) which is easier for us to work with.
Let \(\Delta\curvearrowright J\) be a free action of a countably-infinite group \(\Delta\) on a countably-infinite set \(J\) with infinitely-many orbits. Let \(T\subseteq J\) be a transversal for \(\Delta\curvearrowright J\) (i.e. a set which intersects every \(\Delta\)-orbit exactly once) and fix an enumeration \(T=\{t_{n}\mid n\in\omega\}\). Let \(Q\leq S_{J}\) be the set of permutations \(\pi\) satisfying that \(\delta\cdot\pi(a)=\pi(\delta\cdot a)\) for every \(\delta\in\Delta\) and \(a\in J\).
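For a concrete instance of this setup (an illustration of ours, not needed for the argument): take \(\Delta=\mathbb{Z}\) acting on \(J=\omega\times\mathbb{Z}\) by \(n\cdot(m,k)=(m,k+n)\), with transversal \(T=\omega\times\{0\}\). The action is free with infinitely many orbits, and \(Q\) then consists exactly of the permutations of the form \((m,k)\mapsto(\sigma(m),k+c_{m})\) for a permutation \(\sigma\) of \(\omega\) and integers \((c_{m})_{m\in\omega}\).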
Now define \(Y\) to be the \(G_{\delta}\) set of all injections \(f:J\to\mathbb{R}\). This is a Polish space with the topology of pointwise convergence (putting the discrete topology on \(J\)), and moreover the natural action \(Q\curvearrowright Y\) defined by
\[(g\cdot p)(a)=p(g^{-1}\cdot a)\]
is continuous with respect to this topology.
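To see that \(Y\) is indeed \(G_{\delta}\) (a routine verification, recorded for completeness):
\[Y=\bigcap_{a\neq b\in J}\{f\in\mathbb{R}^{J}\mid f(a)\neq f(b)\},\]
a countable intersection of open subsets of the Polish space \(\mathbb{R}^{J}\), so \(Y\) is Polish in the subspace topology.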
**Lemma 5.3**.: _The equivalence relations \(E_{Y}^{Q}\) and \(=^{+}\) are Borel bi-reducible._
Proof.: We first see that \(=^{+}\!\leq_{B}E_{Y}^{Q}\). Let \(\Delta=\{\delta_{n}\mid n\in\omega\}\) be an enumeration. Let \(g:\mathbb{R}\to\mathbb{R}^{\omega}\) be a Borel function satisfying
\[\{g(x)(n)\mid n\in\omega\}\cap\{g(x^{\prime})(n)\mid n\in\omega\}=\emptyset\]
for any \(x\neq x^{\prime}\in\mathbb{R}\). One may see this as an application of the axiom of choice, but we remark that in our applications we will always take \(J\) to be \(\omega\), in which case both the action of \(\Delta\) and \(T\) can be chosen to be definable. Now define \(f:\mathbb{R}^{\omega}\to Y\) by
\[f(p)=y_{p}\quad\text{where}\quad y_{p}(\delta_{n}\cdot t_{m})=g(p(m))(n)\]
for every \(t_{m}\in T\) and \(\delta_{n}\in\Delta\). This is easily seen to be a Borel reduction.
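We also note a concrete choice of \(g\) (one option among many): fix any Borel injection \(e:\mathbb{R}\times\omega\to\mathbb{R}\) and set \(g(x)(n)=e(x,n)\); then \(g(x)(n)=g(x^{\prime})(n^{\prime})\) forces \(x=x^{\prime}\), so the displayed disjointness condition holds.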
Now we see that \(E_{Y}^{Q}\leq_{B}=^{+}\). Fix an enumeration \(J=\{x_{n}\mid n\in\omega\}\) and a Borel bijection \(h:\mathbb{R}^{\Delta}\to\mathbb{R}\). Define \(f:Y\to\mathbb{R}^{\omega}\) by
\[f(y)=p_{y}\quad\text{where}\quad p_{y}(n)=h(\delta\mapsto y(\delta\cdot x_{n})).\]
Using the fact that \(Y\) consists of only injective functions, this is a reduction, and it is clearly Borel.
Let \(N\trianglelefteq Q\) be the closed normal subgroup of those \(g\in Q\) satisfying \(g\cdot a\in\Delta\cdot a\) for every \(a\in J\). Let \(\chi:S_{T}\to Q\) be the homomorphism such that for every permutation \(\sigma\in S_{T}\), the group element \(\chi(\sigma)\) is the unique one such that \(\chi(\sigma)\cdot a=\sigma(a)\) for every \(a\in T\). Then \(N\) is the normal complement of \(\operatorname{Im}(\chi)\), i.e. \(\operatorname{Im}(\chi)\cap N=\{1\}\) and \(Q=\operatorname{Im}(\chi)N\). Define \(S_{T}^{\operatorname{fin}}\) to be the countable subgroup of \(\pi\in S_{T}\) with finite support, i.e. \(\pi(a)=a\) for cofinitely-many \(a\in T\). Define \(N_{0}\) to be the set of \(h\in N\) with "finite-support" in the sense that \(h\cdot a=a\) for cofinitely-many \(a\in T\). Then define \(Q_{0}=\chi[S_{T}^{\operatorname{fin}}]N_{0}\), which is easily seen to be a countable dense subgroup of \(Q\).
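Concretely, the factorization \(Q=\operatorname{Im}(\chi)N\) can be seen as follows (a sketch): given \(g\in Q\), define \(\sigma\in S_{T}\) by letting \(\sigma(t)\) be the unique element of \(T\) lying in the orbit \(\Delta\cdot(g\cdot t)\); then \(h:=\chi(\sigma)^{-1}g\) satisfies \(h\cdot t\in\Delta\cdot t\) for every \(t\in T\), hence \(h\in N\) by equivariance, and \(g=\chi(\sigma)h\).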
Let \(I\) be a countably-infinite set and let \(P\leq S_{I}\) be a closed subgroup with the natural action \(P\curvearrowright I\). Let \(X\) be a Polish \(P\)-space. Fix a Baire-measurable homomorphism \(f:Y\to X\) from \(E_{Y}^{Q}\) to \(E_{X}^{P}\). Then the action \(Q\curvearrowright Y\) restricts to an action \(N\curvearrowright Y\) and \(f\) is also a homomorphism from \(E_{Y}^{N}\) to \(E_{X}^{P}\). From \(f\) we will derive an indiscernible support function. Assuming that \(f\) is not degenerate in a way which we will soon make precise, the support function will be nontrivial.
Let \(C\subseteq Y\) be the comeager set satisfying properties (1)-(4) in Lemma 2.3 applied to \(f\) as a Baire-measurable homomorphism from \(E_{Y}^{Q}\) to \(E_{X}^{P}\), with \(Q_{0}\) as the countable dense subgroup of \(Q\).
Fix some point \(y_{0}\in C\). For finite \(A\subseteq I\) and \(u\subseteq T\), say that \(u\)**supports**\(A\) iff there is a basic open neighborhood \(U\ni y_{0}\) such that for every \(y\in U\cap C\) and for every \(g\in\operatorname{Stab}_{Q_{0}}(u)\), if \(g\cdot y\in U\) then \(f(g\cdot y)\in\operatorname{Stab}_{P}(A)\cdot f(y)\). By Lemma 2.3, for every finite \(A\) there exists some finite \(u\) such that \(u\) supports \(A\).
**Claim 1**.: _For every finite \(A\subseteq I\) and \(u,v\subseteq T\), if both \(u\) and \(v\) support \(A\) then \(u\cap v\) supports \(A\)._
Proof.: Let \(U\ni y_{0}\) be a basic open neighborhood witnessing that \(u\) and \(v\) both support \(A\) (taking the intersection of the two witnessing neighborhoods if necessary). We argue that \(U\) witnesses that \(u\cap v\) supports \(A\) as well. Let \(y\in U\cap C\) and \(g\in\operatorname{Stab}_{Q_{0}}(u\cap v)\) be arbitrary such that \(g\cdot y\in U\). Write \(g=\chi(\pi)h\) where \(\pi\in S_{T}^{\operatorname{fin}}\cap\operatorname{Stab}_{S_{T}}(u\cap v)\) and \(h\in\operatorname{Stab}_{N_{0}}(u\cap v)\). Fix \(h_{0}\in\operatorname{Stab}_{N_{0}}(v)\) and \(h_{1}\in\operatorname{Stab}_{N_{0}}(T\setminus v)\) such that \(h=h_{1}h_{0}\).
Let \(W\subseteq T\) be a large enough finite set such that \(\Delta\cdot W\) contains the support of \(\pi\) and \(h\), the support of \(U\), as well as \(u\) and \(v\).
We first check that there exist \(\pi_{0},\pi_{2}\in\operatorname{Stab}_{S_{T}}(v)\) and \(\pi_{1}\in\operatorname{Stab}_{S_{T}}(u)\), each with finite support, such that
1. \(\pi=\pi_{2}\circ\pi_{1}\circ\pi_{0}\); and
2. \(\chi(\pi_{0})h_{0}\cdot y\in U\); and
3. \(\chi(\pi_{1})h_{1}\chi(\pi_{0})h_{0}\cdot y\in U\).
Let \(\sigma\in S_{T}\) be an involution such that \(\sigma[u\setminus v]\cap W=\emptyset\) with support \((u\setminus v)\cup\sigma[u\setminus v]\). By a density argument, given that \(y\in U\) and \(\chi(\pi)h\cdot y\in U\), we may also ensure \(\chi(\sigma)h_{0}\cdot y\in U\) and \(\chi(\sigma)\chi(\pi)h\cdot y\in U\). Define \(\pi_{0}=\pi_{2}=\sigma\).
With the observation that \(h_{1}\) and \(\chi(\pi_{0})\) commute, it is enough to find \(\pi_{1}\) to satisfy condition (1) and then conditions (2) and (3) would immediately follow.
Define \(\pi_{1}\in S_{T}\) by
\[\pi_{1}(a)=\begin{cases}\sigma(\pi(\sigma(a)))&a\in\sigma[u\setminus v]\text { and }\pi(\sigma(a))\in u\setminus v\\ \pi(\sigma(a))&a\in\sigma[u\setminus v]\text{ and }\pi(\sigma(a))\not\in u \setminus v\\ \sigma(\pi(a))&a\not\in u\setminus v\text{ and }\pi(a)\in u\setminus v\\ a&a\in u\\ \pi(a)&\text{otherwise.}\end{cases}\]
It is easy to check that \(\pi_{1}\in\operatorname{Stab}_{S_{T}}(u)\) and has finite support. Checking that it is well-defined is straightforward. To see that \(\pi=\pi_{2}\circ\pi_{1}\circ\pi_{0}\), we fix an arbitrary \(a\in J\) and check that \(\pi(a)=(\pi_{2}\circ\pi_{1}\circ\pi_{0})(a)\).
First consider the case that \(a\in u\setminus v\) and \(\pi(a)\not\in u\setminus v\). Then by the definition of \(\pi_{1}\), we have
\[\pi_{2}(\pi_{1}(\pi_{0}(a)))=\sigma(\pi_{1}(\sigma(a)))=\sigma(\pi(a))=\pi(a).\]
The case that \(a\in u\setminus v\) and \(\pi(a)\in u\setminus v\) is similar. The next case we have is \(a\not\in u\setminus v\) and \(\pi(a)\in u\setminus v\) where we have
\[\pi_{2}(\pi_{1}(\pi_{0}(a)))=\sigma(\pi_{1}(\sigma(a)))=\sigma(\pi_{1}(a))= \sigma(\sigma(\pi(a)))=\pi(a).\]
The case \(a\in u\cap v\) is easily checked as \(a\) is fixed by \(\pi_{0}\), \(\pi_{1}\), and \(\pi_{2}\). The final case is that \(a\not\in u\) and \(\pi(a)\not\in u\). Then
\[\pi_{2}(\pi_{1}(\pi_{0}(a)))=\sigma(\pi_{1}(\sigma(a)))=\sigma(\pi_{1}(a))= \sigma(\pi(a))=\pi(a).\]
We can conclude then that
1. \(\chi(\pi_{0})h_{0}\in\operatorname{Stab}_{Q_{0}}(v)\) and \(\chi(\pi_{0})h_{0}\cdot y=\chi(\sigma)h_{0}\cdot y\in U\) and thus \(f(\chi(\pi_{0})h_{0}\cdot y)\in\operatorname{Stab}_{P}(A)\cdot f(y)\);
2. \(\chi(\pi_{1})h_{1}\in\operatorname{Stab}_{Q_{0}}(u)\) and \(\chi(\pi_{1})h_{1}\cdot(\chi(\pi_{0})h_{0}\cdot y)=\chi(\sigma)\chi(\pi)h\cdot y\in U\) and thus \(f(\chi(\pi_{1})h_{1}\chi(\pi_{0})h_{0}\cdot y)\in\operatorname{Stab}_{P}(A)\cdot f(y)\); and
3. \(\chi(\pi_{2})\in\operatorname{Stab}_{Q_{0}}(v)\) and \(\chi(\pi_{2})\cdot(\chi(\pi_{1})h_{1}\cdot(\chi(\pi_{0})h_{0}\cdot y))=\chi(\pi)h\cdot y\in U\), and thus \(f(\chi(\pi)h\cdot y)\in\operatorname{Stab}_{P}(A)\cdot f(y)\)
as desired.
We conclude in particular that for every \(A\) there is a minimal \(u\) which supports \(A\). The minimal support of \(A\), denoted \(\operatorname{supp}(A)\), is called **the support of \(A\)**. Now we wish to show that \(\operatorname{supp}\) is indiscernible. We will prove this after the following claim.
**Claim 2**.: _For any \(u\) and \(A\), if \(u\) is a support of \(A\) witnessed by neighborhood \(U_{0}\ni y_{0}\), then for any finite-support \(\sigma\in S_{T}\) and \(h\in N_{0}\) and \(g\in P\) with \(\chi(\sigma)h\cdot y_{0}\in U_{0}\) and \(f(\chi(\sigma)h\cdot y_{0})=g\cdot f(y_{0})\), we have that \(\sigma^{-1}\cdot u\) is a support of \(g^{-1}\cdot A\)._
Proof.: Let \(v=\sigma^{-1}\cdot u\) and \(B=g^{-1}\cdot A\). Observe that \(\operatorname{Stab}_{P}(B)=g^{-1}\operatorname{Stab}_{P}(A)g\) and \(\operatorname{Stab}_{S_{T}}(v)=\sigma^{-1}\operatorname{Stab}_{S_{T}}(u)\sigma\). We show that \(v\) is a support of \(B\).
Find open \(U_{1}\ni y_{0}\) by Lemma 2.3.(4) such that for every \(y\in U_{1}\cap C\), \(\chi(\sigma)h\cdot y\in U_{0}\) and \(f(\chi(\sigma)h\cdot y)\in g\operatorname{Stab}_{P}(B)\cdot f(y)\). We argue that \(U_{1}\) witnesses that \(v\) supports \(B\).
Let \(y\in U_{1}\cap C\) be arbitrary. Let \(g_{v}\in\operatorname{Stab}_{Q_{0}}(v)\) be arbitrary and suppose \(g_{v}\cdot y\in U_{1}\). We want to show that \(f(g_{v}\cdot y)\in\operatorname{Stab}_{P}(B)\cdot f(y)\). Because \(y,g_{v}\cdot y\in U_{1}\) we have
\[f(\chi(\sigma)h\cdot y)\in g\operatorname{Stab}_{P}(B)\cdot f(y) \tag{3}\]
and
\[f(\chi(\sigma)hg_{v}\cdot y)\in g\operatorname{Stab}_{P}(B)\cdot f(g_{v}\cdot y). \tag{4}\]
Because \(\chi(\sigma)h\cdot y,\chi(\sigma)hg_{v}\cdot y\in U_{0}\) and
\[\chi(\sigma)hg_{v}\cdot y=[(\chi(\sigma)h)g_{v}(\chi(\sigma)h)^{-1}]\chi(\sigma)h\cdot y,\]
where
\[(\chi(\sigma)h)g_{v}(\chi(\sigma)h)^{-1}\in\operatorname{Stab}_{Q_{0}}(u)\]
we have
\[f(\chi(\sigma)h\cdot y)\in\operatorname{Stab}_{P}(A)\cdot f(\chi(\sigma)hg_{v} \cdot y). \tag{5}\]
Thus putting these together we have
\[f(g_{v}\cdot y) \in\operatorname{Stab}_{P}(B)g^{-1}\cdot f(\chi(\sigma)hg_{v}\cdot y)\] \[\subseteq\operatorname{Stab}_{P}(B)g^{-1}\operatorname{Stab}_{P}(A)\cdot f(\chi(\sigma)h\cdot y)\] \[\subseteq\operatorname{Stab}_{P}(B)g^{-1}\operatorname{Stab}_{P}(A)g\operatorname{Stab}_{P}(B)\cdot f(y)\] \[=\operatorname{Stab}_{P}(B)\cdot f(y),\]
where the first inclusion is from Equation 4, the second from Equation 5, and the third from Equation 3 as desired.
Now we see that \(\operatorname{supp}\) is indiscernible. Let \(A,B\subseteq I\) be finite and \(u,v\subseteq T\) finite such that \(\operatorname{supp}(A)=u\) and \(\operatorname{supp}(B)=v\), and let \(U_{0}\ni y_{0}\) be the open neighborhood witnessing this. Let \(v^{\prime}\subseteq T\) be such that \(v^{\prime}\cong_{u}v\). Fix a finite-support \(\sigma\in S_{T}\) such that \(\sigma(a)=a\) for every \(a\in u\) and \(\sigma[v^{\prime}]=v\). By a density argument, fix some \(h\in\operatorname{Stab}_{N_{0}}(u)\) such that \(\chi(\sigma)h\cdot y_{0}\in U_{0}\). Because \(\chi(\sigma)h\in\operatorname{Stab}_{Q_{0}}(u)\), there is some \(g\in\operatorname{Stab}_{P}(A)\) such that \(f(\chi(\sigma)h\cdot y_{0})=g\cdot f(y_{0})\). By the claim (applied with \(B\) and \(v\) in place of \(A\) and \(u\)), this means that \(v^{\prime}\) is a support of \(g^{-1}\cdot B\), and because \(g\in\operatorname{Stab}_{P}(A)\), we have \(B\cong_{A}g^{-1}\cdot B\). A symmetric argument gives us that \(v^{\prime}\) is in fact the minimal support of \(g^{-1}\cdot B\), as desired.
Our final task is to show that \(\operatorname{supp}\) is nontrivial, otherwise it carries no meaningful structure.
**Claim 3**.: _If \(\operatorname{supp}\) is trivial, then \(f(y)\;E_{X}^{P}\;f(y^{\prime})\) for every \(y,y^{\prime}\in C\)._
Proof.: Suppose \(\operatorname{supp}\) is trivial, i.e. the support of every \(A\) is empty. In other words, for every finite \(A\subseteq I\), there is an open neighborhood \(U_{A}\) of \(y_{0}\) such that for every \(y\in U_{A}\cap C\) and every \(h\in Q_{0}\) with \(h\cdot y\in U_{A}\), we have \(f(h\cdot y)\in\operatorname{Stab}_{P}(A)\cdot f(y)\).
Write \(I\) as an increasing union \(\bigcup_{n}A_{n}\) of finite sets. Fix a compatible complete metric \(d\) on \(Y\). For each \(n\), let \(U_{n}\) be an open neighborhood of \(y_{0}\) with \(d\)-diameter less than \(1/n\) and contained in \(U_{A_{n}}\).
Let \(y\in C\) be arbitrary. We will show that \(f(y)\;E_{X}^{P}\;f(y_{0})\). For every \(n\), let \(h_{n}\in Q_{0}\) be such that \(h_{n}\cdot y\in C\cap U_{n}\). We have \(h_{n}\cdot y\to y_{0}\), and so by continuity of \(f\) on \(C\), we have \(f(h_{n}\cdot y)\to f(y_{0})\).
For every \(n\) we may find \(g_{n}\in\operatorname{Stab}_{P}(A_{n})\) such that \(f(h_{n+1}\cdot y)=g_{n}\cdot f(h_{n}\cdot y)\). Defining \(g_{n}^{*}:=g_{n}...g_{0}\), we have \(f(h_{n}\cdot y)=g_{n}^{*}\cdot f(y)\) for every \(n\). The sequence \((g_{n}^{*})\) is Cauchy and thus \(g_{n}^{*}\to g_{\infty}^{*}\) for some \(g_{\infty}^{*}\in P\). By continuity of the group action, we have \(g_{\infty}^{*}\cdot f(y)=f(y_{0})\).
Observe that for any \(x,x^{\prime}\in J\), the set of \(y\) such that \(y(x)\neq y_{0}(x^{\prime})\) is dense and open. Thus there is some \(y\in C\) such that \(y(x)\neq y_{0}(x^{\prime})\) for every \(x,x^{\prime}\in J\). In particular, this means \(Q\cdot y\neq Q\cdot y_{0}\). By essentially the same argument we have that \(E_{Y}^{Q}\) is meager. We conclude that if \(f\) is not just a homomorphism but a reduction, then \(\operatorname{supp}\) is nontrivial. The
same could be concluded if \(f\) witnesses that \(E_{Y}^{Q}\) is not generically ergodic with respect to \(E_{X}^{P}\).
Combining the results of this section, we get:
**Theorem 5.4**.: _Let \(I\) be a countable set and \(P\leq S_{I}\) a closed subgroup with the natural action \(P\curvearrowright I\). The following are equivalent:_
1. \(P\curvearrowright I\) _has a nontrivial indiscernible support function;_
2. \(P\) _classifies_ \(=^{+}\)_;_
3. _there is a Polish_ \(P\)_-space_ \(X\) _such that_ \(=^{+}\) _is not generically-ergodic with respect to_ \(E_{X}^{P}\)_._
We have now proved all of the equivalences of the main theorem.
|
2309.09687 | The Cygnus Allscale Survey of Chemistry and Dynamical Environments:
CASCADE. II. A detailed kinematic analysis of the DR21 Main outflow | Molecular outflows are believed to be a key ingredient in the process of star
formation. The molecular outflow associated with DR21 Main in Cygnus-X is one
of the most extreme, in mass and size, molecular outflows in the Milky Way. The
outflow is suggested to belong to a rare class of explosive outflows which are
formed by the disintegration of protostellar systems. We aim to explore the
morphology, kinematics,and energetics of the DR21 Main outflow, and compare
those properties to confirmed explosive outflows to unravel the underlying
driving mechanism behind DR21. Line and continuum emission are studied at a
wavelength of 3.6\,mm with IRAM 30 m and NOEMA telescopes as part of the Cygnus
Allscale Survey of Chemistry and Dynamical Environments (CASCADE) program. The
spectra include ($J= 1-0$) transitions of HCO$^+$, HCN, HNC, N$_2$H$^+$,
H$_2$CO, CCH tracing different temperature and density regimes of the
outflowing gas at high-velocity resolution ($\sim$ 0.8 km s$^{-1}$). The map
encompasses the entire DR21 Main outflow and covers all spatial scales down to
a resolution of ~3" ($\sim$ 0.02 pc). Integrated intensity maps of the HCO$^+$
emission reveal a strongly collimated bipolar outflow with significant overlap
of the blue- and red-shifted emission. The opening angles of both outflow lobes
decrease with velocity, from $\sim80$ to 20$^{\circ}$ for the velocity range
from 5 to 45 km s$^{-1}$ relative to the source velocity. No evidence is found
for the presence of elongated, "filament-like" structures expected in explosive
outflows. N$_2$H$^+$ emission near the western outflow lobe reveals the
presence of a dense molecular structure which appears to be interacting with
the DR21 Main outflow. The overall morphology as well as the detailed
kinematics of the DR21 Main outflow is more consistent with that of a typical
bipolar outflow instead of an explosive counterpart. | I. M. Skretas, A. Karska, F. Wyrowski, K. M. Menten, H. Beuther, A. Ginsburg, A. Hernández-Gómez, C. Gieser, S. Li, W. -J. Kim, D. A. Semenov, L. Bouscasse, I. B. Christensen, J. M. Winters, A. Hacar | 2023-09-18T11:50:00Z | http://arxiv.org/abs/2309.09687v1 | The Cygnus Allscale Survey of Chemistry and Dynamical Environments: CASCADE. II. A detailed kinematic analysis of the DR21 Main outflow
###### Abstract
Context:Molecular outflows are believed to be a key ingredient in the process of star formation. The molecular outflow associated with DR21 Main in Cygnus-X is one of the most extreme, in mass and size, molecular outflows in the Milky Way. The outflow is suggested to belong to a rare class of explosive outflows which are formed by the disintegration of protostellar systems.
Aims:We aim to explore the morphology, kinematics, and energetics of the DR21 Main outflow, and compare those properties to confirmed explosive outflows to unravel the underlying driving mechanism behind DR21.
Methods:Line and continuum emission are studied at a wavelength of 3.6 mm with IRAM 30 m and NOEMA telescopes as part of the Cygnus Allscale Survey of Chemistry and Dynamical Environments (CASCADE) program. The spectra include (\(J=1-0\)) transitions of HCO\({}^{+}\), HCN, N\({}_{2}\)H\({}^{+}\), H\({}_{2}\)CO, CCH (among others) tracing different temperature and density regimes of the outflowing gas at high-velocity resolution (\(\sim 0.8\) km s\({}^{-1}\)). The map encompasses the entire DR21 Main outflow and covers all spatial scales down to a resolution of 3 \({}^{\prime\prime}\) (\(\sim 0.02\) pc).
Results:Integrated intensity maps of the HCO\({}^{+}\) emission reveal a strongly collimated bipolar outflow with significant overlap of the blue and red-shifted emission. The opening angles of both outflow lobes decrease with velocity, from \(\sim 80\) to 20\({}^{\circ}\) for the velocity range from 5 to 45 km s\({}^{-1}\) relative to the source velocity. No evidence is found for the presence of elongated, "filament-like" structures expected in explosive outflows. N\({}_{2}\)H\({}^{+}\) emission near the western outflow lobe reveals the presence of a dense molecular structure which appears to be interacting with the DR21 Main outflow.
Conclusions:The overall morphology as well as the detailed kinematics of the DR21 Main outflow is more consistent with that of a typical bipolar outflow instead of an explosive counterpart.
## 1 Introduction
Molecular outflows are a ubiquitous part of star formation arising from both high and low mass protostars (Arce et al. 2007; Frank et al. 2014; Bally 2016). A new type of outflows, formed by the disintegration of protostellar systems due to a merging event, has been proposed and tied to regions of high-mass star formation (Bally & Zinnecker 2005; Zapata et al. 2009). The massive outflow of DR21 Main is one of the proposed candidates for such explosive outflows (Zapata et al. 2013). The large angular extent of the DR21 Main outflow allows for a detailed analysis of its structure and properties, and their comparison to those of other explosive outflow candidates.
Molecular protostellar outflows range from highly collimated molecular jets like HH211 (Gueth & Guilloteau 1999) all the way to wide-angled outflows from high mass sources (Beuther & Shepherd 2005a). In general, outflows tend to appear "narrower" at higher velocities (Bachiller & Tafalla 1999), and become less collimated as they evolve (Beuther & Shepherd 2005b; Arce & Sargent 2006; Offner et al. 2011; Hsieh et al. 2023). They can vary significantly in size and energetics, with sizes from 0.1 pc up to pc scales and momentum rates between 10\({}^{-5}\) and 10\({}^{-2}\) M\({}_{\odot}\) km s\({}^{-1}\) yr\({}^{-1}\), for low mass sources and some O type stars, respectively (e.g., Maud et al. 2015). Some outflow properties, such as their mass, force and mechanical luminosity, correlate well with intrinsic parameters of their driving sources, e.g., the bolometric luminosity (Bally & Lada 1983; Cabrit & Bertout 1992; Wu et al. 2004) and the mass of the molecular gas envelope of their driving source (Bontemps et al. 1996; Beuther et al. 2002), suggesting a common driving mechanism in both the low and high mass sources. Clearly, molecular outflows play a critical role in regulating star formation by removing excess angular momentum and thus facilitating the further mass growth of a protostellar system (Blandford & Payne, 1982; Machida, 2014), and partly in dispersing the surrounding envelope, reducing the available mass reservoir (Arce & Sargent, 2006).
Due to their large size and the energy they carry, molecular outflows can have a significant impact on the surrounding interstellar medium (ISM) over different spatial scales. Firstly, at envelope scales (10\({}^{3}\)-10\({}^{4}\) AU), powerful young outflows entrain and clear out dense material, giving rise to bipolar cavities (e.g. Gueth et al., 1997; Velusamy & Langer, 1998; Arce & Sargent, 2004, 2005). At core scales (0.1-0.3 pc), outflows are considered a significant contributor to the turbulence (Myers et al., 1988; Zhang et al., 2005). In addition, outflows from high mass young stellar objects (YSOs) might impact the morphology and even break apart the host molecular cloud (Fuente et al., 1998; Benedettini et al., 2004). Finally, the propagation of outflows through the surrounding dense material leads to the formation of shocks, which locally compress and heat the gas, and drive chemical processes enriching the ISM (e.g. Kaufman & Neufeld, 1996; Flower & Pineau Des Forets, 2010; Burkhardt et al., 2019).
A newly proposed type of molecular outflows is the so-called "explosive dispersal outflows", whose origin appears to be linked to the disintegration of young stellar systems (Bally et al., 2017; Rivera-Ortiz et al., 2021) or to protostellar mergers (Bally & Zinnecker, 2005). The interpretation is limited due to the small sample of explosive-outflow candidates: Orion-KL (Zapata et al., 2009), DR21 Main (Zapata et al., 2013), G5.89 (Zapata et al., 2019), and IRAS 16076-5134 (Guzmán Ccolque et al., 2022). Nevertheless, these explosive outflows share the following characteristics (Zapata et al., 2009, 2017): (i) they consist of multiple straight, narrow and relatively isotropically distributed filament-like structures; (ii) these filament-like structures should all point towards the origin point of the explosive outflow and show an increase in velocity with the distance from the origin point akin to a Hubble flow; (iii) they have a significant overlap of their blue- and red-shifted emission components. The filament-like structures of explosive outflows form because all material is simultaneously accelerated in the explosion. As a result, faster moving material has traveled further away from the source and is trailed by the slower parts of the outflow. Overall, the properties of those outflows have been mostly studied using low\(-J\) CO transitions at high angular resolution (e.g. Zapata et al., 2009, 2013, 2019). The inner parts of the molecular outflows revealed multiple filament-like structures that make up the explosive outflows. At the same time, the lack of similar observations in other molecular tracers limits our understanding of their chemistry and various physical gas components.
The DR21 Main outflow is a particularly interesting explosive outflow candidate (Zapata et al., 2013), as it is one of the most massive (\(M_{\rm out}>3000\) M\({}_{\odot}\)) and energetic (\(E_{\rm kin}>2\)\(\times 10^{48}\) erg) outflows detected in our Galaxy (Garden et al., 1986, 1991b), first traced in the vibrationally excited 2.12 \(\mu\)m line of shock-excited molecular hydrogen (H\({}_{2}\)). DR21 Main itself is a compact HII region prominent at radio wavelengths. It is located in the Cygnus-X high-mass star-forming region/molecular cloud complex (Leung & Thaddeus, 1992), at the southern end of the DR21 molecular ridge (Dickel et al., 1978), and at a distance of 1.5 kpc (Rygl et al., 2012). The outflow appears bipolar, with the outflow lobes extending in an East-West direction (Garden et al., 1986, 1991a; Garden & Carlstrom, 1992; Schneider et al., 2010). High velocity low\(-J\) CO emission has also been reported in the North-South direction (Garden et al., 1991b). The blue- and red-shifted parts of the outflow overlap significantly, suggesting that it extends very close to the plane of the sky (Cruz-Gonzalez et al., 2007). It was initially suggested that the DR21 Main outflow is driven by a massive protostar, with \(L_{\rm bol}\) of \(\sim 10^{5}-10^{6}\)\(L_{\odot}\) (Garden et al., 1991b; Garden & Carlstrom, 1992), but such a source has not yet been identified (Cruz-Gonzalez et al., 2007). The absence of a clearly detected driving source, along with the detection of some filament-like structures in CO (1 - 0) emission, led Zapata et al. (2013) to suggest a possible explosive nature for the DR21 Main outflow.
In this work, we aim to study the morphology, kinematics and energetics of the DR21 Main outflow using observations in multiple molecular lines, sensitive to a range of physical conditions. We also aim to determine whether those characteristics of the DR21 Main outflow are consistent with those expected for explosive outflows or, rather, for typical protostellar outflows.
This work is a part of the Max Planck IRAM Observatory Program (MIOP) "Cygnus Allscale Survey of Chemistry and Dynamical Environments (CASCADE)" (Beuther et al., 2022). CASCADE aims to map significant parts of the Cygnus-X molecular cloud complex at high angular resolution and with a broad bandpass using the Northern Extended Array for Millimeter Astronomy (NOEMA) and the 30 m telescope, both operated by the Institut de Radioastronomie Millimetrique (IRAM). The combination of velocity-resolved single dish and interferometric observations offers the high resolution necessary to resolve the outflow structure without losing information on extended emission. CASCADE aims to take advantage of these high quality observations to connect the transition of gas all the way from the large scales of molecular clouds down to the small scales of cores, to look for signs of collapse or feedback, to investigate the impact of star-forming cores on their surroundings, and to search for possible trends with evolutionary stage, among other goals. The scope and goals of CASCADE are discussed in detail by Beuther et al. (2022).
The paper is organized as follows. Section 2 describes the observations from CASCADE. Section 3 presents line detections and maps of the DR21 Main outflow in several molecular transitions, and provides the analysis of outflow properties. In Section 4 the results are discussed and scenarios for the origin of the DR21 Main outflow are explored along with its interactions with the surrounding molecular cloud. Finally, Section 5 contains the summary and conclusions.
## 2 Observations
A detailed overview of the CASCADE program is given in Beuther et al. (2022). In brief, CASCADE covers all high column density areas in the Cygnus-X molecular cloud complex using 40 mosaics, each covering 16 arcmin\({}^{2}\). Each of the mosaics corresponds to 78 NOEMA pointings and was observed in both the C and D configurations. The observations have a total bandwidth of 16 GHz, 8 in each sideband, at the 3.6 mm window. The full bandwidth is covered with a spectral resolution of 2.0 MHz, but selected parts, surrounding the most important lines, are also covered by additional high resolution correlator units providing a spectral resolution of 62.5 kHz. The DR21 Main outflow is covered by two of the NOEMA mosaics, which were observed between 2020 May 29 and November 6. During that time, the array consisted of 10 antennas, yielding baselines between 15 m and 365 m. The strong quasars 3C345 and 3C273 were used as bandpass calibrators, MWC349 and 2010+723 were used for flux calibration and 2005+403, 2120+445, 2050+363 and 2013+370 were used for gain calibration. Complementary single-dish observations were carried out with the IRAM 30m telescope between 2020 February and July, in order to provide the missing short spacing information. These observations will be presented in detail in an upcoming paper by Christensen et al. (in prep.).
The calibration and imaging of the data was done using the CLIC and MAPPING software, which are part of the GILDAS package1. The NOEMA observations are combined with the IRAM 30m data using the UV_SHORT task. The resulting single channel \(\sigma_{\rm rms}\) noise, for a channel width of 0.8 km s\({}^{-1}\), and beam sizes for all observed lines are summarized in Table 1.
Footnote 1: [https://www.iram.fr/IRAMFR/GILDAS/](https://www.iram.fr/IRAMFR/GILDAS/)
## 3 Results and analysis
We present the CASCADE observations for an area surrounding DR21 Main that covers the entirety of its outflow. Our data allow us to analyze the kinematics, morphology and energetics of the DR21 Main outflow and to contribute to a discussion about its nature.
### Molecular detections
Several molecular lines are detected in the CASCADE observations of the DR21 Main (see Table 2). The spatial distribution of the emission can be divided into three cases: (i) tracing the outflow (extended emission in the west-east (W-E) direction), (ii) tracing the DR21 ridge (extended emission in the north-south (N-S) direction), and (iii) sporadic (compact emission that appears in multiple locations) or compact emission (see Fig. 1). The different tracers can therefore be used to examine the morphology of the various gas components in DR21 Main. Integrated intensity contour maps for all the emission lines are shown in Appendix A.
The contour map of the HCO\({}^{+}\) integrated intensity (Fig. 1, left panel) shows that most of the emission arises from the area
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Species} & \multirow{2}{*}{Transition} & Frequency & \(\sigma_{\rm rms}\) & Beam & E\({}_{\rm up}\) & Log(A\({}_{\rm ul}\)) \\ & & [GHz] & \(\left[\frac{\rm mJy}{\rm beam}\right]\) & [arcsec] & [K] & [s\({}^{-1}\)] \\ \hline Continuum & - & 82.028 & 0.05 & \(2.80\times 2.54\) & - & - \\ DCO\({}^{+}\) & (1-0) & 72.039 & 15 & \(3.57\times 3.19\) & 3.46 & -4.16 \\ CCD & (1-0) & 72.108 & 15 & \(3.57\times 3.19\) & 3.46 & -6.06 \\ DCN & (1-0) & 72.415 & 16 & \(3.56\times 3.18\) & 3.47 & -4.88 \\ SO\({}_{2}\) & (6\({}_{0,6}\)-5\({}_{1,5}\)) & 72.758 & 14 & \(3.54\times 3.16\) & 19.15 & -5.56 \\ HCCCN & (8-7) & 72.784 & 14 & \(3.54\times 3.16\) & 15.72 & -4.53 \\ H\({}_{2}\)CO & (1\({}_{0,1}\)-0\({}_{0,0}\)) & 72.838 & 16 & \(3.90\times 3.28\) & 3.50 & -5.09 \\ CH\({}_{3}\)CN & (4\({}_{k}\)-3\({}_{k}\)) & 73.590 & 12 & \(3.51\times 3.13\) & 8.83 & -4.66 \\ DNC & (1-0) & 76.306 & 10 & \(3.24\times 2.90\) & 3.66 & -4.79 \\ CH\({}_{3}\)OH & (5\({}_{0,5}\)-4\({}_{1,3}\)) & 76.510 & 10 & \(3.24\times 2.89\) & 47.93 & -6.05 \\ NH\({}_{2}\)D & (1\({}_{1,1}\)-1\({}_{0,1}\)) & 85.926 & 10 & \(2.77\times 2.51\) & 20.68 & -5.71 \\ H\({}^{13}\)CN & (1-0) & 86.340 & 9 & \(2.75\times 2.50\) & 4.14 & -4.65 \\ H\({}^{13}\)CO\({}^{+}\) & (1-0) & 86.754 & 7 & \(2.74\times 2.49\) & 4.16 & -4.41 \\ SiO & (2-1) & 86.847 & 11 & \(3.25\times 2.70\) & 6.25 & -4.53 \\ HN\({}^{13}\)C & (1-0) & 87.091 & 9 & \(2.73\times 2.48\) & 4.18 & -4.73 \\ CCH & (1-0) & 87.329 & 10 & \(2.72\times 2.47\) & 4.19 & -5.90 \\ HNCO & (4\({}_{0,4}\)-3\({}_{0,3}\)) & 87.925 & 8 & \(2.71\times 2.45\) & 10.55 & -5.06 \\ HCN & (1-0) & 88.632 & 13 & \(3.18\times 2.64\) & 4.25 & -4.62 \\ HCO\({}^{+}\) & (1-0) & 89.189 & 10 & \(3.29\times 2.74\) & 4.28 & -4.38 \\ HNC & (1-0) & 90.664 & 11 & \(3.12\times 2.57\) & 4.35 & -4.57 \\ HCCCN & (10-9) & 90.979 & 9 & \(2.64\times 2.39\) & 24.01 & -4.24 \\ CH\({}_{3}\)CN & (5\({}_{k}\)-4\({}_{k}\)) & 91.987 & 9 & \(2.61\times 2.36\) & 13.24 & -4.85 \\ H41\(\alpha\) & - & 92.034 & 6 & \(2.60\times 2.36\) & - & - \\ \({}^{13}\)CS & (2-1) & 92.494 & 10 & \(2.59\times 2.35\) & 6.66 & -4.85 \\ N\({}_{2}\)H\({}^{+}\) & (1-0) & 93.174 & 11 & \(2.58\times 2.33\) & 4.47 & -4.44 \\ \hline \end{tabular}
\end{table}
Table 1: Continuum and spectral line parameters for all lines covered by the CASCADE observations
\begin{table}
\begin{tabular}{l l} \hline \hline Location & Species \\ \hline Outflow (E – W) & HCO\({}^{+}\), HCN \\ DR21 ridge - Dense gas (N – S) & \({}^{13}\)CS, CCH, H\({}_{2}\)CO, H\({}^{13}\)CO\({}^{+}\), HCCCN, HNC, N\({}_{2}\)H\({}^{+}\), H\({}^{13}\)CN, HN\({}^{13}\)C \\ Sporadic or compact emission & CH\({}_{3}\)CN\({}^{a}\), CH\({}_{3}\)OH, DCN, DCO\({}^{+}\), DNC, H\(41\alpha\), NH\({}_{2}\)D, SiO \\ Non-detections & CCD, HNCO, SO\({}_{2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Molecular line detections (at 5\(\sigma\) level) in the area of the DR21 Main outflow
of the outflow lobes and also appears to be in close agreement with the H\({}_{2}\) emission at 2.2 \(\mu\)m (see also Garden et al. 1986; Davis et al. 2007), associated with outflowing shocked gas. Interestingly, the HCO\({}^{+}\) (1 - 0) contours also reveal the presence of hollowed-out cavities, lacking line emission, in both outflow lobes, similar to early findings by Garden & Carlstrom (1992). The cavities are more prominent in the eastern outflow lobe, which appears entirely separated from the center of the DR21 Main area. Therefore, while HCO\({}^{+}\) is well associated with the outflowing material, it traces most accurately the outer parts of the outflow cavities. In a similar fashion, line emission in HCN (1 - 0) is also associated with the outflow, but shows a more compact pattern in the direction of the peaks of HCO\({}^{+}\) emission (see Fig. 11).
In contrast, the N\({}_{2}\)H\({}^{+}\) emission appears to trace the DR21 ridge along the N-S direction (the middle panel of Fig. 1, Wilson & Mauersberger 1990; Motte et al. 2007). This is expected as N\({}_{2}\)H\({}^{+}\) is known to trace dense, cold, CO depleted gas (Caselli et al. 2002; Jorgensen et al. 2004). In addition to the ridge, N\({}_{2}\)H\({}^{+}\) also reveals the presence of a molecular structure near the western lobe. This structure was previously detected in CS (2 - 1) by Plambeck & Menten (1990) who also reported the detection of a collisionally excited Class I methanol maser in its interaction region with the outflow traced by H\({}_{2}\) emission. Other molecules that are often associated with dense gas, like CCH (1 - 0), HCCCN (both the (10 - 9) and (8 - 7) transitions), \({}^{13}\)CS (2 - 1) and HNC (1 - 0), show a distribution similar to that of N\({}_{2}\)H\({}^{+}\) (1 - 0) (see Figs. 11, 12 and 13).
Finally, SiO is detected close to the center of the outflow, but also shows very localized emission in an area of the western outflow lobe (Fig. 1, right panel). Even though SiO typically traces shocks in the ISM (e.g. Martin-Pintado et al. 1992; Schilke et al. 1997; Gusdorf et al. 2008), its emission peak in the center of DR21 might also originate from photo-evaporating ice mantles in the dusty envelope of the driving source(s) (e.g. Walmsley et al. 1999; Schilke et al. 2001). On the other hand, the SiO emission detected near the western lobe could result from the interaction of the outflow with the dense structure seen, for example, in N\({}_{2}\)H\({}^{+}\), located there. This scenario is discussed in more detail in Section 4.3.
Figure 2 shows the profiles of the strongest lines, averaged over the eastern outflow lobe, the central area of DR21 Main and the western outflow lobe (Fig. 1). Similar spectra for the rest of the lines are presented in Appendix B. The peak of the line emission lies at a velocity of \(-\)3 km s\({}^{-1}\), which corresponds to the velocity of the DR21 ridge (Dickel et al. 1978). The 9 km s\({}^{-1}\) feature, caused by more diffuse foreground material in the so-called "extended W75 cloud" (Dickel et al. 1978; Nyman 1983), can also be seen in some of the lines, but is most prominent in HCO\({}^{+}\) and HCN. The profiles of both the HCO\({}^{+}\) and HCN lines display extended line wings and strong emission in both outflow lobes, thus confirming their association with the outflowing material. In contrast, emission from molecules associated with the denser gas and the bulk of the DR21 ridge, like N\({}_{2}\)H\({}^{+}\) or HNC, shows narrower lines and relatively weaker emission from the outflow lobes.
In summary, the CASCADE observations offer a clear view of the different gas components in the DR21 Main area. Most importantly, HCO\({}^{+}\) is found to trace the molecular outflow well, N\({}_{2}\)H\({}^{+}\) highlights the dense filament, while the SiO emission suggests the possibility of an interaction between the outflow and the surrounding ISM, a scenario further explored in Section 4.3. A full line list, including unidentified lines, for all targets of the CASCADE survey will be presented in a future paper of the collaboration.
### The molecular outflow of DR21 Main
The HCO\({}^{+}\) emission is one of the best tracers of the molecular outflow in DR21 Main and its distribution closely follows that of the H\({}_{2}\) emission (Section 3.1, Fig. 1). Therefore, we use it to explore the kinematics as well as the outflow morphology at different velocities. In particular, we investigate the change of the opening angle with gas velocity.
Figure 3 shows the spatial distribution of the HCO\({}^{+}\) (1 - 0) emission integrated over velocity intervals of 5 km s\({}^{-1}\), in the range from 5 to 45 km s\({}^{-1}\) relative to the source velocity
Figure 1: _United Kingdom Infra-Red Telescope_ (UKIRT) Wide Field Camera (WFCAM) continuum image of the DR21 Main region at 2.2 \(\mu\)m (Warren et al. 2007) and the line emission in key gas tracers observed as part of CASCADE. Shock-excited H\({}_{2}\) emission makes a significant contribution to the 2.2 \(\mu\)m image, in particular to the lobes off the central region. White contours mark the 5\(\sigma\) HCO\({}^{+}\) (left), N\({}_{2}\)H\({}^{+}\) (middle) and SiO (right) emission. Intensities are integrated between -50 and 50, -20 and 10, and -20 and 20 km s\({}^{-1}\) for the HCO\({}^{+}\), N\({}_{2}\)H\({}^{+}\) and SiO emission, respectively. The adopted origin point of the outflow is marked in all cases with a red cross. Red dashed lines (middle) mark the three separate areas of the DR21 Main region used to extract the spectra in Fig. 2, with “E” marking the eastern outflow lobe, “C” the central area and “W” the western outflow lobe. The magenta dashed box (right) marks the location of the interaction region shown in Fig. 11. Finally, orange contours (right) show the 5\(\sigma\) integrated intensity of H41\(\alpha\) from -30 to 30 km s\({}^{-1}\).
(\(v_{\rm source}=-3\) km s\({}^{-1}\)) for the red-shifted emission, and from \(-5\) to \(-45\) km s\({}^{-1}\), for the blue-shifted emission. Most of the HCO\({}^{+}\) emission is elongated in the West-East direction, tracing the outflow lobes of a bipolar outflow (Fig. 1). Some emission extends also in the North-South direction in a narrow range of velocities, suggesting that it is associated with the DR21 ridge. However, some of this emission might also originate from the outflowing gas, as suggested by CO (\(1-0\)) maps (Garden et al., 1991b).
Overall, the velocity-channel maps (Fig. 3) show a rather symmetric morphology, but some small asymmetries can be noted. Namely, the blue-shifted part of the outflow appears stronger and extends to higher velocities than its red-shifted counterpart. Similar behaviour is also seen between the two lobes of the outflow, with the western lobe appearing both brighter and having higher velocities than the eastern one. These small asymmetries are likely to arise due to the relative position of the outflow driving source compared to the bulk of material in the surrounding ISM. In addition, the higher velocity HCO\({}^{+}\) emission seems to be detached from the origin point of the outflow (see also Garden & Carlstrom, 1992). Due to the higher angular resolution of the current observations, we find cleared-out cavities in the outflow lobes. Finally, the known overlap of the blue- and red-shifted parts of the DR21 Main outflow, indicating that the outflow extends close to the plane of the sky, is clearly seen (see Fig. 4, and Schneider et al., 2010). We note, however, that we cannot estimate the inclination of the outflow more precisely due to the complexity of the ISM surrounding DR21 Main, e.g., the interaction region.
Figure 5 shows the position-velocity diagram of HCO\({}^{+}\) along the DR21 Main outflow, illustrating several key outflow structures. Firstly, the bright peak at negative velocities near an offset of 200'' indicates the location of the interaction region, where material is deflected into the line of sight. Secondly, an extended absorption feature is detected near the middle of the outflow, which corresponds to the H ii region. Thirdly, the known absorption feature at 9 km s\({}^{-1}\), caused by more diffuse foreground material associated with W75, is also detected along the entire length of the outflow (Dickel et al., 1978). Finally, several structures are detected in the less disrupted Eastern lobe, which resemble a sawtooth pattern associated with the extremely high velocity component (EHV) of the low mass protostar IRAS 04166+2706 (Santiago-Garcia et al., 2009). A first estimate using approximate
Figure 2: Averaged line profiles of the \((1-0)\) transitions of HCO\({}^{+}\), HCN, HNC, N\({}_{2}\)H\({}^{+}\), and the \((2-1)\) transition of SiO toward the east lobe, center, and west lobe of the DR21 Main outflow (see Fig. 1). For HCN and N\({}_{2}\)H\({}^{+}\), the velocities corresponding to hyperfine structure components are marked with black ticks. Red dashed lines show the source velocity, \(-3\) km s\({}^{-1}\), green dashed lines mark the location of the absorption feature at 9 km s\({}^{-1}\) and the grey horizontal lines show the baselines.
values, derived from the PV diagram, for the maximum velocity of the structures (\(\sim 20\) km s\({}^{-1}\)) and the separation between them (\(\sim 10\arcsec\)) yields upper limits for the timescales between these knots of the order of \(\sim 10^{3}\) yrs. This result corresponds to the upper limits of the timescales found between knots in the outflows of W43-MM1 (Nony et al., 2020).
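For concreteness (an order-of-magnitude check of ours, using the fiducial numbers above and the adopted distance of 1.5 kpc), a knot separation of \(10\arcsec\) corresponds to \(\sim 0.07\) pc, so that
\[\Delta t\simeq\frac{0.07\ \mathrm{pc}}{20\ \mathrm{km\,s^{-1}}}\approx 3.6\times 10^{3}\ \mathrm{yr},\]
consistent with the quoted \(\sim 10^{3}\) yr upper limits.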
Figure 6 compares the velocities of HCO\({}^{+}\) with those of N\({}_{2}\)H\({}^{+}\) in the direction of the DR21 ridge. N\({}_{2}\)H\({}^{+}\) is exclusively associated with the ridge (Fig. 1) and shows three peaks corresponding to its resolved hyperfine structure components. All those components show a velocity gradient along the DR21 ridge, with velocities becoming increasingly blue-shifted North of DR21 Main. A similar gradient cannot be probed in the corresponding HCO\({}^{+}\) position-velocity diagram (Fig. 6) due to the complex line profiles. Similarly to Figure 5, a strong central absorption feature exists, associated with the H ii region. In addition, the extended emission in the North–South direction is detected as a blue-shifted structure between offsets 50\(\arcsec\) and 100\(\arcsec\). The lack of corresponding red-shifted emission favors the scenario that this emission is associated with the ridge rather than with an additional outflow extending in the N-S direction.
An important characteristic often used to describe outflows is their opening angle, a measure of how wide or collimated an outflow actually is. Here we calculated the opening angles for
Figure 3: Channel maps of the DR21 Main outflow in HCO\({}^{+}\). Contours of the blue-shifted HCO\({}^{+}\) emission (in blue) and red-shifted emission (in red) are integrated over velocity steps of 5 km s\({}^{-1}\) and are plotted over the corresponding gray-scale. The full velocity range is from 5 to 45 km s\({}^{-1}\) relative to source velocity (\(-\)3 km s\({}^{-1}\)) and the contour levels correspond to 5, 10 and 20 \(\sigma_{\rm{rms}}\). The black, dashed circle shows the area of the DR21\(-\)1 core in Cao et al. (2019). The green dashed lines denote the half-circles used to derive the outflow opening angles (see Fig. 7).
the DR21 Main outflow, separately for each lobe and for red- and blue-shifted emission, by examining the spectra of HCO\({}^{+}\) emission along a half-circle with radius approximately equal to half the extent of the corresponding outflow lobe (the exact location of these half-circles is shown in Fig. 3). The opening angle then corresponds to the angle between the location where the emission first becomes significant and the location where it drops back to the noise level. The resulting opening angles for all cases are plotted as a function of velocity in Fig. 7. Interestingly, the opening angles in all cases decrease toward higher velocities, a behavior that is expected in the case of a typical bipolar outflow powered by a narrow, well-collimated jet (e.g. Zhang et al., 2019; Rabenanahary et al., 2022).
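This procedure can be summarized with a short sketch. The Gaussian profile below is a mock stand-in for the HCO\({}^{+}\) intensity sampled along the half-circle in one velocity channel, and the 5\(\sigma\) threshold is an illustrative choice rather than the exact criterion adopted here.

```python
import numpy as np

def opening_angle(theta_deg, intensity, rms, nsigma=5.0):
    """Angular extent over which the emission sampled along the
    half-circle exceeds nsigma times the rms noise."""
    significant = np.where(intensity > nsigma * rms)[0]
    if significant.size == 0:
        return 0.0
    return theta_deg[significant[-1]] - theta_deg[significant[0]]

theta = np.linspace(-90.0, 90.0, 181)              # position along the half-circle
profile = 20.0 * np.exp(-0.5 * (theta / 25.0)**2)  # mock lobe profile, one channel
print(opening_angle(theta, profile, rms=1.0))      # opening angle in degrees
```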
### Energetics of the outflow
The spatially- and velocity-resolved observations of the DR21 Main outflow allow us to calculate key outflow properties such as the outflow force, \(F\), which is the rate at which the outflow injects momentum into its surrounding interstellar medium, the outflow mass, \(M\), and its kinetic energy, \(E_{\rm kin}\). For all calculations, we use the HCO\({}^{+}\) (\(1-0\)) emission, which is found to trace the molecular outflow well (Section 3.1).
To measure the force of the DR21 Main outflow, we use the so-called separation method introduced in van der Marel et al.
Figure 4: HCO\({}^{+}\) emission integrated from -50 to 50 km s\({}^{-1}\). Red and blue contours mark the redshifted (5 to 45 km s\({}^{-1}\)) and blueshifted (\(-5\) to \(-45\) km s\({}^{-1}\)) HCO\({}^{+}\) emission respectively. Contours correspond to 5 \(\sigma_{\rm rms}\) emission and the velocity ranges are given relative to the source velocity (\(-3\) km s\({}^{-1}\)). Black arrows mark the cuts for the PV diagrams (Figs. 5 and 6) and the ticks mark distances of 50 arcseconds along the arrows.
Figure 5: Position – velocity diagram for HCO\({}^{+}\) emission along the DR21 Main outflow. The offset is measured from the edge of the eastern lobe towards the west. The arrow points to the 9 km s\({}^{-1}\) absorption feature (Dickel et al., 1978), the white arrow highlights the location of the interaction region (this work), the magenta dashed contour marks the H41\(\alpha\) emission from the H ii region, and the light green arrows mark examples of sawtooth pattern structures (Santiago-García et al., 2009).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(i\) (\({}^{\circ}\)) & 10 & 30 & 50 & 70 & Ref \\ \hline \(c_{1}\) & 0.28 & 0.45 & 0.45 & 1.1 & 1,2 \\ \(c_{2}\) & 1.6 & 3.6 & 6.3 & 14 & 1 \\ \(c_{3}\)\({}^{b}\) & 0.6 & 1.3 & 2.4 & 3.8 & 3 \\ \hline \end{tabular}
\end{table}
Table 3: Inclination correction factors used in the different methods of outflow force calculation
Figure 6: Position – velocity diagrams for HCO\({}^{+}\) (top) and N\({}_{2}\)H\({}^{+}\) (bottom) emission along the DR21 ridge. The offset is measured from South to North (see Fig. 4 for the exact location of the cut), the magenta dashed contour marks the H41\(\alpha\) emission from the H ii region while the orange dashed lines mark the source velocity (\(-3\) km s\({}^{-1}\)). For the N\({}_{2}\)H\({}^{+}\) line, the velocities of the three resolved hyperfine structure components are marked at \(4\), \(-3\) and \(-10\) km s\({}^{-1}\), respectively.
(2013), where the outflow force is calculated as:
\[F_{\rm HCO^{+}}=c_{3}\times\frac{K\left(\sum_{j}\left[\int_{v_{\rm in}}^{v_{\rm out,j}}T(v^{\prime})\,v^{\prime}\,\mathrm{d}v^{\prime}\right]_{j}\right)v_{\rm max}}{R_{\rm lobe}}. \tag{1}\]
Here, \(c_{3}\) is a correction factor for a given inclination angle of the outflow (Table 3), \(K\) is a conversion factor between the line integrated intensity and the molecular gas mass, the integral \(\int_{v_{\rm in}}^{v_{\rm out,j}}T(v^{\prime})v^{\prime}\mathrm{d}v^{\prime}\) corresponds to the velocity-weighted integrated intensity, \(v_{\rm max}\) is the maximum line-of-sight velocity in the outflow lobe, and \(R_{\rm lobe}\) is the length of the outflow lobe, while the sum runs over all pixels (\(j\)) that are part of the outflow. The conversion factor \(K\) (see Appendix C of van der Marel et al., 2013) is given by:
\[K=\mu m_{\rm H}A\,\frac{8\pi k_{\rm B}\nu^{2}}{hc^{3}A_{\rm ul}}\left[\frac{\mathrm{H}_{2}}{\mathrm{HCO^{+}}}\right]\frac{Q(T_{\rm exc})}{g_{\rm u}}\,\mathrm{e}^{E_{\rm u}/T_{\rm exc}} \tag{2}\]
where \(\mu\) is the mean molecular weight, \(m_{\rm H}\) is the hydrogen mass, \(A\) is the observed area of the outflow, \(\mathrm{[H}_{2}/\mathrm{HCO^{+}}]\) is the abundance ratio between H\({}_{2}\) and HCO\({}^{+}\), \(Q(T_{\rm exc})\) is the partition function at a specific excitation temperature \(T_{\rm exc}\), \(g_{\rm u}\) is the degeneracy of the upper level of the observed transition, \(E_{\rm u}\) is the upper-level energy in Kelvin, \(\nu\) is the frequency of the observed transition in Hz, \(c\) is the speed of light, \(k_{\rm B}\) is Boltzmann's constant, \(h\) is Planck's constant and \(A_{\rm ul}\) is the Einstein A coefficient for the transition in s\({}^{-1}\). We assume a single excitation temperature of 40 K (Garden et al., 1991b). The abundance ratio of H\({}_{2}\) over HCO\({}^{+}\) in high-mass star-forming regions has been found to range between \(2\times 10^{9}\) and \(3\times 10^{9}\) (Godard et al., 2010; Gerner et al., 2014). In this work, we adopt an abundance ratio of H\({}_{2}\) over HCO\({}^{+}\) of \(1.6\times 10^{8}\) (Garden & Carlstrom, 1992), which is well within the above range and was estimated for DR21 Main. The corresponding value of the partition function and the remaining molecular data are taken from Splatalogue2 using the CDMS catalogue (Müller et al., 2001).
Footnote 2: [https://splatalogue.online/advanced1.php](https://splatalogue.online/advanced1.php)
The inner velocities for the integration (\(v_{\rm in}\)) are \(-5\) and \(5\) km s\({}^{-1}\) relative to the source velocity for the blue- and red-shifted parts of the emission, excluding the innermost \(10\) km s\({}^{-1}\) in order to avoid contamination from the cloud material.
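For concreteness, a minimal sketch of this bookkeeping is given below, covering Eq. (1) and the analogous plain-intensity integral used for the mass later on (Eq. 3). The arrays and the value of \(K\) are placeholders; the sketch assumes \(K\) already carries all unit conversions of Eq. (2), and it folds the red- and blue-shifted integration ranges into a single cut on \(|v|\) for brevity.

```python
import numpy as np

def outflow_force(cube, v_axis, K, v_in, v_max, R_lobe, c3=3.8):
    """Eq. (1): cube is T_B(pixel, channel) in K, v_axis in km/s relative
    to the source velocity; integrates T*v per pixel, sums, and scales."""
    dv = abs(v_axis[1] - v_axis[0])
    sel = (np.abs(v_axis) >= v_in) & (np.abs(v_axis) <= v_max)
    weighted = (cube[:, sel] * np.abs(v_axis[sel])).sum(axis=1) * dv
    return c3 * K * weighted.sum() * v_max / R_lobe

def outflow_mass(cube, v_axis, K, v_in, v_max):
    """Eq. (3): mass from the plain integrated intensity."""
    dv = abs(v_axis[1] - v_axis[0])
    sel = (np.abs(v_axis) >= v_in) & (np.abs(v_axis) <= v_max)
    return K * (cube[:, sel].sum(axis=1) * dv).sum()

v = np.linspace(-50.0, 50.0, 201)        # velocity axis [km/s]
cube = np.random.rand(1000, v.size)      # mock brightness temperatures [K]
F = outflow_force(cube, v, K=1e-6, v_in=5.0, v_max=45.0, R_lobe=0.9)
```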
The calculation of the length of the outflow lobe, \(R_{\rm lobe}\), requires information about the origin point of the outflow. Here, we adopt it as the position of the dense core DR21-1 from Cao et al. (2019) (see also core Nd46 in Motte et al., 2007), which is located close to the center of the two outflow lobes and at the DR21 ridge. The location of the core also agrees well with the peak of the 3mm continuum emission, as shown in Fig. 8. Higher resolution observations of this area would be required to determine the exact location and nature of the driving source, which is beyond the scope of this paper.
The gas mass carried by the outflow is obtained from:
\[M=K\left(\sum_{j}\left[\int_{v_{\rm in}}^{v_{\rm out,j}}T(v^{\prime})\,\mathrm{d}v^{\prime}\right]_{j}\right). \tag{3}\]
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \(v_{\rm max}\) & \(R_{\rm lobe}\) & \(t_{\rm dyn}\) & \(M\) & \(P\) & \(E_{\rm kin}\) & \(\dot{M}\) & \(F\) & \(L_{\rm kin}\) \\ & [km s\({}^{-1}\)] & [pc] & [yr] & [M\({}_{\odot}\)] & [M\({}_{\odot}\) km s\({}^{-1}\)] & [erg] & [M\({}_{\odot}\) yr\({}^{-1}\)] & [M\({}_{\odot}\) km yr\({}^{-1}\) s\({}^{-1}\)] & [erg yr\({}^{-1}\)] \\ \hline East lobe: red & 38.6 & 0.74 & 4900 & 7 & 87 & \(1.2\times 10^{46}\) & 0.002 & 0.02 & \(4.87\times 10^{42}\) \\ blue & -62.2 & 0.75 & 3100 & 47 & 669 & \(2.4\times 10^{47}\) & 0.015 & 0.22 & \(1.57\times 10^{44}\) \\ \hline West lobe: red & 52.2 & 0.88 & 4300 & 13 & 135 & \(1.8\times 10^{46}\) & 0.003 & 0.03 & \(8.14\times 10^{42}\) \\ blue & -70.2 & 0.98 & 3600 & 57 & 1037 & \(2.3\times 10^{47}\) & 0.016 & 0.29 & \(1.26\times 10^{44}\) \\ \hline East+West: red & – & – & – & 20 & 222 & \(3.0\times 10^{46}\) & 0.005 & 0.05 & \(1.30\times 10^{43}\) \\ blue & – & – & – & 104 & 1706 & \(4.7\times 10^{47}\) & 0.031 & 0.51 & \(2.83\times 10^{44}\) \\ \hline Entire outflow & – & – & – & 124 & 1928 & \(5.0\times 10^{47}\) & 0.036 & 0.56 & \(2.96\times 10^{44}\) \\ \hline \end{tabular}
\end{table}
Table 4: Outflow parameters of the DR21 Main outflow
Figure 7: Opening angle of the DR21 Main outflow as a function of velocity using HCO\({}^{+}\) 1-0 line profiles. The angles are calculated every 5 km s\({}^{-1}\) from the source velocity up to \(v_{\rm max}=22,32,-38\) and \(-48\) km s\({}^{-1}\) for the east-red, west-red, east-blue and west-blue outflow lobe respectively. **Top**: Opening angles for blue-shifted outflow velocities. **Bottom**: Opening angles for the red-shifted outflow velocities.
Subsequently, the time-averaged kinetic energy of the outflow is calculated as:
\[E_{\rm kin}=\frac{1}{2}\,M\langle v\rangle^{2}, \tag{4}\]
and its momentum as:
\[P=K\left(\sum_{j}\left[\int_{v_{\rm in}}^{v_{\rm out,j}}T(v^{\prime})\,v^{\prime}\,\mathrm{d}v^{\prime}\right]_{j}\right). \tag{5}\]
The dynamical time, which is an estimate of the lifetime of the outflow, is then measured from:
\[t_{\rm dyn}=\frac{R_{\rm lobe}}{v_{\rm max}} \tag{6}\]
This, in turn, allows the calculation of the mass loss rate:
\[\dot{M}=\frac{M}{t_{\rm dyn}}, \tag{7}\]
and the power of the outflow:
\[L_{\rm kin}=\frac{E_{\rm kin}}{t_{\rm dyn}}. \tag{8}\]
The calculations are performed for the east and west outflow lobes, and for the red- and blue-shifted emission, separately. The resulting outflow properties are presented in Table 4.
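The derived quantities in Eqs. (4)-(8) amount to simple arithmetic once \(M\), \(P\), \(v_{\rm max}\), and \(R_{\rm lobe}\) are known. The sketch below illustrates the unit handling with placeholder inputs in the units of Table 4; it is not meant to reproduce the tabulated values.

```python
M, P = 10.0, 100.0         # outflow mass [Msun] and momentum [Msun km/s]
v_max, R_lobe = 50.0, 0.8  # max line-of-sight velocity [km/s], lobe length [pc]
pc_km, yr_s = 3.086e13, 3.156e7

v_mean = P / M                          # <v> implied by Eqs. (4)-(5) [km/s]
E_kin = 0.5 * M * v_mean**2             # Eq. (4) [Msun (km/s)^2]
t_dyn = R_lobe * pc_km / v_max / yr_s   # Eq. (6) [yr]
M_dot = M / t_dyn                       # Eq. (7) [Msun/yr]
L_kin = E_kin / t_dyn                   # Eq. (8) [Msun (km/s)^2 / yr]
print(f"t_dyn = {t_dyn:.0f} yr, Mdot = {M_dot:.1e} Msun/yr")
```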
The outflow mass and kinetic energy can be compared with the results from Garden & Carlstrom (1992), where observations of the HCO\({}^{+}\) (\(1-0\)) transition were analyzed. Here, we use the high velocity component from Garden et al. (1991b) and scale it to the distance of DR21 adopted in this work. The outflow mass for the high-velocity component, of \(\sim\)120 M\({}_{\odot}\) (see Table 4), is a factor of 2-5 higher than the corresponding outflow mass in Garden & Carlstrom (1992). The outflow extent and the area covered by the observations are similar in both studies; Garden et al. (1991b) obtain an \(R_{\rm lobe}\) of \(\sim\)1.7 pc. Thus, the difference is likely due to the velocity limits adopted in Garden & Carlstrom (1992), which exclude a significant part of the gas mass at velocities close to the source velocity (\(v_{\rm in}\) from \(-\)12.5 to \(-\)42.5 km s\({}^{-1}\)). In fact, the outflow kinetic energy, which accounts for the relevant range of velocities, is fully consistent: \(E_{\rm kin}\) of 5.0\(\times\)10\({}^{47}\) erg (Table 4) is within the range of \(\sim\)2.5-5.0 \(\times\)10\({}^{47}\) erg reported in Garden & Carlstrom (1992) using HCO\({}^{+}\) and about a factor of 4 lower than the total energy measured using CO (Garden et al. 1991b). For a more thorough discussion of the DR21 Main outflow properties, with respect to both low- and high-mass protostars, see Section 4.2.
The calculation of the outflow parameters includes a few assumptions that need to be addressed. Firstly, the conversion factor \(K\) is accurate only when the observed emission is optically thin. In the case of DR21 Main, the HCO\({}^{+}\) emission is optically thin in the outflow lobes, but not in the central area (Garden & Carlstrom 1992). Using our H\({}^{13}\)CO\({}^{+}\) observations, which appear to trace the dense ridge (Section 3), we estimate a \(\tau\) of 12.5 for the DR21 Main center. For that reason, the central region is excluded from the calculation of the outflow parameters. Similarly, gas with velocities within \(\sim\)5 km s\({}^{-1}\) of the velocity of the N-S filament is also excluded. Notably, even though a significant part of the outflow material is often found at low velocities, its impact on \(F\) or \(E_{\rm kin}\) is not as significant due to the dependence of those parameters on \(v\) and \(v^{2}\), respectively. Secondly, the correction factor for the inclination angle, \(c_{3}\), is available only for inclination angles of 10\({}^{\circ}\), 30\({}^{\circ}\), 50\({}^{\circ}\), and 70\({}^{\circ}\) (Table 3). In this work, \(c_{3}\) of 3.8 is assumed (corresponding to 70\({}^{\circ}\)), but since the inclination of the DR21 Main outflow appears to be close to 90\({}^{\circ}\) (see Section 3.2), the correction is most likely underestimated by a factor of \(\lesssim 2\). Finally, we assumed a uniform excitation temperature of the gas along the outflow, which likely differs by a factor of a few depending on the position of the line emission. An increase of the excitation temperature from 40 to 80 K would increase the \(K\) parameter by a factor of \(\sim\) 1.9. Thus, the variations of \(T_{\rm exc}\) along the outflow are not expected to significantly impact the results.
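The quoted factor of \(\sim\)1.9 can be recovered from the \(T_{\rm exc}\)-dependent terms of Eq. (2) alone, assuming a linear-rotor partition function \(Q(T)\propto T\) and an upper-level energy of \(E_{\rm u}\approx 4.28\) K for HCO\({}^{+}\) (\(1-0\)); both are standard approximations introduced here for illustration rather than values quoted in this paper.

```python
import math

E_u = 4.28                       # HCO+ (1-0) upper-level energy [K], approximate

def K_scaling(T_exc):
    """T_exc-dependent part of Eq. (2): Q(T) * exp(E_u / T),
    with the linear-rotor approximation Q(T) proportional to T."""
    return T_exc * math.exp(E_u / T_exc)

print(K_scaling(80.0) / K_scaling(40.0))   # ~1.9, as quoted in the text
```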
To summarize, the parameters of the DR21 Main outflow, calculated using the HCO\({}^{+}\) emission, provide a useful diagnostic of the outflow energetics. The calculations are consistent with previous work by Garden & Carlstrom (1992) using the same tracer and transition. Due to the optical thickness of the emission and the adopted velocity limits, the mass of the outflow, as well as the related parameters, are lower limits on the actual values.
## 4 Discussion
### The nature of the DR21 Main outflow
The nature of the DR21 Main outflow is still a topic of discussion, since it has been proposed to belong to the class of explosive outflows (Zapata et al. 2013). The detailed analysis of the morphology and the kinematics of the outflow using HCO\({}^{+}\) observations reveals a rather well defined bipolar outflow structure reminiscent of that of a typical protostellar outflow (Section 3.2). The strong overlap of red- and blue-shifted HCO\({}^{+}\) emission is indeed a property attributed to explosive outflows (Zapata et al. 2017); however, it can also appear in the case of a bipolar outflow that extends along the plane of the sky, with the red- and blue-shifted emission arising from the sideways expansion of the outflow lobes. Additionally, the apparent decrease of the DR21 Main outflow's opening angle with increasing velocity (see Fig. 7) is characteristic of bipolar outflows that are powered by a narrow collimated jet, but seems unlikely to occur in the case of explosive outflows.
Figure 9 shows a side-by-side comparison of local intensity peaks at multiple velocity steps between the DR21 Main outflow
Figure 8: 3mm continuum emission in the area of DR21 Main. The peak of the continuum emission is marked with a red star, while the green circle marks the location, and the green dashed line the FWHM, of the dense core DR21-1 (Cao et al. 2019). Blue contours mark the integrated HCO\({}^{+}\) intensity (-50 to 50 km s\({}^{-1}\)), while the beam of the continuum observations is noted with the red ellipse.
and the Orion KL outflow. For the DR21 Main outflow, the HCO\({}^{+}\) data presented in this work have been used, while for Orion archival ALMA CO (2\(-\)1) data were used (Project ID: 2013.1.00546.S, PI: John Bally, Bally et al. (2017)). The emission in Orion (right) consists of multiple, well-defined filament-like structures which display clear velocity gradients along their length, with higher absolute velocities found further away from the outflow's point of origin. This behaviour is probably the most distinctive characteristic of explosive outflows and is absent in the case of the DR21 Main outflow (left). More precisely, in DR21 Main there are very few distinct filament-like structures, and those do not show any significant velocity gradients along their length. Moreover, the structures that do exist appear to trace the cavity walls of the outflow lobes, in contrast to the more random distribution of the filament-like structures in Orion. We note, though, that DR21 is located significantly farther away, at 1.5 kpc (Rygl et al., 2012), than Orion KL at \(\sim\)400 pc (Menten et al., 2007; Kounkel et al., 2017), which means that the physical scales of the two outflows in Fig. 9 are significantly different.
Taking all the above into consideration, it appears that the outflow of DR21 Main more closely resembles a typical bipolar outflow driven by a protostar than an explosive outflow. However, the observations presented here do not reveal a single, compact emission feature that could be associated with the driving source (Appendix A). Further observations are required in order to discern the protostar or protostellar system behind such an exceptionally powerful bipolar outflow.
### Outflow energetics
Multiple correlations have been reported connecting the energetics of protostellar outflows with the properties of their driving sources. For example, the correlations of the outflow force with the envelope mass and bolometric luminosity are attributed to a connection between mass accretion rates and outflow activity (Beuther et al., 2002; Duarte-Cabral et al., 2013; Mottram et al., 2017). The existence of such correlations allows for a direct comparison between the properties of the DR21 Main outflow and those of other protostellar and explosive outflows.
Figure 10 shows the comparison of the outflow force of the DR21 Main outflow with those of low-, intermediate- and high-mass protostars (Beuther et al., 2002; van Kempen et al., 2009; van der Marel et al., 2013; Yildiz et al., 2015; Maud et al., 2015; Mottram et al., 2017; Li et al., 2020; Skretas & Kristensen, 2022). We calculate the Pearson coefficients and the corresponding significance (\(\sigma\)) for both correlations (see e.g., Marseille et al., 2010). The outflow forces of high-mass protostars are found to correlate strongly both with their envelope masses (6.1\(\sigma\)) and bolometric luminosities (6.5\(\sigma\)); the correlation also extends to lower masses. The source sample from Li et al. (2020) represents sources at the very early stages of their evolution (70 \(\mu\)m dark clumps), which are therefore expected to have a low \(L_{\rm bol}/M_{\rm env}\) ratio. That is why they display relatively large envelope masses with respect to other sources of similar energetic parameters.
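One plausible way to obtain such coefficients and significances is sketched below on placeholder data, correlating in log-log space and converting the two-tailed \(p\)-value of the Pearson test into a Gaussian \(\sigma\); the exact convention used here follows Marseille et al. (2010) and may differ in detail.

```python
import numpy as np
from scipy import stats

# Placeholder (M_env [Msun], F [Msun km s^-1 yr^-1]) pairs, not the real sample.
logM = np.log10([1.0, 5.0, 30.0, 140.0, 1355.0])
logF = np.log10([1e-4, 5e-4, 4e-3, 2e-2, 0.56])

r, p = stats.pearsonr(logM, logF)
sigma = stats.norm.isf(p / 2.0)       # two-tailed p-value -> Gaussian sigma
print(f"r = {r:.2f}, significance ~ {sigma:.1f} sigma")
```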
The outflow force of DR21 is higher than those of other high-mass protostars, including the other two explosive-outflow candidates: G5.89-0.39 (\(L_{\rm bol}\) of 4.1\(\times\) 10\({}^{4}\) L\({}_{\odot}\), \(M_{\rm env}\) of 140 M\({}_{\odot}\), van der Tak et al., 2013; Karska et al., 2014) and Orion BN/KL (\(L_{\rm bol}\) of 5\(\times\) 10\({}^{4}\) L\({}_{\odot}\), \(M_{\rm env}\) of 150 M\({}_{\odot}\), Downes et al., 1981; Genzel & Stutzki, 1989). For DR21, we adopt \(L_{\rm bol}\) of 1.0\(\times\) 10\({}^{4}\) L\({}_{\odot}\) and \(M_{\rm env}\) of 1355 M\({}_{\odot}\), which account for the new distance to the source and the peak of the Spectral Energy Distribution (Cao et al., 2019).
DR21 also shows enhanced values of the mass outflow rate and outflow kinetic luminosity, but falls within the range of other high-mass protostars as far as the outflow mass, power, and kinetic energy are concerned (Appendix C). Those differences are therefore the largest for parameters involving the outflow dynamical time, and as such are related to \(v_{\rm max}\), which in turn depends on the inclination angle. However, the inclination of DR21 on the sky could only introduce a factor of \(\sim\)4 difference in the derived parameters, much less than the enhancement in outflow properties with respect to typical high-mass protostars.
Figure 9: Distribution of gas velocities associated with the outflows in DR21 (left) and Orion KL (right, Bally et al., 2017). Each point shows a local intensity peak of the line emission (above 2 \(\sigma\)) integrated in velocity steps of 5 km s\({}^{-1}\). The color of each point signifies the corresponding velocity step and the color scale is shared between the two plots. The background grey-scale shows the integrated HCO\({}^{+}\) (left panel) and CO emission (right panel) over the entire velocity range of the outflows (from \(-\)70 to 70 km s\({}^{-1}\) for DR21 Main and \(-\)100 to 100 km s\({}^{-1}\) for Orion KL).
Orion BN/KL and G5.89-0.39 also show relatively high mass outflow rates, kinetic energies, and luminosities, which might suggest that this could be a common characteristic of explosive-outflow candidates. Assuming that the underlying physical mechanism for explosive outflows is different from that of typical outflows, there is presently no theoretical expectation that explosive outflows should follow the \(F\)-\(M_{\rm env}\) and \(F\)-\(L_{\rm bol}\) correlations (Figure 10). However, the sample of these objects is too small to be conclusive.
The DR21 Main outflow is found to be a bipolar outflow (Section 4.1). Therefore, the enhanced outflow force of DR21 might indicate the presence of scatter for the high-mass sources, similar to that measured in the outflow properties of low-mass YSOs (Figure 10). In any case, the high outflow force of the DR21 Main outflow is consistent with it being one of the most powerful outflows in the Galaxy.
### Interaction at the western outflow lobe
Outflow activity from protostars can have an impact on the structure and chemistry of their parental, or nearby, dense cores (e.g., van Kempen et al. 2009; Lis et al. 2016; Kahle et al. 2022). The energetic outflow from DR21 Main heavily interacts with its surroundings and creates the "interaction region" in the western outflow lobe, which has been associated with a collisionally excited Class I methanol maser (Plambeck & Menten 1990). Molecular line emission from CASCADE allows us to pinpoint the detailed characteristics of the outflow-cloud interaction.
The interaction region shows various patterns of molecular line emission with HCN and H\({}_{2}\)CO peaking in its east part, and HNC and N\({}_{2}\)H\({}^{+}\) emission extending to the west and outer part of the outflow lobe (Fig. 11). The SiO emission shows a compact pattern associated most closely with H\({}_{2}\)CO, suggesting the presence of shocks in the interface between e.g., HCN and N\({}_{2}\)H\({}^{+}\) gas.
We investigate the line emission across the interaction region in Figures 12-17. HCO\({}^{+}\) emission peaks in the interaction region, and is followed by H\({}_{2}\)CO tracing warm gas and N\({}_{2}\)H\({}^{+}\), which is sensitive to the sharp increase in cold gas density (Fig. 12). In the case of HCO\({}^{+}\), it is likely that some outflowing material is deflected into the line of sight, giving rise to the strong, high-velocity, blue-shifted emission detected in this area (Fig. 3). This velocity shift is also clearly seen in the first moment of HCO\({}^{+}\), which shows that its emission in the area of the dense structure is mainly red-shifted (Fig. 13). A likely explanation could be that the dense structure is located closer to the observer along the line of sight and is therefore interacting mostly with the blue-shifted part of the outflow. The blue-shifted emission indeed shows a high-velocity component elongated almost perpendicular to the outflow axis, while the red-shifted emission appears almost unperturbed (Fig. 3).
The H\({}_{2}\)CO emission displays a sharp increase in the interaction area, followed by a significant decrease deeper into the dense structure (also shown in Fig. 12). Such an increase is likely related to the enhanced gas temperatures in the interaction region, which lead to the sputtering of H\({}_{2}\)CO from the dust grains (Benedettini et al. 2013). Alternatively, the H\({}_{2}\)CO emission could be explained by shock chemistry (e.g. Viti et al. 2011), but the commonly predicted double-peaked structure is not resolved in our observations. It is possible, however, that averaging along the interaction front leads to the blending of line emission, making this feature less apparent. A close association of H\({}_{2}\)CO with the SiO emission tracing shocks (Fig. 14) favors the scenario that H\({}_{2}\)CO traces not only warm gas, but also the location of active shocks, as was suggested by Li et al. (2022). The multiple peaks of SiO and H\({}_{2}\)CO likely arise due to averaging
Figure 10: Outflow force over the envelope mass of the driving source for various protostellar sources. Right-facing triangles represent low-mass sources from Mottram et al. (2017), left-facing triangles mark sources taken from Yildiz et al. (2015), and upward-facing triangles are from van der Marel et al. (2013), with Class I sources in blue and Class 0 sources in red. Black diamonds mark intermediate-mass sources (van Kempen et al. 2009), grey squares mark high-mass sources from Maud et al. (2015), green “\(\times\)” symbols mark high-mass sources from Beuther et al. (2002), and black stars mark high-mass sources in Cygnus (Skretas & Kristensen 2022). The red crosses mark a sample of high-mass 70 \(\mu\)m dark sources (Li et al. 2020). The cyan star marks G5.89-0.39, the magenta one marks Orion KL, and the green one represents the DR21 Main outflow. The dashed black line shows the best fit to the outflow force - envelope mass correlation for all sources, while the red and grey lines show the best fits for the low- and high-mass sources, respectively.
across the entire interaction front and the clumpy nature of SiO emission (see Fig. 11).
HCN and HNC show some differences along the interaction region (Fig. 15), which might reflect the changes in gas temperature (Hacar et al., 2020). The pattern of emission in HNC is similar to that of N\({}_{2}\)H\({}^{+}\), whereas HCN follows closely H\({}_{2}\)CO. We refrain from using the Hacar et al. (2020) relation to calculate gas temperatures, because the HCN over HNC line ratios partly exceed the range where the experimental relation holds. Nevertheless, the ratio of the two species suggests that the temperature increases rapidly at the front of the interaction area and then drops steadily to a relatively low value. The ratio shows two peaks that follow the peaks of the HCN intensity (Fig. 15), and are found close to, but not exactly at, the same location as the peaks of SiO. This suggests that HCN might also be enhanced in the warm gas behind the shock front (see e.g., Mirocha et al., 2021). In addition, the velocities of the gas traced by HCN and HNC also differ (Fig. 16). HNC has velocities close to the cloud velocity, especially from the middle of the interaction area and into the dense structure, where its emission actually becomes significant. HCN, on the other hand, follows HCO\({}^{+}\) and shows significant blue-shifted emission, which becomes red-shifted in the area dominated by the dense structure. The blue-shifted peak appears deeper in the interaction area compared to that of HCO\({}^{+}\), showing that they trace different material.
HCCCN emission is detected in the area of the dense structure, as expected based on previous studies (e.g. Morris et al., 1976; Churchwell et al., 1978). The slight enhancement in the
Figure 11: Outflow-cloud interaction region in the western lobe of the DR21 Main outflow. The grey-scale shows the integrated HCO\({}^{+}\) emission between \(-\)50 and 50 km s\({}^{-1}\) relative to the source velocity. Red contours show line emission of HCN (top), H\({}_{2}\)CO (middle), and HNC (bottom). Blue contours show the emission of N\({}_{2}\)H\({}^{+}\) and green contours show the emission of SiO, in all panels. Contour levels are at 5, 10 and 20 \(\sigma_{\rm{rms}}\). The dashed black lines in the middle plot mark the lines used to calculate the average intensities and first moments across the interaction front, presented in Figs. 12-17. The orange rectangle marks the area actively affected by the interaction as derived from these intensities.
Figure 12: Average integrated intensities of HCO\({}^{+}\) (in blue), H\({}_{2}\)CO (in red), and N\({}_{2}\)H\({}^{+}\) (in green) across the interaction region in the western lobe of DR21. Intensities are integrated from \(-\)70 to 70, \(-\)20 to 20 and \(-\)20 to 10 km s\({}^{-1}\) for HCO\({}^{+}\), H\({}_{2}\)CO and N\({}_{2}\)H\({}^{+}\), respectively. The x-axis shows the distance in arcseconds, covering the extent of the relevant region where the outflow interacts with a dense structure (marked also in Fig. 11). The orange rectangle shows the area actively affected by the interaction. The intensities for H\({}_{2}\)CO and N\({}_{2}\)H\({}^{+}\) are scaled up by a factor of 8 in order for their distributions to be more easily comparable.
Figure 13: First moment of HCO\({}^{+}\) emission across the interaction front (see Fig. 12). The black dashed line marks the DR21 cloud velocity of \(v_{\rm{cloud}}=-\)3 km s\({}^{-1}\).
interaction region suggests an origin in shocks (Fig. 17), in agreement with Benedettini et al. (2013).
The intensity of CCH shows a significant increase close to the center of the interaction region, likely due to UV radiation originating from the shocked material in the interaction region (e.g. Gratier et al., 2017; Bouvier et al., 2020; Chahine et al., 2022).
In contrast, HNCO is not detected in the interaction region, even though it is often associated with shocks. According to Yu et al. (2018), HNCO is preferentially enhanced in slow moving shocks, but destroyed in high velocity shocks, which might indeed be the case for the energetic outflow from DR21.
Finally, we note here that due to the irregular shape of the interaction front, the simplistic average along it adopted for the above analysis suffers from significant uncertainties. Still, it can offer an interesting qualitative look into the behaviour of species across such an interaction.
## 5 Summary and conclusions
This work presents the results of the CASCADE observations in the area of the DR21 Main outflow covering several molecular tracers, including HCO\({}^{+}\), HCN, HNC, N\({}_{2}\)H\({}^{+}\), H\({}_{2}\)CO, and CCH at high spatial (\(\sim 3\arcsec\)) and spectral resolution (\(\sim\)0.8 km s\({}^{-1}\)). These molecular tracers are split into three separate categories according to their morphology, tracing the outflow (HCO\({}^{+}\) and HCN), the DR21 ridge (\({}^{13}\)CS, CCH, H\({}_{2}\)CO, H\({}^{13}\)CO\({}^{+}\), HCCCN (10 - 9), HCCCN (8 - 7), HNC, N\({}_{2}\)H\({}^{+}\)), and localized emission, e.g. CH\({}_{3}\)CN (4\({}_{k}\)-3\({}_{k}\)), CH\({}_{3}\)CN (5\({}_{k}\) - 4\({}_{k}\)), CH\({}_{3}\)OH, DCN, DCO\({}^{+}\), DNC, H41\(\alpha\), NH\({}_{2}\)D, SiO.
Based on the HCO\({}^{+}\) emission, the DR21 Main outflow was found to mostly resemble a typical bipolar outflow rather than an explosive one, as its emission shows two well structured lobes that get progressively more collimated at higher velocities and
Figure 16: **Top:** First moment of HCN emission across the interaction front of the DR21 Main outflow and the dense structure located near the western outflow lobe, plotted over the corresponding distance. The distance is measured from the outflow-dominated part and extends into the dense structure. The black dashed line marks the DR21 cloud velocity of v\({}_{\rm cloud}\) = \(-\)3 km s\({}^{-1}\). **Bottom:** Same as above but for HNC.
Figure 14: Average integrated intensities of SiO (in blue) and H\({}_{2}\)CO (in red) across the interaction region in the western lobe of DR21. Intensities are integrated from \(-\)25 to 25 and \(-\)20 to 20 km s\({}^{-1}\) for SiO and H\({}_{2}\)CO respectively. The SiO emission is scaled up by a factor of 2 for clarity.
Figure 15: HCN and HNC emission in the interaction region. **Top:** Average integrated intensities of HCN (in blue) and HNC (in red) across the interaction region in the western lobe of DR21. Intensities are integrated from \(-\)35 to 35 and \(-\)15 to 10 km s\({}^{-1}\) for HCN and HNC respectively. The HNC emission is scaled up by a factor of 5 for clarity. **Bottom:** Ratio of HCN over HNC across the interaction region.
lack the filament-like structures that are prevalent in established explosive outflows.
Adapting the separation method and applying it to the HCO\({}^{+}\) emission allowed for the estimation of the energetic parameters of the outflow (outflow force \(F=0.56\) M\({}_{\odot}\) km yr\({}^{-1}\) s\({}^{-1}\), mass \(M=124\) M\({}_{\odot}\) and kinetic energy \(E_{\rm kin}=5\times 10^{47}\) erg). Comparison with other protostellar sources showed that the outflow force of the DR21 Main outflow is about an order of magnitude higher than that of sources of similar envelope mass. It remains uncertain, though, whether the outflow of DR21 Main represents an upper limit of typical protostellar outflows or is powered by a different mechanism.
Finally, a dense molecular structure was detected near the western lobe of the outflow. The detection of SiO and H\({}_{2}\)CO emission in this area showed that there is ongoing interaction between the outflow and this dense structure. This, in turn, allowed for an analysis of the behaviour of different molecular species across such an interaction, which was found to be in good agreement with the results of recent modelling predictions and observations of shocked regions.
Overall, the results presented in this paper firmly establish the outflow of DR21 Main as one of the most interesting cases of bipolar, protostellar outflows due to its exceptional size and power. Additionally, the CASCADE observations offer a good glimpse into the chemistry of the interaction region but further and more detailed modeling is required in order to properly constrain the chemistry that takes place in this location.
###### Acknowledgements.
The authors are grateful to the staff at the NOEMA and Pico Veleta observatories for their support of these observations. We thank in particular P. Chaudet, operator at the NOEMA observatory, for his motivation and dedication in developing and testing the advanced mosaic observing procedures employed in this project. This work is based on observations carried out under project number L19MA with the IRAM NOEMA Interferometer and [145-19] with the 30 m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). AK acknowledges support from the Polish National Agency for Academic Exchange grant No. BWB/BEXO21/003199/DEC/. H.B. acknowledges support from the European Research Council under the Horizon 2020 Framework Program via the ERC Consolidator Grant CSF-648050 and from the Deutsche Forschungsgemeinschaft in the Collaborative Research Center (SFB 881) "The Milky Way System" (subproject B1). A.G. acknowledges support from NSF AAG 2008101 and NSF CAREER 2142300. D. S. acknowledges support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant No. 832428-Origins.
|
2305.19999 | Beam Tree Recursive Cells | We propose Beam Tree Recursive Cell (BT-Cell) - a backpropagation-friendly
framework to extend Recursive Neural Networks (RvNNs) with beam search for
latent structure induction. We further extend this framework by proposing a
relaxation of the hard top-k operators in beam search for better propagation of
gradient signals. We evaluate our proposed models in different
out-of-distribution splits in both synthetic and realistic data. Our
experiments show that BT-Cell achieves near-perfect performance on several
challenging structure-sensitive synthetic tasks like ListOps and logical
inference while maintaining comparable performance in realistic data against
other RvNN-based models. Additionally, we identify a previously unknown failure
case for neural models in generalization to unseen number of arguments in
ListOps. The code is available at:
https://github.com/JRC1995/BeamTreeRecursiveCells. | Jishnu Ray Chowdhury, Cornelia Caragea | 2023-05-31T16:20:04Z | http://arxiv.org/abs/2305.19999v3 | # Beam Tree Recursive Cells
###### Abstract
We propose Beam Tree Recursive Cell (BT-Cell) - a backpropagation-friendly framework to extend Recursive Neural Networks (RvNNs) with beam search for latent structure induction. We further extend this framework by proposing a relaxation of the hard top-\(k\) operators in beam search for better propagation of gradient signals. We evaluate our proposed models in different out-of-distribution splits in both synthetic and realistic data. Our experiments show that BT-Cell achieves near-perfect performance on several challenging structure-sensitive synthetic tasks like ListOps and logical inference while maintaining comparable performance in realistic data against other RvNN-based models. Additionally, we identify a previously unknown failure case for neural models in generalization to unseen number of arguments in ListOps. The code is available at: [https://github.com/JRC1995/BeamTreeRecursiveCells](https://github.com/JRC1995/BeamTreeRecursiveCells).
Machine Learning, ICML
## 1 Introduction
In the space of sequence encoders, Recursive Neural Networks (RvNNs) can be said to lie somewhere in-between Recurrent Neural Networks (RNNs) and Transformers in terms of flexibility. While vanilla Transformers show phenomenal performance and scalability on a variety of tasks, they can often struggle in length generalization and systematicity in syntax-sensitive tasks (Tran et al., 2018; Shen et al., 2019; Lakretz et al., 2021; Csordas et al., 2022). RvNN-based models, on the other hand, can often excel on some of the latter kind of tasks (Shen et al., 2019; Chowdhury and Caragea, 2021; Liu et al., 2021; Bogin et al., 2021) making them worthy of further study although they may suffer from limited scalability in their current formulations.
Given an input text, RvNNs (Pollack, 1990; Goller and Kuchler, 1996; Socher et al., 2010) are designed to build up the representation of the whole text by recursively building up the representations of their constituents starting from the most elementary representations (tokens) in a bottom-up fashion. As such, RvNNs can model the hierarchical part-whole structures underlying texts. However, originally RvNNs required access to pre-defined hierarchical constituency-tree structures. Several works (Choi et al., 2018; Peng et al., 2018; Havrylov et al., 2019; Maillard et al., 2019; Chowdhury and Caragea, 2021) introduced latent-tree RvNNs that sought to move beyond this limitation by making RvNNs able to learn to automatically determine the structure of composition from any arbitrary downstream task objective.
Among these approaches, Gumbel-Tree models (Choi et al., 2018) are particularly attractive for their simplicity. However, they not only suffer from biased gradients due to the use of Straight-Through Estimation (STE) (Bengio et al., 2013), but they also perform poorly on synthetic tasks like ListOps (Nangia and Bowman, 2018; Williams et al., 2018) that were specifically designed to diagnose the capacity of neural models for automatically inducing underlying hierarchical structures. To tackle these issues, we propose the Beam Tree Cell (BT-Cell) framework that incorporates beam-search on RvNNs replacing the STE Gumbel Softmax (Jang et al., 2017; Maddison et al., 2017) in Gumbel-Tree models. Instead of greedily selecting the highest scored sub-tree representations like Gumbel-Tree models, BT-Cell chooses and maintains top-\(k\) highest scored sub-tree representations. We show that BT-Cell outperforms Gumbel-Tree models in challenging structure sensitive tasks by several folds. For example, in ListOps, when testing for samples of length \(900\)-\(1000\), BT-Cell increases the performance of a comparable Gumbel-Tree model from \(37.9\%\) to \(86.7\%\) (see Table 1). We further extend BT-Cell by replacing its non-differentiable top-\(k\) operators with a novel operator called OneSoft Top-\(k\). Our proposed operator, combined with BT-Cell, achieves a new state-of-the-art in length generalization and depth-generalization in structure-sensitive synthetic tasks like ListOps and performs comparably in realistic data against other strong models.
A few recently proposed latent-tree models simulating RvNNs including Tree-LSTM-RL (Havrylov et al., 2019),
Ordered Memory (OM) (Shen et al., 2019) and Continuous RvNNs (CRvNNs) (Chowdhury and Caragea, 2021) are also strong contenders to BT-Cell on synthetic data. However, unlike BT-Cell, Tree-LSTM-RL relies on reinforcement learning and several auxiliary techniques to stabilize training. Moreover, compared to OM and CRvNN, one distinct advantage of BT-Cell is that it does not just provide the final sequence encoding (representing the whole input text) but also the intermediate constituent representations at different levels of the hierarchy (representations of all nodes of the underlying induced trees). Such tree-structured node representations can be useful as inputs to further downstream modules like a Transformer (Vaswani et al., 2017) or Graph Neural Network (Scarselli et al., 2009) in a full end-to-end setting.1 While CYK-based RvNNs (Maillard et al., 2019) are also promising and can similarly provide multiple span representations, they tend to be much more expensive than BT-Cell (see §5.3).
Footnote 1: There are several works that have used intermediate span representations for better compositional generalization in generalization tasks (Liu et al., 2020; Herzig and Berant, 2021; Bogin et al., 2021; Liu et al., 2021; Mao et al., 2021). We keep it as a future task to explore whether the span representations returned by BT-Cell can be used in relevant ways.
As a further contribution, we also identify a previously unknown failure case for even the best performing neural models when it comes to argument generalization in ListOps (Nangia and Bowman, 2018)--opening up a new challenge for future research.
## 2 Preliminaries
**Problem Formulation:** Similar to Choi et al. (2018), throughout this paper, we explore the use of RvNNs as a sentence encoder. Formally, given a sequence of token embeddings \(\mathcal{X}=(e_{1},e_{2},\ldots,e_{n})\) (where \(\mathcal{X}\in\mathbbm{R}^{n\times d_{e}}\) and \(e_{i}\in\mathbbm{R}^{d_{e}}\); \(d_{e}\) being the embedding size), the task of a sentence encoding function \(\mathcal{E}:\mathbbm{R}^{n\times d_{e}}\rightarrow\mathbbm{R}^{d_{h}}\) is to encode the whole sequence of vectors into a single vector \(o=\mathcal{E}(\mathcal{X})\) (where \(o\in\mathbbm{R}^{d_{h}}\) and \(d_{h}\) is the size of the encoded vector). We can use a sentence encoder for sentence-pair comparison tasks like logical inference or for text classification.
### RNNs and RvNNs
A core component of both RNNs and RvNNs is a recursive cell \(R\). In our context, \(R\) takes as arguments two vectors (\(a_{1}\in\mathbbm{R}^{d_{a_{1}}}\) and \(a_{2}\in\mathbbm{R}^{d_{a_{2}}}\)) and returns a single vector \(v=R(a_{1},a_{2})\). \(R:\mathbbm{R}^{d_{a_{1}}}\times\mathbbm{R}^{d_{a_{2}}}\rightarrow\mathbbm{R}^{d_{v}}\). In our settings, we generally set \(d_{a_{1}}=d_{a_{2}}=d_{v}=d_{h}\). Given a sequence \(\mathcal{X}\), both RNNs and RvNNs process it through a recursive application of the cell function. For a concrete example, consider a sequence of token embeddings such as \((2+4\times 4+3)\) (assume the symbols \(2\), \(4\), \(+\) etc. represent the corresponding embedding vectors \(\in\mathbbm{R}^{d_{h}}\)).
\[o=R(R(R(R(R(R(R(h0,2),+),4),\times),4),+),3) \tag{1}\]
Here, \(h0\) is the initial hidden state. In contrast to RNNs, RvNNs can compose the sequence in more flexible orders. For example, one way (among many) that RvNNs could apply the cell function is as follows:
\[o=R(R(R(R(2,+),R(R(4,\times),4)),+),3) \tag{2}\]
Thus, RvNNs can be considered a generalization of RNNs in which a strict left-to-right order of composition is no longer enforced. As we can see, by this strategy of recursively reducing two vectors into a single vector, both RNNs and RvNNs can implement the sentence encoding function in the form of \(\mathcal{E}\). Moreover, the form of application of the cell function exhibited by RNNs and RvNNs can also be said to reflect a tree structure. For any application of the cell function in the form \(v=R(a_{1},a_{2})\), \(v\) can be treated as the representation of the immediate parent node of child nodes \(a_{1}\) and \(a_{2}\) in an underlying tree.
In Eqn. 2, we find that RvNNs can align the order of composition to PEMDAS whereas RNNs cannot. Nevertheless, RNNs can still learn to simulate RvNNs by modeling tree-structures implicitly in their hidden state dimensions (Bowman et al., 2015). For example, RNNs can learn to hold off the information related to "\(2+\)" until "\(4\times 4\)" is processed. Their ability to handle tree-structures is analogous to how we can use a pushdown automaton with an infinite stack in a recurrent manner to detect tree-structured grammars. Still, RNNs can struggle to effectively learn to appropriately organize information in practice for large sequences. Special inductive biases can be incorporated to enhance their abilities to handle their internal memory structures (Shen et al., 2019a,b). However, even then, memories remain bounded in practice and there is a limit to what depth of nested structures they can model.
More direct approaches to RvNNs, in contrast, can alleviate the above problems and mitigate the need of sophisticated memory operations to arrange information corresponding to a tree-structure because they can directly compose according to the underlying structure (Eqn. 2). However, in the case of RvNNs, we have the problem of first determining the underlying structure to even start the composition. One approach to handle the issue can be to train a separate parser to induce a tree structure from sequences using gold tree parses. Then we can use the trained parser in RvNNs. However, this is not ideal. Not all tasks or languages would come with gold trees for training a parser and a parser trained in
one domain may not translate well to another. A potentially better approach is to jointly learn both the cell function and structure induction from a downstream objective (Choi et al., 2018). We focus on this latter approach. Below we discuss one framework (easy-first parsing and Gumbel-Tree models) for this approach.
### Easy-First Parsing and Gumbel-Tree Models
Here we describe an adaptation (Choi et al., 2018) of easy-first parsing (Goldberg and Elhadad, 2010) for RvNN-based sentence-encoding. The algorithm relies on a score function \(score:\mathrm{I\!R}^{d_{h}}\rightarrow\mathrm{I\!R}^{1}\) that scores parsing decisions. Particularly, if we have \(v=R(a_{1},a_{2})\), then \(score(v)\) represents the plausibility of \(a_{1}\) and \(a_{2}\) belonging to the same immediate parent constituent. Similar to (Choi et al., 2018), we keep the score as a simple linear transformation: \(score(v)=vW_{v}\) (where \(W_{v}\in\mathrm{I\!R}^{d_{h}\times 1}\) and \(v\in\mathrm{I\!R}^{d_{h}}\)).
**Recursive Loop:** In this algorithm, at every iteration in a recursive loop, given a sequence of hidden states \((h_{1},h_{2},\ldots,h_{n})\) we consider all possible immediate candidate parent compositions taking the current states as children: \((R(h_{1},h_{2}),R(h_{2},h_{3}),\ldots,R(h_{n-1},h_{n}))\).2 We then score each of the candidates with the score function and greedily select the highest scoring candidate (i.e., we commit to the "easiest" decision first). For the sake of illustration, assume \(score(R(h_{i},h_{i+1}))\geq score(R(h_{j},h_{j+1}))\;\;\forall j\in\{1,2,\ldots,n-1\}\). Thus, following the algorithm, the parent candidate \(R(h_{i},h_{i+1})\) will be chosen. The parent representation \(R(h_{i},h_{i+1})\) would then replace its immediate children \(h_{i}\) and \(h_{i+1}\). Thus, the resulting sequence will become: \((h_{1},\ldots,h_{i-1},R(h_{i},h_{i+1}),h_{i+2},\ldots,h_{n})\). Like this, the sequence will be iteratively reduced to a single element representing the final sentence encoding. The full algorithm is presented in the Appendix (see Algorithm 1).
Footnote 2: We focus only on the class of binary projective tree structures. Thus all the candidates are compositions of two contiguous elements.
One issue here is to decide how to choose the highest scoring candidate. One way to do this is to simply use an argmax operator but it will not be differentiable. Gumbel-Tree models (Choi et al., 2018) address this by using Straight Through Estimation (STE) (Bengio et al., 2013) with Gumbel Softmax (Jang et al., 2017; Maddison et al., 2017) instead of argmax. However, STE is known to cause high bias in gradient estimation. Moreover, as it was previously discovered (Nangia and Bowman, 2018), and as we independently verify, STE Gumbel-based strategies perform poorly when tested in structure-sensitive tasks. Instead, to overcome these issues, we propose an alternative of extending argmax with a top-\(k\) operator under a beam search strategy.
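To make the loop concrete, a minimal sketch is given below, using a plain argmax in place of the STE Gumbel-Softmax selection; the cell \(R\) and the scorer are randomly initialized stand-ins for the learned functions, and a real implementation would additionally batch over sequences.

```python
import torch

d = 16
W = torch.randn(2 * d, d) / d ** 0.5     # stand-in cell parameters
W_v = torch.randn(d, 1) / d ** 0.5       # stand-in scorer parameters

def R(a1, a2):                           # placeholder cell function
    return torch.tanh(torch.cat([a1, a2], dim=-1) @ W)

def score(v):                            # score(v) = v W_v, as in the text
    return (v @ W_v).squeeze(-1)

def greedy_encode(h):
    """h: (n, d) hidden states; easy-first greedy reduction to (d,)."""
    while h.size(0) > 1:
        parents = R(h[:-1], h[1:])                # all adjacent candidates
        i = int(torch.argmax(score(parents)))     # the "easiest" merge first
        h = torch.cat([h[:i], parents[i:i + 1], h[i + 2:]], dim=0)
    return h[0]

print(greedy_encode(torch.randn(7, d)).shape)     # torch.Size([16])
```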
## 3 Beam Tree Cell
**Motivation:** Gumbel-Tree models, as described, are relatively fast and simple but they are fundamentally based on a greedy algorithm for a task where the greedy solution is not guaranteed to be optimal. On the other hand, adaptation of dynamic programming-based CYK-models (Maillard et al., 2019) leads to high computational complexity (see §5.3). A "middle way" between the two extremes is then to simply extend Gumbel-Tree models with beam-search to make them less greedy while still being less costly than CYK-parsers. Moreover, using beam-search also provides an additional opportunity to recover from local errors whereas a greedy single-path approach (like Gumbel-Tree models) will be stuck with any errors made. All these factors motivate the framework of Beam Tree Cells (BT-Cell).
**Implementation:** The beam search extension to Gumbel-Tree models is straightforward and similar to standard beam search. The method is described more precisely in Appendix A.1 and Algorithm 2. In summary, in BT-Cell, given a beam size \(k\), we maintain a maximum of \(k\) hypotheses (or beams) at each recursion. In any given iteration, each beam constitutes a sequence of hidden states representing a particular path of composition and an associated score for that beam, based on the addition of log-softmaxed outputs of the \(score\) function (as defined in §2.2) over each chosen composition for that sequence. At the end of the recursion, we will have \(k\) sentence encodings (\((o_{1},o_{2},\ldots,o_{k})\) where \(o_{i}\in\mathrm{I\!R}^{d_{h}}\)) and their corresponding scores (\((s_{1},s_{2},\ldots,s_{k})\) where \(s_{i}\in\mathrm{I\!R}^{1}\)). The final sequence encoding can then be represented as: \(\sum_{i=1}^{k}\frac{\exp(s_{i})}{\sum_{j=1}^{k}\exp(s_{j})}\,o_{i}\). This amounts to computing the expectation over the \(k\) sequence encodings.
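A sketch of this final mixing step is shown below; \(o\) and \(s\) stand for the \(k\) beam encodings and their accumulated log-scores, here filled with random placeholders.

```python
import torch

def mix_beams(o, s):
    """o: (k, d_h) beam encodings; s: (k,) beam scores.
    Returns the softmax-weighted expectation over the beams."""
    w = torch.softmax(s, dim=0)          # exp(s_i) / sum_j exp(s_j)
    return (w.unsqueeze(-1) * o).sum(dim=0)

o = torch.randn(5, 64)                   # k = 5 beams, d_h = 64
s = torch.randn(5)
print(mix_beams(o, s).shape)             # torch.Size([64])
```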
### Top \(k\) Variants
As in standard beam search, BT-Cell requires two top-\(k\) operators. The first top-\(k\) replaces the straight-through Gumbel Softmax (simulating top-1) in Gumbel-Tree models. However, selecting and maintaining \(k\) possible choices for every beam in every iteration leads to an exponential increase in the number of total beams. Thus, a second top-\(k\) operator is used for pruning the beams to maintain only a maximum of \(k\) beams at the end of each iteration. Here, we focus on variations of the second top-\(k\) operator that is involved in truncating beams.
**Plain Top-\(k\):** The simplest variant is just the vanilla top-\(k\) operator. However, the vanilla top-\(k\) operator is discrete and non-differentiable preventing gradient propagation to non-selected paths.3 Despite that, this can still work for the following reasons: (1) gradients can still pass through
the final top \(k\) beams and scores. The scorer function can thus learn to increase the scores of better beams and lower the scores of the worse ones among the final \(k\) beams; (2) a rich enough cell function can be robust to local errors in the structure and learn to adjust for it by organizing information better in its hidden states. We believe that as a combination of these two factors, plain BT-Cell even with non-differentiable top-\(k\) operators can learn to perform well for structure-sensitive tasks (as we will empirically observe).
**OneSoft Top-\(k\):** While non-differentiable top-\(k\) operators can work, they still can be a bottleneck because gradient signals will be received only for \(k\) beams in a space of exponential possibilities. To address this issue, we consider if we can make the truncation or deletion of beams "softer". To that end, we develop a new Top-\(k\) operator that we call OneSoft Top-\(k\). We motivate and describe it below.
As a concrete case, assume we have \(m\) beams (sequences and their corresponding scores). The target for a top-\(k\) operator is to keep only the top scoring \(k\) beams (where \(k\leq m\)). Ideally, we want to keep the beam representations "sharp" and avoid washed out representations owing to interpolation (weighted vector averaging) of distinct paths (Drozdov et al., 2020). This can be achieved by plain top-\(k\). However, it prevents propagation of gradient signals through the bottom \(m-k\) beams. Another line of approach is to create a soft permutation matrix \(P\in\mathds{R}^{m\times m}\) through a differentiable sorting algorithm such that \(P_{ij}\) represents the probability of the \(i^{th}\) beam being the \(j^{th}\) highest scoring beam. \(P\) can then be used to softly select the top \(k\) beams. However, running differentiable sorting in a recursive loop can significantly increase computation overheads and can also create more "washed out" representations, leading to higher error accumulation (also see §5.1 and §5.3). We tackle all these challenges by instead proposing OneSoft as a simple hybrid strategy to approach top-\(k\) selection. We provide a formal description of our proposed strategy below and a visualization of the process in Figure 1.
Assume we have \(m\) beams consisting of \(m\) sequences: \(H=(\mathcal{H}_{1},\ldots,\mathcal{H}_{m})\) (\(\mathcal{H}_{i}\in\mathds{R}^{n\times d_{h}}\) where \(n\) is the sequence length) and \(m\) corresponding scalar scores: \(S=(s_{1},\ldots,s_{m})\). First, we simply use the plain top-\(k\) operator to discretely select the top \(k-1\) beams (instead of \(k\)). This allows us to keep the most promising beams "sharp":
\[idx=topk(S,K=k-1),\quad\textit{Top}=\{(\mathcal{H}_{i},s_{i})\mid i\in idx\} \tag{3}\]
Second, for the \(k^{th}\) beam we instead perform a softmax-based marginalization of the bottom \(m-(k-1)\) beams. This allows us to still propagate gradients through the bottom scoring beams (unlike in the pure plain top-\(k\) operator):
\[B=\{(\mathcal{H}_{i},s_{i})\mid(i\notin idx)\wedge(i\in\{1,2,\ldots,m\})\} \tag{4}\]
\[Z=\sum_{(\mathcal{H},s)\in B}\exp(s) \tag{5}\]
\[\textit{SP}=\left(\sum_{(\mathcal{H},s)\in B}\left(\frac{\exp(s)}{Z}\cdot\mathcal{H}\right),\sum_{(\mathcal{H},s)\in B}\left(\frac{\exp(s)}{Z}\cdot s\right)\right) \tag{6}\]
Here \(B\) represents the bottom scoring \(m-(k-1)\) beams and \(SP\) represents the softmax-based marginalization. Finally, we add the \(SP\) to the top \(k-1\) discretely selected beams to get the final set of \(k\) beams: \(\textit{Top}\cup\{SP\}\). Thus, we get to achieve a "middle way" between plain top-\(k\) and differentiable sorting: partially getting the benefit of sharp representations of the former through discrete top \(k-1\) selection, and partially getting the benefit of gradient propagation of the latter through soft-selection of the \(k^{th}\) beam. In practice, we switch to plain top-\(k\) during inference. This makes tree extraction convenient during inference if needed.
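Since Eqs. (3)-(6) fully specify the operator, it can be sketched in a few lines; the version below operates on a single example (no batching) and omits the inference-time switch to plain top-\(k\) mentioned above.

```python
import torch

def onesoft_topk(H, S, k):
    """OneSoft Top-k (Eqs. 3-6). H: (m, n, d) beam states; S: (m,)
    beam scores. Keeps the top k-1 beams discretely and replaces the
    remaining m-(k-1) beams with their softmax-weighted average."""
    top_s, idx = torch.topk(S, k - 1)             # Eq. (3): hard top k-1
    top_h = H[idx]
    mask = torch.ones_like(S, dtype=torch.bool)
    mask[idx] = False                             # Eq. (4): bottom beams B
    w = torch.softmax(S[mask], dim=0)             # Eqs. (5)-(6) weights
    soft_h = (w.view(-1, 1, 1) * H[mask]).sum(dim=0, keepdim=True)
    soft_s = (w * S[mask]).sum().view(1)
    return torch.cat([top_h, soft_h], 0), torch.cat([top_s, soft_s], 0)

H, S = torch.randn(4, 7, 16), torch.randn(4)      # m = 4 beams
h, s = onesoft_topk(H, S, k=3)
print(h.shape, s.shape)                           # (3, 7, 16), (3,)
```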
## 4 Experiments and Results
We present the main models below. Hyperparameters and other architectural details are in Appendix G.
Figure 1: Visualization of OneSoft Top-\(k\) selection from \(m=4\) beams to top \(k=3\) beams. (+) represents interpolation.
**1. RecurrentGRC:** RecurrentGRC is an RNN implemented with the Gated Recursive Cell (GRC) (Shen et al., 2019) as the cell function (see Appendix B for description of GRC). **2. GoldTreeGRC:** GoldTreeGRC is a GRC-based RvNN with gold tree structures. **3. GumbelTreeGRC:** This is the same as GumbelTreeLSTM (Choi et al., 2018) but with GRC instead of LSTM. **4. CYK-GRC:** This is the CYK-based model proposed by Maillard et al. (2019) but with GRC. **5. Ordered Memory (OM):** This is a memory-augmented RNN simulating certain classes of RvNN functions as proposed by Shen et al. (2019). OM also uses GRC. **6. CRvNN:** CRvNN is a variant of RvNN with a continuous relaxation over its structural operations as proposed by Chowdhury and Caragea (2021). CRvNN also uses GRC. **7. BT-GRC:** BT-Cell with GRC cell and plain top-\(k\). **8. BT-GRC + OneSoft:** BT-GRC with OneSoft top-\(k\). For experiments with BT-Cell models, we set beam size as \(5\) as a practical choice (neither too big nor too small).
### ListOps Length Generalization Results
**Dataset Settings:** ListOps (Nangia and Bowman, 2018) is a challenging synthetic task that requires solving nested mathematical operations over lists of arguments. We present our results on ListOps in Table 1. To test length-generalization performance, we train the models only on sequences of length \(\leq 100\) (we filter out the rest) and test on splits of much larger lengths (e.g., \(200\)-\(300\) or \(900\)-\(1000\)) taken from Havrylov et al. (2019). "Near-IID" is the original test set of ListOps (it is "near" IID and not fully IID because a percentage of the split has sequences of length \(>100\), whereas such lengths are absent from the training split). We also report the mean accuracy with standard deviation on ListOps in Appendix E.6.
**Results: RecurrentGRC:** As discussed in §2.1, RNNs have to model tree structures implicitly in their bounded hidden states and thus can struggle to generalize to unseen structural depths. This is reflected in the sharp degradation of its length-generalization performance. **GumbelTreeGRC:** Consistent with prior work (Nangia and Bowman, 2018), Gumbel-Tree models fail to perform well on this task, likely due to their biased gradient estimation. **CYK-GRC:** CYK-GRC shows some promise for length generalization but was too slow to run on higher lengths. **Ordered Memory (OM):** Here, we find that OM struggles to generalize to higher unseen lengths. OM's reliance on soft sequential updates in a nested loop can lead to higher error accumulation over larger unseen lengths or depths. **CRvNN:** Consistent with Chowdhury and Caragea (2021), CRvNN performs relatively well at higher lengths. **BT-GRC:** Here, we find a massive boost over Gumbel-Tree baselines even when using the base model. Remarkably, in the \(900\)-\(1000\) length-generalization split, BT-GRC increases the performance of GumbelTreeGRC from \(37.9\%\) to \(86.7\%\) simply by incorporating beam search with plain top-\(k\). **BT-GRC+OneSoft:** As discussed in §3.1, BT-GRC+OneSoft counteracts the bottleneck of gradient propagation being limited to only \(k\) beams and achieves near-perfect length generalization, as we can observe in Table 1.
### ListOps Argument Generalization Results
**Dataset Settings:** While length generalization (Havrylov et al., 2019; Chowdhury and Caragea, 2021) and depth generalization (Csordas et al., 2022) have been tested before for ListOps, performance on argument generalization is yet to be considered. In this paper, we ask what would happen if we increase the number of arguments in the test set beyond the maximum number encountered during training. The training set of the original ListOps data only has \(\leq 5\) arguments for each operator. To test for argument
\begin{table}
\begin{tabular}{l|c|c c c|c c|c} \hline \hline
**Model** & **near-IID** & \multicolumn{3}{c|}{**Length Gen.**} & \multicolumn{3}{c|}{**Argument Gen.**} & **LRA** \\ (Lengths) & \(\leq 1000\) & 200-300 & 500-600 & 900-1000 & 100-1000 & \(2000\) \\ (Arguments) & \(\leq 5\) & \(\leq 5\) & \(\leq 5\) & \(\leq 5\) & \(10\) & \(15\) & \(10\) \\ \hline \multicolumn{7}{l}{_With gold trees_} \\ \hline GoldTreeGRC & \(99.95\) & \(99.88\) & \(99.85\) & \(100\) & \(80.5\) & \(79\) & \(78.1\) \\ \hline \multicolumn{7}{l}{_Baselines without gold trees_} \\ \hline RecurrentGRC & \(84.05\) & \(33.85\) & \(20.2\) & \(15.1\) & \(37.35\) & \(30.10\) & \(20.7\) \\ GumbelTreeGRC & \(74.89\) & \(47.6\) & \(43.85\) & \(37.9\) & \(51.35\) & \(50.5\) & \(46.1\) \\ CYK-GRC & \(97.87\) & \(93.75\) & — & — & \(60.75\) & \(42.45\) & — \\ Ordered Memory & \(99.88\) & \(\mathbf{99.55}\) & \(92.7\) & \(76.9\) & \(\mathbf{84.15}\) & \(\mathbf{75.05}\) & \(\mathbf{80.1}\) \\ CRvNN & \(99.82\) & \(99.5\) & \(98.5\) & \(\mathbf{98}\) & \(65.45\) & \(45.1\) & \(55.38\) \\ \hline \multicolumn{7}{l}{_Ours_} \\ \hline BT-GRC & \(99.39\) & \(96.15\) & \(92.55\) & \(86.7\) & \(77.1\) & \(63.7\) & \(67.3\) \\ BT-GRC + OneSoft & \(\mathbf{99.92}\) & \(99.5\) & \(\mathbf{99}\) & \(97.2\) & \(76.05\) & \(67.9\) & \(71.8\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy on ListOps. For our models, we report the median of \(3\) runs, except for CYK-GRC, which was run only once due to its high resource demands. Our models were trained on lengths \(\leq\) 100, depth \(\leq\) 20, and arguments \(\leq\) 5. We bold the best results and underline the second-best among models that do not use gold trees.
generalization, we created two new splits: one with \(10\) arguments per operator and another with \(15\) arguments per operator. In addition, we also consider the test set of ListOps from the Long Range Arena (LRA) dataset (Tay et al., 2021), which serves as a check for both length generalization (it has sequences of length \(2000\)) and argument generalization (it has \(10\) arguments) simultaneously.4 The results are in Table 1.
Footnote 4: Note that the LRA test set is in-domain for the LRA training set and thus does not originally test for argument generalization.
**Results:** Interestingly, we find that all the models perform relatively poorly (\(<90\%\)) on argument generalization. Nevertheless, after OM, BT-GRC-based models perform the best on this split. Comparatively, OM performs quite well here, even better than GoldTreeGRC. This shows that OM's performance is not due to just better parsing. We can also tell that its performance is not due merely to its recursive cell (GRC), because the same cell is shared by other models that do not perform nearly as well. This may suggest that the memory-augmented RNN style setup in OM is more amenable to argument generalization. Note that Transformer-based architectures tend to get \(\leq 40\%\) on the LRA test set for ListOps (Tay et al., 2021) despite training on in-distribution data, whereas BT-GRC can still generalize to between \(60\%\) and \(80\%\) in OOD settings.
### Semantic Analysis (SST and IMDB) Results
**Dataset Settings:** SST5 (Socher et al., 2013) and IMDB (Maas et al., 2011) are natural language classification datasets (for sentiment classification). For IMDB, to focus on OOD performance, we also test our models on the contrast set (Con.) from (Gardner et al., 2020) and the counterfactual test set (Count.) from (Kaushik et al., 2020). We present our results on these datasets in Table 2.
**Results:** The results on these natural language tasks are rather mixed. There are, however, some interesting highlights. CRvNN and OM do particularly well in the OOD splits (contrast set and counterfactual split) of IMDB, correlating with their better OOD generalization on synthetic data. BT-GRC + OneSoft remains competitive with OM and CRvNN in those splits and is better than any other model besides CRvNN and OM. STE Gumbel-based models tend to perform particularly poorly on IMDB.
### Natural Language Inference Experiments
**Dataset Settings:** We ran our models on MNLI (Williams et al., 2018), which is a natural language inference task. We tested our models on the development set of MNLI and used a randomly sampled subset of \(10,000\) data points from the original training set as the development set. Our training setup is different from Chowdhury and Caragea (2021) and other prior latent tree models that combine SNLI (Bowman et al., 2015) and MNLI training sets (in that we do not add SNLI data to MNLI). We filter sequences of length \(\geq 150\) from the training set for efficiency. We also test our models on various stress tests (Naik et al., 2018). We report the results in Table 2. In the table, M and MM denote the matched and mismatched development sets of MNLI (used as test sets). Len M, Len MM, Neg M, and Neg MM denote the Length Match, Length Mismatch, Negation Match, and Negation Mismatch stress test sets, respectively, all from Naik et al. (2018). Len M/Len MM increase the sequence length by adding tautologies. Neg M/Neg MM add tautologies containing "not" terms, which can bias the model to falsely predict contradictions.
**Results:** The results in Table 2 show that BT-GRC models perform comparably with the other models in the standard matched/mismatched sets (M and MM). However, they outperform all the other models on Len M and Len MM. Also, BT-GRC models tend to do better than the other models in Neg M and Neg MM. Overall, BT-Cell shows better robustness to stress tests.
## 5 Analysis
### Analysis of Neighbor Models
We also analyze some other models that are similar to BT-GRC in Table 3 as a form of ablation and show that BT-GRC is still superior to them. We describe these models below and discuss their performance on ListOps.
\begin{table}
\begin{tabular}{l|l l l|l l l l l l} \hline \hline
**Model** & **SST5** & \multicolumn{2}{c|}{**IMDB**} & \multicolumn{6}{c}{**MNLI**} \\ & **IID** & Con. OOD & Count. OOD & **M** & **MM** & **Len M** & **Len MM** & **Neg M** & **Neg MM** \\ \hline RecurrentGRC & \(52.19_{.5}\) & \(74.86_{28}\) & \(82.72_{19}\) & \(71.2_{3}\) & \(71.4_{4}\) & \(49_{25}\) & \(49.5_{24}\) & \(49.3_{6}\) & \(50.1_{6}\) \\ GumbelTreeGRC & \(51.67_{8.8}\) & \(70.63_{21}\) & \(81.97_{5}\) & \(71.2_{7}\) & \(71.2_{6}\) & \(57.5_{17}\) & \(59.6_{12}\) & \(50.5_{20}\) & \(51.8_{20}\) \\ Ordered Memory & \(52.30_{2.7}\) & \(76.98_{5.8}\) & \(83.68_{7.8}\) & \(\mathbf{72.5_{3}}\) & \(\mathbf{73_{2}}\) & \(56.5_{33}\) & \(57.1_{31}\) & \(50.9_{7}\) & \(51.7_{13}\) \\ CRvNN & \(51.75_{11}\) & \(\mathbf{77.80_{15}}\) & \(85.38_{3.5}\) & \(72.2_{4}\) & \(72.6_{5}\) & \(62_{44}\) & \(63.3_{47}\) & \(52.8_{6}\) & \(53.8_{4}\) \\ \hline \multicolumn{10}{l}{_Ours_} \\ \hline BT-GRC & \(\mathbf{52.32_{4.7}}\) & \(75.07_{29}\) & \(82.86_{23}\) & \(71.6_{2}\) & \(72.3_{1}\) & \(64.7_{6}\) & \(66.4_{5}\) & \(\mathbf{53.7_{37}}\) & \(\mathbf{54.8_{43}}\) \\ BT-GRC + OneSoft & \(51.92_{7.2}\) & \(75.68_{21}\) & \(84.77_{11}\) & \(71.7_{1}\) & \(71.9_{2}\) & \(\mathbf{65.6_{13}}\) & \(\mathbf{66.7_{9}}\) & \(53.2_{2}\) & \(54.2_{5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean accuracy and standard deviation on SST5, IMDB, and MNLI. Con. represents the Contrast set and Count. represents Counterfactuals. Our models were run \(3\) times on different seeds. Subscripts represent standard deviation; for example, \(90_{1}=90\pm 0.1\).
**BT-LSTM:** This is just BT-Cell with an LSTM cell (Hochreiter and Schmidhuber, 1997) instead of GRC. In Table 3, we find that BT-LSTM can still perform moderately well (showing the robustness of BT-Cell as a framework), but worse than BT-Cell with GRC. This is consistent with prior works showing the superiority of GRC as a cell function (Shen et al., 2019; Chowdhury and Caragea, 2021).
**BSRP-GRC:** This is an implementation of the Beam Shift Reduce Parser (Maillard and Clark, 2018) with a GRC cell. Similar to us, this approach applies beam search, but to a shift-reduce parsing model, as elaborated in Appendix C. Surprisingly, despite using a framework similar to BT-Cell, BSRP-GRC performs quite poorly in Table 3. We suspect this is because of the limited gradient signals from its top-\(k\) operators, coupled with the high recurrent depth for backpropagation (twice the sequence length) encountered in BSRP-GRC compared to that in BT-Cell (where the recurrent depth is the tree depth). Moreover, BSRP-GRC, unlike BT-Cell, also lacks the global competition among all parent compositions when making shift/reduce choices.
**MC-GumbelTreeGRC:** Here we propose a Monte Carlo approach to Gumbel Tree GRC. This model runs \(k\) Gumbel-Tree models with shared parameters in parallel. Since the models are stochastic, they can sample different latent structures. In the end, we average the final \(k\) sentence encodings, treating this as a Monte Carlo approximation. We set \(k=5\) to be comparable with BT-Cell. MC-GumbelTreeGRC is similar to BT-Cell in that it can model different structural interpretations. However, it fails to do so as effectively as BT-Cell on ListOps. We suspect this is because beam-search-based structure selection allows more competition between structure candidates when using top-\(k\) for truncation and thus enables better structure induction.
**BT-GRC+SOFT:** This model incorporates another potential alternative to OneSoft within BT-GRC. It replaces OneSoft with SOFT Top-\(k\), a differentiable sorting algorithm that was previously used in beam search for language generation (Xie et al., 2020). However, it performs poorly. Its poor performance supports our earlier conjecture (§3.1) that using a soft permutation matrix in all recursive iterations is not ideal, because of increased chances of error accumulation and more "washing out" through weighted averaging of distinct beams.
### OneSoft Top-\(k\) with Lower Beam Size
We motivated (§3.1) the proposal of OneSoft top-\(k\) specifically to counteract the bottleneck of gradient propagation being limited to only \(k\) beams in the base BT-Cell model (with plain top-\(k\)). While our experiments in Table 1 validate this for beam size \(5\), the bottleneck should be even worse when \(k\) (the beam size) is low (e.g., \(2\)). Based on our motivation, OneSoft should perform much better than plain top-\(k\) when the beam size is low. We perform experiments with beam size \(2\) on ListOps to check whether that is true and show the results in Table 3. As we can see, OneSoft indeed performs much better than plain top-\(k\) at the lower beam size of \(2\): BT-GRC gets only \(50.2\%\) on the \(900\)-\(1000\) split of ListOps, whereas BT-GRC+OneSoft gets \(91\%\). As we would expect, beam size \(5\) (from Table 1, also shown verbatim in Table 3) still outperforms beam size \(2\) in a comparable setting. We report some additional results with beam size \(2\) in Appendix E.4.
### Efficiency Analysis
**Settings:** In Table 4, we compare the empirical performance of various models in terms of time and memory. We train each model on ListOps splits of different sequence lengths (\(200\)-\(250\), \(500\)-\(600\), and \(900\)-\(1000\)). Each split contains \(100\) samples. The batch size is set to \(1\). Other hyperparameters are the same as those used for ListOps. For CRvNN, we show
\begin{table}
\begin{tabular}{l|c|c c c|c c|c} \hline \multicolumn{1}{c|}{**Model**} & \multicolumn{1}{c|}{**near-IID**} & \multicolumn{3}{c|}{**Length Gen.**} & \multicolumn{2}{c|}{**Argument Gen.**} & \multicolumn{1}{c}{**LRA**} \\ (Lengths) & \(\leq 1000\) & \(200\)-\(300\) & \(500\)-\(600\) & \(900\)-\(1000\) & \multicolumn{2}{c|}{\(100\)-\(1000\)} & \(2000\) \\ (Arguments) & \(\leq 5\) & \(\leq 5\) & \(\leq 5\) & \(\leq 5\) & \(10\) & \(15\) & \(10\) \\ \hline \multicolumn{8}{l}{_BT-GRC Models (Beam size \(5\))_} \\ \hline BT-GRC & \(99.39\) & \(96.15\) & \(92.55\) & \(86.7\) & \(\mathbf{77.1}\) & \(63.7\) & \(67.3\) \\ BT-GRC + OneSoft & \(\mathbf{99.92}\) & \(\mathbf{99.5}\) & \(\mathbf{99}\) & \(\mathbf{97.2}\) & \(76.05\) & \(\mathbf{67.9}\) & \(\mathbf{71.8}\) \\ \hline \multicolumn{8}{l}{_Alternative models in the vicinity of BT-GRC_} \\ \hline BT-LSTM & \(94.1\) & \(85.1\) & \(83.5\) & \(78.8\) & \(67.9\) & \(44.3\) & \(57.9\) \\ BSRP-GRC & \(70.3\) & \(42.4\) & \(33.2\) & \(26.3\) & \(40.2\) & \(35.8\) & \(29.7\) \\ MC-GumbelTreeGRC & \(89.3\) & \(36.8\) & \(28.2\) & \(25.1\) & \(39.5\) & \(34\) & \(30.1\) \\ BT-GRC+SOFT & \(69\) & \(44\) & \(37.1\) & \(29.4\) & \(39.5\) & \(38.6\) & \(31.6\) \\ \hline \multicolumn{8}{l}{_Robustness of OneSoft top-\(k\) to a lower beam size (size \(2\))_} \\ \hline BT-GRC & \(94.18\) & \(68.2\) & \(56.85\) & \(50.2\) & \(64.45\) & \(56.95\) & \(55.85\) \\ BT-GRC + OneSoft & \(99.69\) & \(97.55\) & \(95.40\) & \(91\) & \(75.75\) & \(62\) & \(66.1\) \\ \hline \end{tabular}
\end{table}
Table 3: Accuracy of different models on ListOps (same setting as in Table 1). We report the median of \(3\) runs.
the worst-case performance (without early halt), because otherwise it halts too early, without having learned a proper halting policy from such limited training steps and data.
**Discussion:** RecurrentGRC and GumbelTreeGRC can be relatively efficient in terms of both runtime and memory consumption. BSRP-GRC and OM, being recurrent models, can be highly efficient in terms of memory, but their complex recurrent operations make them slow. CYK-GRC is the worst in terms of efficiency because of its expensive chart-based operation. CRvNN is faster than OM/BSRP-GRC, but its memory consumption can scale worse than BT-GRC's because of its Transformer-like attention matrices for neighbor retrieval. MC-GumbelTreeGRC is similar to a batched version of GumbelTreeGRC. BT-GRC performs similarly to MC-GumbelTreeGRC, showing that the cost of BT-GRC is similar to that of increasing the batch size of GumbelTreeGRC. BT-GRC + OneSoft performs similarly to CRvNN. BT-GRC + SOFT is much slower due to using a more expensive optimal-transport-based differentiable sorting mechanism (SOFT top-\(k\)) in every iteration. This shows another advantage of using OneSoft over more sophisticated alternatives.
### Additional Analysis and Experiments
**Heuristics Tree Models:** We analyze heuristics-based tree models (Random tree, balanced tree) in Appendix E.5.
**Synthetic Logical Inference**: We present our results on a challenging synthetic logical inference task (Bowman et al., 2015) in Appendix E.3. We find that most variants of BT-Cell can perform on par with prior SOTA models.
**Depth Generalization**: We also run experiments to test depth-generalization performance on ListOps (Appendix E.1). We find that BT-Cell can easily generalize to much higher depths and it does so more stably than OM.
**Transformers**: We experiment briefly with Neural Data Routers (Csordas et al., 2022) which is a Transformer-based model proven to do well in tasks like ListOps. However, we find that Neural Data Routers (NDRs), despite their careful inductive biases, still struggle with sample efficiency and length generalization compared to strong RvNN-based models. We discuss more in Appendix E.2.
**Parse Tree Analysis**: We analyze parsed trees and score distributions in Appendix E.7.
## 6 Related Works
Goldberg and Elhadad (2010) proposed the easy-first algorithm for dependency parsing. Ma et al. (2013) extended it with beam search for parsing tasks. Choi et al. (2018) integrated easy-first parsing with an RvNN. Similar to us, Maillard and Clark (2018) used beam search to extend shift-reduce parsing, whereas Drozdov et al. (2020) used beam search to extend CYK-based algorithms. However, BT-Cell-based models achieve higher accuracy than the former style of models (e.g., BSRP-GRC) and are computationally more efficient than the latter style of models (e.g., CYK-GRC). Similar to us, Collobert et al. (2019) also used beam search in an end-to-end fashion during training, but in the context of sequence generation. However, none of the above approaches explored beyond hard top-\(k\) operators in beam search. One exception is Xie et al. (2020), where a differentiable top-\(k\) operator (SOFT Top-\(k\)) is used in beam search for language generation, but as we show in §5.1 it does not work as well. Another exception is Goyal et al. (2018), where an iterated softmax is used to implement a soft top-\(k\) operator for differentiable beam search. However, iterated softmax operations can slow down BT-GRC and overall share similar limitations with SOFT Top-\(k\). Moreover, SOFT Top-\(k\) was shown to perform slightly better than iterated softmax (Xie et al., 2020), and we show that our OneSoft fares better than SOFT Top-\(k\) in §5.1 for our contexts. Besides Xie et al. (2020), there are multiple versions of differentiable top-\(k\) operators or sorting functions
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{6}{c|}{**Sequence Lengths**} \\ \hline
**Model** & \multicolumn{2}{c|}{\(\mathbf{200-250}\)} & \multicolumn{2}{c|}{\(\mathbf{500-600}\)} & \multicolumn{2}{c}{\(\mathbf{900-1000}\)} \\ & Time & Memory & Time & Memory & Time & Memory \\ \hline RecurrentGRC & \(0.2\) min & \(0.02\) GB & \(0.5\) min & \(0.02\) GB & \(1.3\) min & \(0.03\) GB \\ GumbelTreeGRC & \(0.5\) min & \(0.35\) GB & \(2.1\) min & \(1.95\) GB & \(3.5\) min & \(5.45\) GB \\ CYK-GRC & \(9.3\) min & \(32.4\) GB & OOM & OOM & OOM & OOM \\ BSRP-GRC & \(2.3\) min & \(0.06\) GB & \(6.1\) min & \(0.19\) GB & \(10.5\) min & \(0.42\) GB \\ Ordered Memory & \(8.0\) min & \(0.09\) GB & \(20.6\) min & \(0.21\) GB & \(38.2\) min & \(0.35\) GB \\ CRvNN & \(1.5\) min & \(1.57\) GB & \(4.3\) min & \(12.2\) GB & \(8.0\) min & \(42.79\) GB \\ MC-GumbelTreeGRC & \(1.1\) min & \(1.71\) GB & \(2.4\) min & \(9.85\) GB & \(4.3\) min & \(27.33\) GB \\ BT-GRC & \(1.1\) min & \(1.71\) GB & \(2.6\) min & \(9.82\) GB & \(5.1\) min & \(27.27\) GB \\ BT-GRC + OneSoft & \(1.4\) min & \(2.74\) GB & \(4.0\) min & \(15.5\) GB & \(7.1\) min & \(42.95\) GB \\ BT-GRC + SOFT & \(5.1\) min & \(2.67\) GB & \(12.6\) min & \(15.4\) GB & \(23.1\) min & \(42.78\) GB \\ \hline \hline \end{tabular}
\end{table}
Table 4: Empirical time and memory consumption for various models on an RTX A6000, run on \(100\) ListOps samples with batch size \(1\).
(Adams and Zemel, 2011; Plotz and Roth, 2018; Grover et al., 2019; Cuturi et al., 2019; Xie et al., 2020; Blondel et al., 2020; Petersen et al., 2021; 2022) (inter alia). We leave a more exhaustive analysis of them as future work. However, note that some of them would suffer from the same limitations as SOFT top-\(k\) (Xie et al., 2020): they can significantly slow down the model, and they can lead to "washed out" representations. We provide an extended related work survey in Appendix F.
## 7 Discussion
In this section, we first discuss the trade-offs associated with the different RvNN models that we compare. We then highlight some of the features of our BT-Cell model.
**CYK-Cell:** Our experiments do not show any empirical advantage of CYK-Cell over CRvNN/OM/BT-Cell. Moreover, computationally it offers the worst trade-offs. However, there are some specialized ways (Drozdov et al., 2019; 2020) in which CYK-Cell-based models can be used for masked language modeling that other models cannot match. Furthermore, Hu et al. (2021, 2022) also propose several strategies to make them more efficient in practice.
**Ordered Memory (OM)**: OM is preferable when memory is a priority over time. Its low memory footprint also allows for a high batch size, which alleviates its temporal cost. OM shows some length-generalization issues but otherwise performs well in general. It can also be used for autoregressive language modeling in a straightforward manner.
**CRvNN:** CRvNN also generally performs competitively. It can be relatively fast with dynamic halting, but its memory complexity can be a bottleneck, although that can be mitigated by fixing an upper bound on the maximum recursion.
**BT-Cell Features:** We highlight the salient features of BT-Cell below:
1. BT-Cell's memory consumption is better than CRvNN's (without halt), but it is generally slower than CRvNN (though faster than Ordered Memory).
2. As a framework, BT-Cell can be easier to build upon than OM/CRvNN/CYK-Cell owing to its conceptual simplicity.
3. Unlike CRvNN and OM, BT-Cell also provides all the intermediate node representations (spans) of the induced tree. Span representations can often have interesting use cases: they have been used in machine translation (Su et al., 2020; Patel and Flanigan, 2022), for enhancing compositional generalization (Bogin et al., 2021; Herzig and Berant, 2021), and for other natural language tasks (Patel and Flanigan, 2022) in general. We leave possible ways of integrating BT-Cell with other deep learning modules as future work. BT-Cell can also be a drop-in replacement for Gumbel Tree LSTM (Choi et al., 2018).
4. With BT-Cell, we can extract tree structures, which can offer some interpretability. The extracted structures can show some elements of ambiguity in parsing (different beams can correspond to different interpretations of ambiguous sentences). See Appendix E.7 for more details on this.
We also note that OneSoft, on its own, can be worthy of individual exploration as a semi-differentiable top-\(k\) function. Our experiments show its comparative advantage over a more sophisticated optimal-transport-based implementation of differentiable top-\(k\) (SOFT top-\(k\); Xie et al., 2020). In principle, OneSoft can serve as a general-purpose option whenever we need differentiable top-\(k\) selection in neural networks.
## 8 Limitations
While our Beam Tree Cell can serve as a "middle way" between Gumbel-Tree models and CYK-based models in terms of computational efficiency, the model is still quite expensive to run compared to even basic RNNs. Moreover, the study in this paper is only done in a small-scale setting, without pre-trained models and in only a single natural language (English). More investigation is needed in the future to test the cross-lingual modeling capacities of these RvNN models and to find ways to integrate them with more powerful pre-trained architectures.
## 9 Conclusion
We present BT-Cell as an intuitive way to extend RvNNs that is nevertheless highly competitive with more sophisticated models like Ordered Memory (OM) and CRvNN. In fact, BT-Cell is the only model that achieves moderate performance in argument generalization while also excelling in length generalization on ListOps. It also shows more robustness on MNLI, and overall it is much faster than OM or CYK-GRC. We summarize our main results in Appendix D. An ideal future direction would be to focus on argument generalization and systematicity while maintaining computational efficiency. We also aim for added flexibility for handling more relaxed structures like non-projective trees or directed acyclic graphs, as well as richer classes of languages (DuSell and Chiang, 2022; Deletang et al., 2022).
## Acknowledgments
This research is supported in part by NSF CAREER award #1802358, NSF IIS award #2107518, and UIC Discovery Partners Institute (DPI) award. Any opinions, findings, and conclusions expressed here are those of the authors and do
not necessarily reflect the views of NSF or DPI. We thank our anonymous reviewers for their constructive feedback.
|
2301.00081 | Finite abelian groups of K3 surfaces with smooth quotient | The quotient space of a $K3$ surface by a finite group is an Enriques surface
or a rational surface if it is smooth. Finite groups for which the quotient space
is an Enriques surface are known. In this paper, by analyzing effective divisors
on smooth rational surfaces, we will study finite groups which act faithfully
on $K3$ surfaces such that the quotient space is smooth. In particular, we
will completely determine the effective divisors on Hirzebruch surfaces for
which there is a finite Abelian cover from a $K3$ surface to a Hirzebruch
surface whose branch divisor is that effective divisor. Furthermore, we will
determine the Galois group and give a way to construct that Abelian cover from
an effective divisor on a Hirzebruch surface. Subsequently, we study the same
theme for Enriques surfaces. | Taro Hayashi | 2022-12-31T00:37:05Z | http://arxiv.org/abs/2301.00081v1 | # Finite Abelian groups of K3 surfaces with smooth quotient
###### Abstract.
The quotient space of a \(K3\) surface by a finite group is an Enriques surface or a rational surface if it is smooth. Finite groups for which the quotient space is an Enriques surface are known. In this paper, by analyzing effective divisors on smooth rational surfaces, we will study finite groups which act faithfully on \(K3\) surfaces such that the quotient space is smooth. In particular, we will completely determine the effective divisors on Hirzebruch surfaces for which there is a finite Abelian cover from a \(K3\) surface whose branch divisor is that effective divisor. Furthermore, we will determine the Galois group and give a way to construct that Abelian cover from an effective divisor on a Hirzebruch surface. Subsequently, we study the same theme for Enriques surfaces.
Keywords: K3 surface; finite Abelian group; Abelian cover of a smooth rational surface.
MSC2010: Primary 14J28; Secondary 14J50.
## 1. Introduction
In this paper, we work over \(\mathbb{C}\). A \(K3\) surface \(X\) is a smooth surface with \(h^{1}(\mathcal{O}_{X})=0\) and \(\mathcal{O}_{X}(K_{X})\cong\mathcal{O}_{X}\), where \(K_{X}\) is the canonical divisor of \(X\). In particular, a \(K3\) surface is simply connected. Finite groups acting faithfully on \(K3\) surfaces are well studied. Let \(\omega\) be a non-degenerate holomorphic two-form. An automorphism \(f\) of a \(K3\) surface is called symplectic if \(f^{*}\omega=\omega\). A finite subgroup \(G\) of automorphisms of a \(K3\) surface is called symplectic if \(G\) is generated by symplectic automorphisms. The minimal resolution \(X_{m}\) of the quotient space \(X/G\) is one of a \(K3\) surface, an Enriques surface, and a rational surface. The surface \(X_{m}\) is a \(K3\) surface if and only if \(G\) is a symplectic group. Symplectic groups are classified (see [10,13,16]). If the quotient space \(X/G\) is smooth, then it is an Enriques surface or a rational surface. The quotient space \(X/G\) is an Enriques surface if and only if \(G\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) as a group and the fixed locus of \(G\) is an empty set. It is not well known what kind of rational surface is realized as the quotient space of a \(K3\) surface by a finite subgroup of \(\operatorname{Aut}(X)\). In this paper, we will consider the case where \(X/G\) is a smooth rational surface. The minimal models of smooth rational surfaces are the projective plane \(\mathbb{P}^{2}\) and the Hirzebruch surfaces \(\mathbb{F}_{n}\) with \(n\neq 1\); recall that \(\mathbb{F}_{1}\) is isomorphic to \(\mathbb{P}^{2}\) blown up at a point. In other words, all smooth rational surfaces which are not minimal are \(\mathbb{F}_{1}\) or are obtained by blowups of \(\mathbb{F}_{n}\) for \(n\geq 0\). Therefore, if \(X/G\) is not \(\mathbb{P}^{2}\), then there is a birational morphism \(f:X/G\to\mathbb{F}_{n}\). Our first main results analyze the quotient space \(X/G\) and the group \(G\) when \(X/G\) is smooth.
**Theorem 1.1**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\) is smooth. For a birational morphism \(f:X/G\to\mathbb{F}_{n}\) from the quotient space \(X/G\) to a Hirzebruch surface \(\mathbb{F}_{n}\), we get that \(n=0,1,2,3,4,6,8\), or \(12\). Furthermore, if \(n=6,8,12\), then \(f\) is an isomorphism._
Let \(X\) be a \(K3\) surface, and let \(\omega\) be a non-degenerate holomorphic two-form on \(X\). For a finite subgroup \(G\) of \(\operatorname{Aut}(X)\), we write \(G_{s}\) for the set of symplectic automorphisms in \(G\). Then there is a short exact sequence \(1\to G_{s}\to G\xrightarrow{\varphi}C_{n}\to 1\), where \(C_{n}\) is a cyclic group of order \(n\) and \(\varphi(g):=\xi_{g}\in\mathbb{C}^{*}\) is defined by \(g^{*}\omega=\xi_{g}\omega\) in \(\operatorname{H}^{2,0}(X)\) for \(g\in G\).
**Theorem 1.2**.: _Let \(X\) be a \(K3\) surface, \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\) is smooth. Then the above exact sequence is split, i.e. there is a purely non-symplectic automorphism \(g\in G\) such that \(G\) is the semidirect product \(G_{s}\rtimes\langle g\rangle\) of \(G_{s}\) and \(\langle g\rangle\)._
Next, we will classify finite abelian groups which act faithfully on \(K3\) surfaces such that the quotient space is smooth.
**Definition 1.3**.: _We will use the following notations._
\[\mathcal{A}G:=\begin{Bmatrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/3 \mathbb{Z}^{\oplus b},\ \mathbb{Z}/4\mathbb{Z}^{\oplus c},\mathbb{Z}/2\mathbb{Z}^{\oplus d}\oplus \mathbb{Z}/3\mathbb{Z}^{\oplus e},\ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4 \mathbb{Z}^{\oplus g},\\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus h}\oplus\mathbb{Z }/4\mathbb{Z},\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/8\mathbb{Z}\\ :1\leq a\leq 5,\ 1\leq b,c\leq 3,\\ (d,e)=(1,1),(1,2),(1,3),(2,1),(2,2),(3,1),(3,2),\\ (f,g)=(1,1),(1,2),(2,1),(3,1),\ h=1,2\\ :a=1,2,3,4,5,\ c=1\ \text{or}\ 3,\ (d,e)=(1,1),(1,2),\ \text{or}\ (3,2)\end{Bmatrix}\]
\[\mathcal{A}G_{0}:=\begin{Bmatrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/3 \mathbb{Z}^{\oplus b},\ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus g}\\ :a=1,2,3,4,5,\ b=1,2,3,\ (f,g)=(1,1),(1,2),(2,1),(3,1)\end{Bmatrix}\]
\[\mathcal{A}G_{1}:=\begin{Bmatrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/4 \mathbb{Z}^{\oplus 2},\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus c},\ \mathbb{Z}/2 \mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z},\\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4 \mathbb{Z},\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/8\mathbb{Z}\\ :a=1,2,3,4,5,\ e=1,2,3,\ f=1,2,3\\ :a=1,2,3,4,\ b=1,2,3,\ (f,g)=(1,1),(1,2),(2,1),(3,1)\end{Bmatrix}\]
\[\mathcal{A}G_{3}:=\begin{Bmatrix}\mathbb{Z}/2\mathbb{Z}^{\oplus d}\oplus \mathbb{Z}/3\mathbb{Z}^{\oplus e},\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4 \mathbb{Z}\\ :(d,e)=(1,1),(1,2),(3,1)\end{Bmatrix}\]
\[\mathcal{A}G_{4}:=\begin{Bmatrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/4 \mathbb{Z},\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2},\ \mathbb{Z}/2 \mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z}\\ :a=1,2,3,\ f=1,2\\ \end{Bmatrix}\]
\[\mathcal{A}G_{6}:=\begin{Bmatrix}\mathbb{Z}/3\mathbb{Z}^{\oplus b},\ \mathbb{Z}/2 \mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/3\mathbb{Z}:b=1,2\end{Bmatrix}\]
\[\mathcal{A}G_{8}:=\left\{\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\right\}\qquad\mathcal{A}G_{12}:=\left\{\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\right\}\]
Notice that \(\mathcal{A}G=\bigcup_{n=0,1,2,3,4,6,8,12,\infty}\mathcal{A}G_{n}\). In [15], Uludag classified finite abelian groups for the case \(X/G\) is \(\mathbb{P}^{2}\). Furthermore, he gave the way to construct the pair \((X,G)\) where \(X\) is a \(K3\) surface and \(G\) is a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{2}\). We have the following.
**Theorem 1.4**.: _([15]) Let \(X\) be a \(K3\) surface and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that the quotient space \(X/G\) is isomorphic to \(\mathbb{P}^{2}\). Then \(G\) is one of \(\mathcal{A}G_{\infty}\) as a group. Conversely, for every \(G\in\mathcal{A}G_{\infty}\), there is a \(K3\) surface \(X^{\prime}\) and a finite abelian subgroup \(G^{\prime}\) of \(\operatorname{Aut}(X^{\prime})\) such that \(X^{\prime}/G^{\prime}\cong\mathbb{P}^{2}\) and \(G^{\prime}\cong G\) as a group._
By analyzing the irreducible components of the branch locus of the quotient map \(p:X\to X/G\), we will study a pair \((X,G)\) consisting of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\) such that the quotient space \(X/G\) is smooth. More precisely, the preimage of the branch locus of \(p\) is \(\bigcup_{g\in G\setminus\{\operatorname{id}_{X}\}}\operatorname{Fix}(g)\), where \(\operatorname{Fix}(g):=\{x\in X:g(x)=x\}\). Recall that for an automorphism \(f\) of finite order of a \(K3\) surface, if \(\operatorname{Fix}(f)\) contains a curve, then \(f\) is non-symplectic. The fixed locus of a non-symplectic automorphism is well known, e.g. [1,2,14]. By analyzing the fixed loci of the non-symplectic automorphisms in \(G\), we will reconstruct \(G\) from the branch divisor of the quotient map. In Section 4, we will investigate the relationship between a branch divisor and the exceptional divisors of blowups. Based on the above results, we will obtain our second main result.
**Theorem 1.5**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that the quotient space \(X/G\) is smooth. Then \(G\) is one of \(\mathcal{A}G\) as a group. Conversely, for every \(G\in\mathcal{A}G\), there is a \(K3\) surface \(X^{\prime}\) and a finite abelian subgroup \(G^{\prime}\) of \(\operatorname{Aut}(X^{\prime})\) such that \(X^{\prime}/G^{\prime}\) is smooth and \(G^{\prime}\cong G\) as a group._
Furthermore, in Section 3, for a Hirzebruch surface \(\mathbb{F}_{n}\) and an effective divisor \(B\) on \(\mathbb{F}_{n}\), we will give a necessary and sufficient condition for the existence of a finite Abelian cover \(f:X\to\mathbb{F}_{n}\) such that \(X\) is a \(K3\) surface and the branch divisor of \(f\) is \(B\). In other words, we will solve a part of Fenchel's problem for Hirzebruch surfaces. In addition, we will determine the Galois group and give a way to construct \(f:X\to\mathbb{F}_{n}\) from the pair consisting of \(\mathbb{F}_{n}\) and \(B\).
**Theorem 1.6**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that the quotient space \(X/G\) is isomorphic to \(\mathbb{F}_{n}\). Then \(G\) is one of \(\mathcal{A}G_{n}\) as a group. Conversely, for every \(G\in\mathcal{A}G_{n}\), there is a \(K3\) surface \(X^{\prime}\) and a finite abelian subgroup \(G^{\prime}\) of \(\operatorname{Aut}(X^{\prime})\) such that \(X^{\prime}/G^{\prime}\) is isomorphic to \(\mathbb{F}_{n}\) and \(G^{\prime}\cong G\) as a group._
Subsequently, we will get a similar result for Enriques surfaces.
**Definition 1.7**.: _We use the following notations._
\[\mathcal{A}G(E):=\left\{\begin{matrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/4\mathbb{Z}^{\oplus 2},\ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z},\ \mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/8\mathbb{Z}\\ \ :a=2,3,4,\ f=1,2\end{matrix}\right\}\]
\[\mathcal{A}G_{\infty}(E):=\left\{\mathbb{Z}/2\mathbb{Z}^{\oplus a}:\ a=2,3,4\right\}\]
\[\mathcal{A}G_{0}(E):=\left\{\begin{matrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a}, \ \mathbb{Z}/4\mathbb{Z}^{\oplus 2},\ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus \mathbb{Z}/4\mathbb{Z}\\ \ :a=2,3,4,\ f=1,2\end{matrix}\right\}\]
\[\mathcal{A}G_{1}(E):=\left\{\begin{matrix}\mathbb{Z}/2\mathbb{Z}^{\oplus a}, \ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z},\ \mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/8\mathbb{Z}\\ \ :a=2,3,4,\ f=1,2\end{matrix}\right\}\]
\[\mathcal{A}G_{2}(E):=\left\{\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/4 \mathbb{Z}^{\oplus 2},\ \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}:\ a=2,3\right\}\]
\[\mathcal{A}G_{4}(E):=\left\{\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\right\}\]
Then \(\mathcal{AG}(E)=\bigcup_{n=0,1,2,4,\infty}\mathcal{AG}_{n}(E)\). Let \(E\) be an Enriques surface and \(H\) be a finite abelian subgroup of \(\operatorname{Aut}(E)\) such that \(E/H\) is smooth. Let \(X\) be the \(K3\)-cover of \(E\), and let \(G:=\{s\in\operatorname{Aut}(X):s\text{ is a lift of some }h\in H\}\). Then \(G\) is a finite abelian subgroup of \(\operatorname{Aut}(X)\), \(G\) has a non-symplectic involution whose fixed locus is empty, and \(X/G=E/H\). The case of \(E/H\cong\mathbb{P}^{2}\) was studied in [8]. By analyzing the groups of Theorem 1.5, we get the following theorems:
**Theorem 1.8**.: _Let \(E\) be an Enriques surface and \(H\) be a finite subgroup of \(\operatorname{Aut}(E)\) such that the quotient space \(E/H\) is smooth. If there is a birational morphism from \(E/H\) to a Hirzebruch surface \(\mathbb{F}_{n}\), then \(0\leq n\leq 4\). In particular, if the quotient space \(E/H\) is a Hirzebruch surface \(\mathbb{F}_{n}\), then \(n=0,1,2,4\)._
**Theorem 1.9**.: _Let \(E\) be an Enriques surface and \(H\) be a finite abelian subgroup of \(\operatorname{Aut}(E)\) such that the quotient space \(E/H\) is isomorphic to \(\mathbb{F}_{n}\). Then \(H\) is one of \(\mathcal{AG}_{n}(E)\) as a group. Conversely, for every \(H\in\mathcal{AG}_{n}(E)\), there is an Enriques surface \(E^{\prime}\) and a finite abelian subgroup \(H^{\prime}\) of \(\operatorname{Aut}(E^{\prime})\) such that \(E^{\prime}/H^{\prime}\) is isomorphic to \(\mathbb{F}_{n}\) and \(H^{\prime}\cong H\) as a group._
**Theorem 1.10**.: _Let \(E\) be an Enriques surface and \(H\) be a finite abelian subgroup of \(\operatorname{Aut}(E)\) such that the quotient space \(E/H\) is smooth. Then \(H\) is one of \(\mathcal{AG}(E)\) as a group. Conversely, for every \(H\in\mathcal{AG}(E)\), there is an Enriques surface \(E^{\prime}\) and a finite abelian subgroup \(H^{\prime}\) of \(\operatorname{Aut}(E^{\prime})\) such that \(E^{\prime}/H^{\prime}\) is smooth and \(H^{\prime}\cong H\) as a group._
Section 2 is preliminaries. In Section 3.1, we will give examples of the pairs \((X^{\prime},G^{\prime})\) described in Theorem 1.5. In other words, we will show that for each \(G\in\mathcal{A}G_{n}\) where \(n=0,1,2,3,4,6,8,12\), there is a pair \((X^{\prime},G^{\prime})\), where \(X^{\prime}\) is a \(K3\) surface and \(G^{\prime}\) is a finite abelian subgroup of \(\operatorname{Aut}(X^{\prime})\), such that \(G\cong G^{\prime}\) as a group and \(X^{\prime}/G^{\prime}\cong\mathbb{F}_{n}\). Furthermore, we will give the way to construct \((X^{\prime},G^{\prime})\), and we will show that this construction is uniquely determined up to isomorphism by the branch divisor of the quotient map \(p:X^{\prime}\to X^{\prime}/G^{\prime}\). In Section 3.2, we will describe the branch divisors and abelian groups for the case where the quotient space is a Hirzebruch surface. In Section 4, first, we will show Theorems 1.1 and 1.2. Next, we will show that for a pair \((X,G)\) where \(X\) is a \(K3\) surface and \(G\) is a finite abelian subgroup, if \(X/G\) is smooth, then \(G\) is isomorphic to one of \(\mathcal{A}G\) as a group. In Section 5, we will show Theorems 1.8, 1.9, and 1.10. In Section 6, based on [6], we will describe the existence of a \(K3\) surface \(X\) and a finite group \(G\), not necessarily abelian, such that \(X/G\) is smooth and \(X/G\) is neither \(\mathbb{P}^{2}\) nor an Enriques surface.
## 2. Preliminaries
We recall the properties of Galois covers.
**Definition 2.1**.: _Let \(f:X\to M\) be a branched covering, where \(M\) is a complex manifold and \(X\) is a normal complex space. We call \(f:X\to M\) a Galois cover if there is a subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong M\) and \(f:X\to M\) is isomorphic to the quotient map \(p:X\to X/G\cong M\). We call \(G\) the Galois group of \(f:X\to M\). Furthermore, if \(G\) is an abelian group, then we call \(f:X\to M\) an Abelian cover._
**Definition 2.2**.: _Let \(f:X\to M\) be a finite branched covering, where \(M\) is a complex manifold and \(X\) is a normal complex space, and let \(\Delta\) be the branch locus of \(f\). Let \(B_{1},\ldots,B_{s}\) be irreducible hypersurfaces of \(M\) and let \(b_{1},\ldots,b_{s}\) be positive integers with \(b_{i}\geq 2\) for \(i=1,\ldots,s\). If \(\Delta=B_{1}\cup\ldots\cup B_{s}\) and, for every \(j\) and any irreducible component \(D\) of \(f^{-1}(B_{j})\), the ramification index at \(D\) is \(b_{j}\), then we call the effective divisor \(B:=\sum_{i=1}^{s}b_{i}B_{i}\) the branch divisor of \(f\)._
Let \(X\) be a normal projective variety and \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\). Let \(Y:=X/G\) be the quotient space and \(p:X\to Y\) be the quotient map. The branch locus, denoted by \(\Delta\), is the subset of \(Y\) given by \(\Delta:=\{y\in Y|\ |p^{-1}(y)|<|G|\}\). It is known that \(\Delta\) is an algebraic subset of dimension \(\dim{(X)}-1\) if \(Y\) is smooth [19]. Let \(\{B_{i}\}_{i=1}^{r}\) be the irreducible components of \(\Delta\) whose dimension is \(1\). Let \(D\) be an irreducible component of \(p^{-1}(B_{j})\) and \(G_{D}:=\{g\in G:g_{|D}=\operatorname{id}_{D}\}\). Then the ramification index at \(D\) is \(b_{j}:=|G_{D}|\), and the positive integer \(b_{j}\) is independent of the choice of irreducible component of \(p^{-1}(B_{j})\). Then \(b_{1}B_{1}+\cdots+b_{r}B_{r}\) is the branch divisor of \(p\). We state the facts (Theorems 2.3 and 2.4) of Galois cover theory which we need.
**Theorem 2.3**.: _([12]) For a complex manifold \(M\) and an effective divisor \(B\) on \(M\), if there is a branched covering map \(f:X\to M\), where \(X\) is a simply connected complex manifold and the branch divisor of \(f\) is \(B\), then there is a subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong M\) and \(f:X\to M\) is isomorphic to the quotient map \(p:X\to X/G\cong M\). Furthermore, the pair \((X,G)\) is unique up to isomorphism._
**Theorem 2.4**.: _([12]) For a complex manifold \(M\) and an effective divisor \(B:=\sum_{i=1}^{n}b_{i}B_{i}\) on \(M\), where \(B_{i}\) is an irreducible hypersurface for \(i=1,\ldots,n\), let \(f:X\to M\) be a branched cover whose branch divisor is \(B\) and where \(X\) is a simply connected complex manifold. Then for a branched cover \(g:Y\to M\) whose branch divisor is \(\sum_{j=1}^{m}b_{j}^{\prime}B_{j}\), where \(b_{j}^{\prime}\) divides \(b_{j}\) for each \(j\) and \(m\leq n\), there is a branched cover \(h:X\to Y\) such that \(f=g\circ h\)._
Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth. Since \(K3\) surfaces are simply connected, \(G\) is determined by the branch divisor of the quotient map \(p:X\to X/G\) by Theorem 2.3. In order to classify the finite abelian groups \(G\) which act on \(K3\) surfaces with smooth quotient space, we will search for a smooth rational surface \(S\) and an effective divisor \(B\) on \(S\) such that there is a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) with \(X/G\cong S\) and such that the branch divisor of the quotient map \(p:X\to X/G\) is \(B\). There is a classical problem called Fenchel's problem.
**Problem 2.5**.: _Let \(M\) be a projective manifold. Give a necessary and sufficient condition on an effective divisor \(D\) on \(M\) for the existence of a finite Galois (resp. Abelian) cover \(\pi:X\to M\) whose branch divisor is \(D\)._
Fenchel's problem was originally posed for compact Riemann surfaces and was answered by Bundgaard-Nielsen [4] and Fox [5].
**Theorem 2.6**.: _([4],[5]) Let \(k\geq 1\) and let \(D:=\sum_{i=1}^{k}m_{i}x_{i}\) be a divisor on a compact Riemann surface \(M\) where \(x_{i}\in M\) and \(m_{i}\in\mathbb{Z}\) for \(i=1,\ldots,k\). Then there is a finite Galois cover \(p:X\to M\) such that the branch divisor of \(p\) is \(D\) except for i) \(M=\mathbb{P}^{1}\) and \(k=1\), and ii) \(M=\mathbb{P}^{1}\), \(k=2\), and \(m_{1}\neq m_{2}\)._
_Furthermore, for the case \(M=\mathbb{P}^{1}\) there exists a finite Abelian cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) whose branch divisor is \(D\) if and only if_
_i) \(k=2\) and \(m_{1}=m_{2}\) or ii) \(k=3\) and \(m_{1}=m_{2}=m_{3}=2\)._
In order to study the Galois cover \(X\to X/G\), the following theorem is useful.
**Theorem 2.7**.: _Let \(X\) be a smooth projective variety, \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth. Let \(p:X\to X/G\) be the quotient map, and \(B:=b_{1}B_{1}+\ldots+b_{r}B_{r}\) be the branch divisor of \(p\). Then_
\[K_{X}=p^{*}K_{X/G}+\sum_{i=1}^{r}\frac{b_{i}-1}{b_{i}}p^{*}B_{i}\]
_where \(K_{X}\)\((\)resp. \(K_{X/G})\) is the canonical divisor of \(X\)\((\)resp. \(X/G)\)._
Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth, and let \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). The canonical line bundle of a \(K3\) surface is trivial. Hence, by Theorem 2.7, the branch divisor is constrained in the Picard group of the smooth rational surface \(X/G\); i.e., \(B\) must satisfy
\[K_{X/G}+\sum_{i=1}^{r}\frac{b_{i}-1}{b_{i}}B_{i}=0\text{ in }\operatorname{Pic} _{\mathbb{Q}}(X/G).\]
In Section 3.1, we will show that for a Hirzebruch surface \(\mathbb{F}_{n}\), if \(\mathbb{F}_{n}\) has an effective divisor \(B=\sum_{i=1}^{k}b_{i}B_{i}\), where \(B_{i}\) is an irreducible curve and \(b_{i}\geq 2\) for \(i=1,\ldots,k\), such that \(\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}B_{i}+K_{\mathbb{F}_{n}}=0\) in \(\operatorname{Pic}_{\mathbb{Q}}(\mathbb{F}_{n})\), then \(0\leq n\leq 12\). In Section 4, we will show Theorem 1.1 by using Theorem 2.7.
The following theorem is important for checking the structure of \(G\) from the branch divisor.
**Theorem 2.8**.: _(See [17]) Let \(X\) be a \(K3\) surface and \(G\) a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth. Let \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) be the branch divisor of the quotient map \(p:X\to X/G\). We put \(p^{*}B_{i}=\sum_{j=1}^{l}b_{i}C_{i,j}\), where \(C_{i,j}\) is an irreducible curve for \(j=1,\ldots,l\). Let \(G_{C_{i,j}}:=\{g\in G:\ g_{|C_{i,j}}=\operatorname{id}_{C_{i,j}}\}\), let \(G_{i}\) be the subgroup of \(G\) generated by \(G_{C_{i,1}},\ldots,G_{C_{i,l}}\), and let \(I\subset\{1,\ldots,k\}\) be a subset. Then the following holds. \(i)\) If \((X/G)\backslash\cup_{i\in I}B_{i}\) is simply connected, then \(G\) is generated by \(\{G_{j}\}_{j\in\{1,\ldots,k\}\backslash I}\). \(ii)\)\(G_{C_{i,j}}\cong\mathbb{Z}/b_{i}\mathbb{Z}\) and \(G_{C_{i,j}}\) is generated by a purely non-symplectic automorphism of order \(b_{i}\). \(iii)\) If \(G\) is abelian, then there is an automorphism \(g\in G\) such that \(\cup_{j=1}^{l}C_{i,j}\subset\operatorname{Fix}(g)\), and hence the \(C_{i,j}\) are pairwise disjoint. \(iv)\) If the self-intersection number \((B_{i}\cdot B_{i})\) of \(B_{i}\) is positive, then \(l=1\), and hence \(G_{i}\) is generated by a purely non-symplectic automorphism of order \(b_{i}\)._
Proof.: We will show \(i)\). We assume that \((X/G)\backslash\cup_{i\in I}B_{i}\) is simply connected. Let \(H\) be the subgroup of \(G\) which is generated by \(\{G_{j}\}_{j\in\{1,\ldots,k\}\backslash I}\), and \(X_{0}:=X\backslash\cup_{i\in I}p^{-1}(B_{i})\). Then \(G\) and \(H\) act on \(X_{0}\). We assume that \(G\neq H\). Let \(Y:=X_{0}/H\) be the quotient space, and \(G^{\prime}:=G/H\). Then \(G^{\prime}\) acts faithfully on \(Y\), \(Y/G^{\prime}\cong(X/G)\backslash\cup_{i\in I}B_{i}\), and the branch locus of \(Y\to Y/G^{\prime}\) is a finite set. Since \((X/G)\backslash\cup_{i\in I}B_{i}\) is smooth and simply connected, this is a contradiction. Therefore, \(G\) is generated by \(\{G_{j}\}_{j\in\{1,\ldots,k\}\backslash I}\).
Since \(X\) is a \(K3\) surface, an automorphism whose fixed locus contains a curve can only be purely non-symplectic. Therefore, by the definition of the ramification index \(b_{i}\), we get ii).
We will show \(iii)\) and \(iv)\). Since \(B_{i}\) is contained in the branch locus, we get \(p^{-1}(B_{i})=\bigcup_{j=1}^{l}C_{i,j}\subset\bigcup_{g\in G}\operatorname{Fix}(g)\). Since \(G\) is finite, for each \(j\), there is \(s_{j}\in G\) such that \(C_{i,j}\subset\operatorname{Fix}(s_{j})\). Since \(B_{i}\) is irreducible, we get that \(p(C_{i,j})=p(C_{i,k})\) for \(1\leq j<k\leq l\). Therefore, there is \(t\in G\) such that \(t(C_{i,j})=C_{i,k}\). Since \(C_{i,j}\subset\operatorname{Fix}(s_{j})\) and \(t(C_{i,j})=C_{i,k}\), we obtain that \(C_{i,k}\subset\operatorname{Fix}(t\circ s_{j}\circ t^{-1})\). Since \(G\) is abelian, we have \(s_{j}=t\circ s_{j}\circ t^{-1}\). We get \(iii)\). If the self-intersection number \((B_{i}\cdot B_{i})\) of \(B_{i}\) is positive, then by the Hodge index theorem, we get \(l=1\). By \(ii)\), \(G_{i}\cong\mathbb{Z}/b_{i}\mathbb{Z}\) is generated by a purely non-symplectic automorphism of order \(b_{i}\).
Let \(X\) be a \(K3\) surface and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth, and let \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) be the branch divisor of the quotient map \(p:X\to X/G\). If \(k=1\), then by Theorem 2.8, \(G=G_{B_{1}}\cong\mathbb{Z}/b_{1}\mathbb{Z}\). We assume that \(k=2\). By Theorem 2.8, \(G\) is generated by \(G_{B_{1}}\cong\mathbb{Z}/b_{1}\mathbb{Z}\) and \(G_{B_{2}}\cong\mathbb{Z}/b_{2}\mathbb{Z}\). Moreover, we assume that the intersection \(B_{1}\cap B_{2}\) of \(B_{1}\) and \(B_{2}\) is nonempty. Since \(B_{1}\cap B_{2}\neq\emptyset\), \(p^{-1}(B_{1})\cap p^{-1}(B_{2})\neq\emptyset\). Since the fixed locus of an automorphism is a disjoint union of points and curves, we get \(G_{B_{1}}\cap G_{B_{2}}=\{\operatorname{id}_{X}\}\). Therefore, \(G=G_{B_{1}}\oplus G_{B_{2}}\); however, in the case \(k\geq 3\) it is not necessarily true that \(G=\bigoplus_{i=1}^{k}G_{B_{i}}\), even if \(B_{i}\cap B_{j}\neq\emptyset\) for \(1\leq i<j\leq k\).
For an irreducible component \(B_{i}\) of \(B\), we write \(p^{*}B_{i}=\sum_{j=1}^{l}b_{i}C_{j}\), where \(C_{j}\) is a smooth curve for \(j=1,\ldots,l\). Since the degree of \(p\) is \(|G|\), by \(iv)\) of Theorem 2.8, we get that \(|G|(B_{i}\cdot B_{i})=b_{i}^{2}l(C_{j}\cdot C_{j})\) for \(j=1,\ldots,l\). If the self-intersection number \((B_{i})^{2}\) of \(B_{i}\) is positive, then by \(iv)\) of Theorem 2.8, we get that \(l=1\) and the genus of \(C_{1}\) is \(2\) or more. If \((B_{i})^{2}\) is zero, then \(C_{1},\ldots,C_{l}\) are elliptic curves. If \((B_{i})^{2}\) is negative, then \(C_{1},\ldots,C_{l}\) are rational curves. Recall that there is \(g\in G\) such that \(g\) is a non-symplectic automorphism of order \(b_{i}\) and \(C_{1},\ldots,C_{l}\) are contained in \(\operatorname{Fix}(g)\). There are many results on the number of curves, the genus of curves, and the number of isolated points of the fixed locus of a non-symplectic automorphism. We use them to search for \(B\) such that there is a Galois cover \(f:X\to S\), where \(X\) is a \(K3\) surface and the branch divisor of \(f\) is \(B\), and we use them to recover \(G\) from \(B\). Here \(S\) is a smooth rational surface and \(B\) is an effective divisor on \(S\).
## 3. Abelian groups of K3 surfaces with Hirzebruch surfaces
Here, we give the list of numerical classes of effective divisors \(B=\sum_{i=1}^{k}b_{i}B_{i}\) on \(\mathbb{F}_{n}\) such that \(B_{i}\) is a smooth curve for each \(i=1,\ldots,k\) and \(K_{\mathbb{F}_{n}}+\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}B_{i}=0\) in \(\operatorname{Pic}_{\mathbb{Q}}(\mathbb{F}_{n})\).
**Definition 3.1**.: _For a Hirzebruch surface \(\mathbb{F}_{n}\) where \(n\in\mathbb{Z}_{\geq 0}\), we take two irreducible curves \(C\) and \(F\) such that \(\mathrm{Pic}(\mathbb{F}_{n})=\mathbb{Z}C\oplus\mathbb{Z}F\), \((C\cdot F)=1\), \((F\cdot F)=0\), \((C\cdot C)=-n\), and \(K_{\mathbb{F}_{n}}=-2C-(n+2)F\) in \(\mathrm{Pic}(\mathbb{F}_{n})=\mathbb{Z}C\oplus\mathbb{Z}F\). Notice that for \(n=0\), \(C=pr_{1}^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\) and \(F=pr_{2}^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\), and for \(n\geq 1\), \(C\) is the unique curve on \(\mathbb{F}_{n}\) such that the self intersection number is negative, and \(F\) is the fibre class of the conic bundle of \(\mathbb{F}_{n}\)._
**Lemma 3.2**.: _Let \(\mathbb{F}_{n}\) be a Hirzebruch surface where \(n\neq 0\), and \(C^{\prime}\subset\mathbb{F}_{n}\) be an irreducible curve. Then one of the following holds: 1) \(C^{\prime}=C\)._
_2) \(C^{\prime}=F\) in \(\operatorname{Pic}(\mathbb{F}_{n})\)._
_3) \(C^{\prime}=aC+bF\) where \(a\geq 1\) and \(b\geq na\)._
**Definition 3.3**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\). Let \(B:=\sum_{i=1}^{l}b_{i}B_{i}\) be the branch divisor of the quotient map \(p:X\to X/G\). For each \(B_{i}\), there are integers \(\alpha_{i},\beta_{i}\) such that \(B_{i}=\alpha_{i}C+\beta_{i}F\) in \(\operatorname{Pic}(\mathbb{F}_{n})\). We call_
\[\sum_{i=1}^{l}b_{i}(\alpha_{i}C+\beta_{i}F)\]
_as the numerical class of \(B\)._
**Proposition 3.4**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\). Then \(0\leq n\leq 12\)._
Proof.: We assume that \(X/G\cong\mathbb{F}_{n}\) where \(n\geq 1\). Let \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). We write \(B:=\sum_{i=1}^{k}b_{i}B_{i}+\sum_{j=1}^{l}b_{j}^{\prime}B_{j}^{\prime}\) such that \(B_{i}\neq F\) and \(B_{j}^{\prime}=F\) in \(\operatorname{Pic}(\mathbb{F}_{n})\) for \(i=1,\ldots,k\) and \(j=1,\ldots,l\). Since the canonical line bundle of a \(K3\) surface is trivial and \(\operatorname{Pic}(\mathbb{F}_{n})\) is torsion free, by Theorem 2.7, we get that
\[0=K_{\mathbb{F}_{n}}+\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}B_{i}+\sum_{j=1}^{l} \frac{b_{j}^{\prime}-1}{b_{j}^{\prime}}B_{j}^{\prime}\ \ \text{in}\ \ \operatorname{Pic}(\mathbb{F}_{n}).\]
Since \(B_{i}\) is an irreducible curve for \(i=1,\ldots,k\), there are integers \(c_{i},d_{i}\) such that \(B_{i}=c_{i}C+d_{i}F\) in \(\operatorname{Pic}(\mathbb{F}_{n})\) and \((c_{i},d_{i})=(1,0)\) or \(d_{i}\geq nc_{i}>0\). By \(K_{\mathbb{F}_{n}}=-2C-(n+2)F\) in \(\operatorname{Pic}(\mathbb{F}_{n})=\mathbb{Z}C\oplus\mathbb{Z}F\), we get that
\[\left\{\begin{array}{l}2=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}c_{i}\\ n+2=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}d_{i}+\sum_{j=1}^{l}\frac{b_{j}^{ \prime}-1}{b_{j}^{\prime}}.\end{array}\right.\]
Since \(b_{i}\geq 2\), \(\frac{1}{2}\leq\frac{b_{i}-1}{b_{i}}<1\). Since \(2=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}c_{i}\), \(\sum_{i=1}^{k}c_{i}=3\) or \(4\). By a simple calculation, we get that i) \(\sum_{i=1}^{k}c_{i}=4\) if and only if \(b_{1}=\cdots=b_{k}=2\), and ii) if \(\sum_{i=1}^{k}c_{i}=3\), then \((b_{1},\ldots,b_{k};c_{1},\ldots,c_{k})\) where \(c_{1}\leq\cdots\leq c_{k}\) is one of \((3;3)\), \((2,4;1,2)\), \((3,3;1,2)\), \((2,3,6;1,1,1)\), \((2,4,4;1,1,1)\), and \((3,3,3;1,1,1)\).
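The arithmetic behind i) and ii) can be double-checked mechanically. The following enumeration is our own sketch, not part of the original argument; the bound \(b_{i}\leq 12\) is a safe over-estimate, since \(\sum_{i}c_{i}/b_{i}=\sum_{i}c_{i}-2\) forces \(b_{i}\leq 6\).

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Enumerate multisets {(b_i, c_i)} with b_i >= 2, c_i >= 1 and
# sum_i ((b_i - 1)/b_i) * c_i == 2.  Each summand is >= 1/2, so a
# solution has at most 4 summands and sum_i c_i <= 4.
pairs = [(b, c) for b in range(2, 13) for c in range(1, 5)]
solutions = set()
for k in range(1, 5):
    for combo in combinations_with_replacement(pairs, k):
        if sum(Fraction(b - 1, b) * c for b, c in combo) == 2:
            solutions.add(combo)

# Prints the case sum c_i = 4 (all b_i = 2) and the six tuples of
# case ii), e.g. ((3, 3),), ((2, 1), (4, 2)), ((2, 1), (3, 1), (6, 1)), ...
for sol in sorted(solutions):
    print(sol)
```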
We assume that \((c_{i},d_{i})\neq(1,0)\) for \(i=1,\ldots,k\), i.e. \(C\) is not an irreducible component of \(B\). Since \(d_{i}\geq nc_{i}\) for \(i=1,\ldots,k\), by \(2=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}c_{i}\) and \(n+2=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}d_{i}+\sum_{j=1}^{l}\frac{b_{j}^{\prime}-1}{b_{j}^{\prime}}\), we get that \(n+2\geq 2n+\sum_{j=1}^{l}\frac{b_{j}^{\prime}-1}{b_{j}^{\prime}}\). Since \(\frac{b_{j}^{\prime}-1}{b_{j}^{\prime}}\geq 0\), we get \(0\leq n\leq 2\).
We assume that \((c_{i},d_{i})=(1,0)\) for some \(1\leq i\leq k\), i.e. \(C\) is an irreducible component of \(B\). For simplicity, we assume that \(i=1\). In the same way as above, we get that \(n+2\geq n(2-\frac{b_{1}-1}{b_{1}})=n+\frac{n}{b_{1}}\), that is, \(n\leq 2b_{1}\). Since \(2\leq b_{1}\leq 6\), we obtain \(0\leq n\leq 12\).
Notice that, by simple calculations, there is no pair of a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{l}\) for \(l=10,11\).
In section 6, we will give the list of numerical classes of effective divisors \(B=\sum_{i=1}^{k}b_{i}B_{i}\) on \(\mathbb{F}_{n}\) such that \(B_{i}\) is a smooth curve for each \(i=1,\ldots,k\) and \(K_{\mathbb{F}_{n}}+\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}B_{i}=0\) in \(\operatorname{Pic}(\mathbb{F}_{n})\).
### Abelian covers of a Hirzebruch surface by a K3 surface
Let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is a Hirzebruch surface \(\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). In this section, we will determine the numerical class of \(B\). Notice that since \(G\) is abelian and the quotient space \(X/G\) is smooth, the support of \(B\) and that of \(p^{*}B\) are simple normal crossing.
Furthermore, we will show that the group structure of \(G\) depends only on the numerical class of \(B\) by Theorem 2.8, and we will give a way to construct \(X\) and \(G\) which depends only on the numerical class of \(B\) by Theorem 2.3 and the cyclic cover. As a result, we obtain the following. For each \(G\in\mathcal{A}G_{n}\) where \(n=0,1,2,3,4,6,8,12\), there is a pair \((X,G^{\prime})\) where \(X\) is a \(K3\) surface and \(G^{\prime}\) is a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(G\cong G^{\prime}\) as a group and \(X/G^{\prime}\cong\mathbb{F}_{n}\). In [9], the case where \(G\cong\mathbb{Z}/2\mathbb{Z}\) is studied.
**Theorem 3.5**.: _(see [3, Chapter I, Section 17]) Let \(M\) be a smooth projective variety, and let \(D\) be a smooth effective divisor on \(M\). If the class of \(\mathcal{O}_{M}(D)\) is divisible by \(n\) in \(\operatorname{Pic}(M)\), i.e. \(\mathcal{O}_{M}(D)\cong\mathcal{L}^{\otimes n}\) for some \(\mathcal{L}\in\operatorname{Pic}(M)\), then there is the Galois cover \(f:X\to M\) whose branch divisor is \(nD\) and whose Galois group is isomorphic to \(\mathbb{Z}/n\mathbb{Z}\) as a group._
For \(n\geq 0\), a Hirzebruch surface \(\mathbb{F}_{n}\) is isomorphic to a variety \(\mathcal{F}_{n}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{2}\)
\[\mathcal{F}_{n}:=\{([X_{0}:X_{1}],[Y_{0}:Y_{1}:Y_{2}])\in\mathbb{P}^{1}\times \mathbb{P}^{2}:\ X_{0}^{n}Y_{0}=X_{1}^{n}Y_{1}\}.\]
From here, we assume that \(\mathbb{F}_{n}=\mathcal{F}_{n}\). The first projection gives the fibre space structure \(f:\mathbb{F}_{n}\to\mathbb{P}^{1}\) such that the numerical class of the fibre of \(f\) is \(F\), and
\[C=\{([X_{0}:X_{1}],[Y_{0}:Y_{1}:Y_{2}])\in\mathbb{F}_{n}:\ Y_{0}=Y_{1}=0\},\]
is, for \(n\geq 1\), the unique irreducible curve on \(\mathbb{F}_{n}\) whose self intersection number is negative. Let \(a\) and \(b\) be positive integers such that \(b\geq na\). Furthermore, we put
\[F(X_{0},X_{1},Y_{0},Y_{1},Y_{2}):=\sum_{0\leq i\leq b-na,0\leq j,k\leq a}t_{i, j,k}X_{0}^{i}X_{1}^{b-na-i}Y_{0}^{j}Y_{1}^{k}Y_{2}^{a-j-k}\]
where \(t_{i,j,k}\in\mathbb{C}\), and
\[B_{F}:=\{([X_{0}:X_{1}],[Y_{0}:Y_{1}:Y_{2}])\in\mathbb{F}_{n}:\ F(X_{0},X_{1}, Y_{0},Y_{1},Y_{2})=0\}.\]
If \(B_{F}\) is an irreducible curve of \(\mathbb{F}_{n}\), then \(B_{F}=aC+bF\) in Pic\((\mathbb{F}_{n})\).
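The displayed class can be checked on the generators of \(\operatorname{Pic}(\mathbb{F}_{n})\): writing \(H\) for the restriction of \(\mathcal{O}_{\mathbb{P}^{2}}(1)\) to \(\mathbb{F}_{n}\), we have \((H\cdot F)=1\) and \((H\cdot C)=0\), so \(H=C+nF\), and a curve cut out by a form of degree \(b-na\) in \(X_{0},X_{1}\) and degree \(a\) in \(Y_{0},Y_{1},Y_{2}\) has class

\[(b-na)F+aH=(b-na)F+a(C+nF)=aC+bF\ \text{ in }\ \operatorname{Pic}(\mathbb{F}_{n}).\]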
Let \(g_{1}\) and \(g_{m}\) be automorphisms of \(\mathbb{P}^{1}\) which are induced by the matrices
\[g_{1}:=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\ g_{m}:=\begin{pmatrix}1&0\\ 0&\zeta_{m}\end{pmatrix},\]
where \(\zeta_{m}\) is a primitive \(m\)-th root of unity for \(m\geq 2\). Then \(\langle g_{1},g_{2}\rangle\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), and \(\langle g_{m}\rangle\cong\mathbb{Z}/m\mathbb{Z}\) for \(m\geq 2\). Here, for a subset \(S\) of a group \(G\), \(\langle S\rangle\) is the subgroup of \(G\) which is generated by \(S\). Then
\[\mathbb{P}^{1}\cong\mathbb{P}^{1}/\langle g_{1},g_{2}\rangle\ \text{and}\ \mathbb{P}^{1}\cong\mathbb{P}^{1}/\langle g_{m}\rangle,\]
and the quotient maps are isomorphic to
\[\mathbb{P}^{1}\ni[z_{0}:z_{1}]\mapsto[(z_{0}^{2}+z_{1}^{2})^{2}:(z_{0}^{2}-z_{ 1}^{2})^{2}]\in\mathbb{P}^{1}\ \text{and}\ \mathbb{P}^{1}\ni[z_{0}:z_{1}]\mapsto[z_{0}^{m}:z_{1}^{m}]\in\mathbb{P}^{1}\]
for \(m\geq 2\), and the branch divisors are
\[2x_{0}+2x_{1}+2x_{2}\ \text{and}\ mx_{0}+mx_{1},\]
where \(x_{0}:=[1:0]\), \(x_{1}:=[0:1]\), and \(x_{2}:=[1:1]\).
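As a check on these branch divisors: for \([z_{0}:z_{1}]\mapsto[z_{0}^{m}:z_{1}^{m}]\) the ramification points are exactly \(x_{0}\) and \(x_{1}\), each fixed with ramification index \(m\), which gives \(mx_{0}+mx_{1}\). For the degree \(4\) map, the six ramification points all have index \(2\) and

\[[1:\pm 1]\mapsto x_{0},\qquad[1:\pm i]\mapsto x_{1},\qquad[1:0],\,[0:1]\mapsto x_{2},\]

which gives \(2x_{0}+2x_{1}+2x_{2}\).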
The above Galois covers \(\mathbb{P}^{1}\to\mathbb{P}^{1}/\langle g_{1},g_{2}\rangle\cong\mathbb{P}^{1}\) and \(\mathbb{P}^{1}\to\mathbb{P}^{1}/\langle g_{m}\rangle\cong\mathbb{P}^{1}\) naturally induce Galois covers of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(\mathbb{F}_{n}\) whose Galois groups are induced by \(\langle g_{1},g_{2}\rangle\) and by \(g_{m}\) for \(m\geq 2\), respectively. We explain this in a bit more detail for \(\mathbb{F}_{n}\). For \(\mathbb{P}^{1}\to\mathbb{P}^{1}/\langle g_{1},g_{2}\rangle\), let \(\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\) be the fibre product of \(\mathbb{P}^{1}\to\mathbb{P}^{1}/\langle g_{1},g_{2}\rangle\) and \(f:\mathbb{F}_{n}\to\mathbb{P}^{1}\). Let \(p:\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\to\mathbb{F}_{n}\) be the natural projection of the fibre product. Then
\[\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\cong\mathbb{F}_{4n}\]
and \(p:\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\to\mathbb{F}_{n}\) is the Galois cover such that the branch divisor of \(p\) is
\[2F+2F+2F\text{ in }\text{Pic}(\mathbb{F}_{n}),\]
and the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group, which is induced by \(\langle g_{1},g_{2}\rangle\). Let \(C_{m}\) be the irreducible curve on \(\mathbb{F}_{m}\) whose self intersection number is negative, and let \(F_{m}\) be the numerical class of the fibre of \(\mathbb{F}_{m}\to\mathbb{P}^{1}\), for \(m\geq 1\). Then
\[p^{*}C_{n}=C_{4n}\text{ and }p^{*}F_{n}=4F_{4n}\text{ in }\text{Pic}(\mathbb{F}_{4n}).\]
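These formulas are consistent with the projection formula: since the degree of \(p\) is \(4\),

\[(p^{*}C_{n}\cdot p^{*}C_{n})=4(C_{n}\cdot C_{n})=-4n=(C_{4n}\cdot C_{4n})\ \text{ and }\ (p^{*}C_{n}\cdot p^{*}F_{n})=4(C_{n}\cdot F_{n})=(C_{4n}\cdot 4F_{4n}).\]

The same check works for the degree \(m\) cover below.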
For \(\mathbb{P}^{1}\to\mathbb{P}^{1}/\langle g_{m}\rangle\), let \(\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\) be the fibre product of \(\mathbb{P}^{1}\to\mathbb{P}^{1}/\langle g_{m}\rangle\) and \(f:\mathbb{F}_{n}\to\mathbb{P}^{1}\). Let \(p:\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\to\mathbb{F}_{n}\) be the natural projection of the fibre product. Then
\[\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\cong\mathbb{F}_{mn},\]
\(p:\mathbb{P}^{1}\times_{\mathbb{P}^{1}}\mathbb{F}_{n}\to\mathbb{F}_{n}\) is the Galois cover such that the branch divisor of \(p\) is
\[mF+mF\text{ in }\text{Pic}(\mathbb{F}_{n}),\]
and the Galois group is isomorphic to \(\mathbb{Z}/m\mathbb{Z}\) as a group, which is induced by \(\langle g_{m}\rangle\), and
\[p^{*}C_{n}=C_{mn}\text{ and }p^{*}F_{n}=mF_{mn}\text{ in }\text{Pic}(\mathbb{F}_{mn}).\]
**Definition 3.6**.: _From here on, we use the following notation: \(B^{k}_{i,j}\) (or simply \(B_{i,j}\)) is a smooth curve on \(\mathbb{F}_{n}\) such that \(B^{k}_{i,j}=iC+jF\) in \(\text{Pic}(\mathbb{F}_{n})\) for \(n\geq 0\), where \(k\in\mathbb{N}\)._
**Proposition 3.7**.: _For each of the numerical classes (1), (2), (3) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (1), (2), (3), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (1,2,3), then \(G\) is isomorphic to \(\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 3}\), in order, as a group._
Proof.: Let \(B_{3,3}\) be a smooth curve on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). Then the numerical class of \(3B_{3,3}\) is (1). By Theorem 3.5, there is the Galois cover \(p:X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that the branch divisor is \(3B_{3,3}\) and the Galois group is \(\mathbb{Z}/3\mathbb{Z}\) as a group. By Theorem 2.7, the canonical divisor of \(X\) is numerically trivial. By [18], \(X\) is not a bielliptic surface. By [8], \(X\) is not an abelian surface. If \(X\) is an Enriques surface, then there is the Galois cover \(q:X^{\prime}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(X^{\prime}\) is a \(K3\) surface, the Galois group is \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\) as a group, and the branch divisor is \(3B_{3,3}\). By Theorem 2.8, this is a contradiction. Therefore, \(X\) is a \(K3\) surface.
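Explicitly, since \(B_{3,3}=3C+3F\) and \(K_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=-2C-2F\) in \(\operatorname{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), the formula of Theorem 2.7 reads

\[K_{X}=p^{*}\Big(K_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+\frac{3-1}{3}B_{3,3}\Big)=p^{*}(-2C-2F+2C+2F)=0.\]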
In addition, let \((X^{\prime},G^{\prime})\) be a pair of a \(K3\) surface \(X^{\prime}\) and a finite abelian subgroup \(G^{\prime}\) of \(\text{Aut}(X^{\prime})\) such that \(X^{\prime}/G^{\prime}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor \(B^{\prime}\) of the quotient map \(p^{\prime}:X^{\prime}\to X^{\prime}/G^{\prime}\) is (1). By Theorem 2.8, \(G^{\prime}\cong\mathbb{Z}/3\mathbb{Z}\) as a group. Since the support of \(B^{\prime}\) is smooth, there is a smooth curve
\(B^{\prime}_{3,3}\) such that \(B^{\prime}=3B^{\prime}_{3,3}\). Then by the above discussion, there is the Galois cover \(f:X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(X\) is a \(K3\) surface, the branch divisor is \(B^{\prime}\), and the Galois group \(G\) is \(\mathbb{Z}/3\mathbb{Z}\) as a group. Since a \(K3\) surface is simply connected, by Theorem 2.3, the pair \((X^{\prime},G^{\prime})\) is isomorphic to the pair \((X,G)\).
Let \(B^{1}_{1,0}\), \(B^{2}_{1,0}\), and \(B_{1,3}\) be smooth curves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(B^{1}_{1,0}+B^{2}_{1,0}+B_{1,3}\) is simple normal crossing. Then the numerical class of \(3B^{1}_{1,0}+3B^{2}_{1,0}+3B_{1,3}\) is (2). Let \(p:\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) be the Galois cover such that the branch divisor is \(3B^{1}_{1,0}+3B^{2}_{1,0}\), and the Galois group is \(\mathbb{Z}/3\mathbb{Z}\) as a group, which is induced by the Galois cover \(\mathbb{P}^{1}\ni[z_{0}:z_{1}]\mapsto[z_{0}^{3}:z_{1}^{3}]\in\mathbb{P}^{1}\). Since \(B^{1}_{1,0}+B^{2}_{1,0}+B_{1,3}\) is simple normal crossing, \(p^{*}B_{1,3}\) is a reduced divisor on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that whose support is a union of pairwise disjoint smooth curves, and \(p^{*}B_{1,3}=(3,3)\) in \(\operatorname{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\). As for the case of (1), there is the Galois cover \(q:X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(X\) is a \(K3\) surface, the Galois group is \(\mathbb{Z}/3\mathbb{Z}\) as a group, and the branch divisor is \(3p^{*}B_{1,3}\). Then the branched cover \(p\circ q:X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) has \(3B^{1}_{1,0}+3B^{2}_{1,0}+3B_{1,3}\) as the branch divisor. Since \(X\) is simply connected, by Theorem 2.3, \(p\circ q\) is the Galois cover. Since the degree of \(p\circ q\) is \(9\), by Theorem 2.8, the Galois group of \(p\circ q\) is \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\) as a group.
Conversely, let \(X\) be a \(K3\) surface and \(G\) a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (2). By the above discussion, \(G\) is isomorphic to \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\) as a group, and \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) whose numerical class of the branch divisor is (1) and the Galois cover \(p:\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\ni[z_{0}:z_{1}]\mapsto[z_{0}^{3}:z_{1}^{3}]\in\mathbb{P}^{1}\).
As for the case of (2), we get the claim for (3). In this case, the Galois group is \(\mathbb{Z}/3\mathbb{Z}^{\oplus 3}\) as a group. Furthermore, let \(X\) be a \(K3\) surface and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor \(B\) of \(G\) is (3). As for the case of (2), \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) whose numerical class of the branch divisor is (1) and the Galois cover \(p:\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) which is isomorphic to the Galois cover \(p:\mathbb{P}^{1}\times\mathbb{P}^{1}\ni([z_{0}:z_{1}],[w_{0}:w_{1}])\mapsto([z_ {0}^{3}:z_{1}^{3}],[w_{0}^{3}:w_{1}^{3}])\in\mathbb{P}^{1}\times\mathbb{P}^{1}\).
For (1), we obtain an example if we use a curve \(B_{3,3}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equation
\[B_{3,3}:z_{0}^{3}w_{0}^{3}+z_{0}^{3}w_{1}^{3}+z_{1}^{3}w_{0}^{3}+2z_{1}^{3}w_{ 1}^{3}=0.\]
For (2), we obtain an example if we use curves \(B^{1}_{1,0},B^{2}_{1,0},B_{1,3}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations

\[B^{1}_{1,0}:z_{0}=0,\ B^{2}_{1,0}:z_{1}=0,\ B_{1,3}:z_{0}w_{0}^{3}+z_{0}w_{1}^{3}+z_{1}w_{0}^{3}+2z_{1}w_{1}^{3}=0.\]

For (3), we obtain an example if we use curves \(B^{1}_{1,0},B^{2}_{1,0},B_{1,1},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations

\[B^{1}_{1,0}:z_{0}=0,\ B^{2}_{1,0}:z_{1}=0,\ B_{1,1}:z_{0}w_{0}+z_{0}w_{1}+z_{1}w_{0}+2z_{1}w_{1}=0,\]

\[B^{1}_{0,1}:w_{0}=0,\ B^{2}_{0,1}:w_{1}=0.\]
**Corollary 3.8**.: _For each of the numerical classes (194,83), (302,251,201,84) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (194,83), (302,251,201,84), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (194,83), (302,251,201,84), then \(G\) is \(\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), in order, as a group._
Proof.: In the same way as Proposition 3.7, we get this corollary. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is one of (194,302), then \(X\to X/G\) is given by Theorem 3.5.
ii) If the numerical class of \(B\) is (83), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (194) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree 2.
iii) If the numerical class of \(B\) is one of (251,201,84), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{6}\) whose numerical class of the branch divisor is (302) and the Galois cover \(\mathbb{F}_{6}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{6}{m}\).
For (194), we obtain an example if we use a curve \(B_{3,6}\) in \(\mathbb{F}_{2}\) given by the equation
\[B_{3,6}:Y_{0}^{3}+Y_{1}^{3}+Y_{2}^{3}=0.\]
For (83), we obtain an example if we use curves \(B_{3,3},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{3,3}:Y_{0}^{3}+Y_{1}^{3}+Y_{2}^{3}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0.\]
For (302), we obtain an example if we use a section \(C\) and a curve \(B_{2,12}\) in \(\mathbb{F}_{6}\) given by the equation
\[B_{2,12}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0.\]
For (251), we obtain an example if we use a section \(C\) and curves \(B_{2,6},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{3}\) given by the equations
\[B_{2,6}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0.\]
For (201), we obtain an example if we use a section \(C\) and curves \(B_{2,4},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{2,4}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0.\]
For (84), we obtain an example if we use a section \(C\) and curves \(B_{2,2},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{2,2}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0.\]
**Proposition 3.9**.: _For each of the numerical classes (4,5,6,7,8,9,10,11,12,13) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (4,5,6,7,8,9,10,11,12,13), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (4,5,6,7,8,9,10,11,12,13), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/4\mathbb{Z}\), in order, as a group._
Proof.: In the same way as Proposition 3.7, we get this proposition. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is (4), then \(X\to X/G\) is given by Theorem 3.5.
ii) If the numerical class of \(B\) is one of (5,6,7,8,9,10,11,12,13), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) whose numerical class of the branch divisor is (4) and the Galois cover \(\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\).
For (4), we obtain an example if we use a curve \(B_{4,4}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equation
\[B_{4,4}:(z_{0}^{4}+z_{1}^{4})(w_{0}^{4}+w_{1}^{4})+2z_{0}^{2}z_{1}^{2}w_{0}^{ 2}w_{1}^{2}=0.\]
For (5), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{2,4}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{2,4}:(z_{0}^{2}+z_{1}^{2})(w_{0 }^{4}+w_{1}^{4})+2z_{0}z_{1}w_{0}^{2}w_{1}^{2}=0.\]
For (6), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{2,2},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{2,2}:(z_{0}^{2}+z_{1}^{2})(w_{0 }^{2}+w_{1}^{2})+2z_{0}z_{1}w_{0}w_{1}=0,\]
\[B_{0,1}^{1}:w_{0}=0,\ B_{0,1}^{2}:w_{1}=0.\]
For (7), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{2,4}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{2,4}:(z_{0}+z_{1})(w_{0}^{4}+w_ {1}^{4})+(z_{0}-z_{1})w_{0}^{2}w_{1}^{2}=0.\]
For (8), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{1,1},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{1,1}:(z_{0}+z_{1})(w_{0}+w_{1})+ 2(z_{0}-z_{1})(w_{0}-w_{1})=0,\]
\[B_{0,1}^{1}:w_{0}=0,\ B_{0,1}^{2}:w_{1}=0.\]
For (9), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{1,2},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{1,2}:(z_{0}+z_{1})(w_{0}^{2}+w_{1}^{2})+(z_{0}-z_{1})w_{0}w_{1}=0,\]
\[B_{0,1}^{1}:w_{0}=0,\ B_{0,1}^{2}:w_{1}=0.\]
For (10), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{1,0}^{3},B_{1,4}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{1,0}^{3}:z_{0}-z_{1}=0,\]
\[B_{1,4}:(z_{0}+z_{1})(w_{0}^{4}+w_{1}^{4})+2(z_{0}-z_{1})(w_{0}^{4}-w_{1}^{4})=0.\]
For (11), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{1,0}^{3},B_{1,1},B_{0,1}^{1},B_{0,1}^{2},B_{0,1}^{3}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{1,0}^{3}:z_{0}-z_{1}=0,\]
\[B_{1,1}:(z_{0}-2z_{1})w_{0}+(2z_{0}+z_{1})w_{1}=0,\]
\[B_{0,1}^{1}:w_{0}=0,\ B_{0,1}^{2}:w_{1}=0,\ B_{0,1}^{3}:w_{0}-w_{1}=0.\]
For (12), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{1,0}^{3},B_{1,2},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{1,0}^{3}:z_{0}-z_{1}=0,\]
\[B_{1,2}:(z_{0}-2z_{1})w_{0}^{2}+(2z_{0}+z_{1})w_{1}^{2}=0,\ B_{0,1}^{1}:w_{0}=0,\ B_{0,1}^{2}:w_{1}=0.\]
For (13), we obtain an example if we use curves \(B_{1,0}^{1},B_{1,0}^{2},B_{1,0}^{3},B_{1,1},B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}^{1}:z_{0}=0,\ B_{1,0}^{2}:z_{1}=0,\ B_{1,0}^{3}:z_{0}-z_{1}=0,\]
\[B_{1,1}:(z_{0}-2z_{1})w_{0}+(2z_{0}+z_{1})w_{1}=0,\ B_{0,1}^{1}:w_{0}=0,\ B_{0,1}^{2}:w_{1}=0.\]
**Corollary 3.10**.: _For each of the numerical classes (79), (195,85), (277,202,86,87) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (79), (195,85), (277,202,86,87), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (79), (195,85), (277,202,86,87), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), in order, as a group._
Proof.: In the same way as Proposition 3.7, we get this corollary. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is one of (79,195,277), then \(X\to X/G\) is given by Theorem 3.5.
ii) If the numerical class of \(B\) is one of (85), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (195) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(2\).
iii) If the numerical class of \(B\) is one of (202,86,87), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{4}\) whose numerical class of the branch divisor is (277) and the Galois cover \(\mathbb{F}_{4}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{4}{m}\).
For (79), we obtain an example if we use a curve \(B_{4,6}\) in \(\mathbb{F}_{1}\) given by the equation
\[B_{4,6}:X_{0}^{2}Y_{1}^{4}+X_{1}^{2}Y_{0}^{4}+X_{0}X_{1}Y_{2}^{4}=0.\]
For (195), we obtain an example if we use a curve \(B_{4,8}\) in \(\mathbb{F}_{2}\) given by the equation
\[B_{4,8}:Y_{0}^{4}+Y_{1}^{4}+Y_{2}^{4}=0.\]
For (85), we obtain an example if we use curves \(B_{4,4},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{4,4}:Y^{4}_{0}+Y^{4}_{1}+Y^{4}_{2}=0,\ B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1} =0.\]
For (277), we obtain an example if we use a section \(C\) and a curve \(B_{3,12}\) in \(\mathbb{F}_{4}\) given by the equation
\[B_{3,12}:Y^{3}_{0}+Y^{3}_{1}+Y^{3}_{2}=0.\]
For (202), we obtain an example if we use a section \(C\) and curves \(B_{3,6},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{3,6}:Y^{3}_{0}+Y^{3}_{1}+Y^{3}_{2}=0,\ B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_ {1}=0.\]
For (86), we obtain an example if we use a section \(C\) and curves \(B_{3,3},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{3,3}:Y^{3}_{0}+Y^{3}_{1}+Y^{3}_{2}=0,\ B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_ {1}=0.\]
For (87), we obtain an example if we use a section \(C\) and curves \(B_{3,3},B^{1}_{0,1},B^{2}_{0,1},B^{3}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{3,3}:Y^{3}_{0}+Y^{3}_{1}+Y^{3}_{2}=0,\ B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_ {1}=0,\ B^{3}_{0,1}:X_{0}-X_{1}=0.\]
**Proposition 3.11**.: _For each of the numerical classes (14,15,16) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (14,15,16), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (14,15,16), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), in order, as a group._
Proof.: Let \(B^{1}_{2,2},B^{2}_{2,2}\) be smooth curves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(B^{1}_{2,2}+B^{2}_{2,2}\) is simple normal crossing. Then the numerical class of \(2B^{1}_{2,2}+2B^{2}_{2,2}\) is (14). Since \(B^{i}_{2,2}=(2C+2F)\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), by Theorem 3.5, there are the Galois covers \(p_{i}:X_{i}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that the branch divisor of \(p_{i}\) is \(2B^{i}_{2,2}\) for \(i=1,2\) and the Galois group of \(p_{i}\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) as a group for \(i=1,2\). Since \(B^{1}_{2,2}+B^{2}_{2,2}\) is simple normal crossing, the fibre product \(X:=X_{1}\times_{\mathbb{P}^{1}\times\mathbb{P}^{1}}X_{2}\) of \(p_{1}\) and \(p_{2}\) is smooth. Therefore, there is the Galois cover \(p:X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(X\) is a \(K3\) surface, the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group, and the branch divisor is \(2B^{1}_{2,2}+2B^{2}_{2,2}\). The rest of this proposition is proved in the same way as Proposition 3.7. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of \(G\). Then we get the following.
i) If the numerical class of \(B\) is (14), then \(X\to X/G\) is given by Theorem 3.5 and the fibre product.
ii) If the numerical class of \(B\) is one of (15,16), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) whose numerical class of the branch divisor is (14) and the Galois cover \(\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\).
For (14), we obtain an example if we use curves \(B^{1}_{2,2},B^{2}_{2,2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B^{1}_{2,2}:z_{0}^{2}w_{0}^{2}+z_{1}^{2}w_{1}^{2}=0,\ B^{2}_{2,2}:z_{0}^{2}w_{1}^ {2}+z_{1}^{2}w_{0}^{2}=0.\]
For (15), we obtain an example if we use curves \(B^{1}_{1,0},B^{2}_{1,0},B^{1}_{1,2},B^{2}_{1,2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B^{1}_{1,0}:z_{0}=0,\ B^{2}_{1,0}:z_{1}=0,\ B^{1}_{1,2}:z_{0}w_{0}^{2}+z_{1}w_{1}^{2}=0,\ B^{2}_{1,2}:z_{0}w_{1}^{2}+z_{1}w_{0}^{2}=0.\]
For (16), we obtain an example if we use curves \(B^{1}_{1,0},B^{2}_{1,0},B^{1}_{1,1},B^{2}_{1,1},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B^{1}_{1,0}:z_{0}=0,\ B^{2}_{1,0}:z_{1}=0,\ B^{1}_{1,1}:(z_{0}-2z_{1})w_{0}+(2z_{0}+z_{1})w_{1}=0,\]
\[B^{2}_{1,1}:z_{0}(w_{0}-2w_{1})+z_{1}(2w_{0}+w_{1})=0,\ B^{1}_{0,1}:w_{0}=0,\ B^{2}_{0,1}:w_{1}=0.\]
**Corollary 3.12**.: _For each of the numerical classes (80), (196,89), (197,88), (279,203,90,91) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (80), (196,89), (197,88), (279,203,90,91), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (80), (196,89), (197,88), (279,203,90,91), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), in order, as a group._
Proof.: In the same way as Proposition 3.7, we get this corollary. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is one of (80,196,197,279), then \(X\to X/G\) is given by Theorem 3.5 and the fibre product.
ii) If the numerical class of \(B\) is (89), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (196) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree 2.
iii) If the numerical class of \(B\) is (88), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (197) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree 2.
iv) If the numerical class of \(B\) is one of (203,90,91), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{4}\) whose numerical class of the branch divisor is (279) and the Galois cover \(\mathbb{F}_{4}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{4}{m}\).
For (80), we obtain an example if we use curves \(B_{2,4},B_{2,2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{2,4}:X_{0}^{2}Y_{1}^{2}+X_{1}^{2}Y_{0}^{2}+X_{0}X_{1}Y_{2}^{2}=0,\ B_{2,2}:Y _{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0.\]
For (196), we obtain an example if we use curves \(B^{1}_{2,4},B^{2}_{2,4}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{2,4}:2Y^{2}_{0}+Y^{2}_{1}+Y^{2}_{2}=0,\ B^{2}_{2,4}:Y^{2}_{0}+Y^{2}_{1}+2 Y^{2}_{2}=0.\]
For (89), we obtain an example if we use curves \(B^{1}_{2,2},B^{2}_{2,2},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B^{1}_{2,2}:2Y^{2}_{0}+Y^{2}_{1}+Y^{2}_{2}=0,\ B^{2}_{2,2}:Y^{2}_{0}+Y^{2}_{1}+2 Y^{2}_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (197), we obtain an example if we use a section \(C\) and curves \(B_{1,2},B_{2,6}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{1,2}:Y_{0}+Y_{2}=0,\ B_{2,6}:X^{2}_{0}Y^{2}_{1}+X^{2}_{1}Y^{2}_{0}+(X^{2}_{ 0}+2X^{2}_{1})Y^{2}_{2}=0.\]
For (88), we obtain an example if we use a section \(C\) and curves \(B_{1,1},B_{2,3},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,1}:Y_{0}+Y_{2}=0,\ B_{2,3}:X_{0}Y^{2}_{1}+X_{1}Y^{2}_{0}+(X_{0}+2X_{1})Y^{ 2}_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (279), we obtain an example if we use a section \(C\) and curves \(B_{1,4},B_{2,8}\) in \(\mathbb{F}_{4}\) given by the equations
\[B_{1,4}:Y_{0}+Y_{2}=0,\ B_{2,8}:Y^{2}_{0}+Y^{2}_{1}+Y^{2}_{2}=0.\]
For (203), we obtain an example if we use a section \(C\) and curves \(B_{1,2},B_{2,4},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{1,2}:Y_{0}+Y_{2}=0,\ B_{2,4}:Y^{2}_{0}+Y^{2}_{1}+Y^{2}_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (90), we obtain an example if we use a section \(C\) and curves \(B_{1,1},B_{2,2},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,1}:Y_{0}+Y_{2}=0,\ B_{2,2}:Y^{2}_{0}+Y^{2}_{1}+Y^{2}_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (91), we obtain an example if we use a section \(C\) and curves \(B_{1,1},B_{2,2},B^{1}_{0,1},B^{2}_{0,1},B^{3}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,1}:Y_{0}+Y_{2}=0,\ B_{2,2}:Y^{2}_{0}+Y^{2}_{1}+Y^{2}_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0,\ B^{3}:X_{0}-X_{1}=0.\]
A lattice is a pair \((L,b)\) of a free abelian group \(L:=\mathbb{Z}^{\oplus n}\) of rank \(n\) and a non-degenerate symmetric bilinear form \(b:L\times L\to\mathbb{Z}\). The discriminant group of \(L\) is \(L^{\vee}/L\), where \(L^{\vee}:=\{m\in L\otimes\mathbb{Q}\ |\ b(m,l)\in\mathbb{Z}\ \text{for all}\ l\in L\}\) is the dual of \(L\) (here we denote by \(b\) also the \(\mathbb{Q}\)-linear extension of \(b\)). Let \(U\) be the hyperbolic lattice, and let \(A_{n}\) and \(E_{n}\) be the negative definite lattices of rank \(n\) associated to the corresponding root systems.
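For example, \(A_{2}\) has Gram matrix \(\begin{pmatrix}-2&1\\ 1&-2\end{pmatrix}\) of determinant \(3\), so \(A_{2}^{\vee}/A_{2}\cong\mathbb{Z}/3\mathbb{Z}\); the hyperbolic lattice \(U\) has Gram matrix \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) and is unimodular, so its discriminant group is trivial. In general, the order of \(L^{\vee}/L\) is the absolute value of the determinant of the Gram matrix of \(L\).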
**Proposition 3.13**.: _For each of the numerical classes (17), (18) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (17), (18), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (17),(18), then \(G\) is \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), in order, as a group._
Proof.: Let \(B^{1}_{1,1}\), \(B^{2}_{1,1}\), and \(B^{3}_{1,1}\) be smooth curves such that \(B^{1}_{1,1}+B^{2}_{1,1}+B^{3}_{1,1}\) is simple normal crossing. Since \(B^{1}_{1,1}+B^{2}_{1,1}+B^{3}_{1,1}=(3C+3F)\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), by Theorem 3.5, there is the Galois cover \(p^{\prime}:X^{\prime}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that the branch divisor is \(3B^{1}_{1,1}+3B^{2}_{1,1}+3B^{3}_{1,1}\) and the Galois group is isomorphic to \(\mathbb{Z}/3\mathbb{Z}\) as a group. Since \(B^{1}_{1,1}+B^{2}_{1,1}+B^{3}_{1,1}\) is simple normal crossing, singular points of \(X^{\prime}\) are rational double points. More precisely, the singular locus of \(X^{\prime}\) consists of six \(A_{2}\) points. Let \(p_{m}:X^{\prime}_{m}\to X^{\prime}\) be the minimal resolution of \(X^{\prime}\). Then the canonical divisor of \(X^{\prime}_{m}\) is numerically trivial. Since \(X^{\prime}_{m}\) has a curve whose self intersection number is negative, \(X^{\prime}_{m}\) is a \(K3\) surface or an Enriques surface. Since \(X^{\prime}_{m}\) has an automorphism \(s\) of order \(3\) such that the curves of \(\mathrm{Fix}(s)\) are three rational curves \(C_{i}\) for \(i=1,2,3\), by [11], \(X^{\prime}_{m}\) is a \(K3\) surface. By [1, Theorem 2.8 and Proposition 3.2] or [14, Table 2], we get that
\[\text{Pic}(X^{\prime}_{m})^{s^{*}}:=\{\alpha\in\text{Pic}(X^{\prime}_{m}):\ s^{*}\alpha=\alpha\}\cong U\oplus E_{6}\oplus A^{3}_{2}.\]
Let \(z_{1},\ldots,z_{6}\) be singular points of \(X^{\prime}\), and \(e_{1},\ldots,e_{12}\) be the exceptional divisors of \(p_{m}\), where \(z_{i}=p_{m}(e_{2i-1})=p_{m}(e_{2i})\) for \(i=1,\ldots,6\). Notice that \((e_{2i-1}\cdot e_{2i})=1\), \((e_{2i-1}\cdot e_{2i-1})=-2\), and \((e_{2i}\cdot e_{2i})=-2\). Since \(C_{i}\subset\)Fix\((s)\) for \(i=1,2,3\), we get that \((e_{2i-1}\cup e_{2i})\cap\)Fix\((s)\) contains at least \(2\) points. Since \(s(e_{2i-1}\cup e_{2i})=(e_{2i-1}\cup e_{2i})\) and \(e_{2i-1}\cap e_{2i}\) is one point, we get that \(e_{2i-1}\cap e_{2i}\subset\)Fix\((s)\). Therefore, \(s(e_{2i-1})=e_{2i-1}\) and \(s(e_{2i})=e_{2i}\), and hence \(e_{2i-1},e_{2i}\in\)Pic\((X^{\prime}_{m})^{s^{*}}\) for \(i=1,\ldots,6\). Since \(\text{Pic}(X^{\prime}_{m})^{s^{*}}\) is a primitive sublattice, the minimal primitive sublattice which contains \((p^{\prime}\circ p_{m})^{*}\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) and \(e_{1},\ldots,e_{12}\) of \(\text{Pic}(X^{\prime}_{m})\) is \(\text{Pic}(X^{\prime}_{m})^{s^{*}}\).
Let \(f:=p^{\prime}\circ p_{m}:X^{\prime}_{m}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\). Since \(f_{*}C_{i}=B^{i}_{1,1}\), we get \((C_{i}\cdot f^{*}F)=((C+F)\cdot F)=1\) for \(i=1,2,3\). Let
\[C^{\prime}_{1}:=C_{1}+\sum_{i=1}^{6}\frac{(C_{1}\cdot e_{2i-1})}{2}e_{2i-1}+ \sum_{i=1}^{6}\frac{(C_{1}\cdot(e_{2i-1}+2e_{2i}))}{6}(e_{2i-1}+2e_{2i}).\]
Then \((C^{\prime}_{1}\cdot e_{i})=0\) for \(i=1,\ldots,12\). Since \((e_{2i-1}\cdot e_{2i-1})=-2\), \((e_{2i-1}\cdot(e_{2i-1}+2e_{2i}))=0\), and \(((e_{2i-1}+2e_{2i})\cdot(e_{2i-1}+2e_{2i}))=-6\), we get \(6C^{\prime}_{1}\in\operatorname{Pic}(X^{\prime}_{m})\). Therefore, the minimal primitive sublattice \(K\) of \(\operatorname{Pic}(X^{\prime}_{m})^{s^{*}}\) which contains \(f^{*}C\) and \(6C^{\prime}_{1}\) is a unimodular lattice, which we identify with \(U\). Let \(M\) be the minimal primitive sublattice of \(\operatorname{Pic}(X^{\prime}_{m})\) which contains the curves \(e_{1},\ldots,e_{12}\). Then \(M\subset U^{\perp}\). Since \(U\) is a unimodular lattice and \(M\) and \(U\) are sublattices of \(\operatorname{Pic}(X^{\prime}_{m})^{s^{*}}\), we get \(U\oplus M=\operatorname{Pic}(X^{\prime}_{m})^{s^{*}}\). Therefore, the rank of \(M\) is \(12\) and \(M^{\vee}/M\cong\mathbb{Z}/3\mathbb{Z}^{\oplus 4}\). Thus, by [6, Theorem 5.2] there are a \(K3\) surface \(X\) and a symplectic automorphism \(t\) of order \(3\) of \(X\) such that \(X^{\prime}=X/\langle t\rangle\), and hence there is a finite abelian subgroup \(G\subset\text{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(G\cong\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), and the branch divisor is \(3B^{1}_{1,1}+3B^{2}_{1,1}+3B^{3}_{1,1}\). In the same way, we get the claim for (18).
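As a consistency check on this computation: from \(U\oplus M=\operatorname{Pic}(X^{\prime}_{m})^{s^{*}}\cong U\oplus E_{6}\oplus A_{2}^{3}\) we get that \(M\) has rank \(6+3\cdot 2=12\) and, since \(U\) is unimodular while \(E_{6}\) and \(A_{2}\) both have discriminant group \(\mathbb{Z}/3\mathbb{Z}\),

\[M^{\vee}/M\cong\mathbb{Z}/3\mathbb{Z}\oplus(\mathbb{Z}/3\mathbb{Z})^{\oplus 3}\cong\mathbb{Z}/3\mathbb{Z}^{\oplus 4}.\]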
More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), and the numerical class of the branch divisor \(B\) of \(G\) is (17) or (18). By Theorem 3.5, there is the Galois cover \(p^{\prime}:X^{\prime}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that the branch divisor is \(B\) and the Galois group is isomorphic to \(\mathbb{Z}/3\mathbb{Z}\) as a group. Then we get that \(X\) is the universal cover of \(X^{\prime}\) of degree 3.
For (17), we obtain an example if we use curves \(B^{1}_{1,1},B^{2}_{1,1},B^{3}_{1,1}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B^{1}_{1,1}:z_{0}w_{0}+z_{1}w_{1}=0,\ B^{2}_{1,1}:z_{0}w_{0}-z_{1}w_{1}=0,\ B^{3}_{1,1}:z_{0}w_{1}+z_{1}w_{0}=0.\]
For (18), we obtain an example if we use curves \(B_{1,0},B_{1,1},B_{1,2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}:z_{0}=0,\ B_{1,1}:z_{0}w_{1}+z_{1}w_{0}=0,\ B_{1,2}:z_{0}w_{1}^{2}+z_{1 }w_{0}^{2}+z_{1}w_{1}^{2}=0.\]
**Corollary 3.14**.: _For each of the numerical classes (198,92), (204), (303,252,205,93) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (198,92), (204), (303,252,205,93), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (198,92), (204), (303,252,205,93), then \(G\) is \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/3\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 3}\), in order, as a group._
Proof.: In the same way as Proposition 3.13, we get this corollary. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Let \(p^{\prime}:X^{\prime}\to\mathbb{F}_{n}\) be the Galois cover such that the branch divisor is \(B\) and which is given by Theorem 3.5. Then we get the following.
i) If the numerical class of \(B\) is one of (198,204,303), then \(X\) is the universal cover of \(X^{\prime}\) of degree 3.
ii) If the numerical class of \(B\) is (92), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (198) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree 2.
iii) If the numerical class of \(B\) is one of (252,205,93), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{6}\) whose numerical class of the branch divisor is (303) and the Galois cover \(\mathbb{F}_{6}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{6}{m}\).
For (198), we obtain an example if we use curves \(B^{1}_{1,2},B^{2}_{1,2},B^{3}_{1,2}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{1,2}:Y_{0}+Y_{2}=0,\ B^{2}_{1,2}:Y_{1}+Y_{2}=0,\ B^{3}_{1,2}:Y_{0}+Y_{1 }+Y_{2}=0.\]
For (92), we obtain an example if we use curves \(B^{1}_{1,1},B^{2}_{1,1},B^{3}_{1,1},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B^{1}_{1,1}:Y_{0}+Y_{2}=0,\ B^{2}_{1,1}:Y_{1}+Y_{2}=0,\ B^{3}_{1,1}:Y_{0}+Y_{1 }+Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (204), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,3},B^{2}_{1,3}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{1,3}:X_{0}Y_{0}+X_{0}Y_{1}+X_{1}Y_{2}=0,\ B^{2}_{1,3}:X_{1}Y_{0}+X_{1}Y_ {1}+2X_{0}Y_{2}=0.\]
For (303), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,6},B^{2}_{1,6}\) in \(\mathbb{F}_{6}\) given by the equations
\[B^{1}_{1,6}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,6}:Y_{1}+2Y_{2}=0.\]
For (252), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,3},B^{2}_{1,3},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{3}\) given by the equations
\[B^{1}_{1,3}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,3}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (205), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,2},B^{2}_{1,2},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{1,2}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,2}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (93), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,1},B^{2}_{1,1},B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B^{1}_{1,1}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,1}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
**Proposition 3.15**.: _For each of the numerical classes (19,20) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (19,20), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of Aut\((X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (19,20), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), in order, as a group._
Proof.: Let \(B^{i}_{1,1}\) be a smooth curve on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) for \(i=1,2,3,4\) such that \(\sum_{i=1}^{4}B^{i}_{1,1}\) is simple normal crossing. Then the numerical class of \(\sum_{i=1}^{4}2B^{i}_{1,1}\) is (19). We set \(\{x_{1},x_{2}\}:=B^{1}_{1,1}\cap B^{2}_{1,1}\) and \(\{x_{3},x_{4}\}:=B^{3}_{1,1}\cap B^{4}_{1,1}\). Let \(Z:=\text{Blow}_{\{x_{1},x_{2},x_{3},x_{4}\}}\mathbb{P}^{1}\times\mathbb{P}^{1}\). Let \(E_{i}\) be the exceptional divisor for \(i=1,2,3,4\). Then \(\text{Pic}(Z)=\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\bigoplus_{i=1}^{4}\mathbb{Z}E_{i}\). Let \(C_{i}\) be the proper transform of \(B^{i}_{1,1}\) for \(i=1,2,3,4\). Then, for \(i=1,2\) and \(j=3,4\),
\[C_{i}=(C+F)-E_{1}-E_{2}\text{ and }C_{j}=(C+F)-E_{3}-E_{4}\text{ in }\text{Pic}(Z).\]
By Theorem 3.5, there are the Galois covers \(p_{1}:Y_{1}\to Z\) and \(p_{2}:Y_{2}\to Z\) such that the branch divisor of \(p_{1}\) is \(2C_{1}+2C_{2}\), that of \(p_{2}\) is \(2C_{3}+2C_{4}\), and the Galois group of \(p_{i}\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) as a group for \(i=1,2\). Since \(C_{1}\cap C_{2}\) and \(C_{3}\cap C_{4}\) are empty sets, \(Y_{1}\) and \(Y_{2}\) are smooth. Since \(\sum_{i=1}^{4}C_{i}\) is simple normal crossing, \(Y:=Y_{1}\times_{Z}Y_{2}\) is smooth and a \(K3\) surface. Therefore, there is the Galois cover \(f:Y\to Z\) whose branch divisor is \(\sum_{i=1}^{4}2C_{i}\) and Galois group is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as
a group. Let \(C^{\prime}_{i}\) be a smooth curve on \(Y\) such that \(f^{*}C_{i}=2C^{\prime}_{i}\) for \(i=1,2,3,4\). Then
\[C^{\prime}_{1}=\tfrac{1}{2}f^{*}\big((C+F)-E_{1}-E_{2}\big)\text{ and }C^{\prime}_{3}=\tfrac{1}{2}f^{*}\big((C+F)-E_{3}-E_{4}\big)\text{ in }\operatorname{Pic}(Y).\]
Thus, we get
\[\sum_{i=1}^{4}f^{*}E_{i}=2f^{*}(C+F)-2C^{\prime}_{1}-2C^{\prime}_{3}\text{ in }\operatorname{Pic}(Y).\]
By Theorem 3.5, there is the Galois cover \(g:W\to Y\) whose branch divisor is \(\sum_{i=1}^{4}2f^{*}E_{i}\). Let \(E^{\prime}_{i}\) be a smooth curve on \(W\) such that \(g^{*}f^{*}E_{i}=2E^{\prime}_{i}\). Since \((f^{*}E_{i}\cdot f^{*}E_{i})=-2\), \((E^{\prime}_{i}\cdot E^{\prime}_{i})=-1\) for \(i=1,2,3,4\). Let \(c:W\to X\) be the contraction of \(E^{\prime}_{1},\dots,E^{\prime}_{4}\). Since \(Y\) is a \(K3\) surface, \(X\) is a \(K3\) surface. Since \(W\) is a double cover of \(Y\), there is a symplectic involution \(s\) of \(X\) such that \(X/\langle s\rangle\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) is a Galois cover whose branch divisor is \(2B^{1}_{1,1}+2B^{2}_{1,1}+2B^{3}_{1,1}+2B^{4}_{1,1}\). Therefore, there is a finite abelian subgroup \(G\subset\text{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), and the branch divisor is \(2B^{1}_{1,1}+2B^{2}_{1,1}+2B^{3}_{1,1}+2B^{4}_{1,1}\).
Next, let \(B_{1,0},B_{1,2},B^{1}_{1,1},B^{2}_{1,1}\) be smooth curves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that \(B_{1,0}+B_{1,2}+B^{1}_{1,1}+B^{2}_{1,1}\) is simple normal crossing. Then the numerical class of \(2B_{1,0}+2B_{1,2}+2B^{1}_{1,1}+2B^{2}_{1,1}\) is (20). We set \(\{x_{1},x_{2}\}:=B_{1,0}\cap B_{1,2}\) and \(\{x_{3},x_{4}\}:=B^{1}_{1,1}\cap B^{2}_{1,1}\). Let \(Z:=\text{Blow}_{\{x_{1},x_{2},x_{3},x_{4}\}}\mathbb{P}^{1}\times\mathbb{P}^{1}\). Let \(E_{i}\) be the exceptional divisor for \(i=1,2,3,4\). Then \(\text{Pic}(Z)=\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\bigoplus_{i=1}^{4 }\mathbb{Z}E_{i}\). Let \(C_{1,0},C_{1,2},C^{1}_{1,1},C^{2}_{1,1}\) be the proper transform of \(B_{1,0}\),\(B_{1,2}\),\(B^{1}_{1,1}\),\(B^{2}_{1,1}\) in order. Then
\[C_{1,0}=C-E_{1}-E_{2}\text{ and }C_{1,2}=(C+2F)-E_{1}-E_{2}\text{ in }\text{Pic}(Z),\]
and
\[C^{1}_{1,1}=(C+F)-E_{3}-E_{4}\text{ and }C^{2}_{1,1}=(C+F)-E_{3}-E_{4}\text{ in }\text{Pic}(Z).\]
Let \(p_{1}:Y_{1}\to Z\) be a cyclic cover whose branch divisor is \(2C_{1,0}+2C_{1,2}\), and \(p_{2}:Y_{2}\to Z\) be a cyclic cover whose branch divisor is \(2C^{1}_{1,1}+2C^{2}_{1,1}\). Then, as for the case of (19), \(Y:=Y_{1}\times_{Z}Y_{2}\) is a \(K3\) surface, and there is the Galois cover \(f:Y\to Z\) whose branch divisor is \(2C_{1,0}+2C_{1,2}+2C^{1}_{1,1}+2C^{2}_{1,1}\) and whose Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. Since \(\frac{f^{*}C_{1,0}}{2}\in\operatorname{Pic}(Y)\) and \(\frac{f^{*}C_{1,2}}{2}\in\operatorname{Pic}(Y)\), we get \(\frac{f^{*}(C_{1,2}-C_{1,0})}{2}=f^{*}F\in\operatorname{Pic}(Y)\). As for the case of (19), we get \(\frac{\sum_{i=1}^{4}f^{*}E_{i}}{2}\in\operatorname{Pic}(Y)\), and hence we get the claim for (20).
More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (19) or (20). By Theorem 3.5 and the fibre product, there is the Galois cover \(p^{\prime}:X^{\prime}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that the branch divisor is \(B\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. Then we get that \(X\) is the universal cover of \(X^{\prime}\) of degree \(2\).
For (19), we obtain an example if we use curves \(B^{1}_{1,1},B^{2}_{1,1},B^{3}_{1,1},B^{4}_{1,1}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B^{1}_{1,1}:z_{0}w_{0}+z_{1}w_{1}=0,\text{ }B^{2}_{1,1}:z_{0}w_{0}-z_{1}w_{1}=0,\]
\[B^{3}_{1,1}:z_{0}w_{1}+z_{1}w_{0}=0,\text{ }B^{4}_{1,1}:z_{0}w_{1}-z_{1}w_{0}=0.\]
For (20), we obtain an example if we use curves \(B_{1,0},B_{1,1}^{1},B_{1,1}^{2},B_{1,2}\) in \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) given by the equations
\[B_{1,0}:z_{0}=0,\ B_{1,1}^{1}:z_{0}w_{0}+z_{1}w_{1}=0,\]
\[B_{1,1}^{2}:z_{0}w_{1}+z_{1}w_{0}=0,\ B_{1,2}:z_{0}w_{1}^{2}+3z_{1}w_{0}^{2}=0.\]
**Corollary 3.16**.: _For each of the numerical classes (81), (82), (199,94), (200,96), (282,206,97,98) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (81), (82), (199,94), (200,96), (282,206,97,98), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\operatorname{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (81), (82), (199,94), (200,96), (282,206,97,98), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 5}\), in order, as a group._
Proof.: In the same way as Proposition 3.15, we get this corollary. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) We assume that the numerical class of \(B\) is one of (81,82,199,200,282). By Theorem 3.5 and the fibre product, there is the Galois cover \(p^{\prime}:X^{\prime}\to\mathbb{F}_{n}\) such that the branch divisor is \(B\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. Then \(X\) is the universal cover of \(X^{\prime}\) of degree \(2\).
ii) If the numerical class of \(B\) is (94), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (199) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(2\).
iii) If the numerical class of \(B\) is (96), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{2}\) whose numerical class of the branch divisor is (200) and the Galois cover \(\mathbb{F}_{2}\to\mathbb{F}_{1}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(2\).
iv) If the numerical class of \(B\) is one of (206,98,97), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{4}\) whose numerical class of the branch divisor is (282) and the Galois cover \(\mathbb{F}_{4}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{4}{m}\).
For (81), we obtain an example if we use a section \(C\) and curves \(B_{1,2}^{1},B_{1,2}^{2},B_{1,2}^{3}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,2}^{1}:X_{0}Y_{1}+X_{1}Y_{0}+(X_{0}+X_{1})Y_{2}=0,\ B_{1,2}^{2}:X_{0}Y_{1} +2X_{1}Y_{0}+(2X_{0}+X_{1})Y_{2}=0,\]
\[B_{1,2}^{3}:2X_{0}Y_{1}+X_{1}Y_{0}+(X_{0}+2X_{1})Y_{2}=0.\]
For (82), we obtain an example if we use curves \(B_{1,3},B_{1,1}^{1},B_{1,1}^{2},B_{1,1}^{3}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,3}:X_{0}^{2}Y_{1}+X_{1}^{2}Y_{0}+X_{0}X_{1}Y_{2}=0,\ B_{1,1}^{1}:Y_{0}+Y_{1 }+Y_{2}=0,\]
\[B_{1,1}^{2}:Y_{0}+2Y_{1}+Y_{2}=0,\ B_{1,1}^{3}:2Y_{0}+Y_{1}+Y_{2}=0.\]
For (199), we obtain an example if we use a section \(C\) and curves \(B_{2,4},B_{1,2}^{1},B_{1,2}^{2}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{2,4}:X_{0}^{2}Y_{1}+(X_{0}^{2}+X_{1}^{2})Y_{2}=0,\ B_{1,2}^{1}:Y_{0}+Y_{2}=0, \ B_{1,2}^{2}:2Y_{0}+2Y_{1}=0.\]
For (94), we obtain an example if we use a section \(C\) and curves \(B_{1,2},B_{1,1}^{1}\), \(B_{1,1}^{2}\), \(B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,2}:X_{0}Y_{1}+(X_{0}+X_{1})Y_{2}=0,\ B_{1,1}^{1}:Y_{0}+Y_{2}=0,\ B_{1,1}^{ 2}:2Y_{0}+2Y_{1}=0,\]
\[B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X_{1}=0.\]
For (200), we obtain an example if we use curves \(B_{1,2}^{1},B_{1,2}^{2},B_{1,2}^{3},B_{1,2}^{4}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{1,2}^{1}:Y_{0}+2Y_{2}=0,\ B_{1,2}^{2}:Y_{1}+2Y_{2}=0,\]
\[B_{1,2}^{3}:3Y_{0}+Y_{1}+Y_{2}=0,\ B_{1,2}^{4}:Y_{0}+Y_{1}+3Y_{2}=0.\]
For (96), we obtain an example if we use curves \(B_{1,1}^{1}\),\(B_{1,1}^{2}\),\(B_{1,1}^{3}\),\(B_{1,1}^{4}\),\(B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,1}^{1}:Y_{0}+2Y_{2}=0,\ B_{1,1}^{2}:Y_{1}+2Y_{2}=0,\]
\[B_{1,1}^{3}:3Y_{0}+Y_{1}+Y_{2}=0,\ B_{1,1}^{4}:Y_{0}+Y_{1}+3Y_{2}=0,\]
\[B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X_{1}=0.\]
For (282), we obtain an example if we use a section \(C\) and curves \(B_{1,4}^{1}\),\(B_{1,4}^{2}\),\(B_{1,4}^{3}\) in \(\mathbb{F}_{4}\) given by the equations
\[B_{1,4}^{1}:Y_{0}+2Y_{2}=0,\ B_{1,4}^{2}:Y_{1}+2Y_{2}=0,\ B_{1,4}^{3}:3Y_{0}+Y _{1}+Y_{2}=0.\]
For (206), we obtain examples if we use a section \(C\) and curves \(B_{1,2}^{1}\),\(B_{1,2}^{2}\),\(B_{1,2}^{3}\),\(B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{1,2}^{1}:Y_{0}+2Y_{2}=0,\ B_{1,2}^{2}:Y_{1}+2Y_{2}=0,\ B_{1,2}^{3}:3Y_{0}+Y_{1}+Y_{2}=0,\]
\[B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X_{1}=0.\]
For (97), we obtain examples if we use a section \(C\) and curves \(B_{1,1}^{1}\),\(B_{1,1}^{2}\),\(B_{1,1}^{3}\), \(B_{0,1}^{1}\),\(B_{0,1}^{2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,1}^{1}:Y_{0}+2Y_{2}=0,\ B_{1,1}^{2}:Y_{1}+2Y_{2}=0,\ B_{1,1}^{3}:3Y_{0}+Y_{1}+Y_{2}=0,\]
\[B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X_{1}=0.\]
For (98), we obtain an example if we use a section \(C\) and curves \(B_{1,1}^{1}\),\(B_{1,1}^{2}\),\(B_{1,1}^{3}\),\(B_{1,1}^{4}\), \(B_{0,1}^{1}\),\(B_{0,1}^{2}\),\(B_{0,1}^{3}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{1,1}^{1}:Y_{0}+2Y_{2}=0,\ B_{1,1}^{2}:Y_{1}+2Y_{2}=0,\ B_{1,1}^{3}:3Y_{0}+Y _{1}+Y_{2}=0,\ B_{1,1}^{4}:Y_{0}+Y_{1}+3Y_{2}=0,\]
\[B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X_{1}=0,\ B_{0,1}^{3}:X_{0}-X_{1}=0.\]
**Proposition 3.17**.: _For each of the numerical classes (278,207,99,100) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (278,207,99,100), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is one of (278,207,99,100), then \(G\) is \(\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), in order, as a group._
Proof.: Let \(B_{2,8}\) be a smooth curve on \(\mathbb{F}_{4}\). Then the numerical class of \(2C+4B_{2,8}\) is (278). Since \(B_{2,8}=2C+8F\) in \(\text{Pic}(\mathbb{F}_{4})\), by Theorem 3.5, there is the Galois cover \(p_{1}:X_{1}\to\mathbb{F}_{4}\) such that the branch divisor is \(2B_{2,8}\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\) as a group. Let \(E_{2,8}\) be a smooth curve on \(X_{1}\) such that \(p_{1}^{*}B_{2,8}=2E_{2,8}\). Since \(C+B_{2,8}\) is simple normal crossing, \(p_{1}^{*}C\) is a reduced divisor on \(X_{1}\), whose support is a union of pairwise disjoint smooth curves. Since \(p_{1}^{*}C+E_{2,8}=p_{1}^{*}(2C+4F)=2p_{1}^{*}(C+2F)\) in \(\text{Pic}(X_{1})\), by Theorem 3.5, there is a Galois cover \(p_{2}:X_{2}\to X_{1}\) such that the branch divisor is \(p_{1}^{*}C+E_{2,8}\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\) as a group. Then \(p:=p_{1}\circ p_{2}:X_{2}\to\mathbb{F}_{4}\) is the branched cover which has \(2C+4B_{2,8}\) as the branch divisor. In the same way as Proposition 3.7, \(X_{2}\) is a \(K3\) surface, and \(p:X_{2}\to\mathbb{F}_{4}\) is the Galois cover whose Galois group is \(\mathbb{Z}/4\mathbb{Z}\) as a group. In the same way as Proposition 3.7, we get the claim for (207,99,100).
More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is (278), then \(X\to X/G\) is given in the above way.
ii) If the numerical class of \(B\) is one of (207,99,100), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{4}\) whose numerical class of the branch divisor is (278) and the Galois cover \(\mathbb{F}_{4}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{4}{m}\).
For (278), we obtain an example if we use a section \(C\) and a curve \(B_{2,8}\) in \(\mathbb{F}_{4}\) given by the equation
\[B_{2,8}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0.\]
For (207), we obtain examples if we use a section \(C\) and curves \(B_{2,4}\),\(B_{0,1}^{1}\),\(B_{0,1}^{2}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{2,4}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0.\]
For (99), we obtain examples if we use a section \(C\) and curves \(B_{2,2}\),\(B_{0,1}^{1}\),\(B_{0,1}^{2}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{2,2}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0.\]
For (100), we obtain examples if we use a section \(C\) and curves \(B_{2,2}\),\(B_{0,1}^{1}\), \(B_{0,1}^{2}\),\(B_{0,1}^{3}\) in \(\mathbb{F}_{1}\) given by the equations
\[B_{2,2}:Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}=0,\ B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X _{1}=0,\ B_{0,1}^{3}:X_{0}-X_{1}=0.\]
**Proposition 3.18**.: _For each of the numerical classes (280,208) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (280,208), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is one of (280,208), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), in order, as a group._
Proof.: Let \(B_{1,6}\) and \(B_{1,4}\) be smooth curves on \(\mathbb{F}_{4}\) such that \(C+B_{1,6}+B_{1,4}\) is simple normal crossing. Then the numerical class of \(4C+2B_{1,6}+4B_{1,4}\) is (280). Since \(C+B_{1,4}=2C+4F=2(C+2F)\) in \(\text{Pic}(\mathbb{F}_{4})\), by Theorem 3.5, there is the Galois cover \(p_{1}:X_{1}\to\mathbb{F}_{4}\) such that the branch divisor is \(2C+2B_{1,4}\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\) as a group. Let \(E_{C},E_{1,4}\) be two smooth curves on \(X_{1}\) such that \(p_{1}^{*}C=2E_{C}\) and \(p_{1}^{*}B_{1,4}=2E_{1,4}\). Since \(C+B_{1,6}+B_{1,4}\) is simple normal crossing, \(p_{1}^{*}B_{1,6}\) is a reduced divisor on \(X_{1}\), whose support is a union of pairwise disjoint smooth curves. Since \(p_{1}^{*}B_{1,6}=p_{1}^{*}(C+6F)=p_{1}^{*}(C+4F)+p_{1}^{*}(2F)=2E_{1,4}+2p_{1}^{*}F\) in \(\text{Pic}(X_{1})\), by Theorem 3.5, there is the Galois cover \(p_{2}:X_{2}\to X_{1}\) such that the branch divisor is \(2p_{1}^{*}B_{1,6}\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\). Notice that \(\frac{p_{2}^{*}p_{1}^{*}B_{1,6}}{2}\in\text{Pic}(X_{2})\). Since \(C+B_{1,6}+B_{1,4}\) is simple normal crossing, \(p_{2}^{*}E_{C}\) and \(p_{2}^{*}E_{1,4}\) are reduced divisors on \(X_{2}\), whose supports are unions of pairwise disjoint smooth curves. Since \(p_{2}^{*}(E_{C}+E_{1,4})=p_{2}^{*}p_{1}^{*}(C+2F)=p_{2}^{*}p_{1}^{*}(C+6F)-p_{2}^{*}p_{1}^{*}(4F)=p_{2}^{*}p_{1}^{*}B_{1,6}-4p_{2}^{*}p_{1}^{*}F\) in \(\text{Pic}(X_{2})\) and \(\frac{p_{2}^{*}p_{1}^{*}B_{1,6}}{2}\in\text{Pic}(X_{2})\), by Theorem 3.5, there is the Galois cover \(p_{3}:X\to X_{2}\) such that the branch divisor is \(p_{2}^{*}(E_{C}+E_{1,4})\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\). Then \(p:=p_{1}\circ p_{2}\circ p_{3}:X\to\mathbb{F}_{4}\) is the branched cover which has \(4C+2B_{1,6}+4B_{1,4}\) as the branch divisor. In the same way as Proposition 3.7, \(X\) is a \(K3\) surface, and \(p:X\to\mathbb{F}_{4}\) is the Galois cover whose Galois group is \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) as a group. In the same way as Proposition 3.7, we get the claim for (208).
More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is (280), then \(X\to X/G\) is given in the above way.
ii) If the numerical class of \(B\) is (208), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{4}\) whose numerical class of the branch divisor is (280) and the Galois cover \(\mathbb{F}_{4}\to\mathbb{F}_{2}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree 2.
For (280), we obtain an example if we use a section \(C\) and curves \(B_{1,6}\),\(B_{1,4}\) in \(\mathbb{F}_{4}\) given by the equations
\[B_{1,6}:X_{0}^{2}Y_{1}+X_{1}^{2}Y_{0}+(X_{0}^{2}+2X_{1}^{2})Y_{2}=0,\ B_{1,4}:2Y _{0}+Y_{2}=0.\]
For (208), we obtain an example if we use a section \(C\) and curves \(B_{1,3}\),\(B_{1,2}\), \(B_{0,1}^{1},B_{0,1}^{2}\) in \(\mathbb{F}_{2}\) given by the equations
\[B_{1,3}:X_{0}Y_{1}+X_{1}Y_{0}+(X_{0}+2X_{1})Y_{2}=0,\ B_{1,2}:2Y _{0}+Y_{2}=0,\] \[B_{0,1}^{1}:X_{0}=0,\ B_{0,1}^{2}:X_{1}=0.\]
**Corollary 3.19**.: _For each of the numerical classes (311,281,210,209,101) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (311,281,210,209,101), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is one of (311,281,210,209,101), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}\), in order, as a group._
Proof.: In the same way as Proposition 3.18, we get the claim. More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is (311), then \(X\to X/G\) is given in the above way.
ii) If the numerical class of \(B\) is one of (101,209,210,281), then \(X\to X/G\) is given by the composition of the Galois cover \(X\to\mathbb{F}_{8}\) whose numerical class of the branch divisor is (311) and the Galois cover \(\mathbb{F}_{8}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{8}{m}\).
For (311), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,8}\),\(B^{2}_{1,8}\) in \(\mathbb{F}_{8}\) given by the equations
\[B^{1}_{1,8}:Y_{0}+Y_{1}+Y_{2}=0,\ B^{2}_{1,8}:Y_{0}+Y_{1}+2Y_{2}=0.\]
For (281), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,4}\),\(B^{2}_{1,4}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{4}\) given by the equations
\[B^{1}_{1,4}:Y_{0}+Y_{1}+Y_{2}=0,\ B^{2}_{1,4}:Y_{0}+Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (209), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,2}\),\(B^{2}_{1,2}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{1,2}:Y_{0}+Y_{1}+Y_{2}=0,\ B^{2}_{1,2}:Y_{0}+Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (101), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,1}\),\(B^{2}_{1,1}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B^{1}_{1,1}:Y_{0}+Y_{1}+Y_{2}=0,\ B^{2}_{1,1}:Y_{0}+Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (210), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,2}\),\(B^{2}_{1,2}\), \(B^{1}_{0,1}\),\(B^{2}_{0,1}\),\(B^{3}_{0,1}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{1,2}:Y_{0}+Y_{1}+Y_{2}=0,\ B^{2}_{1,2}:Y_{0}+Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0,\ B^{3}_{0,1}:X_{0}-X_{1}=0.\]
**Proposition 3.20**.: _For each of the numerical classes (316,304,283,254,253,211,95) of the list in section 6, there are a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is (316,304,283,254,253,211,95), respectively._
_Furthermore, for a pair \((X,G)\) of a \(K3\) surface \(X\) and a finite abelian subgroup \(G\) of \(\text{Aut}(X)\), if \(X/G\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor \(B\) of the quotient map \(p:X\to X/G\) is one of (316,304,283,254,253,211,95), then \(G\) is \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/3\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), in order, as a group._
Proof.: Let \(B^{i}_{1,12}\) be a smooth curve on \(\mathbb{F}_{12}\) for \(i=1,2\) such that \(C+B^{1}_{1,12}+B^{2}_{1,12}\) is simple normal crossing. Then the numerical class of \(6C+2B^{1}_{1,12}+3B^{2}_{1,12}\) is (316). Since \(C+B^{1}_{1,12}=2C+12F\) in \(\text{Pic}(\mathbb{F}_{12})\), by Theorem 3.5, there is the Galois cover \(p_{1}:X_{1}\to\mathbb{F}_{12}\) such that the branch divisor is \(2C+2B^{1}_{1,12}\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\) as a group. Since \(C+B^{1}_{1,12}+B^{2}_{1,12}\) is simple normal crossing, \(p^{*}_{1}B^{2}_{1,12}\) is a reduced divisor on \(X_{1}\), whose support is a union of pairwise disjoint smooth curves. Since \(C\) and \(B^{1}_{1,12}\) are smooth curves, there are smooth curves \(E_{C}\), \(E^{1}_{1,12}\) on \(X_{1}\) such that \(p^{*}_{1}C=2E_{C}\) and \(p^{*}_{1}B^{1}_{1,12}=2E^{1}_{1,12}\). Since \(E_{C}+p^{*}_{1}B^{2}_{1,12}=E_{C}+p^{*}_{1}(C+12F)=E_{C}+p^{*}_{1}C+12p^{*}_{1 }F=3E_{C}+12p^{*}_{1}F\) in \(\text{Pic}(X_{1})\), by Theorem 3.5, there is the Galois cover \(p_{2}:X\to X_{1}\) such that the branch divisor is \(3E_{C}+3p^{*}_{1}B^{2}_{1,12}\) and the Galois group is \(\mathbb{Z}/3\mathbb{Z}\) as a group. Then \(p:=p_{1}\circ p_{2}:X\to\mathbb{F}_{12}\) is the branched cover such that \(p\) has \(6C+2B^{1}_{1,12}+3B^{2}_{1,12}\) as the branch divisor. In the same way as Proposition 3.7, \(X\) is a \(K3\) surface, and \(p:X\to\mathbb{F}_{12}\) is the Galois cover whose Galois group is \(G\cong\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\) as a group.
More specifically, let \(X\) be a \(K3\) surface, \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\), and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Then we get the following.
i) If the numerical class of \(B\) is (316), then \(X\to X/G\) is given by the above way.
ii) If the numerical class of \(B\) is one of (304,283,254,253,211,95), then \(X\to X/G\) is given by the composition of the Galois cover \(X^{\prime}\to\mathbb{F}_{12}\) whose numerical class of the branch divisor is (316) and the Galois cover \(\mathbb{F}_{12}\to\mathbb{F}_{m}\) which is induced by the Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) of degree \(\frac{12}{m}\).
For (316), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,12}\),\(B^{2}_{1,12}\) in \(\mathbb{F}_{12}\) given by the equations
\[B^{1}_{1,12}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,12}:Y_{1}+2Y_{2}=0.\]
For (304), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,6}\),\(B^{2}_{1,6}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{6}\) given by the equations
\[B^{1}_{1,6}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,6}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (283), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,4}\),\(B^{2}_{1,4}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{4}\) given by the equations
\[B^{1}_{1,4}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,4}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (253), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,3}\),\(B^{2}_{1,3}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{3}\) given by the equations
\[B^{1}_{1,3}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,3}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (211), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,2}\),\(B^{2}_{1,2}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{2}\) given by the equations
\[B^{1}_{1,2}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,2}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (95), we obtain examples if we use a section \(C\) and curves \(B^{1}_{1,1}\),\(B^{2}_{1,1}\), \(B^{1}_{0,1},B^{2}_{0,1}\) in \(\mathbb{F}_{1}\) given by the equations
\[B^{1}_{1,1}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,1}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0.\]
For (254), we obtain an example if we use a section \(C\) and curves \(B^{1}_{1,3}\),\(B^{2}_{1,3}\), \(B^{1}_{0,1},B^{2}_{0,1},B^{3}_{0,1}\) in \(\mathbb{F}_{3}\) given by the equations
\[B^{1}_{1,3}:Y_{0}+2Y_{2}=0,\ B^{2}_{1,3}:Y_{1}+2Y_{2}=0,\]
\[B^{1}_{0,1}:X_{0}=0,\ B^{2}_{0,1}:X_{1}=0,\ B^{3}_{0,1}:X_{0}-X_{1}=0.\]
### Complete proof of Theorem 1.6
In this section, we will show that there is no numerical class admitting an abelian \(K3\) cover other than the numerical classes mentioned in Section 3.1. Together with Section 3.1, this proves Theorem 1.6. From here on, we use the following notation:
i) \(X\) is a \(K3\) surface,
ii) \(G\) is a finite abelian subgroup of \(\mathrm{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{n}\),
iii) \(p:X\to X/G\) is the quotient map, and
iv) \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) is the branch divisor of \(p\).
Furthermore, we use the notation that \(B^{k}_{i,j}\) (or simply \(B_{i,j}\)) is a smooth curve on \(\mathbb{F}_{n}\) such that \(B^{k}_{i,j}=iC+jF\) in \(\mathrm{Pic}(\mathbb{F}_{n})\) if \(n\geq 0\) where \(k\in\mathbb{N}\).
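For the reader's convenience, we also recall the standard intersection theory on \(\mathbb{F}_{n}\), which underlies all numerical computations below: \(\text{Pic}(\mathbb{F}_{n})=\mathbb{Z}C\oplus\mathbb{Z}F\) with

\[(C\cdot C)=-n,\quad(C\cdot F)=1,\quad(F\cdot F)=0,\]

so that \(((iC+jF)\cdot(kC+lF))=-nik+il+jk\). In particular, every intersection number \((B_{i,j}\cdot B_{k,l})\) used below is determined by the numerical classes alone.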
For the branch divisor \(B=\sum_{i=1}^{m}\sum_{j=1}^{n(i)}b^{i}_{j}B^{j}_{s_{i},t_{i}}\) where \(m,n(i)\in\mathbb{N}\), we use the notation that
\[G^{j}_{s_{i},t_{i}}:=\{g\in G:g_{|p^{-1}(B^{j}_{s_{i},t_{i}})}=\mathrm{id}_{p ^{-1}(B^{j}_{s_{i},t_{i}})}\}.\]
Recall that by Theorem 2.8, \(G^{j}_{s_{i},t_{i}}\) is a cyclic group of order \(b^{i}_{j}\) which is generated by a non-symplectic automorphism of order \(b^{i}_{j}\). Since \(G\) is abelian, the support of \(B\) and the support of \(p^{*}B\) are simple normal crossing.
**Lemma 3.21**.: _We assume that \(X/G\cong\mathbb{F}_{n}\) for \(n\geq 1\). If \(B=aC+\sum_{i=1}^{k}b_{i}(c_{i}C+nc_{i}F)+\sum_{j=1}^{l}d_{j}F_{j}\) in \(\mathrm{Pic}(\mathbb{F}_{n})\) where \(a,b_{i},d_{j}\geq 2\) and \(c_{i},l\geq 1\), then \(3\geq l\geq 2\) and \(d_{1}=\cdots=d_{l}\)._
Proof.: By Theorem 2.8, there are pairwise disjoint smooth curves \(C_{1},\ldots,C_{m}\) such that \(p^{*}C=\sum_{i=1}^{m}aC_{i}\). Since \(C_{1},\ldots,C_{m}\) are pairwise disjoint and \(G\) permutes them transitively, we get that \((\sum_{i=1}^{m}C_{i}\cdot\sum_{i=1}^{m}C_{i})=\sum_{i=1}^{m}(C_{i}\cdot C_{i})=m(C_{i}\cdot C_{i})\) for \(i=1,\ldots,m\). Since \((C\cdot C)=-n<0\), \((C_{i}\cdot C_{i})<0\) for \(i=1,\ldots,m\). Since \(X\) is a \(K3\) surface, \(C_{i}\) is a smooth rational curve for \(i=1,\ldots,m\). Let \(p_{|C_{i}}:C_{i}\to C\) be the induced finite map. Let \(B_{c_{i},nc_{i}}\) be an irreducible curve on \(\mathbb{F}_{n}\). Since \(B_{c_{i},nc_{i}}=c_{i}C+nc_{i}F\) in \(\mathrm{Pic}(\mathbb{F}_{n})\), and hence \((C\cdot B_{c_{i},nc_{i}})=0\), we get that \(C\cap B_{c_{i},nc_{i}}\) is an empty set. Since the support of \(B\) is simple normal crossing, \(p_{|C_{i}}\) is the Galois covering whose branch divisor is \(\sum_{j=1}^{l}d_{j}(C\cap F_{j})\). If the \(d_{j}\) are not all equal, then \(p_{|C_{i}}\) is non-trivial; since \(G\) is an abelian group, \(p_{|C_{i}}\) is an abelian cover, however, by Theorem 2.6, such a cover is non-abelian. This is a contradiction. Therefore, \(d_{1}=\cdots=d_{l}\).
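For context, the bound \(3\geq l\geq 2\) in the statement also follows from a standard Riemann-Hurwitz count (a sketch, supplementing the argument above): for a non-trivial degree-\(d\) Galois cover \(\mathbb{P}^{1}\to\mathbb{P}^{1}\) branched over \(l\) points with ramification indices \(d_{1},\ldots,d_{l}\geq 2\),

\[2=d\Bigl(2-\sum_{j=1}^{l}\Bigl(1-\frac{1}{d_{j}}\Bigr)\Bigr),\]

so the sum must be smaller than \(2\) while each of its terms is at least \(\frac{1}{2}\), forcing \(l\leq 3\); and \(l\leq 1\) occurs only for the trivial cover.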
By Lemma 3.21, the numerical class of \(B\) is not one of (128,129,132,137,143,150, 151,152,154,159,160,162,170,171,172,173,174,175,179,188,193,220,227,230,235,247, 248,255,256,257,264,269,271,274,276,285,288,290,295,297,301,307,310,313,315) of the list in section 6.
**Lemma 3.22**.: _We assume that \(X/G\cong\mathbb{F}_{n}\) for \(n\geq 1\). If \(B=aC+\sum_{i=1}^{k}b_{i}B_{i}+\sum_{j=1}^{l}d_{j}B_{0,1}^{j}\) where \(a,b_{i},d_{j}\geq 2\), then \(d_{1}=\cdots=d_{l}\), \(2\leq\sum_{i=1}^{k}(C\cdot B_{i})+\sum_{j=1}^{l}(C\cdot B_{0,1}^{j})\leq 3\), and \(b_{i}=d_{1}\) if \((C\cdot B_{i})\neq 0\) for \(i=1,\ldots,k\)._
Proof.: In the same way as Lemma 3.21, we get that for \(p^{*}C=\sum_{i=1}^{m}aC_{i}\), the finite map \(p_{|C_{i}}:C_{i}\to C\) is an abelian cover of \(\mathbb{P}^{1}\) whose branch divisor is \(\sum_{j=1}^{l}d_{j}(C\cap F_{j})\) and whose Galois group is \(\{g\in G:g(C_{1})=C_{1}\}\). By Theorem 2.6, we get the claim.
By Lemma 3.22, the numerical class of \(B\) is not one of (127,133,134,135,145,146, 156,157, 158,161,163,164,165,166,167,168,169,223,224,225,236,237,238,239,240,261, 262,263,268,270,272,273,275,284,289,292,296,298,299,300,306,312,314) of the list in section 6.
**Lemma 3.23**.: _If there are irreducible curves \(B_{1}\) and \(B_{2}\) and positive even integers \(b_{1},b_{2}\geq 2\) such that \(B=b_{1}B_{1}+b_{2}B_{2}\), and \((B_{1}\cdot B_{2})\neq 0\), then \((B_{1}\cdot B_{2})=8\)._
Proof.: By Theorem 2.8, \(G=G_{B_{1}}\oplus G_{B_{2}}\) and \(G_{B_{i}}\cong\mathbb{Z}/b_{i}\mathbb{Z}\) for \(i=1,2\). Let \(s_{i}\in G_{B_{i}}\) be a generator for \(i=1,2\). Since \(G\) is abelian, \(s_{1}^{\frac{b_{1}}{2}}\circ s_{2}^{\frac{b_{2}}{2}}\) is a symplectic automorphism of order \(2\). Since \(X/G\) is smooth, \(\operatorname{Fix}(s_{1}^{\frac{b_{1}}{2}}\circ s_{2}^{\frac{b_{2}}{2}})=p^{-1}(B_{1})\cap p^{-1}(B_{2})\). Since the support of \(B\) is simple normal crossing and \(|G|=b_{1}b_{2}\), we get that \(|p^{-1}(B_{1})\cap p^{-1}(B_{2})|=(B_{1}\cdot B_{2})\). Since the fixed locus of a symplectic automorphism of order \(2\) consists of \(8\) isolated points, we get that \((B_{1}\cdot B_{2})=8\).
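The point count in the proof above can be made explicit (a sketch, using the transversality guaranteed by the simple normal crossing hypothesis): every point of \(p^{-1}(B_{1})\cap p^{-1}(B_{2})\) has stabilizer containing \(G_{B_{1}}\oplus G_{B_{2}}\), so each of the \((B_{1}\cdot B_{2})\) intersection points of \(B_{1}\) and \(B_{2}\) has exactly

\[\frac{|G|}{|G_{B_{1}}|\,|G_{B_{2}}|}=\frac{b_{1}b_{2}}{b_{1}b_{2}}=1\]

preimage under \(p\). The same counting argument, with other group orders, is used repeatedly below.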
By Lemma 3.23, the numerical class of \(B\) is not one of (21,25,26,28,103,112,130, 176,213,216,241) of the list in section 6.
**Lemma 3.24**.: _If there are irreducible curves \(B_{1}\) and \(B_{2}\) such that \(B=3B_{1}+3B_{2}\) and \((B_{1}\cdot B_{2})\neq 0\), then \((B_{1}\cdot B_{2})=3\)._
Proof.: By Theorem 2.8, \(G=G_{B_{1}}\oplus G_{B_{2}}\) and \(G_{B_{i}}\cong\mathbb{Z}/3\mathbb{Z}\) for \(i=1,2\). Let \(s_{i}\in G_{B_{i}}\) be a generator for \(i=1,2\). Since \(G\) is abelian, we may assume that \(s_{1}\circ s_{2}\) is a non-symplectic automorphism of order \(3\). By Theorem 2.8, \(\operatorname{Fix}(s_{1}\circ s_{2})\) does not contain a curve. Then by [1, Theorem 2.8] or [14, Table 2], \(\operatorname{Fix}(s_{1}\circ s_{2})\) consists of exactly three isolated points. Since \(X/G\) is smooth, \(\operatorname{Fix}(s_{1}\circ s_{2})=p^{-1}(B_{1})\cap p^{-1}(B_{2})\). Since \(B_{1}+B_{2}\) is simple normal crossing and \(G=G_{B_{1}}\oplus G_{B_{2}}\), we get that \(|p^{-1}(B_{1})\cap p^{-1}(B_{2})|=(B_{1}\cdot B_{2})\). Therefore, we get \((B_{1}\cdot B_{2})=3\).
By Lemma 3.24, the numerical class of \(B\) is not one of (22,23,212,218) of the list in section 6.
**Lemma 3.25**.: _If there are irreducible curves \(B_{i}\) and positive integers \(b_{i}\geq 2\) for \(i=1,\ldots,k\) such that \(B=\sum_{i=1}^{k}b_{i}B_{i}\) and \(G=G_{B_{i}}\) for some \(i\), then \((B_{i}\cdot B_{j})=0\) for \(j\neq i\)._
Proof.: Recall that by Theorem 2.8, \(G_{B_{m}}\) is generated by a non-symplectic automorphism of order \(b_{m}\) and \(\operatorname{Fix}(G_{B_{m}})\supset p^{-1}(B_{m})\) for \(m=1,\ldots,k\). If \((B_{i}\cdot B_{j})\neq 0\) for some \(j\neq i\), then \(p^{-1}(B_{i})\cap p^{-1}(B_{j})\) is not an empty set. Since the fixed locus of an automorphism is a disjoint union of smooth curves and isolated points, this is a contradiction.
By Lemma 3.25, the numerical class of \(B\) is not one of (24,131,177,219,242) of the list in section 6.
**Lemma 3.26**.: _If there are irreducible curves \(B_{1}\) and \(B_{2}\) such that \(B=2B_{1}+2B_{2}\) and \((B_{1}\cdot B_{2})\neq 0\), then \(\frac{B_{i}}{2}\in\)Pic\((\mathbb{F}_{n})\) for \(i=1,2\)._
Proof.: By Theorem 2.8, \(G=G_{B_{1}}\oplus G_{B_{2}}\) and \(G_{B_{i}}\cong\mathbb{Z}/2\mathbb{Z}\) for \(i=1,2\). Since the fixed locus of a non-symplectic automorphism of order \(2\) is a disjoint union of smooth curves (or empty), \(X/G_{B_{i}}\) is smooth. Then there is a double cover \(X/G_{B_{i}}\to X/G\cong\mathbb{F}_{n}\) whose branch divisor is \(2B_{j}\) for \(i,j=1,2\) and \(i\neq j\). By Theorem 3.5, \(\frac{B_{i}}{2}\in\)Pic\((\mathbb{F}_{n})\) for \(i=1,2\).
By Lemma 3.26, the numerical class of \(B\) is not one of (27,113,117) of the list in section 6.
**Lemma 3.27**.: _If there are irreducible curves \(B_{1},B_{2},B_{3}\) such that \(B=2B_{1}+3B_{2}+6B_{3}\) and \((B_{2}\cdot B_{2})\geq 1\) and \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), then \((B_{2}\cdot B_{2})=1\)._
Proof.: By Theorem 2.8, \(G_{B_{1}}\cong\mathbb{Z}/2\mathbb{Z}\), \(G_{B_{2}}\cong\mathbb{Z}/3\mathbb{Z}\), \(G_{B_{3}}\cong\mathbb{Z}/6\mathbb{Z}\). Since \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), we get \((G_{B_{1}}\oplus G_{B_{2}})\cap G_{B_{3}}=\{\operatorname{id}_{X}\}\). Therefore, \(G=G_{B_{1}}\oplus G_{B_{2}}\oplus G_{B_{3}}\). Since \((B_{2}\cdot B_{2})>0\), we get that \(p^{*}B_{2}=3C_{2}\) and the only curve of \(\operatorname{Fix}(G_{B_{2}})\) is \(C_{2}\).
We assume that \((B_{2}\cdot B_{2})\geq 2\). Since \(|G|=36\), \((C_{2}\cdot C_{2})\geq 8\), and hence the genus of \(C_{2}\) is at least \(5\). Since the only curve of \(\operatorname{Fix}(G_{B_{2}})\) is \(C_{2}\), this contradicts [1,14].
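The numerical step behind the last inequality is the projection formula (compare Lemma 3.37 below): since \(p^{*}B_{2}=3C_{2}\) and \(|G|=36\),

\[(C_{2}\cdot C_{2})=\frac{|G|}{3^{2}}(B_{2}\cdot B_{2})=4(B_{2}\cdot B_{2})\geq 8\quad\text{if }(B_{2}\cdot B_{2})\geq 2,\]

and on a \(K3\) surface a smooth curve \(C_{2}\) has genus \(g(C_{2})=1+\frac{(C_{2}\cdot C_{2})}{2}\geq 5\).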
By Lemma 3.27, the numerical class of \(B\) is not one of (29,214) of the list in section 6.
**Lemma 3.28**.: _If there are irreducible curves \(B_{1},B_{2},B_{3}\) such that \(B=2B_{1}+4B_{2}+4B_{3}\) and \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), then \((B_{1}\cdot B_{2})=1\)._
Proof.: By Theorem 2.8, \(G_{B_{1}}\cong\mathbb{Z}/2\mathbb{Z}\) and \(G_{B_{i}}\cong\mathbb{Z}/4\mathbb{Z}\) for \(i=2,3\). Since \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), we get \(G_{B_{1}}\cap(G_{B_{2}}\oplus G_{B_{3}})=\{\operatorname{id}_{X}\}\). Therefore, \(G=G_{B_{1}}\oplus G_{B_{2}}\oplus G_{B_{3}}\). Let \(s\in G_{B_{1}}\) and \(t\in G_{B_{2}}\) be generators. Then \(s\circ t\) is a non-symplectic automorphism of order \(4\) and \(p^{-1}(B_{1})\cap p^{-1}(B_{2})\subset\operatorname{Fix}(s\circ t)\). By Theorem 2.8 and \(G=G_{B_{1}}\oplus G_{B_{2}}\oplus G_{B_{3}}\), \(\operatorname{Fix}(s\circ t)\) does not contain a curve. By [2, Proposition 1], the number of isolated points of \(\operatorname{Fix}(s\circ t)\) is \(4\). If \((B_{1}\cdot B_{2})\geq 2\), then \(|p^{-1}(B_{1})\cap p^{-1}(B_{2})|\geq 8\). This is a contradiction.
By Lemma 3.28, the numerical class of \(B\) is not one of (30,109,155,215) of the list in section 6.
**Lemma 3.29**.: _We assume that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\). Then \(B\neq a(\{q\}\times\mathbb{P}^{1})+bC_{1}+cC_{2}\) where \(C_{1}\) and \(C_{2}\) are smooth curves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(C_{1}\cap C_{2}\neq\emptyset\), and \(a,b,c\) are even integers._
Proof.: We assume that \(B=a(\{q\}\times\mathbb{P}^{1})+bC_{1}+cC_{2}\) where \(C_{1}\) and \(C_{2}\) are smooth curves on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(C_{1}\cap C_{2}\neq\emptyset\), and \(a,b,c\) are even integers. Since \(C_{1}\cap C_{2}\neq\emptyset\), by Theorem 2.8, \(G=G_{C_{1}}\oplus G_{C_{2}}\). Since \(b,c\) are even integers, \(G_{C_{1}},G_{C_{2}}\) are cyclic groups, and \(G=G_{C_{1}}\oplus G_{C_{2}}\), the number of non-symplectic involutions of \(G\) is \(2\). Since \((B_{1,0}\cdot C_{i})\neq 0\) for \(i=1,2\) and \(a\) is even, \(G\) must have at least \(3\) non-symplectic involutions. This is a contradiction.
By Lemma 3.29, the numerical class of \(B\) is not one of (31,32,33,35,36,37,38) of the list in section 6.
**Lemma 3.30**.: _If there are irreducible curves \(B_{i}\) and positive integers \(b_{i}\geq 2\) for \(i=1,\ldots,k\) such that \(B=\sum_{i=1}^{k}b_{i}B_{i}\), \(G=G_{B_{1}}\oplus G_{B_{2}}\), and \(b_{1}\) and \(b_{2}\) are coprime, then for each \(i=1,2\) and \(j=3,\ldots,k\), we get that \(b_{i}\) and \(b_{j}\) are coprime if \((B_{i}\cdot B_{j})\neq 0\)._
Proof.: Let \(s\in G_{B_{1}}\) and \(t\in G_{B_{2}}\) be generators. By Theorem 2.8, the order of \(s\) is \(b_{1}\) and that of \(t\) is \(b_{2}\). Since \(G=G_{B_{1}}\oplus G_{B_{2}}\), there are integers \(u\) and \(v\) such that \(G_{B_{j}}\) is generated by \(s^{u}\circ t^{v}\).
We assume that \((B_{1}\cdot B_{j})\neq 0\) and that \(b_{1}\) and \(b_{j}\) are not coprime. Since \(b_{1}\) and \(b_{2}\) are coprime, there is an integer \(l\) such that \((s^{u}\circ t^{v})^{l}\neq\operatorname{id}_{X}\) and \((s^{u}\circ t^{v})^{l}=s^{m}\) or \(t^{m}\). Since \(b_{1}\) and \(b_{j}\) are not coprime, we may assume that \((s^{u}\circ t^{v})^{l}=s^{m}\). Then \(p^{-1}(B_{1})\) and \(p^{-1}(B_{j})\) are contained in \(\operatorname{Fix}(s^{m})\). Since the fixed locus of an automorphism is a disjoint union of smooth curves and isolated points, this is a contradiction.
By Theorem 2.8 and Lemma 3.30, the numerical class of \(B\) is not one of (34,40,265,266,293,294,308,309) of the list in section 6.
We assume that the numerical class of \(B\) is (39) of the list in section 6. We denote \(B\) by \(3B_{1,0}+3B_{2,2}+3B_{0,1}\). By Theorem 2.8, \(G=G_{2,2}\). Since \(G_{2,2}\cong\mathbb{Z}/3\mathbb{Z}\), \(G\) has exactly one subgroup generated by a non-symplectic automorphism of order \(3\). Since \((B_{1,0}\cdot B_{2,2})\neq 0\), by Theorem 2.8, \(G\) must contain at least \(2\) such subgroups. This is a contradiction.
**Lemma 3.31**.: _If there are irreducible curves \(B_{1},B_{2},B_{3}\) such that \(B=2B_{1}+2B_{2}+2B_{3}\), and \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), then we get that \(\frac{B_{3}}{2}\in\)Pic\((\mathbb{F}_{n})\) if \((B_{1}\cdot B_{2})=4\)._
Proof.: By Theorem 2.8, \(G_{B_{i}}\cong\mathbb{Z}/2\mathbb{Z}\) for \(i=1,2,3\). Since \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), by Theorem 2.8, \(G=G_{B_{1}}\oplus G_{B_{2}}\oplus G_{B_{3}}\).
We assume that \((B_{1}\cdot B_{2})=4\). Then \(p^{-1}(B_{1})\cap p^{-1}(B_{2})\) is a set of \(8\) points. Since the fixed locus of a symplectic automorphism of order \(2\) is a set of \(8\) isolated points, \(X/(G_{B_{1}}\oplus G_{B_{2}})\) is smooth. Then there is a double cover \(X/(G_{B_{1}}\oplus G_{B_{2}})\to X/G\cong\mathbb{F}_{n}\) whose branch divisor is \(2B_{3}\). Thus, \(\frac{B_{3}}{2}\in\)Pic\((\mathbb{F}_{n})\).
By Lemma 3.31, the numerical class of \(B\) is not one of (41,119,122,217) of the list in section 6.
**Lemma 3.32**.: _If there are irreducible curves \(B_{1},B_{2},B_{3}\) such that \(B=2B_{1}+2B_{2}+2B_{3}\), and \((B_{i}\cdot B_{j})\neq 0\) for \(1\leq i<j\leq 3\), then \((B_{i}\cdot B_{j})\leq 4\) for \(1\leq i<j\leq 3\)._
Proof.: By Theorem 2.8, \(G_{B_{i}}\cong\mathbb{Z}/2\mathbb{Z}\) for \(i=1,2,3\) and \(G=G_{B_{1}}\oplus G_{B_{2}}\oplus G_{B_{3}}\). Let \(s,t\in G\) be generators of \(G_{B_{i}}\) and \(G_{B_{j}}\) respectively, where \(1\leq i<j\leq 3\). Then \(s\circ t\) is a symplectic automorphism of order \(2\) and \(p^{-1}(B_{i})\cap p^{-1}(B_{j})\subset\operatorname{Fix}(s\circ t)\). Since \(|G|=8\), we get \(2(B_{i}\cdot B_{j})=|p^{-1}(B_{i})\cap p^{-1}(B_{j})|\). Thus, we have that \((B_{i}\cdot B_{j})\leq 4\).
By Lemma 3.32, the numerical class of \(B\) is not one of (42,120) of the list in section 6.
**Lemma 3.33**.: _We assume that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\). Then \(B\neq a_{1}(\{q_{1}\}\times\mathbb{P}^{1})+a_{2}(\{q_{2}\}\times\mathbb{P}^{1}) +bC^{\prime}+c(\mathbb{P}^{1}\times\{q_{3}\})\) where \(C^{\prime}\) is an irreducible curve, \(C^{\prime}=(nC+mF)\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), \(n,m>0\), and \(a_{1}a_{2},b,c\) are even integers._
Proof.: We assume that \(B=a_{1}(\{q_{1}\}\times\mathbb{P}^{1})+a_{2}(\{q_{2}\}\times\mathbb{P}^{1})+bC^{\prime}+c(\mathbb{P}^{1}\times\{q_{3}\})\) where \(C^{\prime}\) is an irreducible curve, \(C^{\prime}=nC+mF\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), \(n,m>0\), and \(a_{1}a_{2},b,c\) are even integers. By Theorem 2.8, \(G=G_{1,0}^{2}\oplus G_{C^{\prime}}\). Since \(a_{1}a_{2}\) and \(b\) are even integers, the number of non-symplectic involutions of \(G\) is \(2\). Since \((B_{0,1}\cdot C^{\prime})\neq 0\) and \((B_{0,1}\cdot B_{1,0}^{i})\neq 0\) for \(i=1,2\), and \(c\) is an even integer, \(G\) must have at least \(3\) non-symplectic involutions. This is a contradiction.
By Lemma 3.33, the numerical class of \(B\) is not one of (43,44) of the list in section 6.
**Lemma 3.34**.: _We assume that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\). Then \(B\neq a_{1}(\{q_{1}\}\times\mathbb{P}^{1})+b_{1}C_{1}+b_{2}C_{2}+a_{2}(\mathbb{ P}^{1}\times\{q_{2}\})\) where \(C_{i}\) is an irreducible curve, \(C_{i}=(n_{i}C+m_{i}F)\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), \(n_{i},m_{i}>0\), for \(i=1,2\), and \(a_{1},a_{2},b_{1}b_{2}\) are even integers._
Proof.: We assume that \(B=a_{1}(\{q_{1}\}\times\mathbb{P}^{1})+b_{1}C_{1}+b_{2}C_{2}+a_{2}(\mathbb{P}^{1}\times\{q_{2}\})\) where \(C_{i}\) is an irreducible curve, \(C_{i}=n_{i}C+m_{i}F\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), \(n_{i},m_{i}>0\), for \(i=1,2\), and \(a_{1},a_{2},b_{1}b_{2}\) are even integers. By Theorem 2.8, \(G=G_{C_{1}}\oplus G_{C_{2}}\). Since \(b_{1}b_{2}\) is an even integer, the number of non-symplectic involutions of \(G\) is at most \(2\). Since \((B_{1,0}\cdot C_{i})\neq 0\) and \((B_{0,1}\cdot C_{i})\neq 0\) for \(i=1,2\), and \(a_{1}\) and \(a_{2}\) are even integers, \(G\) must have at least \(3\) non-symplectic involutions. This is a contradiction.
By Lemma 3.34, the numerical class of \(B\) is not one of (47,48,49,50,51,52) of the list in section 6.
We assume that the numerical class of \(B\) is (53) of the list in section 6. We denote \(B\) by \(3B_{1,0}+2B_{1,1}^{1}+6B_{1,1}^{2}+3B_{0,1}\). By Theorem 2.8, \(G_{1,1}^{1}\cong\mathbb{Z}/2\mathbb{Z}\), \(G_{1,1}^{2}\cong\mathbb{Z}/6\mathbb{Z}\), and \(G=G_{1,1}^{1}\oplus G_{1,1}^{2}\). Then the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is \(1\). By Theorem 2.8 and \((B_{1,0}\cdot B_{1,1}^{2})\neq 0\), \(G\) must have at least \(2\) such subgroups. This is a contradiction.
We assume that the numerical class of \(B\) is (54) of the list in section 6. We denote \(B\) by \(3B_{1,0}+3B_{1,1}^{1}+3B_{1,1}^{2}+3B_{0,1}\). By Theorem 2.8, \(G_{1,1}^{i}\cong\mathbb{Z}/3\mathbb{Z}\) for \(i=1,2\), and \(G=G_{1,1}^{1}\oplus G_{1,1}^{2}\). Then the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is \(3\). By Theorem 2.8, since \((B_{1,0}\cdot B_{1,1}^{i})\neq 0\) and \((B_{1,0}\cdot B_{0,1})\neq 0\), \(G\) must have at least \(4\) such subgroups. This is a contradiction.
We assume that the numerical class of \(B\) is (56) of the list in section 6. We denote \(B\) by \(2B_{1,0}^{1}+6B_{1,0}^{2}+3B_{1,2}+3B_{0,1}\). By Theorem 2.8, \(G_{1,0}^{1}\cong\mathbb{Z}/2\mathbb{Z}\), \(G_{1,2}\cong\mathbb{Z}/3\mathbb{Z}\), and \(G=G_{1,0}^{1}\oplus G_{1,2}\). Then the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is \(1\). By Theorem 2.8 and \((B_{0,1}\cdot B_{1,2})\neq 0\), \(G\) must have at least \(2\) such subgroups. This is a contradiction.
We assume that the numerical class of \(B\) is (58) of the list in section 6. We denote \(B\) by \(2B_{1,0}+2B_{1,1}^{1}+2B_{1,1}^{2}+2B_{1,1}^{3}+2B_{0,1}\). By Theorem 2.8, \(G_{1,1}^{i}\cong\mathbb{Z}/2\mathbb{Z}\) for \(i=1,2,3\). Since \((B_{1,1}^{i}\cdot B_{1,1}^{j})\neq 0\) for \(1\leq i<j\leq 3\), we get \(G=G_{1,1}^{1}\oplus G_{1,1}^{2}\oplus G_{1,1}^{3}\). Then the number of non-symplectic involutions of \(G\) is \(4\). Since \((B_{1,0}\cdot B_{0,1})\neq 0\), \((B_{0,1}\cdot B_{1,1}^{i})\neq 0\), and \((B_{1,0}\cdot B_{1,1}^{i})\neq 0\) for \(i=1,2,3\), \(G\) must have at least \(5\) non-symplectic involutions. This is a contradiction.
**Lemma 3.35**.: _We assume that \(X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\). If \(B=\sum_{i=1}^{2}a_{i}(\{p_{i}\}\times\mathbb{P}^{1})+bC^{\prime}+\sum_{j=1}^{2}c_{j}(\mathbb{P}^{1}\times\{q_{j}\})\) where \(C^{\prime}\) is an irreducible curve, \((\{p_{i}\}\times\mathbb{P}^{1})\cap C^{\prime}\neq\emptyset\), \(C^{\prime}\cap(\mathbb{P}^{1}\times\{q_{j}\})\neq\emptyset\), and \(a_{1},a_{2},c_{1},c_{2},b\in\mathbb{N}_{\geq 2}\), then \(a_{1}=a_{2}\) and \(c_{1}=c_{2}\)._
Proof.: Let \(C_{p_{1}}\) be one of the irreducible components of \(p^{*}(\{p_{1}\}\times\mathbb{P}^{1})\). Since \((\{p_{1}\}\times\mathbb{P}^{1}\cdot\{p_{1}\}\times\mathbb{P}^{1})=0\), \(C_{p_{1}}\) is an elliptic curve. Let \(\pi:X\to Y:=X/G_{C^{\prime}}\) be the quotient map, and \(G^{\prime}:=G/G_{C^{\prime}}\) be a finite abelian subgroup of \(\operatorname{Aut}(Y)\). Since \((\{p_{1}\}\times\mathbb{P}^{1})\cap C^{\prime}\neq\emptyset\), the finite map \(\pi_{|C_{p_{1}}}:C_{p_{1}}\to C^{\prime}_{p_{1}}:=\pi(C_{p_{1}})\) is a branched cover. Since \(C_{p_{1}}\) is an elliptic curve, \(C^{\prime}_{p_{1}}\) is \(\mathbb{P}^{1}\). Since the branch divisor of the quotient map \(\pi^{\prime}:Y\to Y/G^{\prime}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) is \(\sum_{i=1}^{2}a_{i}(\{p_{i}\}\times\mathbb{P}^{1})+\sum_{j=1}^{2}c_{j}(\mathbb{P}^{1}\times\{q_{j}\})\), the branch divisor of \(\pi^{\prime}_{|C^{\prime}_{p_{1}}}:C^{\prime}_{p_{1}}\to\{p_{1}\}\times\mathbb{P}^{1}\) is \(c_{1}q_{1}+c_{2}q_{2}\). By Theorem 2.6, we get that \(c_{1}=c_{2}\). In the same way, we obtain that \(a_{1}=a_{2}\).
By Lemma 3.35, the numerical class of \(B\) is not one of (61,62,63,64) of the list in section 6.
We assume that the numerical class of \(B\) is one of (69,70,71,72,73,74,75,76,77,78) of the list in section 6. By Theorem 2.6, there are an abelian surface \(A\) and a finite group \(G^{\prime}\) such that \(A/G^{\prime}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the branch divisor is \(B\). Then, by Theorem 2.4, there is a surjective morphism from a \(K3\) surface to an abelian surface. This is a contradiction, since a \(K3\) surface admits no non-zero holomorphic \(1\)-forms.
**Lemma 3.36**.: _If \(X/G\cong\mathbb{F}_{n}\) where \(n\geq 1\), then \(B\neq aC+bB_{s,t}+cB_{u,v}+dB_{0,1}\) where \(a,d\geq 0\) are even integers, \(a=0\) or \(a\geq 2\), and \(b,c>0\) are even integers._
Proof.: We assume that \(B=aC+bB_{s,t}+cB_{u,v}+dB_{0,1}\) where \(a,d\geq 0\) are even integers, \(a=0\) or \(a\geq 2\), and \(b,c>0\) are even integers. By Theorem 2.8 and \((B_{s,t}\cdot B_{u,v})\neq 0\), we get that \(G=G_{s,t}\oplus G_{u,v}\). Then the number of non-symplectic involutions of \(G\) is \(2\). Since \((B_{s,t}\cdot B_{0,1})\neq 0\) and \((B_{u,v}\cdot B_{0,1})\neq 0\), \(G\) must have at least \(3\) non-symplectic involutions. This is a contradiction.
By Lemma 3.36, the numerical class of \(B\) is not one of (104,114,115,118,141,148, 184,185,187,232,234,245,246,259,291) of the list in section 6.
**Lemma 3.37**.: _For the branch divisor \(B=\sum_{i=1}^{k}b_{i}B_{i}\), we get that \(\frac{|G|}{b_{i}^{2}}(B_{i}\cdot B_{i})\) is an even integer for \(1\leq i\leq k\)._
Proof.: For \(i=1,\ldots,k\), we put \(p^{*}B_{i}=\sum_{j=1}^{l}b_{i}C_{j}\) where \(C_{j}\) is a smooth curve for \(j=1,\ldots,l\). By Theorem 2.8, \(C_{1},\ldots,C_{l}\) are pairwise disjoint. Then we get that \(\frac{|G|}{b_{i}^{2}}(B_{i}\cdot B_{i})=\sum_{j=1}^{l}(C_{j}\cdot C_{j})\). Since \(X\) is a \(K3\) surface, \((C_{j}\cdot C_{j})\) is an even integer, and hence \(\frac{|G|}{b_{i}^{2}}(B_{i}\cdot B_{i})\) is an even integer.
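The first identity in the proof above is an instance of the projection formula: since \(p\) has degree \(|G|\) and the curves \(C_{1},\ldots,C_{l}\) are pairwise disjoint,

\[|G|\,(B_{i}\cdot B_{i})=(p^{*}B_{i}\cdot p^{*}B_{i})=b_{i}^{2}\sum_{j=1}^{l}(C_{j}\cdot C_{j}),\]

and evenness follows because the intersection form of a \(K3\) surface is even.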
By Lemma 3.37, the numerical class of \(B\) is not one of (106,107,140,147,180,183,231,233,258) of the list in section 6.
We assume that the numerical class of \(B\) is (110) of the list in section 6. We denote \(B\) by \(2B_{1,2}+4B_{1,1}^{1}+4B_{1,1}^{2}+2B_{0,1}\). By Theorem 2.8, \(G_{1,2}\cong\mathbb{Z}/2\mathbb{Z}\), and \(G_{1,1}^{i}\cong\mathbb{Z}/4\mathbb{Z}\) for \(i=1,2\). Since \((B_{1,2}\cdot B_{1,1}^{i})\neq 0\) for \(i=1,2\), by Theorem 2.8, \(G=G_{1,2}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\), and hence \(|G|=32\). Let \(s\in G_{1,2}\) and \(t\in G_{1,1}^{1}\) be generators. Then \(s\circ t\) is a non-symplectic automorphism of order \(4\) and \(p^{-1}(B_{1,2})\cap p^{-1}(B_{1,1}^{1})\subset\operatorname{Fix}(s\circ t)\). Since \(G=G_{1,2}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\) and \(B=2B_{1,2}+4B_{1,1}^{1}+4B_{1,1}^{2}+2B_{0,1}\), by Theorem 2.8, \(\operatorname{Fix}(s\circ t)\) does not contain a curve. By [2, Proposition 1], the number of isolated points of \(\operatorname{Fix}(s\circ t)\) is \(4\). Since \((B_{1,2}\cdot B_{1,1}^{1})=2\), we get that \(|p^{-1}(B_{1,2})\cap p^{-1}(B_{1,1}^{1})|\geq 8\). This is a contradiction.
We assume that the numerical class of \(B\) is (121) of the list in section 6. We denote \(B\) by \(2B_{2,3}+2B_{1,1}^{1}+2B_{1,1}^{2}+2B_{0,1}\). By Theorem 2.8, \(G_{2,3}\cong G_{1,1}^{1}\cong G_{1,1}^{2}\cong\mathbb{Z}/2\mathbb{Z}\). Since an intersection of two of \(B_{2,3},B_{1,1}^{1},B_{1,1}^{2}\) is not an empty set, by Theorem 2.8, \(G=G_{2,3}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\), and hence \(|G|=8\). Let \(s\in G_{2,3}\) and \(t\in G_{0,1}\) be generators. Since \(s\) and \(t\) are non-symplectic involutions, \(\operatorname{Fix}(s)\) and \(\operatorname{Fix}(t)\) are sets of curves and \(\operatorname{Fix}(s\circ t)\) is a set of \(8\) isolated points. Since \((B_{2,3}\cdot B_{0,1})=2\), \(|p^{-1}(B_{2,3})\cap p^{-1}(B_{0,1})|=4\). Since \(\operatorname{Fix}(s\circ t)\supset p^{-1}(B_{2,3})\cap p^{-1}(B_{0,1})\), \(X/(G_{2,3}\oplus G_{0,1})\) has \(2\) singular points; however, since the branch divisor of the double cover \(X/(G_{2,3}\oplus G_{0,1})\to X/G\) is \(2B_{1,1}^{1}+2B_{1,1}^{2}\) and \((B_{1,1}^{1}\cdot B_{1,1}^{2})=1\), the number of singular points of \(X/(G_{2,3}\oplus G_{0,1})\) must be \(1\). This is a contradiction.
In the same way as the case of (121), the numerical class of \(B\) is not one of (191,192) of the list in section 6.
We assume that the numerical class of \(B\) is (123) of the list in section 6. We denote \(B\) by \(2B_{2,2}+2B_{1,2}+2B_{1,1}+2B_{0,1}\). By Theorem 2.8, \(G_{2,2}\cong G_{1,2}\cong G_{1,1}\cong\mathbb{Z}/2\mathbb{Z}\). Since an intersection of two of \(B_{2,2},B_{1,2},B_{1,1}\) is not an empty set, by Theorem 2.8, \(G=G_{2,2}\oplus G_{1,2}\oplus G_{1,1}\). Since \((B_{2,2}\cdot B_{1,2})=4\), \(X/(G_{2,2}\oplus G_{1,2})\) is smooth. Then there is a double cover \(X/(G_{2,2}\oplus G_{1,2})\to X/G\cong\mathbb{F}_{1}\) whose branch divisor is \(2B_{1,1}+2B_{0,1}\). Since \(\frac{B_{1,1}+B_{0,1}}{2}\not\in\)Pic\((\mathbb{F}_{1})\), by Theorem 3.5, this is a contradiction.
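The last step is a direct parity check in \(\text{Pic}(\mathbb{F}_{1})=\mathbb{Z}C\oplus\mathbb{Z}F\):

\[B_{1,1}+B_{0,1}=(C+F)+F=C+2F,\]

and \(\frac{1}{2}(C+2F)=\frac{1}{2}C+F\) has a non-integral coefficient on \(C\), so it does not lie in \(\text{Pic}(\mathbb{F}_{1})\).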
We assume that the numerical class of \(B\) is (125) of the list in section 6. We denote \(B\) by \(2B_{1,2}^{1}+2B_{1,2}^{2}+2B_{1,1}^{1}+2B_{1,1}^{2}\). By Theorem 2.8, \(G_{1,2}^{i}\cong G_{1,1}^{i}\cong\mathbb{Z}/2\mathbb{Z}\) for \(i=1,2\). Since an intersection of two of \(B_{1,2}^{1},B_{1,2}^{2},B_{1,1}^{1},B_{1,1}^{2}\) is not an empty set, by Theorem 2.8, \(G=G_{1,2}^{1}\oplus G_{1,2}^{2}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\) or \(G=G_{1,2}^{1}\oplus G_{1,2}^{2}\oplus G_{1,1}^{1}\). We assume that \(G=G_{1,2}^{1}\oplus G_{1,2}^{2}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\). Since \(|G|=16\) and \((B_{1,2}^{1}\cdot B_{1,2}^{2})=3\), \(|p^{-1}(B_{1,2}^{1})\cap p^{-1}(B_{1,2}^{2})|\geq 12\). Since the number of isolated points of a symplectic involution is \(8\), this is a contradiction. Therefore, \(G=G_{1,2}^{1}\oplus G_{1,2}^{2}\oplus G_{1,1}^{1}\).
By Theorem 3.5, there are the Galois covers \(p_{1}:Y_{1}\to\mathbb{F}_{1}\) and \(p_{2}:Y_{2}\to\mathbb{F}_{1}\) such that the branch divisor of \(p_{1}\) is \(2B_{1,2}^{1}+2B_{1,2}^{2}\) and that of \(p_{2}\) is \(2B_{1,1}^{1}+2B_{1,1}^{2}\). Let \(X^{\prime}:=Y_{1}\times_{\mathbb{F}_{1}}Y_{2}\). Then there is the Galois cover \(q:X^{\prime}\to\mathbb{F}_{1}\) whose branch divisor is \(2B_{1,2}^{1}+2B_{1,2}^{2}+2B_{1,1}^{1}+2B_{1,1}^{2}\) and whose Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. By Theorem 2.3, there is a symplectic automorphism \(s\in G\) of order \(2\) such that \(X^{\prime}=X/\langle s\rangle\). Since \(s\) is symplectic, the minimal resolution \(f:X^{\prime}_{m}\to X^{\prime}\) is a \(K3\) surface. Let \(e_{1},\ldots,e_{8}\) be the exceptional divisors of \(f\). We set \(\{p_{1},p_{2},p_{3}\}:=B_{1,2}^{1}\cap B_{1,2}^{2}\), and \(\{p_{4}\}:=B_{1,1}^{1}\cap B_{1,1}^{2}\). Let \(\pi:\operatorname{Blow}_{\{p_{1},p_{2},p_{3},p_{4}\}}\mathbb{F}_{1}\to\mathbb{F}_{1}\) be the blow-up of \(\mathbb{F}_{1}\) at the points \(p_{1},p_{2},p_{3},p_{4}\), and \(E_{i}:=\pi^{-1}(p_{i})\) be the exceptional divisor of \(\pi\) for \(i=1,2,3,4\). Since the support of \(B\) is simple normal crossing, in the same way as
Proposition 3.15, there is a Galois cover \(q:X^{\prime}_{m}\rightarrow\text{Blow}_{\{p_{1},p_{2},p_{3},p_{4}\}}\mathbb{F}_{1}\) whose branch divisor is \(2C^{1}_{1,2}+2C^{2}_{1,2}+2C^{1}_{1,1}+2C^{2}_{1,1}\) and whose Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group, where \(C^{1}_{1,2},C^{2}_{1,2},C^{1}_{1,1},C^{2}_{1,1}\) are the proper transforms of \(B^{1}_{1,2},B^{2}_{1,2},B^{1}_{1,1},B^{2}_{1,1}\) under \(\pi\), in order. Notice that \(q^{*}(\sum_{i=1}^{4}E_{i})=\sum_{j=1}^{8}e_{j}\), and that these maps fit into a commutative diagram with the blow-ups.
Furthermore, we put \(\{x_{1},\ldots,x_{8}\}:=\operatorname{Fix}(s)\). Then \(\text{Blow}_{\{x_{1},\ldots,x_{8}\}}X/\langle s\rangle=X^{\prime}_{m}\), the branch divisor of the double cover \(\text{Blow}_{\{x_{1},\ldots,x_{8}\}}X\to X^{\prime}_{m}\) is \(\sum_{j=1}^{8}e_{j}\), and these maps again fit into a commutative diagram.
In the same way as Proposition 3.15, we get that
\[\sum_{i=1}^{4}q^{*}E_{i}=(\pi\circ q)^{*}(2C+3F)-2C^{1}_{1,2}-2C^{1}_{1,1}\text{ in }\text{Pic}(X^{\prime}_{m}).\]
Since \(\text{Blow}_{\{x_{1},\ldots,x_{8}\}}X\) and \(X^{\prime}_{m}\) are smooth, and \(q^{*}(\sum_{i=1}^{4}E_{i})=\sum_{j=1}^{8}e_{j}\), we get that \(\frac{\sum_{i=1}^{4}q^{*}E_{i}}{2}\in\)Pic\((X^{\prime}_{m})\), and hence \(\frac{F}{2}\in\)Pic\((X^{\prime}_{m})\), where \(\frac{F}{2}\) denotes \(\frac{(\pi\circ q)^{*}F}{2}\).
Since \(C^{1}_{1,2}\cap C^{2}_{1,2}\) is an empty set and \(\frac{C^{1}_{1,2}+C^{2}_{1,2}}{2}\in\)Pic\((\text{Blow}_{\{p_{1},p_{2},p_{3},p_{4}\}}\mathbb{F}_{1})\), by Theorem 3.5, there is the Galois cover \(g:Z\rightarrow\text{Blow}_{\{p_{1},p_{2},p_{3},p_{4}\}}\mathbb{F}_{1}\) such that \(Z\) is smooth, the branch divisor is \(2C^{1}_{1,2}+2C^{2}_{1,2}\), and the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) as a group. By Theorem 2.3, there is a non-symplectic automorphism \(\rho\) of order \(2\) of \(X^{\prime}_{m}\) such that \(X^{\prime}_{m}/\langle\rho\rangle=Z\). Let \(h:X^{\prime}_{m}\to Z\) be the quotient map. Then \(q=g\circ h\), and hence \(\frac{F}{2}\in\)Pic\((X^{\prime}_{m})^{\rho}\). Since the degree of \(g\) is \(2\), \((C^{1}_{1,2}\cdot\frac{F}{2})=\frac{1}{2}\), and \(\frac{g^{*}C^{1}_{1,2}}{2}\in\)Pic\((Z)\), we get that \(g^{*}\frac{F}{2}\not\in\)Pic\((Z)\). Recall that \(C^{i}_{1,1}=C+F-E_{4}\) in Pic\((\text{Blow}_{\{p_{1},p_{2},p_{3},p_{4}\}}\mathbb{F}_{1})\) for \(i=1,2\). Since the branch divisor of \(h\) is \(2g^{*}C^{1}_{1,1}+2g^{*}C^{2}_{1,1}\), we get that \(q^{*}(\frac{1}{2}C+\frac{1}{2}F-E_{4})\in\)Pic\((X^{\prime}_{m})\). By [2], Pic\((X^{\prime}_{m})^{\rho}\) is generated by \(h^{*}\)Pic\((Z)\) and \(q^{*}(\frac{1}{2}C+\frac{1}{2}F-E_{4})\) over \(\mathbb{Z}\). Since \(g^{*}\frac{F}{2}\not\in\)Pic\((Z)\) and \(2q^{*}(\frac{1}{2}C+\frac{1}{2}F-E_{4})\in h^{*}\)Pic\((Z)\), we may assume that there is \(\alpha\in\text{Pic}(Z)\) such that
\[q^{*}\frac{F}{2}=h^{*}\alpha+q^{*}\Bigl(\frac{1}{2}C+\frac{1}{2}F-E_{4}\Bigr).\]
Then \(g^{*}(\frac{-1}{2}C+E_{4})\in\)Pic\((Z)\). Since the degree of \(g\) is \(2\), \((C^{1}_{1,2}\cdot(\frac{-1}{2}C+E_{4}))=\frac{3}{2}\), and \(\frac{g^{*}C^{1}_{1,2}}{2}\in\)Pic\((Z)\), we get that \((\frac{g^{*}C^{1}_{1,2}}{2}\cdot g^{*}(\frac{-1}{2}C+E_{4}))=\frac{3}{2}\). Since \(\frac{g^{*}C^{1}_{1,2}}{2}\in\)Pic\((Z)\) and \(g^{*}(\frac{-1}{2}C+E_{4})\in\)Pic\((Z)\), their intersection number must be an integer; this is a contradiction. Therefore, the numerical class of \(B\) is not (125) of the list in section 6.
We assume that the numerical class of \(B\) is (126) of the list in section 6. We denote \(B\) by \(2B_{1,2}+2B_{1,1}^{1}+2B_{1,1}^{2}+2B_{1,1}^{3}+2B_{0,1}\). By Theorem 2.8, \(G_{1,2}\cong G_{1,1}^{i}\cong G_{0,1}\cong\mathbb{Z}/2\mathbb{Z}\) where \(i=1,2,3\). Since an intersection of two of \(B_{1,2},B_{1,1}^{1},B_{1,1}^{2},B_{1,1}^{3},B_{0,1}\) is not an empty set, by Theorem 2.8, \(G=G_{1,2}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\oplus G_{1,1}^{3}\). Let \(G_{s}\) be the subgroup of \(G\) which consists of the symplectic automorphisms of \(G\). Then \(G_{s}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\). By [16], the number of singular points of \(X/G_{s}\) is \(14\); however, since the branch divisor of the double cover \(X/G_{s}\to X/G\) is \(B=2B_{1,2}+2B_{1,1}^{1}+2B_{1,1}^{2}+2B_{1,1}^{3}+2B_{0,1}\) and the support of \(B\) is simple normal crossing, the number of singular points of \(X/G_{s}\) is \(13\). This is a contradiction. Therefore, the numerical class of \(B\) is not (126) of the list in section 6.
**Lemma 3.38**.: _If \(X/G\cong\mathbb{F}_{n}\) where \(n\geq 1\), then \(B\neq aC+bB_{s,t}+cB_{u,v}\) where \(a,b,c>0\) are even integers, and \((C\cdot B_{s,t})\neq 0\) and \((C\cdot B_{u,v})\neq 0\), i.e., \(t\neq ns\) and \(v\neq nu\)._
Proof.: We assume that \(B=aC+bB_{s,t}+cB_{u,v}\) where \(a,b,c>0\) are even integers, and \((C\cdot B_{s,t})\neq 0\) and \((C\cdot B_{u,v})\neq 0\). By Theorem 2.8 and \((B_{s,t}\cdot B_{u,v})\neq 0\), \(G=G_{s,t}\oplus G_{u,v}\). Then the number of non-symplectic involutions of \(G\) is \(2\). Since \((C\cdot B_{s,t})\neq 0\) and \((C\cdot B_{u,v})\neq 0\), \(G\) must have at least \(3\) non-symplectic involutions. This is a contradiction.
By Lemma 3.38, the numerical class of \(B\) is not one of (139,181,182,244) of the list in section 6.
We assume that the numerical class of \(B\) is (189) of the list in section 6. We denote \(B\) by \(2B_{1,0}+2B_{1,4}+2B_{1,1}^{1}+2B_{1,1}^{2}\). By Theorem 2.8, \(G_{1,0}\cong G_{1,4}\cong G_{1,1}^{i}\cong\mathbb{Z}/2\mathbb{Z}\) where \(i=1,2\). Since \((B_{1,4}\cdot B_{1,1}^{i})\neq 0\) for \(i=1,2\), by Theorem 2.8, \(G=G_{1,4}\oplus G_{1,1}^{1}\oplus G_{1,1}^{2}\). Let \(s\in G_{1,1}^{1}\) and \(t\in G_{1,1}^{2}\) be generators. By Theorem 2.8, since the number of non-symplectic automorphisms of order \(2\) of \(G\) is \(4\), we may assume that \(\text{Fix}(s)\) is the support of \(p^{*}B_{1,1}^{1}\). Since the support of \(B\) is simple normal crossing and \((B_{1,4}\cdot B_{1,1}^{1})=4\), \(X/(G_{1,4}\oplus G_{1,1}^{1})\) is smooth. Then there is the Galois cover \(X/(G_{1,4}\oplus G_{1,1}^{1})\to\mathbb{F}_{1}\) such that the branch divisor is \(2B_{1,0}+2B_{1,1}^{2}\) and the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) as a group. Since \(\frac{B_{1,0}+B_{1,1}^{2}}{2}\not\in\)Pic\((\mathbb{F}_{1})\), this is a contradiction.
In the same way as the case of (189), the numerical class of \(B\) is not (190) of the list in section 6.
We assume that the numerical class of \(B\) is (228) of the list in section 6. We denote \(B\) by \(3B_{1,0}+3B_{1,2}+3B_{1,4}\). By Theorem 2.8 and \((B_{1,2}\cdot B_{1,4})\neq 0\), \(G=G_{1,2}\oplus G_{1,4}\). Let \(s\in G_{1,4}\) be a generator of \(G_{1,4}\). Then the only curve of \(\text{Fix}(s)\) is \(C_{1,4}\), where \(p^{*}B_{1,4}=3C_{1,4}\). Since \((B_{1,4}\cdot B_{1,4})=6\), the genus of \(C_{1,4}\) is \(4\). By [1,14], \(\text{Fix}(s)\) does not have isolated points, and hence \(X/G_{1,4}\) is smooth. Let \(q:X/G_{1,4}\to X/G\) be the quotient map. Then the degree of \(q\) is \(3\), and the branch divisor of \(q\) is \(3B_{1,0}+3B_{1,2}\). Since the degree of \(q\) is \(3\) and \(X/G_{1,4}\) is smooth, \(\frac{3}{3^{2}}(B_{1,0}\cdot B_{1,0})\) must be an integer. Since \((B_{1,0}\cdot B_{1,0})=-2\), \(\frac{3}{3^{2}}(B_{1,0}\cdot B_{1,0})=-\frac{2}{3}\). This is a contradiction.
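The numerical claims in this paragraph can be checked directly on \(\mathbb{F}_{2}\) (a sketch, using the projection formula as in Lemma 3.37):

\[(B_{1,4}\cdot B_{1,4})=((C+4F)\cdot(C+4F))=-2+8=6,\qquad(C_{1,4}\cdot C_{1,4})=\frac{9}{3^{2}}\cdot 6=6,\]

so \(g(C_{1,4})=1+\frac{6}{2}=4\); similarly, \(\frac{3}{3^{2}}(B_{1,0}\cdot B_{1,0})=\frac{3\cdot(-2)}{9}=-\frac{2}{3}\) is indeed not an integer.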
We assume that the numerical class of \(B\) is (229) of the list in section 6. We denote \(B\) by \(3B_{1,0}+3B_{1,2}+3B_{1,3}+3B_{0,1}\). By Theorem 2.8, \(G_{1,0}\cong G_{1,2}\cong G_{1,3}\cong G_{0,1}\cong\mathbb{Z}/3\mathbb{Z}\). Since \((B_{1,2}\cdot B_{1,3})\neq 0\), by Theorem 2.8, \(G=G_{1,2}\oplus G_{1,3}\).
Let \(s,t\in G\) be generators of \(G_{1,2}\) and \(G_{1,3}\) respectively such that \(s\circ t\) is a non-symplectic automorphism of order \(3\). Since \(G=G_{1,2}\oplus G_{1,3}\), the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is \(3\). Since \((B_{1,2}\cdot B_{0,1})\neq 0\) and \((B_{1,3}\cdot B_{0,1})\neq 0\), we get that \(p^{-1}(B_{0,1})\) is contained in \(\mathrm{Fix}(s\circ t)\), and hence \(p^{-1}(B_{1,0})\) is contained in \(\mathrm{Fix}(s)\). Since \(|G|=9\), there is an elliptic curve \(C_{0,1}\) on \(X\) such that \(p^{*}B_{0,1}=3C_{0,1}\). By [1,14], the number of isolated points of \(\mathrm{Fix}(s\circ t)\) is \(3\). Since \((B_{1,0}\cdot B_{1,3})=1\) and \((B_{1,2}\cdot B_{1,3})=3\), we have \(|p^{-1}(B_{1,0}\cup B_{1,2})\cap p^{-1}(B_{1,3})|=4\). Since \(p^{-1}(B_{1,0}\cup B_{1,2})\subset\mathrm{Fix}(s)\) and \(p^{-1}(B_{1,3})\subset\mathrm{Fix}(t)\), we get that \(p^{-1}(B_{1,0}\cup B_{1,2})\cap p^{-1}(B_{1,3})\subset\mathrm{Fix}(s\circ t)\). Since the number of isolated points of \(\mathrm{Fix}(s\circ t)\) is \(3\), this is a contradiction.
We assume that the numerical class of \(B\) is (243) of the list in section 6. We denote \(B\) by \(2B_{1,0}+2B_{1,4}+2B_{2,4}\). By Theorem 2.8, \(G=G_{1,4}\oplus G_{2,4}\). Let \(s\in G\) be a generator of \(G_{1,4}\). Since \((B_{1,0}\cdot B_{1,4})\neq 0\) and \((B_{1,4}\cdot B_{2,4})\neq 0\), the only curve of \(\operatorname{Fix}(s)\) is \(C_{1,4}\). Since the fixed locus of a non-symplectic involution does not have isolated points, \(X/G_{1,4}\) is smooth. Let \(q:X/G_{1,4}\to X/G\cong\mathbb{F}_{2}\) be the quotient map. Then the degree of \(q\) is \(2\) and the branch divisor of \(q\) is \(2B_{1,0}+2B_{2,4}\). Since \(\frac{B_{1,0}+B_{2,4}}{2}\not\in\)Pic\((\mathbb{F}_{2})\), by Theorem 3.5, this is a contradiction.
We assume that the numerical class of \(B\) is \((249)\) of the list in section \(6\). We denote \(B\) by \(2B_{1,0}+2B_{1,3}^{1}+2B_{1,3}^{2}+2B_{1,2}\). By Theorem 2.8, \(G_{1,3}^{i}\cong G_{1,2}\cong\mathbb{Z}/2\mathbb{Z}\) where \(i=1,2\). Since an intersection of two of \(B_{1,3}^{1},B_{1,3}^{2},B_{1,2}\) is not an empty set, \(G=G_{1,3}^{1}\oplus G_{1,3}^{2}\oplus G_{1,2}\). Since \(|G|=8\) and \((B_{1,3}^{1}\cdot B_{1,3}^{2})=4\), \(Y:=X/(G_{1,3}^{1}\oplus G_{1,3}^{2})\) is smooth. Then there is the Galois cover \(q:Y\to X/G\) such that the branch divisor is \(2B_{1,0}+2B_{1,2}\), and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\) as a group. Since the fixed locus of a non-symplectic automorphism of order \(2\) does not have isolated points, \(X/G_{1,3}^{1}\) is smooth, and there is the Galois cover \(q^{\prime\prime}:X/G_{1,3}^{1}\to Y\) such that the branch divisor of \(q^{\prime\prime}\) is \(2q^{*}B_{1,3}^{1}\) and the Galois group of \(q^{\prime\prime}\) is \(\mathbb{Z}/2\mathbb{Z}\) as a group. Since \(Y\) and \(X/G_{1,3}^{1}\) are smooth, and the degree of \(q^{\prime\prime}\) is two, we get that \(\frac{q^{*}B_{1,3}^{1}}{2}\in\)\(\mathrm{Pic}(Y)\). Recall that the branch divisor of \(q\) is \(2B_{1,0}+2B_{1,2}\), and the degree of \(q\) is two. Since \(\frac{q^{*}B_{1,2}}{2}\in\)\(\mathrm{Pic}(Y)\), we get that \(\frac{q^{*}F}{2}=\frac{q^{*}B_{1,3}^{1}}{2}-\frac{q^{*}B_{1,2}}{2}\in\)\(\mathrm{Pic}(Y)\). Since \((B_{1,0}\cdot F)=1\), we get that \((\frac{q^{*}B_{1,0}}{2}\cdot\frac{q^{*}F}{2})=\frac{1}{2}\). Since \(\frac{q^{*}B_{1,0}}{2}\in\)\(\mathrm{Pic}(Y)\) and \(\frac{q^{*}F}{2}\in\)\(\mathrm{Pic}(Y)\), this is a contradiction. Therefore, the numerical class of \(B\) is not \((249)\).
We assume that the numerical class of \(B\) is \((250)\) of the list in section \(6\). We denote \(B\) by \(2B_{1,0}+2B_{1,3}+2B_{1,2}^{1}+2B_{1,2}^{2}+2B_{0,1}\). By Theorem 2.8, \(G_{1,3}\cong G_{1,2}^{i}\cong\mathbb{Z}/2\mathbb{Z}\) for \(i=1,2\). Since any two of \(B_{1,3},B_{1,2}^{1},B_{1,2}^{2}\) have non-empty intersection, by Theorem 2.8, \(G=G_{1,3}\oplus G_{1,2}^{1}\oplus G_{1,2}^{2}\). Let \(s\in G_{1,2}^{1}\) be a generator. Since the number of non-symplectic automorphisms of order \(2\) of \(G\) is \(4\), by Theorem 2.8 we may assume that \(p^{-1}(B_{1,2}^{1})\) and \(p^{-1}(B_{1,0})\) are contained in \(\mathrm{Fix}(s)\). Since the support of \(B\) is simple normal crossing and \((B_{1,3}\cdot(B_{1,0}+B_{1,2}^{1}))=4\), \(X/(G_{1,3}\oplus G_{1,2}^{1})\) is smooth and there is a Galois cover \(X/(G_{1,3}\oplus G_{1,2}^{1})\to\mathbb{F}_{2}\) such that the branch divisor is \(2B_{1,2}^{2}+2B_{0,1}\) and the Galois group is \(\mathbb{Z}/2\mathbb{Z}\). Since \(\frac{B_{1,2}^{2}+B_{0,1}}{2}\not\in\mathrm{Pic}(\mathbb{F}_{2})\), this is a contradiction.
We assume that the numerical class of \(B\) is (286) of the list in section 6. We denote \(B\) by \(2B_{1,0}+3B^{1}_{1,4}+6B^{2}_{1,4}\). By Theorem 2.8, \(G_{1,0}\cong\mathbb{Z}/2\mathbb{Z}\), \(G^{1}_{1,4}\cong\mathbb{Z}/3\mathbb{Z}\), \(G^{2}_{1,4}\cong\mathbb{Z}/6\mathbb{Z}\), and \(G=G^{1}_{1,4}\oplus G^{2}_{1,4}\). Let \(s\) be a generator of \(G^{1}_{1,4}\). Since \((B^{1}_{1,4}\cdot B^{1}_{1,4})=4\), the genus of \(C^{1}_{1,4}\) is 5 where \(p^{*}B^{1}_{1,4}=3C^{1}_{1,4}\). Since \(G_{1,0}\cong\mathbb{Z}/2\mathbb{Z}\) and \((B^{1}_{1,4}\cdot B^{2}_{1,4})\neq 0\), the only curve of \(\operatorname{Fix}(s)\) is \(C^{1}_{1,4}\). By [1,14], this is a contradiction.
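The genus count follows by adjunction on the \(K3\) surface \(X\): here \(|G|=|\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/6\mathbb{Z}|=18\) and \(p^{*}B^{1}_{1,4}=3C^{1}_{1,4}\), so

\[(p^{*}B^{1}_{1,4}\cdot p^{*}B^{1}_{1,4})=18\cdot 4=72=9\,(C^{1}_{1,4}\cdot C^{1}_{1,4}),\]

hence \((C^{1}_{1,4}\cdot C^{1}_{1,4})=8=2g(C^{1}_{1,4})-2\) and \(g(C^{1}_{1,4})=5\).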
We assume that the numerical class of \(B\) is (287) of the list in section 6. We denote \(B\) by \(2B_{1,0}+4B^{1}_{1,4}+4B^{2}_{1,4}\). By Theorem 2.8, \(G^{i}_{1,4}\cong\mathbb{Z}/4\mathbb{Z}\) for \(i=1,2\). Since \((B^{1}_{1,4}\cdot B^{2}_{1,4})\neq 0\), by Theorem 2.8, \(G=G^{1}_{1,4}\oplus G^{2}_{1,4}\). Let \(s\in G^{1}_{1,4}\) and \(t\in G^{2}_{1,4}\) be generators. Then the non-symplectic involutions of \(G\) are \(s^{2}\) and \(t^{2}\). By Theorem 2.8, we may assume that \(\operatorname{Fix}(s^{2})=p^{-1}(B_{1,0})\cup p^{-1}(B^{1}_{1,4})\) and \(\operatorname{Fix}(t^{2})=p^{-1}(B^{2}_{1,4})\). For the symplectic involution \(s^{2}\circ t^{2}\), since \(X/G\) is smooth, \(\operatorname{Fix}(s^{2}\circ t^{2})\subset\operatorname{Fix}(s^{2})\cap\operatorname{Fix}(t^{2})\). Since \((C\cdot B^{i}_{1,4})=0\) and \((B^{1}_{1,4}\cdot B^{2}_{1,4})=4\), we get that \(p^{-1}(B_{1,0}\cup B^{1}_{1,4})\cap p^{-1}(B^{2}_{1,4})\) consists of \(4\) points. Since the fixed locus of a symplectic involution of a \(K3\) surface consists of \(8\) isolated points, this is a contradiction.
We assume that the numerical class of \(B\) is (305) of the list in section 6. We denote \(B\) by \(3B_{1,0}+2B^{1}_{1,6}+6B^{2}_{1,6}\). By Theorem 2.8 and \((B^{1}_{1,6}\cdot B^{2}_{1,6})\neq 0\), \(G=G^{1}_{1,6}\oplus G^{2}_{1,6}\). Let \(\rho_{1},\rho_{2}\in G\) be generators of \(G^{1}_{1,6}\) and \(G^{2}_{1,6}\) respectively. Then \(\rho_{2}^{2}\) is a non-symplectic automorphism of order \(3\) and a generator of \(G_{1,0}\). Since \((C\cdot C)=-6\) and \(|G|=12\), we get that \(p^{*}C=\sum_{j=1}^{4}3C_{j}\) where \(C_{j}\) is a smooth rational curve. Then \(C_{1},\dots,C_{4},C_{1,6}^{2}\subset\operatorname{Fix}(\rho_{2}^{2})\) where \(p^{*}B^{2}_{1,6}=6C_{1,6}^{2}\). By [1,14], this is a contradiction.
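The shape of \(p^{*}C\) can be checked numerically: writing \(p^{*}C=3\sum_{j=1}^{4}C_{j}\) with the \(C_{j}\) pairwise disjoint smooth rational curves on the \(K3\) surface \(X\),

\[(p^{*}C\cdot p^{*}C)=12\cdot(-6)=-72=9\sum_{j=1}^{4}(C_{j}\cdot C_{j})=9\cdot 4\cdot(-2),\]

which is consistent with \((C_{j}\cdot C_{j})=-2\) for each \(j\).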
We assume that the numerical class of \(B\) is (45) of the list in section 6. We denote \(B\) by \(4B^{1}_{1,0}+4B^{2}_{1,0}+2B_{1,3}+2B_{0,1}\). We take the Galois cover \(q:\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) whose branch divisor is \(4B^{1}_{1,0}+4B^{2}_{1,0}\). Since the support of \(B\) is simple normal crossing, \(q^{*}(2B_{1,3}+2B_{0,1})=2B_{4,3}+2B_{0,1}\). By Theorem 2.4, there is a Galois morphism \(g:X\to\mathbb{P}^{1}\times\mathbb{P}^{1}\) such that the branch divisor is \(2B_{4,3}+2B_{0,1}\) and the Galois group is abelian. Since the numerical class of \(2B_{4,3}+2B_{0,1}\) is (25), this is a contradiction.
As for the case of (45), the numerical class of \(B\) is not one of (46, 55, 57, 59, 60, 65, 66, 67, 68, 102, 105, 108, 111, 116, 124, 136, 138, 142, 144, 149, 153, 178, 186, 221, 222, 226, 260, 267) of the list in section 6, by (25, 24, 27, 25, 37, 34, 40, 34, 34, 212, 213, 214, 215, 216, 217, 286, 286, 287, 287, 305, 228, 241, 243, 286, 287, 303, 305, 308) in order.
Therefore, we get Theorem 1.6.
## 4. Abelian groups of K3 surfaces with smooth quotient
In this section, we first prove Theorems 1.1 and 1.2, and then Theorem 1.5. In Section 3 we showed that if \(X/G\) is \(\mathbb{P}^{2}\) or \(\mathbb{F}_{n}\), then \(G\) is isomorphic to one of the groups in \(\mathcal{A}G\).
**Proposition 4.1**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\) is a smooth rational surface. For a birational morphism \(f:X/G\to\mathbb{F}_{n}\), we get that \(0\leq n\leq 12\)._
Proof.: Let \(f:X/G\to\mathbb{F}_{n}\) be a birational morphism, \(e_{i}\) be the exceptional divisors for \(i=1,\ldots,m\), and \(B=\sum_{i=1}^{k}b_{i}B_{i}\) be the branch divisor. Since \(X/G\) and \(\mathbb{F}_{n}\) are smooth and \(f\) is a birational morphism, we get \(\operatorname{Pic}(X/G)=f^{*}\operatorname{Pic}(\mathbb{F}_{n})\bigoplus_{i=1 }^{m}\mathbb{Z}e_{i}\) and there are positive integers \(a_{i}\) for \(i=1,\ldots,m\) such that \(K_{X/G}=f^{*}K_{\mathbb{F}_{n}}+\sum_{i=1}^{m}a_{i}e_{i}\). By Theorem 2.7,
\[0=f^{*}K_{\mathbb{F}_{n}}+\sum_{i=1}^{m}a_{i}e_{i}+\sum_{i=1}^{k}\frac{b_{i}-1} {b_{i}}B_{i}.\]
Since \(\operatorname{Pic}(X/G)=f^{*}\operatorname{Pic}(\mathbb{F}_{n})\bigoplus_{i=1}^{m}\mathbb{Z}e_{i}\), at least one of \(B_{1},\ldots,B_{k}\) is not an exceptional divisor of \(f\). By rearranging if necessary, we assume that \(B_{i}\) is not an exceptional divisor of \(f\) for \(1\leq i\leq u\), and \(B_{j}\) is an exceptional divisor of \(f\) for \(u+1\leq j\leq k\). Then \(f_{*}B_{i}\) is an irreducible curve on \(\mathbb{F}_{n}\) for \(1\leq i\leq u\). Therefore, for \(1\leq i\leq u\), there are non-negative integers \(c_{i},d_{i},g_{j}^{i}\) such that
\[B_{i}=f^{*}(c_{i}C+d_{i}F)-\sum_{j=1}^{m}g_{j}^{i}e_{j}\text{ in }\operatorname{Pic }(X/G)\]
where \((c_{i},d_{i})=(1,0)\), \((0,1)\), or \(d_{i}\geq c_{i}n>0\). Since \(K_{\mathbb{F}_{n}}=-2C-(n+2)F\) in \(\operatorname{Pic}(\mathbb{F}_{n})\), by Theorem 2.7, we get that \(2=\sum_{i}\frac{b_{i}-1}{b_{i}}c_{i}\) and \(n+2=\sum_{i}\frac{b_{i}-1}{b_{i}}d_{i}\). In the same way as in the proof of Theorem 3.4, we get this proposition.
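For the reader's orientation, we note the elementary constraints these relations impose; the full case analysis is as in Theorem 3.4. Since \(\frac{1}{2}\leq\frac{b_{i}-1}{b_{i}}<1\) for every \(b_{i}\geq 2\), the relation \(2=\sum_{i}\frac{b_{i}-1}{b_{i}}c_{i}\) forces

\[2<\sum_{i}c_{i}\leq 4,\]

and the relation \(n+2=\sum_{i}\frac{b_{i}-1}{b_{i}}d_{i}\), together with \(d_{i}\geq c_{i}n\) whenever \((c_{i},d_{i})\neq(1,0),(0,1)\), is what bounds \(n\).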
Let \(X\) be a \(K3\) surface, \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth, and \(f:X/G\to\mathbb{F}_{n}\) be a birational morphism. By Proposition 4.1, we get \(0\leq n\leq 12\). By the proof of Proposition 4.1, the numerical class of \(f_{*}B\) is one of the list in Section 3. Let \(B=\sum_{i=1}^{k}b_{i}B_{i}+\sum_{j=k+1}^{l}b_{j}B_{j}\) where \(B_{i}\) is not an exceptional divisor of \(f\) for \(i=1,\ldots,k\) and \(B_{j}\) is an exceptional divisor of \(f\) for \(j=k+1,\ldots,l\). Since \((X/G)\backslash\bigcup_{j=k+1}^{l}B_{j}\) is isomorphic to \(\mathbb{F}_{n}\backslash\bigcup_{j=k+1}^{l}f(B_{j})\) and \(f(B_{j})\) is a point for \(j=k+1,\ldots,l\), \((X/G)\backslash\bigcup_{j=k+1}^{l}B_{j}\) is simply connected. By Theorem 2.8, \(G\) is generated by \(G_{1},\ldots,G_{k}\). Therefore, as in the case of Hirzebruch surfaces, we can read off \(G\) from the numerical class of \(f_{*}B\). Recall that if \(G\) is abelian, then \(G_{i}\) is a cyclic group, which is generated by a purely non-symplectic automorphism of order \(b_{i}\). If \(f_{*}B_{1}=C\) or \(F\), then \(G\) is generated by \(G_{2},\ldots,G_{k}\), and if \((f_{*}B_{1},f_{*}B_{2})=(C,F)\), then \(G\) is generated by \(G_{3},\ldots,G_{k}\).
Recall that since \(X/G\) is a smooth rational surface, \(X/G\) is obtained from \(\mathbb{F}_{n}\) by blow-ups. Next, we will investigate the relationship between the branch divisor and the exceptional divisors of these blow-ups.
**Lemma 4.2**.: _Let \(X\) be a \(K3\) surface, \(G\subset\operatorname{Aut}(X)\) a finite subgroup such that \(X/G\) is a smooth rational surface, and \(B\) the branch divisor of the quotient map \(p:X\to X/G\). For a birational morphism \(h:X/G\to T\), where \(T\) is a smooth projective surface, let \(e_{i}\) be the exceptional divisors of \(h\) for \(i=1,\ldots,m\). Then \(h(e_{i})\in\operatorname{Supp}(h_{*}B)\) for \(i=1,\ldots,m\)._
Proof.: Let \(e_{1},\ldots,e_{m}\) be the exceptional divisors of \(h\). Since \(X/G\) and \(T\) are smooth and \(h\) is birational, \(\operatorname{Pic}(X/G)=h^{*}\operatorname{Pic}(T)\bigoplus_{j=1}^{m}\mathbb{Z}e _{j}\) and there are positive integers \(a_{i}\) such that
\[K_{X/G}=h^{*}K_{T}+\sum_{i=1}^{m}a_{i}e_{i}.\]
We assume that \(h(e_{i})\not\in\operatorname{Supp}(h_{*}B)\) for some \(1\leq i\leq m\). For simplicity, we assume that \(i=1\), i.e. \(h(e_{1})\not\in\operatorname{Supp}(h_{*}B)\). Let \(B_{1},\ldots,B_{k}\) be the irreducible components of \(B\) such that \(B_{j}\) is not an exceptional divisor of \(h\) for \(j=1,\ldots,k\). Since \(h(e_{1})\not\in\operatorname{Supp}(h_{*}B)\), there are integers \(c_{j,s}\) such that \(B_{j}=h^{*}C_{j}+\sum_{s=2}^{m}c_{j,s}e_{s}\), where \(C_{j}\) is an irreducible curve in \(T\). By Theorem 2.7, we get that
\[0=(h^{*}K_{T}+\sum_{i=1}^{m}a_{i}e_{i})+\sum_{j=1}^{k}\frac{b_{j}-1}{b_{j}}(h^{ *}C_{j}+\sum_{s=2}^{m}c_{j,s}e_{s})+\sum_{j=1}^{m}l_{j}e_{j}\text{ in }\operatorname{Pic}(X/G),\]
where \(l_{j}=0\) or \(l_{j}=\frac{d_{j}-1}{d_{j}}\) for some integer \(d_{j}\geq 2\). Since \(a_{i}\geq 1\), \(c_{j,1}=0\), \(l_{j}\geq 0\), and \(\operatorname{Pic}(X/G)=h^{*}\operatorname{Pic}(T)\bigoplus_{j=1}^{m}\mathbb{Z}e_{j}\), this is a contradiction.
**Proposition 4.3**.: _Let \(X\) be a \(K3\) surface, \(G\subset\operatorname{Aut}(X)\) a finite subgroup such that the quotient space \(X/G\) is smooth, and \(B\) be the branch divisor of the quotient morphism \(p:X\to X/G\). Let \(f:X/G\to T\) be a birational morphism where \(T\) is a smooth surface, \(e_{1},\ldots,e_{m}\) be the exceptional divisors of \(f\), and \(f_{*}B:=\sum_{i=1}^{u}b_{i}\widetilde{B_{i}}\) where \(\widetilde{B_{i}}\) is an irreducible curve on \(T\) for \(i=1,\ldots,u\). If \(\widetilde{B_{i}}\) is smooth for each \(1\leq i\leq u\), then for \(1\leq j\leq m\) there are \(1\leq s<t\leq u\) such that \(f(e_{j})\in\widetilde{B_{s}}\cap\widetilde{B_{t}}\)._
Proof.: We set \(B=\sum_{i=1}^{u}b_{i}B_{i}+\sum_{j=u+1}^{k}b_{j}B_{j}\) where \(B_{i}\) is not an exceptional divisor of \(f\) for \(i=1,\ldots,u\), and \(B_{j}\) is an exceptional divisor of \(f\) for \(j=u+1,\ldots,k\). Then \(f_{*}B=\sum_{i=1}^{u}b_{i}f_{*}B_{i}\). We assume that \(f_{*}B_{i}\) is a smooth curve for \(i=1,\ldots,u\). By Lemma 4.2, \(f(e_{i})\in\operatorname{supp}(f_{*}B)\) for \(i=1,\ldots,m\). Let \(S:=X/G\), \(Z:=\{f(e_{1}),\ldots,f(e_{m})\}:=\{z_{1},\ldots,z_{v}\}\subset T\) where \(v:=|\{f(e_{1}),\ldots,f(e_{m})\}|\), \(q:\operatorname{Blow}_{Z}T\to T\) be the blow-up, and \(E_{i}:=q^{-1}(z_{i})\) be the exceptional divisor of \(q\) for \(1\leq i\leq v\). Then there is a birational morphism \(g:S\to\operatorname{Blow}_{Z}T\) such that \(f=q\circ g\), i.e. \(f\) factors through the blow-up \(q\).
By renumbering if necessary, we assume that \(g(e_{i})=E_{i}\) for \(1\leq i\leq v\). Then the exceptional divisors of \(g\) are \(e_{v+1},\ldots,e_{m}\). Since \(\operatorname{Pic}(\operatorname{Blow}_{Z}T)=q^{*}\operatorname{Pic}(T)\bigoplus_{j=1}^{v}\mathbb{Z}E_{j}\) and \(f=q\circ g\),
\[\operatorname{Pic}(S)=g^{*}\operatorname{Pic}(\operatorname{Blow}_{Z}T) \bigoplus_{j=v+1}^{m}\mathbb{Z}e_{j}=\left(f^{*}\operatorname{Pic}(T) \bigoplus_{i=1}^{v}\mathbb{Z}g^{*}E_{i}\right)\bigoplus_{j=v+1}^{m}\mathbb{Z} e_{j}.\]
Since \(K_{\operatorname{Blow}_{Z}T}=q^{*}K_{T}+\sum_{j=1}^{v}E_{j}\),
\[K_{S}=g^{*}K_{\operatorname{Blow}_{Z}T}+\sum_{i=v+1}^{m}a_{i}^{\prime}e_{i}=\left(f^{*}K_{T}+\sum_{j=1}^{v}g^{*}E_{j}\right)+\sum_{i=v+1}^{m}a_{i}^{\prime}e_{i}\]
where \(a_{i}^{\prime}\) is a positive integer for \(i=v+1,\ldots,m\).
We assume that for some \(1\leq i\leq m\), \(f(e_{i})\not\in f_{*}B_{s}\cap f_{*}B_{t}\) for each \(1\leq s<t\leq u\). Since \(Z=\{f(e_{1}),\ldots,f(e_{v})\}\), we may assume that \(1\leq i\leq v\); for simplicity, we assume that \(i=1\). In addition, since \(f(e_{j})\in\operatorname{supp}(f_{*}B)\) for \(j=1,\ldots,m\), by renumbering if necessary, we assume that \(f(e_{1})\in\operatorname{supp}(f_{*}B_{1})\), and \(f(e_{1})\not\in\operatorname{supp}(f_{*}B_{j})\) for \(2\leq j\leq u\). Recall that the exceptional divisors of \(q\)
are \(E_{1},\ldots,E_{v}\), the exceptional divisors of \(g\) are \(e_{v+1},\ldots,e_{m}\), and \(g(e_{i})=E_{i}\) for \(1\leq i\leq v\). Since \(f=q\circ g\), for \(j=1,\ldots,u\) there are non-negative integers \(c_{j,s},c^{\prime}_{j,t}\) such that
\[B_{j}=f^{*}f_{*}B_{j}-\sum_{s=1}^{v}c_{j,s}g^{*}E_{s}-\sum_{t=v+1}^{m}c^{ \prime}_{j,t}e_{t}\ \ \text{in}\ \text{Pic}(S).\]
Since \(f(e_{1})\not\in f_{*}B_{j}\) for \(2\leq j\leq u\), we get that \(c_{j,1}=0\) for \(2\leq j\leq u\). Since \(f_{*}B_{1}\) is smooth, \(c_{1,1}=1\). Since \(K_{S}=f^{*}K_{T}+\sum_{j=1}^{v}g^{*}E_{j}+\sum_{i=v+1}^{m}a^{\prime}_{i}e_{i}\) and \(0=K_{S}+\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}B_{i}\) in \(\text{Pic}(S)\),
\[0=\left(f^{*}K_{T}+\sum_{j=1}^{v}g^{*}E_{j}+\sum_{i=v+1}^{m}a^{\prime}_{i}e_{i}\right)+\sum_{j=1}^{u}\frac{b_{j}-1}{b_{j}}\left(f^{*}f_{*}B_{j}-\sum_{s=1}^{v}c_{j,s}g^{*}E_{s}-\sum_{t=v+1}^{m}c^{\prime}_{j,t}e_{t}\right)+\sum_{j=u+1}^{k}\frac{b_{j}-1}{b_{j}}B_{j}\ \ \text{in}\ \operatorname{Pic}(S).\]
From the coefficient of \(g^{*}E_{1}\), we get that \(1=\frac{b_{1}-1}{b_{1}}\). Since \(b_{1}\geq 2\), this is a contradiction.
Let \(X\) be a \(K3\) surface, \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\) is a smooth rational surface, and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Let \(h:X/G\to T\) be a birational morphism where \(T\) is a smooth projective surface, and \(e_{1},\ldots,e_{m}\) be the exceptional divisors of \(h\). We set \(h_{*}B:=\sum_{j=1}^{l}b_{j}B^{\prime}_{j}\). We write \(B=\sum_{i=1}^{l}b_{i}B_{i}+\sum_{j=l+1}^{k}b_{j}B_{j}\) such that \(h_{*}B_{i}=B^{\prime}_{i}\) for \(i=1,\ldots,l\). Then \(B_{j}\) is one of the exceptional divisors of \(h\) for \(j=l+1,\ldots,k\), and for \(i=1,\ldots,l\) there are non-negative integers \(c_{i,1},\ldots,c_{i,m}\) such that \(B_{i}=h_{*}^{-1}B^{\prime}_{i}-\sum_{t=1}^{m}c_{i,t}e_{t}\).
**Remark 4.4**.: _In the above situation, for \(e_{u}\) and \(e_{v}\) where \(1\leq u<v\leq m\) and \(h(e_{u})=h(e_{v})\), we get that \(c_{i,u}=0\) if and only if \(c_{i,v}=0\)._
**Remark 4.5**.: _In the situation of Proposition 4.3, we assume that \(T=\mathbb{F}_{n}\). Then there are positive integers \(a_{1},\ldots,a_{m}\) such that \(K_{X/G}=h^{*}K_{\mathbb{F}_{n}}+\sum_{i=1}^{m}a_{i}e_{i}\). By the proof of Proposition 4.3, we get that \(a_{1}=\cdots=a_{u}=1\) and_
\[1+\frac{\beta_{i}-1}{\beta_{i}}=\sum_{j=1}^{k}\frac{b_{j}-1}{b_{j}}c_{i,j}\ \ for\ i=1,\ldots,u,\]
_where \(\beta_{i}=1\) if \(e_{i}\) is not an irreducible component of \(B\), and \(\beta_{i}\) is the ramification index at \(e_{i}\) if \(e_{i}\) is an irreducible component of \(B\)._
_Furthermore, we assume that \(X/G\neq\text{Blow}_{\{h(e_{1}),\ldots,h(e_{u})\}}\mathbb{F}_{n}\). For the birational morphism \(g:X/G\to\text{Blow}_{\{h(e_{1}),\ldots,h(e_{u})\}}\mathbb{F}_{n}\) in the proof of Proposition 4.3, we rearrange the order so that \(\{g(e_{u+1}),\ldots,g(e_{u+v})\}=\{g(e_{u+1}),\ldots,g(e_{m})\}\), where \(v:=|\{g(e_{u+1}),\ldots,g(e_{m})\}|\). As in the proof of Proposition 4.3, by considering the blow-up of \(\text{Blow}_{\{h(e_{1}),\ldots,h(e_{u})\}}\mathbb{F}_{n}\) at \(\{g(e_{u+1}),\ldots,g(e_{u+v})\}\), we get that \(a_{u+1}=\cdots=a_{u+v}=2\) and_
\[2+\frac{\beta_{i}-1}{\beta_{i}}=\sum_{j=1}^{k}\frac{b_{j}-1}{b_{j}}c_{i,j}\ \ for\ i=u+1,\ldots,u+v,\]
_where \(\beta_{i}=1\) if \(e_{i}\) is not an irreducible component of \(B\), and \(\beta_{i}\) is the ramification index at \(e_{i}\) if \(e_{i}\) is an irreducible component of \(B\)._
Recall that by Theorem 2.8, \(G_{B_{i}}\) is generated by a non-symplectic automorphism of order \(b_{i}\). As a corollary of Theorem 2.8 and Proposition 4.3, we get the following Theorem 4.6.
**Theorem 4.6**.: _Let \(X\) be a \(K3\) surface, \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\) is smooth, and \(B\) be the branch divisor of the quotient map \(p:X\to X/G\). Let \(f:X/G\to S\) be a birational morphism where \(S\) is a minimal rational surface. We put \(f_{*}B:=\sum_{i=1}^{k}b_{i}B_{i}\) where \(B_{i}\) is an irreducible curve for \(i=1,\dots,k\). We denote by \(G_{s}\) the subgroup of \(G\) which consists of the symplectic automorphisms of \(G\), and by \(b\) the least common multiple of \(b_{1},\dots,b_{k}\). Then there is a purely non-symplectic automorphism \(g\in G\) of order \(b\) such that \(G\) is the semidirect product \(G_{s}\rtimes\langle g\rangle\) of \(G_{s}\) and \(\langle g\rangle\)._
Proof.: Since \(G_{s}\) is a normal subgroup of \(G\) and \(G/G_{s}\) is a cyclic group, in order to prove Theorem 4.6, we need only show that there is a purely non-symplectic automorphism \(g\in G\) of order \(b\).
First of all, we assume that \(X/G\cong\mathbb{P}^{2}\). We put \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) where \(B_{i}\) is an irreducible curve for \(i=1,\dots,k\). By Theorem 2.7, \(0=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}\text{deg}\,B_{i}+\text{deg}\,K_{\mathbb{P}^{2}}\), where \(K_{\mathbb{P}^{2}}\) is the canonical line bundle of \(\mathbb{P}^{2}\). Since the degree of \(K_{\mathbb{P}^{2}}\) is \(-3\) and \(\frac{1}{2}\leq\frac{l-1}{l}<1\) for any integer \(l\geq 2\), we get that \(4\leq\sum_{i=1}^{k}\text{deg}B_{i}\leq 6\). If \(\sum_{i=1}^{k}\text{deg}B_{i}=6\), then \(b_{1}=\dots=b_{k}=2\). By Theorem 2.8, in this case the statement of the theorem is established. We assume that \(\sum_{i=1}^{k}\text{deg}B_{i}\leq 5\). By [15, Theorem 2], \(b=b_{i}\) for some \(1\leq i\leq k\) or \(b=\mathrm{l.c.m.}(b_{i},b_{j})\) for some \(i<j\). By Theorem 2.8, in the former case, we get this theorem.
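Explicitly, the degree bound used above follows from

\[3=-\deg K_{\mathbb{P}^{2}}=\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}\deg B_{i},\qquad\frac{1}{2}\leq\frac{b_{i}-1}{b_{i}}<1,\]

so \(3<\sum_{i}\deg B_{i}\leq 6\), i.e. \(4\leq\sum_{i}\deg B_{i}\leq 6\); and \(\sum_{i}\deg B_{i}=6\) forces \(\frac{b_{i}-1}{b_{i}}=\frac{1}{2}\), i.e. \(b_{i}=2\), for every \(i\).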
For the latter, i.e. if \(b\neq b_{i}\) for \(1\leq i\leq k\), then \(B\) is one of (i) \(3L_{1}+3L_{2}+3L_{3}+2L_{4}+2L_{5}\) where \(L_{3}\) passes through the points \(L_{1}\cap L_{2}\) and \(L_{4}\cap L_{5}\) (see [15, pp. 408]), (ii) \(3L_{1}+3L_{2}+3L_{3}+2Q\) where \(L_{1},L_{2}\) are tangent to \(Q\) and \(L_{3}\) is in general position with respect to \(L_{1}\cup L_{2}\cup Q\) (see [15, pp. 408]), and (iii) \(2L_{1}+2L_{2}+3L_{3}+3Q\) where \(L_{1},L_{2},L_{3}\) are three distinct tangent lines to \(Q\) (see [15, pp. 410]). Here, \(L_{i}\) and \(Q\) are smooth curves on \(\mathbb{P}^{2}\) with \(\text{deg}\,L_{i}=1\) for \(i=1,\dots,5\) and \(\text{deg}\,Q=2\). Then there are \(1\leq i<j\leq k\) such that \(b=\mathrm{l.c.m.}(b_{i},b_{j})\), \(B_{i}+B_{j}\) is simple normal crossing, and \((B_{i}\cap B_{j})\setminus\cup_{s\neq i,j}B_{s}\) is not an empty set. For clarity, we may assume that \(i=1\), \(j=2\). We take one point \(y\in(B_{1}\cap B_{2})\setminus\cup_{i=3}^{k}B_{i}\). Let \(x\in p^{-1}(y)\). By the assumption on \(y\) and Theorem 2.3, there are open subsets \(V\subset\mathbb{P}^{2}\) and \(U\subset X\) such that \(y\in V\), \(x\in U\), and \(p_{|U}:U\to V\) is isomorphic to \(\{z\in\mathbb{C}^{2}:\ |z|<1\}\ni(z_{1},z_{2})\mapsto(z_{1}^{b_{1}},z_{2}^{b_{2}})\in\{z\in\mathbb{C}^{2}:\ |z|<1\}\), and hence \(G_{x}:=\{g\in G\,|\,g(x)=x\}\cong\mathbb{Z}/b_{1}\mathbb{Z}\oplus\mathbb{Z}/b_{2}\mathbb{Z}\). Since \(b=\mathrm{l.c.m.}(b_{1},b_{2})\), there is a purely non-symplectic automorphism \(g\in G\) of order \(b\).
Next, we assume that \(X/G\cong\mathbb{F}_{n}\). By the list of the numerical class of \(B\) in Section 6, if the numerical class of \(B\) is not one of (65,70,73,77,83,92,102,127,128,132,136, 143,153,154,170,235,251,252,253), then \(b=b_{i}\) for some \(1\leq i\leq k\). Therefore, by Theorem 2.8, we get this theorem. If the numerical class of \(B\) is one of (65,70,73,77,92,127,128,132,136,143, 153,154,170,235,251,252, 253), then there are \(1\leq i<j\leq k\) such that \(b=l.c.m(b_{i},b_{j})\), \(B_{i}+B_{j}\) is simple normal crossing, and \((B_{i}\cap B_{j})\setminus\cup_{s\neq i,j}B_{s}\) is not an empty set. As for the case of \(\mathbb{P}^{2}\), we get this theorem.
We assume that the numerical class of \(B\) is (83). We write \(B=3B_{3,3}+2B_{0,1}^{1}+2B_{0,1}^{2}\). Since \(B_{0,1}^{1}\cap B_{0,1}^{2}\) is an empty set, if \(B_{3,3}\cap B_{0,1}^{1}\) is not one point, then by \((B_{3,3}\cdot B_{0,1}^{1})=3\), there is a point \(y\in B_{3,3}\cap B_{0,1}^{1}\) such that the support of \(B\) is simple normal crossing at \(y\). Since \(b=6\), by Theorem 2.8, in this case the statement of the theorem is established. Therefore, we assume that \(B_{3,3}\cap B_{0,1}^{1}\) and \(B_{3,3}\cap B_{0,1}^{2}\) are each one point. Let \(q:X/G_{s}\to X/G\) be the quotient map. Then the singular locus of \(X/G_{s}\) is \(q^{-1}(B_{3,3}\cap B_{0,1}^{1})\cup q^{-1}(B_{3,3}\cap B_{0,1}^{2})\). Since the Galois group of \(q\) is \(G/G_{s}\cong\mathbb{Z}/6\mathbb{Z}\), the branch divisor of \(q\) is \(B\), and \(B_{3,3}\cap B_{0,1}^{1}\) and \(B_{3,3}\cap B_{0,1}^{2}\) are each one point, \(X/G_{s}\) has just two singular points. By [16, Theorem 3], this is a contradiction. Therefore, if the numerical class of \(B\) is (83), then we get this theorem. As for the case of (83), we get this theorem for (102).
Finally, we assume that \(X/G\) is not \(\mathbb{P}^{2}\) or \(\mathbb{F}_{n}\). We take a birational morphism \(f:X/G\to\mathbb{F}_{n}\) where \(0\leq n\). Let \(e_{1},\ldots,e_{m}\) be the exceptional divisors of \(f\). In the same way as in the case where \(X/G\cong\mathbb{P}^{2}\) or \(\mathbb{F}_{n}\), we only consider the case that the numerical class of \(f_{*}B\) is one of (65,70,73,77,83,92,102,127,128,132,136,143,153,154,170,235,251,252,253).
We assume that the numerical class of \(f_{*}B\) is (65). By Remark 4.5, there are non-negative integers \(a_{1},\ldots,a_{5}\) and an integer \(b\geq 2\) such that
\[1+\frac{b-1}{b}=\frac{2}{3}a_{1}+\frac{5}{6}a_{2}+\frac{1}{2}a_{3}+\frac{3}{4 }a_{4}+\frac{3}{4}a_{5}.\]
Since the numerical class of \(f_{*}B\) is (65), we may assume that \(a_{1}\) or \(a_{2}\) is \(0\), and that \(a_{4}\) or \(a_{5}\) is \(0\). However, there are no such integers. Therefore, the numerical class of \(f_{*}B\) is not (65). As for the case of (65), the numerical class of \(B\) is not one of (73,77,128,132,170,235,251,253).
We assume that the numerical class of \(f_{*}B\) is (70). By Remark 4.5, there are non-negative integers \(a_{1},\ldots,a_{6}\) and an integer \(b\geq 2\) such that
\[1+\frac{b-1}{b}=\frac{1}{2}a_{1}+\frac{2}{3}a_{2}+\frac{5}{6}a_{3}+\frac{1}{2 }a_{4}+\frac{3}{4}a_{5}+\frac{3}{4}a_{6}.\]
Since the numerical class of \(f_{*}B\) is (70), we may assume that two of \(a_{1}\), \(a_{2}\) and \(a_{3}\) are \(0\), and two of \(a_{4}\), \(a_{5}\) and \(a_{6}\) are \(0\). The only integers satisfying the above conditions are \((a_{1},\ldots,a_{6},b)=(1,0,0,1,0,0,12)\). Therefore, for \(B:=\sum_{j=1}^{l}b_{j}B_{j}\), we get \(b_{i}=12\) for some \(1\leq i\leq l\). By Theorem 2.8, if the numerical class of \(f_{*}B\) is (70), then we get this theorem. As for the case of (70), if the numerical class of \(B\) is one of (136,143), then we get this theorem.
We assume that the numerical class of \(f_{*}B\) is (83). By Remark 4.5, there are non-negative integers \(a_{1},a_{2},a_{3}\) and an integer \(b\geq 2\) such that
\[1+\frac{b-1}{b}=\frac{2}{3}a_{1}+\frac{1}{2}a_{2}+\frac{1}{2}a_{3}.\]
Since the numerical class of \(f_{*}B\) is (83), we may assume that either \(a_{2}\) or \(a_{3}\) is \(0\). The integers satisfying the above conditions are \((a_{1},a_{2},a_{3},b)=(2,1,0,6)\) or \((2,0,1,6)\). Therefore, we get this theorem. As for the case of (83), if the numerical class of \(B\) is one of (92,102,127,153,154,252), then we get this theorem.
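This finite check is easy to automate. The following sketch is our verification aid, not part of the original argument; it assumes \(a_{2},a_{3}\in\{0,1\}\) (the curves \(B_{0,1}^{1},B_{0,1}^{2}\) are smooth) with \(a_{2}a_{3}=0\), and \(a_{1}\) a non-negative integer:

```python
from fractions import Fraction as F

# Brute-force check of the case (83) equation:
#   1 + (b-1)/b = (2/3)a1 + (1/2)a2 + (1/2)a3.
# Since the right-hand side must be < 2, (2/3)a1 < 2 forces a1 <= 2;
# range(0, 4) adds a safety margin.
solutions = []
for a1 in range(0, 4):
    for a2, a3 in [(0, 0), (1, 0), (0, 1)]:
        rhs = F(2, 3) * a1 + F(1, 2) * a2 + F(1, 2) * a3
        inv = 2 - rhs            # 1 + (b-1)/b = rhs  <=>  1/b = 2 - rhs
        # inv <= 1/2 enforces b >= 2; 1/inv must be an integer
        if 0 < inv <= F(1, 2) and (1 / inv).denominator == 1:
            solutions.append((a1, a2, a3, int(1 / inv)))
print(solutions)                 # [(2, 1, 0, 6), (2, 0, 1, 6)]
```

The output reproduces exactly the two tuples found above.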
**Theorem 4.7**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(Aut(X)\) such that \(X/G\) is smooth. For a birational morphism \(f:X/G\to\mathbb{F}_{n}\) where \(0\leq n\), we get that \(n\) is not one of \(5,7,9,10,11\)._
Proof.: Let \(p:X\to X/G\) be the quotient map, and \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) be the branch divisor of \(p\). Let \(f:X/G\to\mathbb{F}_{n}\) be a birational morphism where \(0\leq n\), and \(e_{1},\ldots,e_{m}\) be the exceptional divisors of \(f\).
First we will show this theorem for the cases where \(f\) is an isomorphism, i.e. \(X/G\cong\mathbb{F}_{n}\). By Theorem 2.7, \(n=0,1,2,3,4,5,6,7,8,9,\) or \(12\). We assume that \(n=5,7,\) or \(9\). Then the numerical class of \(B\) is one of (296,297,298,299,300, 301,310,314,315) of the list in section 6.
We assume that the numerical class of \(B\) is (296). We denote \(B\) by \(4B_{1,0}+2B_{1,5}+4B_{1,6}\). Let \(p^{*}B_{1,0}=\sum_{i=1}^{m}4C_{i}\) where \(C_{i}\) is a smooth curve for \(i=1,\ldots,m\). Since \((B_{1,0}\cdot B_{1,0})<0\), \((C_{i}\cdot C_{i})<0\). Since \(X\) is a \(K3\) surface, and \(C_{i}\) is irreducible, we get that \((C_{i}\cdot C_{i})=-2\). Since the degree of \(p\) is \(|G|\), and \((B_{1,0}\cdot B_{1,0})=-5\), we get that \(\frac{-5|G|}{16}=-2m+2\sum_{1\leq i<j\leq m}(C_{i}\cdot C_{j})\), and hence \(\frac{5|G|}{32}\leq m\). Let \(p^{*}B_{1,6}=\sum_{j=1}^{l}4C^{\prime}_{j}\) where \(C^{\prime}_{j}\) is a smooth curve for \(j=1,\ldots,l\). Since \((B_{1,0}\cdot B_{1,6})=1\), \(\frac{|G|}{16}=m(C_{1}\cdot\sum_{j=1}^{l}C^{\prime}_{j})\). Since \((C_{1}\cdot\sum_{j=1}^{l}C^{\prime}_{j})\geq 1\), we get that \(m\leq\frac{|G|}{16}\). By \(\frac{5|G|}{32}\leq m\) and \(m\leq\frac{|G|}{16}\), we get that the numerical class of \(B\) is not (296). As for the case of (296), the numerical class of \(f_{*}B\) is not one of (297,298,299,300, 301,310,314,315). Therefore, if \(X/G\cong\mathbb{F}_{n}\), then \(n\neq 5,7,9,10,11\).
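For case (296), the two estimates on \(m\) obtained above are incompatible:

\[\frac{5|G|}{32}\leq m\leq\frac{|G|}{16}=\frac{2|G|}{32},\]

which is impossible since \(|G|>0\); this is the contradiction used in the argument.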
Next, we assume that \(f\) is not an isomorphism, i.e. \(X/G\) is not a Hirzebruch surface \(\mathbb{F}_{n}\). By Proposition 4.1, \(n=0,1,2,3,4,5,6,7,8,9,\) or \(12\). We assume that \(n=5,7,\) or \(9\). The numerical class of \(f_{*}B\) is one of (296,297,298,299,300,301,310,314,315).
We assume that the numerical class of \(f_{*}B\) is (296). Let \(p^{*}B_{1,0}=4\sum_{i=1}^{m}C_{i}\) where \(C_{i}\) is a smooth curve for \(i=1,\ldots,m\). Since the degree of \(p\) is \(|G|\), by \((C\cdot F)=1\), we get that \(|G|=4m(C_{1}\cdot p^{*}f^{*}F)\), and hence \(|G|\) is a multiple of \(4m\). Since \(f_{*}B_{1,0}=C\), \((B_{1,0}\cdot B_{1,0})\leq(C\cdot C)=-5\). By \(\frac{|G|}{16}(B_{1,0}\cdot B_{1,0})=-2m+2\sum_{1\leq i<j\leq m}(C_{i}\cdot C_{j})\), we get that \(m=\frac{|G|}{4}\). Since the numerical class of \(f_{*}B\) is (296), there must be non-negative integers \(a_{1},a_{2},a_{3}\) and an integer \(b\geq 2\) such that
\[1+\frac{b-1}{b}=\frac{3}{4}a_{1}+\frac{1}{2}a_{2}+\frac{3}{4}a_{3},\]
and either \(a_{1}\) or \(a_{2}\) is \(0\). The only integers satisfying the above conditions are \((a_{1},a_{2},a_{3},b)=(1,0,1,2)\), and hence \(f(e_{i})\in f_{*}B_{1,0}\cap f_{*}B_{1,6}\) for each \(i=1,\ldots,l\). Since \((f_{*}B_{1,0}\cdot f_{*}B_{1,6})=1\), \(f_{*}B_{1,0}\cap f_{*}B_{1,6}\) is one point. We put \(x:=f_{*}B_{1,0}\cap f_{*}B_{1,6}\). Let \(q:\mbox{Blow}_{x}\mathbb{F}_{5}\to\mathbb{F}_{5}\) be the blow-up of \(\mathbb{F}_{5}\) at \(x\). Then there is a birational morphism \(g:X/G\to\mbox{Blow}_{x}\mathbb{F}_{5}\) such that \(f=q\circ g\). Let \(C^{\prime}:=g_{*}B_{1,0}\). Let \(E\) be the exceptional divisor of \(q\). Since \(f(e_{i})=x\) for each \(i=1,\ldots,l\), \(g(e_{i})\in E\) for each \(i=1,\ldots,l\). Since \(g_{*}B=4C^{\prime}+2g_{*}B_{1,5}+4g_{*}B_{1,6}+2E\), if \(g\) is not an isomorphism, then there must be integers \(a_{1},a_{2},a_{3},a_{4},b\) such that
\[2+\frac{b-1}{b}=\frac{3}{4}a_{1}+\frac{1}{2}a_{2}+\frac{3}{4}a_{3}+\frac{1}{2}a _{4},\]
and if \(a_{1}\) is not \(0\), then \(a_{2}=a_{3}=0\). However, there are no such integers. Therefore, \(g\) is an isomorphism, i.e. \(X/G=\mbox{Blow}_{x}\mathbb{F}_{5}\), and hence \(B=4B_{1,0}+2B_{1,5}+4B_{1,6}+2E\) and \((B_{1,0}\cdot E)=1\). We put \(p^{*}E=2\sum_{j=1}^{u}C^{\prime}_{j}\) where
\(C^{\prime}_{j}\) is a smooth curve for \(j=1,\ldots,u\). Since \(m=\frac{|G|}{4}\), we get \(\frac{|G|}{2}=|G|(C_{1}\cdot\sum_{j=1}^{u}C^{\prime}_{j})\), so \((C_{1}\cdot\sum_{j=1}^{u}C^{\prime}_{j})=\frac{1}{2}\), which is not an integer. This is a contradiction. Therefore, the numerical class of \(B\) is not (296). As for the case of (296), the numerical class of \(B\) is not one of (310,314).
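The non-existence claim for the displayed equation above can also be checked mechanically. A verification sketch (ours, not in the original), assuming each \(a_{i}\in\{0,1\}\) since the components \(C^{\prime}\), \(g_{*}B_{1,5}\), \(g_{*}B_{1,6}\), \(E\) are smooth, and that \(a_{1}\neq 0\) forces \(a_{2}=a_{3}=0\):

```python
from fractions import Fraction as F
from itertools import product

# Check: no integer b >= 1 satisfies
#   2 + (b-1)/b = (3/4)a1 + (1/2)a2 + (3/4)a3 + (1/2)a4
# under the constraints described above.
hits = []
for a1, a2, a3, a4 in product(range(2), repeat=4):
    if a1 and (a2 or a3):
        continue
    rhs = F(3, 4) * a1 + F(1, 2) * a2 + F(3, 4) * a3 + F(1, 2) * a4
    inv = 3 - rhs                # 2 + (b-1)/b = rhs  <=>  1/b = 3 - rhs
    if 0 < inv <= 1 and (1 / inv).denominator == 1:
        hits.append((a1, a2, a3, a4, int(1 / inv)))
print(hits)                      # [] -- no solutions, as claimed
```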
We assume that the numerical class of \(f_{*}B\) is (297). Then there must be integers \(a_{1},a_{2},a_{3},a_{4},b\) such that
\[1+\frac{b-1}{b}=\frac{3}{4}a_{1}+\frac{1}{2}a_{2}+\frac{3}{4}a_{3}+\frac{3}{4} a_{4},\]
and if \(a_{1}\) is not zero, then \(a_{2}=a_{3}=0\). The integers satisfying the above condition are \((a_{1},a_{2},a_{3},a_{4},b)=(1,0,0,1,2)\) or \((0,0,1,1,2)\). Therefore, for each \(i=1,\ldots,l\), we get that \(f(e_{i})\in f_{*}B_{1,0}\cap f_{*}B_{0,1}\) or \(f(e_{i})\in f_{*}B_{1,5}^{2}\cap f_{*}B_{0,1}\). If \(f(e_{i})\in f_{*}B_{1,5}^{2}\cap f_{*}B_{0,1}\) for all \(i=1,\ldots,l\), then \((B_{1,0}\cdot B_{1,0})=-5\) and \((B_{1,0}\cdot B_{0,1})=1\). However, as in the case of \(X/G\cong\mathbb{F}_{n}\), we can see that this cannot happen. Therefore, \(f(e_{i})\in f_{*}B_{1,0}\cap f_{*}B_{0,1}\) for some \(i=1,\ldots,l\). By using the blow-up of \(\mathbb{F}_{5}\) at \(x:=f_{*}B_{1,0}\cap f_{*}B_{0,1}\), as in the case of (296), this is a contradiction. Therefore, the numerical class of \(B\) is not (297). As for the case of (297), the numerical class of \(B\) is not (315).
We assume that the numerical class of \(f_{*}B\) is (298). Then there must be integers \(a_{1},a_{2},a_{3},b\) such that
\[1+\frac{b-1}{b}=\frac{5}{6}a_{1}+\frac{1}{2}a_{2}+\frac{2}{3}a_{3}.\]
The only integers satisfying the above condition are \((a_{1},a_{2},a_{3},b)=(1,0,1,2)\), and hence \(f(e_{i})\in f_{*}B_{1,0}\cap f_{*}B_{1,6}\) for each \(i=1,\ldots,l\). Since \((f_{*}B_{1,0}\cdot f_{*}B_{1,6})=1\), \(f_{*}B_{1,0}\cap f_{*}B_{1,6}\) is one point. We put \(x:=f_{*}B_{1,0}\cap f_{*}B_{1,6}\). Let \(q:\mathrm{Blow}_{x}\mathbb{F}_{5}\to\mathbb{F}_{5}\) be the blow-up of \(\mathbb{F}_{5}\) at \(x\). As in the case of (296), since there are no integers \(a_{1},a_{2},a_{3},a_{4},b\) such that
\[2+\frac{b-1}{b}=\frac{3}{4}a_{1}+\frac{1}{2}a_{2}+\frac{3}{4}a_{3}+\frac{1}{2 }a_{4},\]
we get that \(X/G=\mathrm{Blow}_{x}\mathbb{F}_{5}\), and hence \(B=6B_{1,0}+2B_{1,5}+3B_{1,6}+2E\) and \((B_{1,0}\cdot E)=1\). We put \(p^{*}E=2\sum_{j=1}^{u}C^{\prime}_{j}\) where \(C^{\prime}_{j}\) is a smooth curve for \(j=1,\ldots,u\). Since \((E\cdot E)=-1\), we get that \(u=\frac{|G|}{4}+\sum_{1\leq i<j\leq u}(C^{\prime}_{i}\cdot C^{\prime}_{j})\), and hence \(u\geq\frac{|G|}{4}\). Since \((B_{1,0}\cdot E)=1\), \(\frac{|G|}{12}\) is a multiple of \(u\), so \(u\leq\frac{|G|}{12}\). This is a contradiction. Therefore, the numerical class of \(B\) is not (298).
We assume that the numerical class of \(f_{*}B\) is (299). Then there must be non-negative integers \(a_{1},a_{2},a_{3},a_{4}\) and an integer \(b\geq 2\) such that
\[1+\frac{b-1}{b}=\frac{5}{6}a_{1}+\frac{1}{2}a_{2}+\frac{2}{3}a_{3}+\frac{2}{3 }a_{4},\]
and \(a_{1}a_{3}=0\). The integers satisfying the above conditions are \((a_{1},a_{2},a_{3},a_{4},b)=(1,0,0,1,2)\) or \((0,1,1,1,6)\). Therefore, for each \(i=1,\ldots,l\), we get that \(f(e_{i})\in f_{*}B_{1,0}\cap f_{*}B_{0,1}\) or \(f(e_{i})\in f_{*}B_{1,6}\cap f_{*}B_{1,5}\cap f_{*}B_{0,1}\). If \(f(e_{i})\in f_{*}B_{1,6}\cap f_{*}B_{1,5}\cap f_{*}B_{0,1}\) for all \(i=1,\ldots,l\), then \((B_{1,0}\cdot B_{1,0})=-5\) and \((B_{1,0}\cdot B_{0,1})=1\). As in the case \(X/G\cong\mathbb{F}_{n}\), we see that this cannot happen. By using the blow-up of \(\mathbb{F}_{5}\) at \(x:=f_{*}B_{1,0}\cap f_{*}B_{0,1}\), as in the case of (298), we get that the case where \(f(e_{i})\in f_{*}B_{1,0}\cap f_{*}B_{0,1}\) for some \(i=1,\ldots,l\) does not occur either. Therefore, the numerical class
of \(B\) is not (299). As for the case of (299), the numerical class of \(B\) is not one of (300,301).
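The finite check for case (299) can again be automated. A verification sketch (ours, not in the original), assuming each \(a_{i}\in\{0,1\}\) because the components involved are smooth, together with the constraint \(a_{1}a_{3}=0\):

```python
from fractions import Fraction as F
from itertools import product

# Enumerate solutions of
#   1 + (b-1)/b = (5/6)a1 + (1/2)a2 + (2/3)a3 + (2/3)a4
# with a_i in {0,1}, a1*a3 = 0, and integer b >= 2.
sols = []
for a1, a2, a3, a4 in product(range(2), repeat=4):
    if a1 * a3:
        continue
    rhs = F(5, 6) * a1 + F(1, 2) * a2 + F(2, 3) * a3 + F(2, 3) * a4
    inv = 2 - rhs                # 1 + (b-1)/b = rhs  <=>  1/b = 2 - rhs
    if 0 < inv <= F(1, 2) and (1 / inv).denominator == 1:
        sols.append((a1, a2, a3, a4, int(1 / inv)))
print(sols)                      # [(0, 1, 1, 1, 6), (1, 0, 0, 1, 2)]
```

This recovers exactly the two tuples used in the argument.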
**Corollary 4.8**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite subgroup of \(\text{Aut}(X)\) such that \(X/G\) is smooth. If there is a birational morphism \(f:X/G\to\mathbb{F}_{n}\) from the quotient space \(X/G\) to a Hirzebruch surface \(\mathbb{F}_{n}\) where \(n=6,8\), or \(12\), then \(f\) is an isomorphism, i.e. \(X/G\) is a Hirzebruch surface._
Proof.: Let \(n\geq 1\) and let \(C_{-n}\subset\mathbb{F}_{n}\) be the unique irreducible curve such that \((C_{-n}\cdot C_{-n})=-n\). For a point \(x\in\mathbb{F}_{n}\), if \(x\in C_{-n}\), then \(\text{Blow}_{x}\mathbb{F}_{n}=\text{Blow}_{y}\mathbb{F}_{n+1}\) for some \(y\in\mathbb{F}_{n+1}\backslash C_{-(n+1)}\), and if \(x\not\in C_{-n}\), then \(\text{Blow}_{x}\mathbb{F}_{n}=\text{Blow}_{y}\mathbb{F}_{n-1}\) for some \(y\in C_{-(n-1)}\). Combining this with Theorem 4.7, we get this corollary.
**Theorem 4.9**.: _Let \(X\) be a \(K3\) surface and \(G\) be a finite abelian subgroup of \(\text{Aut}(X)\). If \(X/G\) is smooth, then \(G\) is isomorphic to one of \(\mathcal{A}G\) as groups._
Proof.: Since \(X/G\) is smooth, the quotient space \(X/G\) is an Enriques surface or a smooth rational surface. If \(X/G\) is an Enriques surface, then \(G\cong\mathbb{Z}/2\mathbb{Z}\) as a group and \(\mathbb{Z}/2\mathbb{Z}\in\mathcal{A}G\). By Section 3, if \(X/G\cong\mathbb{F}_{n}\), then \(G\) is isomorphic to one of \(\mathcal{A}G\) as a group. By [15], if \(X/G\cong\mathbb{P}^{2}\), then \(G\) is isomorphic to one of \(\mathcal{A}G\) as a group. Therefore, we assume that \(X/G\) is rational and \(X/G\) is neither \(\mathbb{P}^{2}\) nor \(\mathbb{F}_{n}\).
Let \(f:X/G\to\mathbb{F}_{n}\) be a birational morphism where \(0\leq n\leq 12\), and \(B\) be the branch divisor of \(G\). By Theorem 4.7 and Corollary 4.8, we may assume that \(0\leq n\leq 4\). By the proof of Proposition 4.1, the numerical class of \(f_{*}B\) is one of the list in section 6.
We assume that the numerical class of \(f_{*}B\) is one of (4,5,6,10,11,12,14,15,16,19,20, 25,26,27,28,32,33,36,37,38,41,42,46,51,52,57,58,59,60,79,80,81,82,85,87,88,89,91,94, 96,98,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,176,177,178,180, 181,182,183,184,185,186,187,189,190,191,192,195,196,197,199,200,202,203,206,216, 217,241,242,243,244,245,246,249,250,270,271,272,273,274,275,276,277,279, 282) of the list in section 6. By Theorem 2.8, \(G\) is generated by automorphisms \(g_{1},\ldots,g_{m}\) where \(1\leq m\leq 5\) and the order of \(g_{i}\) is two for \(i=1,\ldots,m\). Therefore, \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus a}\) where \(1\leq a\leq 5\) as a group.
We assume that the numerical class of \(f_{*}B\) is one of (1,2,3,17,18,22,23,24,39,54, 55,194,198,201,204,205,212,218,219,228,229,284,285,289,290) of the list in section 6. By Theorem 2.8, \(G\) is generated by automorphisms \(g_{1},\ldots,g_{m}\) where \(1\leq m\leq 3\) and the order of \(g_{i}\) is \(3\) for \(i=1,\ldots,m\). Therefore, \(G\) is \(\mathbb{Z}/3\mathbb{Z}^{\oplus b}\) where \(1\leq b\leq 3\) as a group.
We assume that the numerical class of \(f_{*}B\) is one of (29,34,40,44,49,50,53,56,62,63, 64,66,67,68,69,71,77,83,84,92,93,102,106,107,108,127,128,133,134,135,137,138,145, 146,147,148,149,151,153,154,163,164,165,166,167,168,169,174,175,179,188,193,211, 214,220,221,223,224,225,226,227,230,236,237,238,239,240,248,251,252,254,256,258, 259,260,265,266,267,268,269,283,286,288,292,293,294,295) of the list in section 6. By Theorem 2.8, \(G\) is generated by automorphisms \(g_{1},\ldots,g_{m}\), \(h_{1},\ldots,h_{n}\), where \(1\leq m\leq 3\), \(1\leq n\leq 2\), the order of \(g_{i}\) is \(2\) for \(i=1,\ldots,m\), and the order of \(h_{j}\) is \(3\) for \(j=1,\ldots,n\). Therefore, \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus d}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus e}\) where \((d,e)=(1,1),(1,2),(1,3),(2,1),(2,2),(3,1),(3,2)\) as a group.
We assume that the numerical class of \(f_{*}B\) is one of (7,8,9,13,21,30,31,35,43,45,47, 48,61,86,90,97,99,100,103,104,105,109,110,130,131,139,140,141,142,155,156,157,158, 161,162,207,208,209,210,213,215,222,231,232,233,234,255,257,261,262,263,264,278, 280,281,287,291) of the list in section 6. By Theorem 2.8, \(G\) is generated by automorphisms \(g_{1},\ldots,g_{m}\), \(h_{1},\ldots,h_{n}\) where the order of \(g_{i}\) is \(2\) for \(i=1,\ldots,m\), the order of \(h_{j}\) is \(4\) for \(j=1,\ldots,n\), and \((n,m)\) is one of \((0,1),(0,2),(0,3),(1,1),(1,2),(2,1),(3,1)\). Therefore, \(G\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus g}\) where \((f,g)=(0,1),(0,2),(0,3),(1,1),(1,2),(2,1),(3,1)\) as a group.
We assume that the numerical class of \(f_{*}B\) is (65) of the list in section 6. We denote \(B\) by \(3B^{1}_{1,0}+6B^{2}_{1,0}+2B_{1,1}+4B^{1}_{0,1}+4B^{2}_{0,1}+\sum_{j=1}^{l}b^{\prime}_{j}B^{\prime}_{j}\) where \(f_{*}B^{i}_{1,0}=(1,0)\), \(f_{*}B^{i}_{0,1}=(0,1)\) in \(\operatorname{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), and \(B^{\prime}_{j}\) is an exceptional divisor of \(f\) for \(j=1,\ldots,l\). By Theorem 2.8, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0\) or \(1\). If \(G\cong\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), then \(G\) is one of \(\mathcal{A}G\) as a group. We assume that \(G\cong\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\). By Remark 4.5, there are integers \(\beta,a_{j}\geq 0\) such that
\[1+\frac{\beta-1}{\beta}=\frac{5}{6}a_{1}+\frac{1}{2}a_{2}+\frac{2}{3}a_{3}+ \frac{11}{12}a_{4}+\frac{11}{12}a_{5}.\]
Since \(G\cong\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\beta=1,2,3,4,6\), or \(12\). Since \(f_{*}B=3(1,0)+6(1,0)+2(1,1)+4(0,1)+4(0,1)\), the support of \(f_{*}B\) is simple normal crossing. Since each irreducible component of \(f_{*}B\) is smooth, \(a_{j}=0\) or \(1\) for each \(1\leq j\leq 5\). Moreover, exactly one of \(a_{1},a_{2}\) is non-zero, and exactly one of \(a_{4},a_{5}\) is non-zero. The integers which satisfy the above condition are \((\beta,a_{1},a_{2},a_{3})=(12,1,0,1)\) and \((a_{4},a_{5})=(1,0)\) or \((0,1)\). Therefore, \(f(e_{i})\not\in f_{*}B^{2}_{1,0}\) for \(i=1,\ldots,l\). By the fact that \(f_{*}B^{2}_{1,0}=(1,0)\) and \(f_{*}B_{1,1}=(1,1)\) in \(\operatorname{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) and the fact that \(f(e_{i})\not\in f_{*}B^{2}_{1,0}\) for \(i=1,\ldots,l\), we get that \(B^{2}_{1,0}\cap B_{1,1}\) is not an empty set, and hence \(p^{-1}(B^{2}_{1,0})\cap p^{-1}(B_{1,1})\) is not an empty set. Since \(G\cong\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(2\) is one. Since each ramification index of \(B^{2}_{1,0}\) and \(B_{1,1}\) is divisible by \(2\), by Theorem 2.8, there is a non-symplectic automorphism \(g\) of order \(2\) such that \(\operatorname{Fix}(g)\supset p^{-1}(B^{2}_{1,0})\) and \(\operatorname{Fix}(g)\supset p^{-1}(B_{1,1})\). Since \(p^{-1}(B^{2}_{1,0})\cap p^{-1}(B_{1,1})\neq\emptyset\) and the fixed locus of a non-symplectic involution is smooth, this is a contradiction. Therefore, if the numerical class of \(f_{*}B\) is (65), then \(G\) is one of \(\mathcal{A}G\) as a group.
As for the case of (65), if the numerical class of \(f_{*}B\) is one of (95,136,150,159, 235,247,253) of the list in section 6, then \(G\) is one of \(\mathcal{A}G\) as a group.
We assume that the numerical class of \(f_{*}B\) is (70) of the list in section 6. We denote \(B\) by \(2B^{1}_{1,0}+3B^{2}_{1,0}+6B^{3}_{1,0}+2B^{1}_{0,1}+4B^{2}_{0,1}+4B^{3}_{0,1}+\sum_{j=1}^{l}b^{\prime}_{j}B^{\prime}_{j}\) where \(f_{*}B^{i}_{1,0}=(1,0)\), \(f_{*}B^{i}_{0,1}=(0,1)\), and \(B^{\prime}_{j}\) is an exceptional divisor of \(f\) for \(j=1,\ldots,l\). By Theorem 2.8, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0,1\), or \(2\). There are integers \(\beta\geq 1\) and \(a_{j}\geq 0\) such that
\[1+\frac{\beta-1}{\beta}=\frac{1}{2}a_{1}+\frac{2}{3}a_{2}+\frac{5}{6}a_{3}+ \frac{1}{2}a_{4}+\frac{3}{4}a_{5}+\frac{3}{4}a_{6}.\]
Since \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0,1\), or \(2\), we get \(\beta=1,2,3,4,6\), or \(12\). Since \(f_{*}B=2(1,0)+3(1,0)+6(1,0)+2(0,1)+4(0,1)+4(0,1)\), the support of \(f_{*}B\) is simple normal crossing. Since each irreducible component of \(f_{*}B\) is smooth, \(a_{j}=0\) or \(1\) for each \(1\leq j\leq 6\), and by Proposition 4.3 exactly one of \(a_{1},a_{2},a_{3}\)
is non-zero, and exactly one of \(a_{4},a_{5},a_{6}\) is non-zero. From the above, \((\beta,a_{1},\ldots,a_{6})=(1,1,0,0,1,0,0)\). Therefore, \(f(e_{j})\in f_{*}(B^{1}_{1,0})\cap f_{*}(B^{1}_{0,1})\) for \(j=1,\ldots,l\). Since \(((1,0)\cdot(1,0))=0\), we get that \((p^{*}B^{i}_{1,0}\cdot p^{*}B^{i}_{1,0})=0\) for \(i=2,3\). Since \(X\) is a \(K3\) surface, the support of \(p^{*}B^{i}_{1,0}\) is a union of elliptic curves for \(i=2,3\). Since \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0,1\), or \(2\), the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is one, and hence there is a non-symplectic automorphism \(g\) of order \(3\) such that \(\text{Fix}(g)\) has at least two elliptic curves. By [1,14], this is a contradiction. Therefore, the numerical class of \(f_{*}B\) is not (70).
As for the case of (70), the numerical class of \(f_{*}B\) is not one of (75,143) of the list in section 6.
If the numerical class of \(f_{*}B\) is (72) of the list in section 6, then by Theorem 2.8, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus j}\) where \((i,j)\) is one of (0,1), (0,2), (1,1), (1,2), (2,1), (2,2), (3,1). We assume that \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\). Since \(G\) is generated by non-symplectic automorphisms of orders \(2\) and \(4\), \(G_{s}:=\{g\in G:\ g\ \text{is symplectic}\}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\). By the classification of finite symplectic group actions ([13,10,16]), there is no such \(G_{s}\) with \(G_{s}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\). Therefore, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus j}\) where \((i,j)\) is one of (0,1), (0,2), (1,1), (1,2), (2,1), (3,1), and if the numerical class of \(f_{*}B\) is (72), then \(G\) is one of \(\mathcal{A}G\) as a group.
As for the case of (72), if the numerical class of \(f_{*}B\) one of (74,78,111,144) of the list in section 6, then \(G\) is one of \(\mathcal{A}G\) as a group.
We assume that the numerical class of \(f_{*}B\) is (73) of the list in section 6. We denote \(B\) by \(2B^{1}_{1,0}+4B^{2}_{1,0}+4B^{3}_{1,0}+3B^{1}_{0,1}+3B^{2}_{0,1}+3B^{3}_{0,1}+\sum_{j=1}^{l}b^{\prime}_{j}B^{\prime}_{j}\). By Theorem 2.8, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0,1\), or \(2\). As in the case of (68), there are integers \(\beta\geq 1\) and \(a_{j}\geq 0\) such that
\[1+\frac{\beta-1}{\beta}=\frac{1}{2}a_{1}+\frac{3}{4}a_{2}+\frac{3}{4}a_{3}+ \frac{2}{3}a_{4}+\frac{2}{3}a_{5}+\frac{2}{3}a_{6},\]
where \(a_{j}=0\) or \(1\) for each \(1\leq j\leq 6\), \(\beta=1,2,3,4,6\), or \(12\), exactly one of \(a_{1},a_{2},a_{3}\) is non-zero, and exactly one of \(a_{4},a_{5},a_{6}\) is non-zero. However, no integers satisfy these conditions. Therefore, the numerical class of \(f_{*}B\) is not (73).
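The emptiness of this finite check can be confirmed directly. A verification sketch (ours, not in the original), under the constraints just stated:

```python
from fractions import Fraction as F

# Case (73): exactly one of a1,a2,a3 and exactly one of a4,a5,a6 equals 1,
# the rest vanish; beta is the order of an element of G, so
# beta is in {1, 2, 3, 4, 6, 12}.
targets = {1 + F(beta - 1, beta) for beta in (1, 2, 3, 4, 6, 12)}
sums = {x + y for x in (F(1, 2), F(3, 4)) for y in (F(2, 3),)}
print(targets & sums)            # set() -- no such integers exist
```

Here the possible single contributions are \(\frac{1}{2}\) or \(\frac{3}{4}\) from the first triple and \(\frac{2}{3}\) from the second, giving only \(\frac{7}{6}\) and \(\frac{17}{12}\), neither of which equals \(1+\frac{\beta-1}{\beta}\).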
As for the case of (73), the numerical class of \(f_{*}B\) is not one of (101,129,132, 152,160,170,171,172,173) of the list in section 6.
We assume that the numerical class of \(f_{*}B\) is (76) of the list in section 6. We denote \(B\) by \(2B^{1}_{1,0}+4B^{2}_{1,0}+4B^{3}_{1,0}+2B^{1}_{0,1}+2B^{2}_{0,1}+2B^{3}_{0,1}+2B^{4}_{0,1}+\sum_{i=1}^{n}b^{\prime}_{i}B^{\prime}_{i}\), where \(f_{*}B^{i}_{1,0}=(1,0)\), \(f_{*}B^{i}_{0,1}=(0,1)\) in \(\text{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\), and \(f_{*}B^{\prime}_{i}=0\). By Theorem 2.8, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/4\mathbb{Z}\), where \(i=0,1,2,3\), or \(4\). We assume that \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\oplus\mathbb{Z}/4\mathbb{Z}\). By Theorem 2.8, \(G=G^{1}_{1,0}\oplus G^{2}_{1,0}\oplus G^{1}_{0,1}\oplus G^{2}_{0,1}\oplus G^{3}_{0,1}\). As in the case of (70), we get that \(f(e_{i})\in f_{*}(B^{1}_{1,0})\cap f_{*}(B^{j}_{0,1})\) for each \(i=1,\ldots,m\), where \(j\in\{1,2,3,4\}\). Therefore, we get \((B^{2}_{1,0}\cdot B^{j}_{0,1})=1\). Let \(s\in G^{2}_{1,0}\) be a generator. Since \(G=G^{1}_{1,0}\oplus G^{2}_{1,0}\oplus G^{1}_{0,1}\oplus G^{2}_{0,1}\oplus G^{3}_{0,1}\), by Theorem 2.8, there is a non-symplectic automorphism \(t\in G^{j}_{0,1}\) for some \(j=1,2,3\) such that \(\text{Fix}(t\circ s)\) does not have a curve. Since \((B^{2}_{1,0}\cdot B^{j}_{0,1})=1\) and \(|G|=2^{4}\cdot 4\), we get that \(|p^{-1}(B^{2}_{1,0})\cap p^{-1}(B^{j}_{0,1})|=8\). By
[2, Proposition 1], the number of isolated points of \(\operatorname{Fix}(t\circ s)\) is \(4\). This is a contradiction. Therefore, if the numerical class of \(f_{*}B\) is (76), then \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0,1,2\), or \(3\), and hence \(G\) is one of \(\mathcal{A}G\) as a group.
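One way to organize the count of \(8\) points (a sketch, using that the support of \(B\) is simple normal crossing at the intersection point): a point of \(p^{-1}(B^{2}_{1,0})\cap p^{-1}(B^{j}_{0,1})\) has stabilizer \(G^{2}_{1,0}\oplus G^{j}_{0,1}\), of order \(4\cdot 2=8\), so the fibre of \(p\) over the single intersection point \(B^{2}_{1,0}\cap B^{j}_{0,1}\) consists of

\[\frac{|G|}{|G^{2}_{1,0}\oplus G^{j}_{0,1}|}=\frac{2^{4}\cdot 4}{8}=8\]

points.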
We assume that the numerical class of \(f_{*}B\) is (150) of the list in section 6. We denote \(B\) by \(3B_{1,0}+2B_{1,1}^{1}+6B_{1,1}^{2}+4B_{0,1}^{1}+12B_{0,1}^{2}+\sum_{j=1}^{l}b_{j}^{\prime}B_{j}^{\prime}\) where \(f_{*}B_{s,t}^{i}=sC+tF\) in \(\operatorname{Pic}(\mathbb{F}_{1})\), and \(B_{j}^{\prime}\) is an exceptional divisor of \(f\) for \(j=1,\ldots,l\). By Theorem 2.8, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus i}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) where \(i=0\) or \(1\). Then the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is one. By the above, for \(e_{i}\), there are integers \(\beta,a_{j}\geq 0\) such that
\[1+\frac{\beta-1}{\beta}=\frac{2}{3}a_{1}+\frac{1}{2}a_{2}+\frac{5}{6}a_{3}+ \frac{3}{4}a_{4}+\frac{11}{12}a_{5}.\]
Since \(G\cong\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\beta=1,2,3,4,6\), or \(12\). Since \(f_{*}B=3(1,0)+2(1,1)+6(1,1)+4(0,1)+12(0,1)\), the support of \(f_{*}B\) is simple normal crossing. Since each irreducible component of \(f_{*}B\) is smooth, \(a_{j}=0\) or \(1\) for each \(1\leq j\leq 5\). The only integers which satisfy the above condition are \((\beta,a_{1},a_{2},a_{3},a_{4},a_{5})=(4,0,0,1,0,1)\). Therefore, \(f(e_{i})\not\in f_{*}B_{1,0}\cap f_{*}B_{0,1}^{2}\) for \(i=1,\ldots,l\), and hence \(p^{-1}(B_{1,0})\cap p^{-1}(B_{0,1}^{2})\) is not an empty set. Since \(G_{1,0}\cong\mathbb{Z}/3\mathbb{Z}\), \(G_{0,1}^{2}\cong\mathbb{Z}/12\mathbb{Z}\), and \(p^{-1}(B_{1,0})\cap p^{-1}(B_{0,1}^{2})\) is not an empty set, we get that the number of subgroups of \(G\) which are generated by a non-symplectic automorphism of order \(3\) is at least two. This is a contradiction. Therefore, the numerical class of \(f_{*}B\) is not (150).
As for the case of (150), the numerical class of \(f_{*}B\) is not (159) of the list in section 6.
## 5. Abelian groups of Enriques surfaces with smooth quotient
Let \(E\) be an Enriques surface and \(H\) be a finite abelian subgroup of \(\operatorname{Aut}(E)\) such that \(E/H\) is smooth. Let \(X\) be the \(K3\)-cover of \(E\), and \(G:=\{s\in\operatorname{Aut}(X):s\text{ is a lift of some }h\in H\}\). Then \(G\) is a finite abelian group, \(G\) has a non-symplectic involution whose fixed locus is empty, \(X/G=E/H\), and the branch divisor of \(G\) is that of \(H\).
**Theorem 5.1**.: _Let \(E\) be an Enriques surface and \(H\) be a finite subgroup of \(\operatorname{Aut}(E)\). We assume that the quotient space \(E/H\) is smooth and there is a birational morphism from \(E/H\) to a Hirzebruch surface \(\mathbb{F}_{n}\), where \(0\leq n\). Then \(0\leq n\leq 4\)._
Proof.: Let \(f:E/H\to\mathbb{F}_{n}\) be a birational morphism, and \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) be the branch divisor of the quotient map \(E\to E/H\). Since the canonical line bundle of an Enriques surface is numerically trivial, by Theorem 2.7, the numerical class of \(f_{*}B\) is one of the list in Section 3. By [11, Proposition 4.5], \(G\) does not have a non-symplectic automorphism of odd order. Therefore, \(b_{i}\) is an even number for each \(i=1,\ldots,k\) by Theorem 2.8. By the list of numerical classes in Section 3, we get the claim.
**Theorem 5.2**.: _For each numerical classes (6,8,9,11,12,13,16,89,90,91,94,96,97,98, 101,203,206,209,210,281) of the list in section 6, there are an Enriques surface \(E\) and a finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\) is a Hirzebruch surface \(\mathbb{F}_{n}\), and the numerical class of the branch divisor \(B\) of the quotient map \(E\to E/H\) is (6,8,9,11,12,13,16,89,90,91,94,96,97,98,101,203,206,209,210,281)._
_Furthermore, for a pair \((E,H)\) of an Enriques surface \(E\) and a finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\), if \(E/H\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor
of the quotient map \(E\to E/H\) is (6,8,9,11,12,13,16,89,90,91,94,96,97,98,101,203, 206, 209, 210, 281), then \(H\) is \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\), \(\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), \(\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\), \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\), \(\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}\), in order, as a group._
Proof.: Let \(X\) be the \(K3\)-cover of \(E\), \(G:=\{s\in\operatorname{Aut}(X):s\text{ is a lift of some }h\in H\}\), and \(p:X\to X/G\) be the quotient map. Then \(G\) is a finite abelian group, \(X/G\cong\mathbb{F}_{n}\), and the branch divisor of \(p\) is \(B\). Since \(b_{i}\) is a power of two for each \(i=1,\ldots,k\), \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus s}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus t}\oplus\mathbb{Z}/8\mathbb{Z}^{\oplus u}\) where \(s,t,u\geq 0\). By Theorem 2.8 and the assumption that \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is an empty set, we get \(s+t+u\geq 3\), and hence the numerical class of \(B\) is one of (6,8,9,10,11,12,13,15,16,19,20,81,82,87,88,89,90,91,94,96,97,98,100,101,199, 200,203,206,208,209,210,281,282) of the list in section 6.
We assume that the numerical class of \(B\) is (6). We denote \(B\) by \(2B^{1}_{1,0}+2B^{2}_{1,0}+2B_{2,2}+2B^{1}_{0,1}+2B^{2}_{0,1}\). By Proposition 3.9, \(G=G^{1}_{1,0}\oplus G_{2,2}\oplus G^{1}_{0,1}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\). Let \(s,t,u\in G\) be generators of \(G^{1}_{1,0}\), \(G^{1}_{0,1}\), and \(G_{2,2}\) respectively. Then the non-symplectic automorphisms of \(G\) are \(s\), \(t\), \(u\), and \(s\circ t\circ u\).
From here, we will show that \(\operatorname{Fix}(s\circ t\circ u)\) is an empty set. We assume that the only curve of \(\operatorname{Fix}(s)\) is \(p^{-1}(B^{1}_{1,0})\). Since \(s\) is a non-symplectic automorphism of order \(2\), the quotient space \(X/\langle s\rangle\) is a smooth rational surface. The quotient map \(q:X/\langle s\rangle\to X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) is the Galois cover such that the branch divisor is \(2B^{2}_{1,0}+2B_{2,2}+2B^{1}_{0,1}+2B^{2}_{0,1}\), and the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. By Theorem 3.5, there is a Galois cover \(g:Y\to X/G\) whose branch divisor is \(2B_{2,2}+2B^{1}_{0,1}+2B^{2}_{0,1}\) and whose Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. By Theorem 2.4, there is a Galois cover \(h:X/\langle s\rangle\to Y\) such that \(q=g\circ h\). Since the degree of \(q\) is \(4\) and that of \(g\) is \(4\), \(h\) is an isomorphism. Since the branch divisor of \(q\) is not that of \(g\), this is a contradiction. Therefore, \(\operatorname{Fix}(s)\) is \(p^{-1}(B^{1}_{1,0})\cup p^{-1}(B^{2}_{1,0})\). In the same way, \(\operatorname{Fix}(t)\) is \(p^{-1}(B^{1}_{0,1})\cup p^{-1}(B^{2}_{0,1})\). Therefore, by Theorem 2.8, \(\operatorname{Fix}(s\circ t\circ u)\) is an empty set, and hence \(E:=X/\langle s\circ t\circ u\rangle\) is an Enriques surface. Let \(H:=G/\langle s\circ t\circ u\rangle\). Then \(E/H\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), \(H\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\), and the branch divisor of \(H\) is \(B\). It is easy to show that for an Enriques surface \(E\) and a finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), if the numerical class of the branch divisor of \(H\) is (6), then \(H\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\).
As for the case of (6), the claim is established for (89).
We assume that the numerical class of \(B\) is (8). We denote \(B\) by \(4B^{1}_{1,0}+4B^{2}_{1,0}+2B_{1,1}+4B^{1}_{0,1}+4B^{2}_{0,1}\). By Proposition 3.9, \(G=G^{1}_{1,0}\oplus G_{1,1}\oplus G^{1}_{0,1}\cong\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\). Let \(s,t,u\in G\) be generators of \(G^{1}_{1,0}\), \(G^{1}_{0,1}\), and \(G_{1,1}\) respectively. By Theorem 2.8, \(s\) and \(t\) are non-symplectic automorphisms of order \(4\) and \(u\) is a non-symplectic automorphism of order \(2\). By Theorem 2.8, \(G^{2}_{1,0}\) is generated by \(s\circ t^{2x}\circ u^{y}\) where \(x,y=0\) or \(1\). Since \((s\circ t^{2x}\circ u^{y})^{2}=s^{2}\) for \(x,y=0\) or \(1\), we get that \(\operatorname{Fix}(s^{2})\) is \(p^{-1}(B^{1}_{1,0})\cup p^{-1}(B^{2}_{1,0})\). As for the case of (6), we get the claim for (8).
As for the case of (8), the claim is established for (101).
We assume that the numerical class of \(B\) is (9). We denote \(B\) by \(4B^{1}_{1,0}+4B^{2}_{1,0}+2B_{1,2}+2B^{1}_{0,1}+2B^{2}_{0,1}\). By Proposition 3.9, \(G=G^{1}_{1,0}\oplus G_{1,2}\oplus G^{1}_{0,1}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\). Let \(s,t,u\in G\) be generators of \(G^{1}_{1,0}\), \(G^{1}_{0,1}\), and \(G_{1,2}\) respectively. As for the case of (6), \(\mathrm{Fix}(t)\) is \(p^{-1}(B^{1}_{0,1})\cup p^{-1}(B^{2}_{0,1})\). As for the case of (8), \(\mathrm{Fix}(s)\) is the support of \(p^{-1}(B^{1}_{1,0})\cup p^{-1}(B^{2}_{1,0})\). As for the case of (6), we get the claim for (9).
We assume that the numerical class of \(B\) is (10). We denote \(B\) by \(2B^{1}_{1,0}+2B^{2}_{1,0}+2B^{3}_{1,0}+2B_{1,4}\). Let \(s_{1},s_{2},t\in G\) be generators of \(G^{1}_{1,0}\), \(G^{2}_{1,0}\), and \(G_{1,4}\) respectively. By Proposition 3.9, \(G=G^{1}_{1,0}\oplus G^{2}_{1,0}\oplus G_{1,4}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\). Then the non-symplectic involutions of \(G\) are \(s_{1},s_{2},t\) and \(s_{1}\circ s_{2}\circ t\).
We assume that \(\mathrm{Fix}(s_{1})\) is \(p^{-1}(B^{1}_{1,0})\cup p^{-1}(B^{3}_{1,0})\). Then \(X/\langle s_{1}\rangle\) is a smooth rational surface, and the quotient map \(q:X/\langle s_{1}\rangle\to X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) is the Galois cover such that the branch divisor is \(2B^{2}_{1,0}+2B_{1,4}\), and the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) as a group. Since \(\mathbb{P}^{1}\times\mathbb{P}^{1}\backslash B^{2}_{1,0}\) is simply connected, in the same way as in the proof of Theorem 2.8, this is a contradiction. Therefore, \(\mathrm{Fix}(s_{i})\) is \(p^{-1}(B^{i}_{1,0})\) for \(i=1,2\), and hence \(\mathrm{Fix}(s_{1}\circ s_{2}\circ t)\) is \(p^{-1}(B^{3}_{1,0})\). In particular, every non-symplectic involution of \(G\) has non-empty fixed locus, so there is no Enriques surface \(E\) with a finite abelian subgroup \(H\) of \(\mathrm{Aut}(E)\) such that \(E/H\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor of \(H\) is (10).
Arguing as in the case of (10), we get the claim for (87) and (100).
We assume that the numerical class of \(B\) is (11). We denote \(B\) by \(2B^{1}_{1,0}+2B^{2}_{1,0}+2B^{3}_{1,0}+2B_{1,1}+2B^{1}_{0,1}+2B^{2}_{0,1}+2B^{3}_{0,1}\). By Proposition 3.9, \(G=G^{1}_{1,0}\oplus G^{2}_{1,0}\oplus G_{1,1}\oplus G^{1}_{0,1}\oplus G^{2}_{0,1}\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 5}\), and hence the number of non-symplectic automorphisms of order \(2\) in \(G\) is \(16\). By Theorem 2.8, \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is empty. Furthermore, it is easy to show that for an Enriques surface \(E\) and a finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) with \(E/H\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), if the numerical class of the branch divisor of \(H\) is (11), then \(H\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\).
Arguing as in the case of (11), the claim is established for (12), (13), (16), (91), (94), (96), (97), (98), (206), and (210).
We assume that the numerical class of \(B\) is (15). We denote \(B\) by \(2B^{1}_{1,0}+2B^{2}_{1,0}+2B^{1}_{1,2}+2B^{2}_{1,2}\). By Proposition 3.11, \(G=G^{1}_{1,0}\oplus G^{1}_{1,2}\oplus G^{2}_{1,2}\). Let \(s,t,u\in G\) be generators of \(G^{1}_{1,0}\), \(G^{1}_{1,2}\), and \(G^{2}_{1,2}\) respectively. Then the non-symplectic automorphisms of order \(2\) in \(G\) are \(s\), \(t\), \(u\), and \(s\circ t\circ u\). We assume that \(\operatorname{Fix}(s\circ t\circ u)\) is empty. Since \((B^{i}_{1,0}\cdot B^{j}_{1,2})\neq 0\) for \(i,j=1,2\), \(\operatorname{Fix}(s)\) is \(p^{-1}(B^{1}_{1,0})\cup p^{-1}(B^{2}_{1,0})\). Since \(((B^{1}_{1,0}+B^{2}_{1,0})\cdot B^{1}_{1,2})=4\), \(X/(G^{1}_{1,0}\oplus G^{1}_{1,2})\) is smooth. Since \(G=G^{1}_{1,0}\oplus G^{1}_{1,2}\oplus G^{2}_{1,2}\), the branch divisor of the quotient map \(X/(G^{1}_{1,0}\oplus G^{1}_{1,2})\to X/G\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) is \(2B^{2}_{1,0}\), and its degree is \(2\). Since \(\frac{B^{2}_{1,0}}{2}\not\in\operatorname{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) and \(X/(G^{1}_{1,0}\oplus G^{1}_{1,2})\) is smooth, this is a contradiction. Therefore, there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor of \(H\) is (15).
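Here and below, intersection numbers on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) are computed from \(C^{2}=F^{2}=0\) and \((C\cdot F)=1\); for instance,

\[((B^{1}_{1,0}+B^{2}_{1,0})\cdot B^{1}_{1,2})=2\,(C\cdot(C+2F))=2(0+2)=4.\]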
Arguing as in the case of (15), there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor of \(H\) is (88).
We assume that the numerical class of \(B\) is (19). We denote \(B\) by \(2B^{1}_{1,1}+2B^{2}_{1,1}+2B^{3}_{1,1}+2B^{4}_{1,1}\). By Proposition 3.15, \(G=G^{1}_{1,1}\oplus G^{2}_{1,1}\oplus G^{3}_{1,1}\). Let \(s_{i}\in G^{i}_{1,1}\) be a generator of \(G^{i}_{1,1}\) for \(i=1,2,3,4\). By Theorem 2.8, \(\operatorname{Fix}(s_{i})\) is nonempty for \(i=1,2,3,4\). Since \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\), we have \(s_{4}=s_{1}\circ s_{2}\circ s_{3}\), and hence \(G\) does not have a non-symplectic automorphism of order \(2\) whose fixed locus is empty. Therefore, there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the numerical class of the branch divisor of \(H\) is (19).
Arguing as in the case of (19), there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{F}_{n}\) and the numerical class of the branch divisor of \(H\) is one of (19), (20), (81), (82), and (200).
We assume that the numerical class of \(B\) is (90). We denote \(B\) by \(2B_{1,0}+2B_{1,1}+2B_{2,2}+4B^{1}_{0,1}+4B^{2}_{0,1}\). By Corollary 3.12, \(G=G_{1,1}\oplus G_{2,2}\oplus G^{1}_{0,1}\). Let \(q:X/\langle G_{1,0},G_{1,1},G_{2,2}\rangle\to X/G\cong\mathbb{F}_{1}\) be the quotient map. Then the branch divisor of \(q\) is \(4B^{1}_{0,1}+4B^{2}_{0,1}\). By Theorem 2.4, \(X/\langle G_{1,0},G_{1,1},G_{2,2}\rangle\cong\mathbb{F}_{4}\), and the branch divisor of \(\langle G_{1,0},G_{1,1},G_{2,2}\rangle\) is \(2B_{1,0}+2q^{*}B_{1,1}+2q^{*}B_{2,2}\). Let \(s,t,u\in G\) be generators of \(G_{1,1}\), \(G_{2,2}\), and \(G^{1}_{0,1}\) respectively. Then \(\operatorname{Fix}(s)\) is the union of the supports of \(p^{*}B_{1,0}\) and \(p^{*}B_{1,1}\). Arguing as in the case of (6), we get the claim.
Arguing as in the case of (90), the claim is established for (203), (209), and (281).
We assume that the numerical class of \(B\) is (199). We denote \(B\) by \(2B_{1,0}+2B_{1,4}+2B^{1}_{1,2}+2B^{2}_{1,2}\). By Corollary 3.16, \(G=G_{1,4}\oplus G^{1}_{1,2}\oplus G^{2}_{1,2}\). Let \(s,t,u\in G\) be generators of \(G_{1,4}\), \(G^{1}_{1,2}\), and \(G^{2}_{1,2}\) respectively. Then the non-symplectic automorphisms of \(G\) are \(s\), \(t\), \(u\), and \(s\circ t\circ u\). Since the fixed loci of \(s\), \(t\), and \(u\) are nonempty, by Theorem 2.8, if \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is empty, then it is \(s\circ t\circ u\). We assume that \(\operatorname{Fix}(s\circ t\circ u)\) is empty. Then we may assume that \(\operatorname{Fix}(t)\) is \(p^{-1}(B_{1,0})\cup p^{-1}(B^{1}_{1,2})\). Since \(((B_{1,0}+B^{1}_{1,2})\cdot B_{1,4})=6\), we get \(|p^{-1}(B_{1,0}\cup B^{1}_{1,2})\cap p^{-1}(B_{1,4})|=12\). Since \(s\circ t\) is a symplectic automorphism of order \(2\), its fixed locus consists of exactly eight points, while \(p^{-1}(B_{1,0}\cup B^{1}_{1,2})\cap p^{-1}(B_{1,4})\) is contained in \(\operatorname{Fix}(s\circ t)\); this is a contradiction. Therefore, there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{F}_{2}\) and the numerical class of the branch divisor of \(H\) is (199).
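For reference, the intersection number used above is computed on \(\mathbb{F}_{2}\), where \(C^{2}=-2\), \(F^{2}=0\), and \((C\cdot F)=1\):

\[((B_{1,0}+B^{1}_{1,2})\cdot B_{1,4})=(C\cdot(C+4F))+((C+2F)\cdot(C+4F))=2+4=6.\]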
We assume that the numerical class of \(B\) is (208). We denote \(B\) by \(4B_{1,0}+2B_{1,3}+4B_{1,2}+2B^{1}_{0,1}+2B^{2}_{0,1}\). By Proposition 3.18, \(G=G_{1,3}\oplus G_{1,2}\oplus G^{1}_{0,1}\). Let \(s\in G_{1,3}\), \(t\in G_{1,2}\), and \(u\in G^{1}_{0,1}\) be generators. Then the non-symplectic automorphisms of order \(2\) in \(G\) are \(s\), \(t^{2}\), \(u\), and \(s\circ t^{2}\circ u\). Since the fixed loci of \(s\), \(t^{2}\), and \(u\) are nonempty by Theorem 2.8, if \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is empty, then it is \(s\circ t^{2}\circ u\).
We assume that \(\operatorname{Fix}(s\circ t^{2}\circ u)\) is empty. Then \(\operatorname{Fix}(t^{2})\) is \(p^{-1}(B_{1,0})\cup p^{-1}(B_{1,2})\), and \(\operatorname{Fix}(u)\) is \(p^{-1}(B^{1}_{0,1})\cup p^{-1}(B^{2}_{0,1})\). Since \((B_{1,3}\cdot B^{1}_{0,1}+B^{2}_{0,1})=4\), we get that \(X/(G_{1,3}\oplus G^{1}_{0,1})\) is smooth, the branch divisor of the quotient map \(f:X/(G_{1,3}\oplus G^{1}_{0,1})\to X/G\cong\mathbb{F}_{2}\) is \(4B_{1,0}+4B_{1,2}\), and the Galois group is \(\mathbb{Z}/4\mathbb{Z}\), induced by \(t\). Furthermore, since \((B_{1,3}\cdot(B_{1,0}+B_{1,2}))=4\) and \((B_{1,3}\cdot B^{1}_{0,1}+B^{2}_{0,1})=4\), \(X/\langle s,t^{2},u\rangle\) is smooth, the branch divisor of the quotient map \(g:X/\langle s,t^{2},u\rangle\to X/G\cong\mathbb{F}_{2}\) is \(2B_{1,0}+2B_{1,2}\), and the Galois group is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) as a group. Let \(E_{1,0}\) and \(E_{1,2}\) be the supports of \(g^{*}B_{1,0}\) and \(g^{*}B_{1,2}\) respectively. Then \(g^{*}B_{1,0}=2E_{1,0}\) and \(g^{*}B_{1,2}=2E_{1,2}\). Moreover, by Theorem 3.5, there is a double cover \(h:X/(G_{1,3}\oplus G_{0,1}^{1})\to X/\langle s,t^{2},u\rangle\) such that \(f=g\circ h\) and whose branch divisor is \(2E_{1,0}+2E_{1,2}\). Since \(X/(G_{1,3}\oplus G_{0,1}^{1})\) and \(X/\langle s,t^{2},u\rangle\) are smooth, we get \(\frac{E_{1,0}+E_{1,2}}{2}\in\operatorname{Pic}(X/\langle s,t^{2},u\rangle)\). Since \(g^{*}B_{1,2}=g^{*}B_{1,0}+2g^{*}F\) in \(\operatorname{Pic}(X/\langle s,t^{2},u\rangle)\),
\[2E_{1,2}=2E_{1,0}+2g^{*}F\text{ in }\text{Pic}(X/\langle s,t^{2},u\rangle).\]
Since \(X/\langle s,t^{2},u\rangle\) is a smooth rational surface, \(\operatorname{Pic}(X/\langle s,t^{2},u\rangle)\) is torsion free. Therefore, we get
\[E_{1,2}=E_{1,0}+g^{*}F\text{ in }\text{Pic}(X/\langle s,t^{2},u\rangle),\]
and hence
\[E_{1,2}+E_{1,0}=2E_{1,0}+g^{*}F\text{ in }\text{Pic}(X/\langle s,t^{2},u\rangle).\]
Since \(\frac{E_{1,2}+E_{1,0}}{2}\in\operatorname{Pic}(X/\langle s,t^{2},u\rangle)\), we get
\[\frac{g^{*}F}{2}\in\text{Pic}(X/\langle s,t^{2},u\rangle).\]
Since \((B_{1,0}\cdot F)=1\), the degree of \(g\) is two, and \(\frac{g^{*}B_{1,0}}{2}\) and \(\frac{g^{*}F}{2}\) are elements of \(\operatorname{Pic}(X/\langle s,t^{2},u\rangle)\), this is a contradiction. Therefore, there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{F}_{2}\) and the numerical class of the branch divisor of \(H\) is (208).
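Indeed, since \(g\) has degree \(2\), the projection formula gives

\[\Big(\frac{g^{*}B_{1,0}}{2}\cdot\frac{g^{*}F}{2}\Big)=\frac{1}{4}\,(g^{*}B_{1,0}\cdot g^{*}F)=\frac{2\,(B_{1,0}\cdot F)}{4}=\frac{1}{2}\notin\mathbb{Z},\]

which is impossible for two classes in the Picard group of a smooth projective surface.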
We assume that the numerical class of \(B\) is (282). We denote \(B\) by \(2B_{1,0}+2B_{1,4}^{1}+2B_{1,4}^{2}+2B_{1,4}^{3}\). By Corollary 3.16, \(G=\oplus_{i=1}^{3}G_{1,4}^{i}\). Let \(s_{i}\in G_{1,4}^{i}\) be a generator for \(i=1,2,3\). Then the non-symplectic automorphisms of \(G\) are \(s_{1}\), \(s_{2}\), \(s_{3}\), and \(s_{1}\circ s_{2}\circ s_{3}\). Since the fixed locus of \(s_{i}\) is nonempty for each \(i=1,2,3\) by Theorem 2.8, if \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is empty, then it is \(s_{1}\circ s_{2}\circ s_{3}\). We assume that \(\operatorname{Fix}(s_{1}\circ s_{2}\circ s_{3})\) is empty. Then we may assume that \(\operatorname{Fix}(s_{1})\) is \(p^{-1}(B_{1,0})\cup p^{-1}(B_{1,4}^{1})\). Since \(((B_{1,0}+B_{1,4}^{1})\cdot B_{1,4}^{2})=4\), we get that \(X/(G_{1,4}^{1}\oplus G_{1,4}^{2})\) is smooth, and the branch divisor of the quotient map \(X/(G_{1,4}^{1}\oplus G_{1,4}^{2})\to X/G\cong\mathbb{F}_{4}\) is \(2B_{1,4}^{3}\). This is a contradiction, since the quotient map is a double cover while \(\frac{B^{3}_{1,4}}{2}\notin\operatorname{Pic}(\mathbb{F}_{4})\). Therefore, there are no Enriques surface \(E\) and finite abelian subgroup \(H\) of \(\operatorname{Aut}(E)\) such that \(E/H\cong\mathbb{F}_{4}\) and the numerical class of the branch divisor of \(H\) is (282).
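Here the intersection number is computed on \(\mathbb{F}_{4}\), where \(C^{2}=-4\), \(F^{2}=0\), and \((C\cdot F)=1\):

\[((B_{1,0}+B^{1}_{1,4})\cdot B^{2}_{1,4})=(C\cdot(C+4F))+((C+4F)\cdot(C+4F))=(-4+4)+(-4+8)=4.\]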
By Theorem 5.2, we get Theorem 1.9.
**Theorem 5.3**.: _Let \(E\) be an Enriques surface and \(H\) be a finite abelian subgroup of \(\operatorname{Aut}(E)\). If \(E/H\) is smooth, then \(H\) is isomorphic, as a group, to one of the groups in \(\mathcal{A}G(E)\)._
Proof.: Let \(X\) be the \(K3\)-cover of \(E\), let \(G:=\{s\in\operatorname{Aut}(X):s\text{ is a lift of some }h\in H\}\), and let \(p:X\to X/G\) be the quotient map. Then \(G\) is a finite abelian group, \(X/G=E/H\), and we denote the branch divisor of \(p\) by \(B\). We classified \(H\) in the case \(E/H\cong\mathbb{F}_{n}\) in Theorem 5.2. From here, we assume that \(E/H\) is smooth and is isomorphic to neither \(\mathbb{F}_{n}\) nor \(\mathbb{P}^{2}\). Since \(G\) does not have a non-symplectic automorphism of odd order ([11]), by Theorems 2.8 and 1.5, \(G\cong\mathbb{Z}/2\mathbb{Z}^{\oplus s}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus t}\oplus\mathbb{Z}/8\mathbb{Z}^{\oplus u}\) where \(s,t,u\geq 0\). Since \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is empty, and \(G\) is generated by non-symplectic automorphisms whose fixed loci contain a curve, we get \(s+t+u\geq 3\). Therefore, \(G\) is one of the following as a group:
\[\{\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/4\mathbb{Z}^{\oplus 3},\ \ \mathbb{Z}/2 \mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus g},\ \mathbb{Z}/2 \mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}:\]
\[3\leq a\leq 5,\ (f,g)=(1,2),(2,1),(3,1)\}.\]
If \(G\) is one of
\[\{\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus \mathbb{Z}/4\mathbb{Z}^{\oplus g}:3\leq a\leq 5,\ (f,g)=(1,2),(2,1),(3,1)\}\]
as a group, then the quotient group \(G/K\) of \(G\) by a subgroup \(K\cong\mathbb{Z}/2\mathbb{Z}\) is one of
\[\{\mathbb{Z}/2\mathbb{Z}^{\oplus a},\ \mathbb{Z}/4\mathbb{Z}^{\oplus 2},\ \mathbb{Z}/2\mathbb{Z}^{\oplus f}\oplus\mathbb{Z}/4\mathbb{Z}:a=2,3,4,\ f=1,2\}\subset\mathcal{A}G(E)\]
as a group. Let \(f:X/G\to\mathbb{F}_{n}\) be the birational morphism. We assume that \(G\cong\mathbb{Z}/4\mathbb{Z}^{\oplus 3}\). By this assumption and Theorem 2.8, the numerical class of \(f_{*}B\) must be (142). We denote \(B\) by \(2B_{1,0}+4B_{1,4}^{1}+4B_{1,4}^{2}+4B_{0,1}^{1}+4B_{0,1}^{2}+\sum_{i=1}^{n}b_{i}^{\prime}B_{i}^{\prime}\), where \(f_{*}B_{1,0}=C\), \(f_{*}B_{1,4}^{i}=C+4F\), \(f_{*}B_{0,1}^{i}=F\), and \(f_{*}B_{i}^{\prime}=0\) in \(\operatorname{Pic}(\mathbb{F}_{4})\). Since \(G\cong\mathbb{Z}/4\mathbb{Z}^{\oplus 3}\), by Theorem 2.8 we get \(G=G_{1,4}^{1}\oplus G_{1,4}^{2}\oplus G_{0,1}^{1}\). Let \(s\in G_{1,4}^{1}\), \(t\in G_{1,4}^{2}\), and \(u\in G_{0,1}^{1}\) be generators. The non-symplectic involutions in \(G\) are \(s^{2}\), \(t^{2}\), \(u^{2}\), and \(s^{2}\circ t^{2}\circ u^{2}\). Since the fixed loci of \(s^{2}\), \(t^{2}\), and \(u^{2}\) are nonempty, if \(G\) has a non-symplectic automorphism of order \(2\) whose fixed locus is empty, then it is \(s^{2}\circ t^{2}\circ u^{2}\). If the fixed locus of \(s^{2}\circ t^{2}\circ u^{2}=(s\circ t\circ u)^{2}\) is empty, then the fixed locus of \(s\circ t\circ u\) is also empty. By [2], this is a contradiction. Therefore, \(G\) is not isomorphic to \(\mathbb{Z}/4\mathbb{Z}^{\oplus 3}\).
We assume that \(G\cong\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}\). By Theorem 2.8, the numerical class of \(f_{*}B\) must be (101). By the proof of Theorem 4.9, \(f\) is an isomorphism, i.e., \(X/G\cong\mathbb{F}_{1}\). By Theorem 5.2, we get the claim.
By Theorems 5.2 and 5.3, we get Theorem 1.10.
## 6. Non-Abelian case
Let \(S\) be a smooth rational surface which is neither \(\mathbb{P}^{2}\) nor an Enriques surface. In Theorem 6.5, we describe when there exist a \(K3\) surface \(X^{\prime}\) and a finite subgroup \(G^{\prime}\) of \(\operatorname{Aut}(X^{\prime})\) such that \(X^{\prime}/G^{\prime}\cong S\), using the results of [6, Proposition 5.1 and Theorem 5.2]. Theorem 6.5 does not assume that \(G^{\prime}\) is an abelian group, but it also does not determine the group structure of \(G^{\prime}\). We first make some preparations and then state Theorem 6.5. Let
\[\mathcal{A}S(K3):=\begin{Bmatrix}\mathbb{Z}/n\mathbb{Z},\ \mathbb{Z}/m\mathbb{Z}^{ \oplus 2},\ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/k\mathbb{Z},\ \mathbb{Z}/2\mathbb{Z}^{\oplus l}\\ n=2,\ldots,8,\ m=2,3,4,\ k=4,6,\ l=3,4\end{Bmatrix}\]
**Theorem 6.1**.: ([13]) _Let \(X\) be a \(K3\) surface, and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\). If \(G\) is symplectic, then \(G\in\mathcal{A}S(K3)\) as a group. Conversely, for each abelian group \(G^{\prime}\in\mathcal{A}S(K3)\), there is a \(K3\) surface \(X^{\prime}\) such that \(G^{\prime}\) acts faithfully and symplectically on \(X^{\prime}\)._
**Theorem 6.2**.: ([6, Proposition 5.1]) _Let \(X\) be a \(K3\) surface, and \(G\) be a finite abelian subgroup of \(\operatorname{Aut}(X)\) acting symplectically on \(X\). Let \(Y\) be the \(K3\) surface given by the minimal resolution of the quotient space \(X/G\), and let \(E_{G}\) be the lattice spanned by the curves on \(Y\) arising from the resolution of the singularities of \(X/G\)._
_Then \(E_{G}\) is one of the following root lattices:_
_Let \(M_{G}\) be the minimal primitive sublattice of \(\operatorname{NS}(Y)\) which contains \(E_{G}\). Then \(M_{G}\) is an overlattice of \(E_{G}\) of finite index \(\operatorname{r}_{G}\), with the following properties:_
\begin{tabular}{|c||c|c|c|c|c|} \hline \(G\) & \(\mathbb{Z}/2\mathbb{Z}\) & \(\mathbb{Z}/3\mathbb{Z}\) & \(\mathbb{Z}/4\mathbb{Z}\) & \(\mathbb{Z}/5\mathbb{Z}\) & \(\mathbb{Z}/6\mathbb{Z}\) \\ \hline \(\operatorname{r}_{G}\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) \\ \hline \(\operatorname{rank}M_{G}\) & \(8\) & \(12\) & \(14\) & \(16\) & \(16\) \\ \hline \(M_{G}^{\vee}/M_{G}\) & \(\mathbb{Z}/2\mathbb{Z}^{\oplus 6}\) & \(\mathbb{Z}/3\mathbb{Z}^{\oplus 4}\) & \((\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z})^{\oplus 2}\) & \(\mathbb{Z}/5\mathbb{Z}^{\oplus 2}\) & \(\mathbb{Z}/6\mathbb{Z}^{\oplus 2}\) \\ \hline\hline \(G\) & \(\mathbb{Z}/7\mathbb{Z}\) & \(\mathbb{Z}/8\mathbb{Z}\) & \(\mathbb{Z}/2\mathbb{Z}^{\oplus 2}\) & \(\mathbb{Z}/2\mathbb{Z}^{\oplus 3}\) & \(\mathbb{Z}/2\mathbb{Z}^{\oplus 4}\) \\ \hline \(\operatorname{r}_{G}\) & \(7\) & \(8\) & \(2^{2}\) & \(2^{3}\) & \(2^{4}\) \\ \hline \(\operatorname{rank}M_{G}\) & \(18\) & \(18\) & \(12\) & \(14\) & \(15\) \\ \hline \(M_{G}^{\vee}/M_{G}\) & \(\mathbb{Z}/7\mathbb{Z}\) & \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) & \(\mathbb{Z}/2\mathbb{Z}^{\oplus 8}\) & & \(\mathbb{Z}/2\mathbb{Z}^{\oplus 7}\) \\ \hline\hline \(G\) & \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\) & \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/6\mathbb{Z}\) & \(\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\) & \(\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\) & \\ \hline \(\operatorname{r}_{G}\) & \(8\) & \(12\) & \(3^{2}\) & \(4^{2}\) & \\ \hline \(\operatorname{rank}M_{G}\) & \(16\) & \(18\) & \(16\) & \(18\) & \\ \hline \(M_{G}^{\vee}/M_{G}\) & \((\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/6\mathbb{Z})^{\oplus 2}\) & \(\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/6\mathbb{Z}\) & \(\mathbb{Z}/3\mathbb{Z}^{\oplus 4}\) & \(\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\) & \\ \hline \end{tabular}
**Theorem 6.3**.: ([6, Theorem 5.2]) _For \(G\in\mathcal{A}S(K3)\) and a \(K3\) surface \(X\) such that \(G\) acts faithfully and symplectically on \(X\), a \(K3\) surface \(Y\) is a minimal resolution of the quotient space \(X/G\) if and only if \(M_{G}\) is primitively embedded in \(\operatorname{NS}(Y)\)._
In what follows, we explain Theorem 6.5. Let \(X\) be a \(K3\) surface, and \(G\) be a finite subgroup of \(\operatorname{Aut}(X)\) such that \(X/G\) is smooth. If \(X/G\) is neither \(\mathbb{P}^{2}\) nor an Enriques surface, then \(X/G\) is a Hirzebruch surface or a blow-up of one. Let \(p:X\to X/G\) be the quotient morphism, and \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) be the branch divisor of \(p:X\to X/G\). Since a Hirzebruch surface \(\mathbb{F}_{n}\) has a unique fibre structure, there is a fibre structure \(f:X/G\to\mathbb{P}^{1}\) induced by that of the Hirzebruch surface. The general fibre of \(f\circ p:X\to\mathbb{P}^{1}\) is then a disjoint union of elliptic curves.
Let \(G_{s}\) be the symplectic subgroup of \(G\), i.e., the subgroup generated by the symplectic automorphisms in \(G\). We set \(G_{h}:=\{g\in G_{s}\,|\,g\) does not permute the irreducible components of \((f\circ p)^{-1}(u)\) for any \(u\in\mathbb{P}^{1}\}\).
**Lemma 6.4**.: _The group \(G_{h}\) is an abelian group._
Proof.: Since \(G_{h}\) is a finite group of symplectic automorphisms, \(\bigcup_{g\in G_{h}\setminus\{\mathrm{id}\}}\operatorname{Fix}(g)\) is a finite set. Hence there is a point \(u\in\mathbb{P}^{1}\) such that \(G_{h}\) acts freely on \((f\circ p)^{-1}(u)\), which is a disjoint union of elliptic curves whose components are preserved by \(G_{h}\). A fixed-point-free automorphism of an elliptic curve is a translation, so \(G_{h}\) embeds into the translation group of one component; hence \(G_{h}\) is an abelian group.
Let \(X_{s}:=X/G_{s}\) be the quotient space, \(p_{1}:X_{s}\to X/G=X_{s}/(G/G_{s})\) be the quotient map, and \(f_{1}:=f\circ p_{1}:X_{s}\to\mathbb{P}^{1}\) be the fibre structure. Then \(p_{1}:X_{s}\to X/G=X_{s}/(G/G_{s})\) is a cyclic cover whose branch divisor is \(B\). Let \(X_{h}:=X/G_{h}\) be the quotient space, \(p_{2}:X_{h}\to X_{s}=X_{h}/(G_{s}/G_{h})\) be the quotient map.
For \(f_{1}\circ p_{2}=f\circ p_{1}\circ p_{2}:X_{h}\to\mathbb{P}^{1}\), the Stein factorization yields a fibre structure \(h:X_{h}\to\mathbb{P}^{1}\) and a finite map \(k:\mathbb{P}^{1}\to\mathbb{P}^{1}\) such that \(f\circ p_{1}\circ p_{2}=k\circ h\), i.e., the following diagram is commutative:
\[\begin{array}{ccc}X_{h}&\xrightarrow{\ \ p_{2}\ \ }&X_{s}\\ \Big\downarrow{\scriptstyle h}&&\Big\downarrow{\scriptstyle f\circ p_{1}}\\ \mathbb{P}^{1}&\xrightarrow{\ \ k\ \ }&\mathbb{P}^{1}\end{array}\]
Then \(X_{h}\) is the normalization of the fibre product of \(f\circ p_{1}:X_{s}\to\mathbb{P}^{1}\) and \(k:\mathbb{P}^{1}\to\mathbb{P}^{1}\). By the definition of \(G_{h}\), we get that \(k:\mathbb{P}^{1}\to\mathbb{P}^{1}\) is a Galois cover whose Galois group is isomorphic to \(G_{s}/G_{h}\) as a group. Recall that \(G_{h}\) is an abelian group and that, by Theorem 4.6, \(p_{1}:X_{s}\to X/G\) is a cyclic cover of order \(b\) whose branch divisor is \(B\) and whose Galois group is \(G/G_{s}\), where \(b\) is the least common multiple of \(b_{1},\dots,b_{k}\). By the above, we have the following theorem.
**Theorem 6.5**.: _Let \(S\) be a smooth rational surface which is neither \(\mathbb{P}^{2}\) nor an Enriques surface. We fix a fibre structure \(f:S\to\mathbb{P}^{1}\), and an effective divisor \(B:=\sum_{i=1}^{k}b_{i}B_{i}\) on \(S\). Let \(b\) be the least common multiple of \(b_{1},\dots,b_{k}\)._
_Then there are a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G=S\) and the branch divisor of the quotient morphism \(p:X\to X/G\) is \(B\) if and only if there are a Galois cover \(k:\mathbb{P}^{1}\to\mathbb{P}^{1}\) and a cyclic cover \(q:T\to S\) of degree \(b\) whose branch divisor is \(B\) such that the minimal resolution \(X^{\prime}\) of the normalization of the fibre product of \(f\circ q:T\to\mathbb{P}^{1}\) and \(k:\mathbb{P}^{1}\to\mathbb{P}^{1}\) is a \(K3\) surface, and \(M_{G^{\prime}}\) is primitively embedded in \(\operatorname{NS}(X^{\prime})\) for some \(G^{\prime}\in\mathcal{A}S(K3)\)._
## 7. The list of numerical classes
Here, we give the list of the numerical classes of effective divisors \(B=\sum_{i=1}^{k}b_{i}B_{i}\) on \(\mathbb{F}_{n}\) such that \(B_{i}\) is a smooth curve for each \(i=1,\dots,k\) and \(K_{\mathbb{F}_{n}}+\sum_{i=1}^{k}\frac{b_{i}-1}{b_{i}}B_{i}=0\) in \(\operatorname{Pic}(\mathbb{F}_{n})\).
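As a check of this condition, consider the class \(2(4C+4F)\) on \(\mathbb{F}_{0}\), where \(K_{\mathbb{F}_{0}}=-2C-2F\):

\[K_{\mathbb{F}_{0}}+\tfrac{2-1}{2}(4C+4F)=-2C-2F+2C+2F=0.\]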
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G=\mathbb{F}_{0}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[3(3C+3F)\ \ \mathbb{Z}/3\mathbb{Z} \tag{2}\] \[3C+3C+3(C+3F)\ \ \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (3) \[3C+3C+3(C+F)+3F+3F\ \ \mathbb{Z}/3\mathbb{Z}^{\oplus 3}\] (4) \[2(4C+4F)\ \ \mathbb{Z}/2\mathbb{Z}\] (5) \[2C+2C+2(2C+4F)\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (6) \[2C+2C+2(2C+2F)+2F+2F\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (7) \[4C+4C+2(C+4F)\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z} \tag{1}\]
(8) \[4C+4C+2(C+F)+4F+4F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\] (9) \[4C+4C+2(C+2F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\] (10) \[2C+2C+2C+2(C+4F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (11) \[2C+2C+2C+2(C+F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 5}\] (12) \[2C+2C+2C+2(C+2F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 4}\] (13) \[2C+2C+2C+2(C+F)+4F+4F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/4\mathbb{Z}\] (14) \[2(2C+2F)+2(2C+2F) \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (15) \[2C+2C+2(C+2F)+2(C+2F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (16) \[2C+2C+2(C+F)+2(C+F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 4}\] (17) \[3(C+F)+3(C+F)+3(C+F) \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (18) \[3C+3(C+F)+3(C+2F) \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (19) \[2(C+F)+2(C+F)+2(C+F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (20) \[2C+2(C+F)+2(C+2F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (21) \[2(C+F)+4(2C+2F)\] (22) \[3(C+F)+3(2C+2F)\] (23) \[3(C+2F)+3(2C+F)\] (24) \[3C+3(2C+3F)\] (25) \[2C+2(3C+4F)\] (26) \[2(C+F)+2(3C+3F)\] (27) \[2(C+2F)+2(3C+2F)\] (28) \[2(C+3F)+2(3C+F)\] (29) \[2(C+F)+3(C+F)+6(C+F)\] (30) \[2(C+F)+4(C+F)+4(C+F)\] (31) \[2C+4(2C+2F)+2F\] (32) \[4C+2(2C+2F)+4F\] (33) \[2C+2(3C+3F)+2F\] (34) \[3C+6C+2(C+4F)\] (35) \[4C+2(C+F)+4(C+2F)\] (36) \[2C+2(C+F)+2(2C+3F)\] (37) \[2C+2(C+2F)+2(2C+2F)\] (38) \[2C+2(C+3F)+2(2C+F)\] (39) \[3C+3(2C+2F)+3F\] (40) \[2C+6C+3(C+3F)\] (41) \[2(C+F)+2(C+F)+2(2C+2F)\] (42) \[2(C+2F)+2(C+F)+2(2C+F)\] (43) \[2C+4C+4(C+2F)+2F\] (44) \[3C+6C+2(C+3F)+2F\] (45) \[4C+4C+2(C+3F)+2F\] (46) \[2C+2C+2(2C+3F)+2F\] (47) \[2C+2C+2(2C+3F)+2F\] (48) \[3C+6C+2(C+4F)\] (49) \[4C+2(C+F)+4(C+2F)\] (50) \[2C+2(C+F)+2(2C+3F)\] (51) \[2C+2C+2(C+F)+2(2C+2F)\] (52) \[2C+2(C+2F)+2(2C+F)+2(2C+F)\] (53) \[2C+2C+2(C+3F)+2F\] (54) \[2C+2C+2(2C+3F)+2F\] (55) \[2C+2C+2(2C+3F)+2F\] (56) \[2C+2C+2(2C+3F)+2F\] (57) \[2C+2(2C+F)+2(2C+F)+2(2C+F)\] (58) \[2C+2C+2(2C+2F)+2F+2F\] (59) \[2C+2C+2(2C+2F)+2(2C+2F)\] (60) \[2C+2C+2(2C+2F)+2(2C+2F)\] (61) \[2C+2C+2(C+F)+2(2C+F)+2(2C+2F)\] (62) \[2C+2C+2(2C+F)+2(2C+2F)\] (63) \[2C+2C+2(2C+2F)+2(2C+2F)\] (64) \[2C+2C+2(2C+2F)+2(2C+2F)\] (65) \[2C+2C+2(2C+2F)+2(2C+2F)\] (66) \[2C+2C+2(2C+2F)+2(2C+2F)\] (67) \[2C+2C+2(2C+2F)+2(2C+2F)\] (68) \[2C+2C+2(2C+2F)
\[2C+4(C+F)+4(C+F)+2F \tag{48}\] \[4C+2(C+F)+4(C+F)+4F\] (49) \[2C+3(C+F)+6(C+F)+2F\] (50) \[6C+2(C+F)+3(C+F)+6F\] (51) \[2C+2(C+F)+2(2C+2F)+2F\] (52) \[2C+2(C+2F)+2(2C+F)+2F\] (53) \[3C+2(C+F)+6(C+F)+3F\] (54) \[3C+3(C+F)+3(C+F)+3F\] (55) \[3C+3C+3(C+2F)+3F\] (56) \[2C+6C+3(C+2F)+3F\] (57) \[2C+2C+2(C+F)+2(C+3F)\] (58) \[2C+2(C+F)+2(C+F)+2(C+F)+2F\] (59) \[2C+2C+2C+2(C+3F)+2F\] (60) \[2C+2C+2(C+F)+2(C+2F)+2F\] (61) \[2C+4C+4(C+F)+2F+4F\] (62) \[2C+3C+6(C+F)+2F+3F\] (63) \[2C+6C+3(C+F)+2F+6F\] (64) \[3C+6C+2(C+F)+3F+6F\] (65) \[3C+6C+2(C+F)+4F+4F\] (66) \[2C+6C+3(C+F)+3F+3F\] (67) \[3C+6C+2(C+2F)+2F+2F\] (68) \[3C+6C+2(C+F)+2F+2F+2F\] (69) \[2C+3C+6C+2F+3F+6F\] (70) \[2C+3C+6C+2F+4F+4F\] (71) \[2C+3C+6C+3F+3F+3F\] (72) \[2C+4C+4C+2F+4F+4F\] (73) \[2C+4C+4C+3F+3F+3F\] (74) \[3C+3C+3C+3F+3F+3F\] (75) \[2C+3C+6C+2F+2F+2F\] (76) \[2C+4C+4C+2F+2F+2F+2F\] (77) \[3C+3C+3C+2F+2F+2F+2F\] (78) \[2C+2C+2C+2C+2F+2F+2F+2F \tag{47}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{1}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
(81) \[2C+2(C+2F)+2(C+2F)+2(C+2F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (82) \[2(C+3F)+2(C+F)+2(C+F)+2(C+F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (83) \[3(3C+3F)+2F+2F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\] (84) \[3C+3(2C+2F)+6F+6F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (85) \[2(4C+4F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (86) \[2C+2(3C+3F)+4F+4F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\] (87) \[2C+2(3C+3F)+2F+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (88) \[2C+2(C+F)+2(2C+3F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (89) \[2(2C+2F)+2(2C+2F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (90) \[2C+2(C+F)+2(2C+2F)+4F+4F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\] (91) \[2C+2(C+F)+2(2C+2F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 4}\] (92) \[3(C+F)+3(C+F)+3(C+F)+6F+6F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (93) \[3C+3(C+F)+3(C+F)+6F+6F \mathbb{Z}/2\mathbb{Z}\oplus 2\mathbb{Z}3\mathbb{Z}^{\oplus 3}\] (94) \[2C+2(C+2F)+2(C+F)+2(C+F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 4}\] (95) \[6C+2(C+F)+3(C+F)+12F+12F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\oplus \mathbb{Z}/4\mathbb{Z}\] (96) \[2(C+F)+2(C+F)+2(C+F)+2(C+F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 4}\] (97) \[2C+2(C+F)+2(C+F)+2(C+F)+4F+4F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/4\mathbb{Z}\] (98) \[2C+2(C+F)+2(C+F)+2(C+F)+2F+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 5}\] (99) \[2C+4(2C+2F)+4F+4F \mathbb{Z}/4\mathbb{Z}^{\oplus 2}\] (100) \[2C+4(2C+2F)+2F+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\] (101) \[4C+2(C+F)+4(C+F)+8F+8F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\oplus\mathbb{Z}/8\mathbb{Z}\] (102) \[3(2C+2F)+3(C+F)+2F+2F\] (103) \[4(2C+2F)+2(C+3F)\] (104) \[4(2C+2F)+2(C+2F)+2F\] (105) \[4(2C+2F)+2(C+F)+2F+2F\] (106) \[2(C+3F)+3(C+F)+6(C+F)\] (107) \[2(C+2F)+3(C+F)+6(C+F)+2F\] (108) \[2(C+F)+3(C+F)+6(C+F)+2F+2F\] (109) \[2(C+3F)+4(C+F)+4(C+F)\] (110) \[2(C+2F)+4(C+F)+4(C+F)+2F\] (111) \[2(C+F)+4(C+F)+4(C+F)+2F+2F\] (112) \[2(3C+aF)+2(C+(6-a)F),a\geq 3\] (113) \[2(3C+4F)+2(C+F)+2F\] (114) \[2(3C+3F)+2(C+2F)+2F\] (115) \[2(3C+3F)+2(C+F)+2F+2F\] (116) \[2(2C+3F)+2(2C+3F)\] (117)
\[2(2C+3F)+2(2C+2F)+2F \tag{119}\] \[2(2C+4F)+2(C+F)+2(C+F)\] (120) \[2(2C+3F)+2(C+2F)+2(C+F)\] (121) \[2(2C+3F)+2(C+F)+2(C+F)+2F\] (122) \[2(2C+2F)+2(C+2F)+2(C+2F)\] (123) \[2(2C+2F)+2(C+2F)+2(C+F)+2F\] (124) \[2(2C+2F)+2(C+F)+2(C+F)+2F+2F\] (125) \[2(C+2F)+2(C+2F)+2(C+F)+2(C+F)\] (126) \[2(C+2F)+2(C+F)+2(C+F)+2(C+F)+2F\] (127) \[3C+3(2C+3F)+2F+2F\] (128) \[3C+3(2C+2F)+2F+2F+3F\] (129) \[3C+3(2C+2F)+4F+12F\] (130) \[2C+4(2C+4F)\] (131) \[2C+4(2C+3F)+4F\] (132) \[2C+4(2C+2F)+3F+6F\] (133) \[2C+3(C+2F)+6(C+2F)\] (134) \[2C+3(C+2F)+6(C+F)+6F\] (135) \[2C+3(C+F)+6(C+2F)+3F\] (136) \[2C+3(C+F)+6(C+F)+4F+4F\] (137) \[2C+3(C+F)+6(C+F)+3F+6F\] (138) \[2C+3(C+F)+6(C+F)+2F+2F\] (139) \[2C+4(C+2F)+4(C+2F)\] (140) \[2C+4(C+F)+4(C+3F)\] (141) \[2C+4(C+F)+4(C+2F)+4F\] (142) \[2C+4(C+F)+4(C+F)+4F+4F\] (143) \[2C+4(C+F)+4(C+F)+3F+6F\] (144) \[2C+4(C+F)+4(C+F)+2F+2F\] (145) \[3C+2(C+3F)+6(C+F)+3F\] (146) \[3C+2(C+2F)+6(C+F)+2F+3F\] (147) \[3C+2(C+F)+6(C+3F)\] (148) \[3C+2(C+F)+6(C+2F)+6F\] (149) \[3C+2(C+F)+6(C+F)+6F+6F\] (150) \[3C+2(C+F)+6(C+F)+4F+12F\] (151) \[3C+2(C+F)+6(C+F)+2F+2F+3F\] (152) \[3C+3(C+F)+3(C+F)+4F+12F\] (153) \[3C+3(C+F)+3(C+F)+2F+2F\] (154) \[3C+3(C+F)+3(C+F)+2F+2F+3F \tag{118}\]
(155) \[4C+2(C+3F)+4(C+2F)\] (156) \[4C+2(C+3F)+4(C+F)+4F\] (157) \[4C+2(C+2F)+4(C+2F)+2F\] (158) \[4C+2(C+2F)+4(C+F)+2F+4F\] (159) \[4C+2(C+F)+4(C+F)+6F+12F\] (160) \[4C+2(C+F)+4(C+F)+5F+20F\] (161) \[4C+2(C+F)+4(C+2F)+2F+2F\] (162) \[4C+2(C+F)+4(C+F)+2F+2F+4F\] (163) \[6C+2(C+3F)+3(C+F)+6F\] (164) \[6C+2(C+2F)+3(C+2F)+3F\] (165) \[6C+2(C+2F)+3(C+F)+3F+3F\] (166) \[6C+2(C+2F)+3(C+F)+2F+6F\] (167) \[6C+2(C+F)+3(C+3F)+2F\] (168) \[6C+2(C+F)+3(C+3F)+2F\] (169) \[6C+2(C+F)+3(C+2F)+2F+3F\] (170) \[6C+2(C+F)+3(C+F)+10F+15F\] (171) \[6C+2(C+F)+3(C+F)+9F+18F\] (172) \[6C+2(C+F)+3(C+F)+8F+24F\] (173) \[6C+2(C+F)+3(C+F)+7F+42F\] (174) \[6C+2(C+F)+3(C+F)+2F+3F+3F\] (175) \[6C+2(C+F)+3(C+F)+2F+2F+6F\] (176) \[2C+2(3C+6F)\] (177) \[2C+2(3C+5F)+2F\] (178) \[2C+2(3C+4F)+2F+2F\] (179) \[2C+2(3C+3F)+3F+6F\] (180) \[2C+2(C+4F)+2(2C+2F)\] (181) \[2C+2(C+3F)+2(2C+3F)\] (182) \[2C+2(C+2F)+2(2C+4F)\] (183) \[2C+2(C+F)+2(2C+5F)\] (184) \[2C+2(C+3F)+2(2C+3F)+2F\] (185) \[2C+2(C+2F)+2(2C+2F)+2F+2F\] (186) \[2C+2(C+F)+2(2C+4F)+2F\] (187) \[2C+2(C+F)+2(2C+2F)+3F+6F\] (188) \[2C+2(C+4F)+2(C+F)+2(C+F)\] (189) \[2C+2(C+3F)+2(C+2F)+2(C+F)\] (190) \[2C+2(C+3F)+2(C+F)+2(C+F)+2F\] (191) \[2C+2(C+2F)+2(C+2F)+2(C+F)+2F\] (192) \[2C+2(C+F)+2(C+F)+2(C+F)+2F\] (193) \[2C+2(C+F)+2(C+F)+2(C+F)+3F+6F\] (194)
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{2}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[3(3C+6F) \mathbb{Z}/3\mathbb{Z} \tag{195}\] \[2(4C+8F) \mathbb{Z}/2\mathbb{Z}\] (196) \[2(2C+4F)+2(2C+4F) \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (197) \[2C+2(C+2F)+2(2C+6F) \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (198) \[3(C+2F)+3(C+2F)+3(C+2F) \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (199) \[2C+2(C+4F)+2(C+2F)+2(C+2F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (200) \[2(C+2F)+2(C+2F)+2(C+2F) \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (201) \[3C+3(2C+4F)+3F+3F \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (202) \[2C+2(3C+6F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (203) \[2C+2(C+2F)+2(2C+4F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (204) \[3C+3(C+3F)+3(C+3F) \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (205) \[3C+3(C+2F)+3(C+2F)+3F+3F \mathbb{Z}/3\mathbb{Z}^{\oplus 3}\] (206) \[2C+2(C+2F)+2(C+2F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 4}\] (207) \[2C+4(2C+4F)+2F+2F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\] (208) \[4C+2(C+3F)+4(C+2F)+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\] (209) \[4C+2(C+2F)+4(C+2F)+4F+4F \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}^{\oplus 2}\] (210) \[4C+2(C+2F)+4(C+2F)+2F+2F+2F \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/4\mathbb{Z}\] (211) \[6C+2(C+2F)+3(C+2F)+6F+6F \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (212) \[3(C+2F)+3(2C+4F)\] (213) \[2(C+2F)+4(2C+4F)\] (214) \[2(C+2F)+3(C+2F)+6(C+2F)\] (215) \[2(C+2F)+4(C+2F)+4(C+2F)\] (216) \[2(3C+6F)+2(C+2F)\] (217) \[2(2C+4F)+2(C+2F)+2(C+2F)\] (218) \[3C+3(2C+6F)\] (219) \[3C+3(2C+5F)+3F\] (220) \[3C+3(2C+4F)+2F+6F\] (221) \[2C+3(C+2F)+6(C+2F)+2F+2F\] (222) \[2C+4(C+2F)+4(C+2F)+2F+2F\] (223) \[3C+2(C+3F)+6(C+3F)\] (224) \[3C+2(C+3F)+6(C+2F)+6F\] (225) \[3C+2(C+2F)+6(C+3F)+2F\] (226) \[3C+2(C+2F)+6(C+2F)+3F+3F\] (227) \[3C+2(C+2F)+6(C+2F)+2F+6F\] (228) \[3C+3(C+2F)+3(C+4F) \tag{194}\]
\[3C+3(C+2F)+3(C+3F)+3F \tag{230}\] \[3C+3(C+2F)+3(C+2F)+2F+6F\] (231) \[4C+2(C+5F)+4(C+2F)\] (232) \[4C+2(C+4F)+4(C+2F)+2F\] (233) \[4C+2(C+2F)+4(C+4F)\] (234) \[4C+2(C+2F)+4(C+3F)+4F\] (235) \[4C+2(C+2F)+4(C+2F)+3F+6F\] (236) \[6C+2(C+4F)+3(C+3F)\] (237) \[6C+2(C+4F)+3(C+2F)+3F\] (238) \[6C+2(C+3F)+3(C+3F)+2F\] (239) \[6C+2(C+3F)+3(C+2F)+2F+3F\] (240) \[6C+2(C+2F)+3(C+3F)+2F+2F\] (241) \[2C+2(3C+8F)\] (242) \[2C+2(3C+7F)+2F\] (243) \[2C+2(C+4F)+2(2C+4F)\] (244) \[2C+2(C+3F)+2(2C+5F)\] (245) \[2C+2(C+3F)+2(2C+4F)+2F\] (246) \[2C+2(C+2F)+2(2C+5F)+2F\] (247) \[6C+2(C+2F)+3(C+2F)+4F+12F\] (248) \[6C+2(C+2F)+3(C+2F)+2F+2F+3F\] (249) \[2C+2(C+3F)+2(C+3F)+2(C+2F)\] (250) \[2C+2(C+3F)+2(C+2F)+2(C+2F)+2F \tag{229}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{3}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[3C+3(2C+6F)+2F+2F\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z} \tag{252}\] \[3C+3(C+3F)+3(C+3F)+2F+2F\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (253) \[6C+2(C+3F)+3(C+3F)+4F+4F\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\] (254) \[6C+2(C+3F)+3(C+3F)+2F+2F+2F\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\oplus\mathbb{Z}/3\mathbb{Z}\] (255) \[2C+4(2C+6F)+2F\] (256) \[2C+3(C+3F)+6(C+3F)+2F\] (257) \[2C+4(C+3F)+4(C+3F)+2F\] (258) \[3C+2(C+5F)+6(C+3F)\] (259) \[3C+2(C+4F)+6(C+3F)+2F\] (260) \[3C+2(C+3F)+6(C+3F)+2F+2F\] (261) \[4C+2(C+4F)+4(C+4F) \tag{251}\]
\[4C+2(C+4F)+4(C+3F)+4F \tag{263}\] \[4C+2(C+3F)+4(C+4F)+2F\] (264) \[4C+2(C+3F)+4(C+3F)+2F+4F\] (265) \[6C+2(C+6F)+3(C+3F)\] (266) \[6C+2(C+5F)+3(C+3F)+2F\] (267) \[6C+2(C+4F)+3(C+3F)+2F+2F\] (268) \[6C+2(C+3F)+3(C+4F)+6F\] (269) \[6C+2(C+3F)+3(C+3F)+3F+6F\] (270) \[2C+2(3C+10F)\] (271) \[2C+2(3C+9F)+2F\] (272) \[2C+2(C+4F)+2(2C+6F)\] (273) \[2C+2(C+3F)+2(2C+7F)\] (274) \[2C+2(C+3F)+2(2C+6F)+2F\] (275) \[2C+2(C+4F)+2(C+3F)+2(C+3F)\] (276) \[2C+2(C+3F)+2(C+3F)+2F \tag{262}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{4}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[2C+2(3C+12F)\ \ \mathbb{Z}/2\mathbb{Z} \tag{278}\] \[2C+4(2C+8F)\ \ \mathbb{Z}/4\mathbb{Z}\] (279) \[2C+2(C+4F)+2(2C+8F)\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\] (280) \[4C+2(C+6F)+4(C+4F)\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4\mathbb{Z}\] (281) \[4C+2(C+4F)+4(C+4F)+2F+2F\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 2}\oplus\mathbb{Z}/4\mathbb{Z}\] (282) \[2C+2(C+4F)+2(C+4F)+2(C+4F)\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 3}\] (283) \[6C+2(C+4F)+3(C+4F)+3F+3F\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (284) \[3C+3(2C+9F)\] (285) \[3C+3(2C+8F)+3F\] (286) \[2C+3(C+4F)+6(C+4F)\] (287) \[2C+4(C+4F)+4(C+4F)\] (288) \[3C+2(C+4F)+6(C+4F)+3F\] (289) \[3C+3(C+4F)+3(C+5F)\] (290) \[3C+3(C+4F)+3(C+4F)+3F\] (291) \[4C+2(C+5F)+4(C+4F)+2F\] (292) \[6C+2(C+5F)+3(C+4F)+6F\] (293) \[6C+2(C+4F)+3(C+6F)\] (294) \[6C+2(C+4F)+3(C+5F)+3F\] (295) \[6C+2(C+4F)+3(C+4F)+2F+6F \tag{277}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{5}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[4C+2(C+5F)+4(C+6F) \tag{297}\] \[4C+2(C+5F)+4(C+5F)+4F\] (298) \[6C+2(C+6F)+3(C+6F)\] (299) \[6C+2(C+6F)+3(C+5F)+3F\] (300) \[6C+2(C+5F)+3(C+6F)+2F\] (301) \[6C+2(C+5F)+3(C+5F)+2F+3F \tag{296}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{6}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[3C+3(2C+12F)\ \ \mathbb{Z}/3\mathbb{Z} \tag{303}\] \[3C+3(C+6F)+3(C+6F)\ \ \mathbb{Z}/3\mathbb{Z}^{\oplus 2}\] (304) \[6C+2(C+6F)+3(C+6F)+2F+2F\ \ \mathbb{Z}/2\mathbb{Z}^{\oplus 2} \oplus\mathbb{Z}/3\mathbb{Z}\] (305) \[3C+2(C+6F)+6(C+6F)\] (306) \[4C+2(C+7F)+4(C+6F)\] (307) \[4C+2(C+6F)+4(C+6F)+2F\] (308) \[6C+2(C+8F)+3(C+6F)\] (309) \[6C+2(C+7F)+3(C+6F)+2F \tag{302}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{7}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[6C+2(C+7F)+3(C+7F)+6F \tag{310}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{8}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[4C+2(C+8F)+4(C+8F)\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/4 \mathbb{Z} \tag{312}\] \[6C+2(C+8F)+3(C+9F)\] (313) \[6C+2(C+8F)+3(C+8F)+3F \tag{311}\]
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{9}\), then by Theorem 2.7 the numerical class of \(B\) is one of the following:
\[6C+2(C+10F)+3(C+9F) \tag{315}\] \[6C+2(C+9F)+3(C+9F)+2F \tag{314}\]
By Theorem 2.7, there exist no \(K3\) surface \(X\) and finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{l}\) for \(l=10,11\).
If there exist a \(K3\) surface \(X\) and a finite subgroup \(G\) of \(\operatorname{Aut}(X)\) such that \(X/G\cong\mathbb{F}_{12}\), then by Theorem 2.7 the numerical class of \(B\) is the following:
\[6C+2(C+12F)+3(C+12F)\ \ \mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z} \tag{316}\] |
2305.00576 | Joint Learning of Policy with Unknown Temporal Constraints for Safe
Reinforcement Learning | In many real-world applications, safety constraints for reinforcement
learning (RL) algorithms are either unknown or not explicitly defined. We
propose a framework that concurrently learns safety constraints and optimal RL
policies in such environments, supported by theoretical guarantees. Our
approach merges a logically-constrained RL algorithm with an evolutionary
algorithm to synthesize signal temporal logic (STL) specifications. The
framework is underpinned by theorems that establish the convergence of our
joint learning process and provide error bounds between the discovered policy
and the true optimal policy. We showcased our framework in grid-world
environments, successfully identifying both acceptable safety constraints and
RL policies while demonstrating the effectiveness of our theorems in practice. | Lunet Yifru, Ali Baheri | 2023-04-30T21:15:07Z | http://arxiv.org/abs/2305.00576v1 | # Joint Learning of Policy with Unknown Temporal Constraints for Safe Reinforcement Learning
###### Abstract
In many real-world applications, safety constraints for reinforcement learning (RL) algorithms are either unknown or not explicitly defined. We propose a framework that concurrently learns safety constraints and optimal RL policies in such environments, supported by theoretical guarantees. Our approach merges a logically-constrained RL algorithm with an evolutionary algorithm to synthesize signal temporal logic (STL) specifications. The framework is underpinned by theorems that establish the convergence of our joint learning process and provide error bounds between the discovered policy and the true optimal policy. We showcased our framework in grid-world environments, successfully identifying both acceptable safety constraints and RL policies while demonstrating the effectiveness of our theorems in practice.
1 West Virginia University
2 Rochester Institute of Technology
[email protected], [email protected]
## Introduction
RL has emerged as a powerful computational approach for training agents to achieve complex objectives through interactions within stochastic environments [12]. RL algorithms have demonstrated significant success in a wide range of applications and domains [23, 24]. However, when deploying RL policies in real-world scenarios, particularly those involving safety-critical operations, ensuring the safety of the learning process becomes a paramount concern. Traditional RL algorithms tend to focus on reward maximization, which may inadvertently lead to violation of safety constraints. Safe RL aims to address this challenge by learning policies that not only maximize the expected return but also respect safety constraints throughout the learning process. One promising avenue of research in safe RL involves the use of formal methods, such as temporal logic, for specifying safety constraints in a mathematically rigorous manner. The use of temporal logic constraints in the reward function can enable RL agents to acquire policies that are not only efficient but also secure. However, this approach assumes the availability of accurate temporal logic specifications, which may not always be the case, especially in complex real-world environments. In this brief, we propose a novel framework for jointly learning RL policies and safety specifications. Our approach combines the strengths of RL for policy optimization with computational techniques for discovering temporal logic constraints from data. This joint learning strategy allows us to efficiently derive an optimal policy and a suitable safety constraint for a given environment, even in situations where the safety constraints are not explicitly provided in advance.
## Related Work
**Safe RL.** Safe RL has garnered significant attention in recent years, as researchers aim to address safety concerns associated with deploying RL agents in safety-critical domains [13, 14, 1, 1]. A prevalent approach to safe RL involves formulating the problem as a constrained optimization task, where the primary objective is to maximize the expected return while satisfying given safety constraints [1]. Another direction in safe RL is risk-sensitive RL, which aims to balance the trade-off between exploration, exploitation, and risk management [12]. Risk-sensitive RL algorithms incorporate risk measures, such as conditional value-at-risk (CVaR) [10] and risk envelope [11], to guide the learning process. An additional approach to ensure safety in RL is through shielding, which intervenes in the agent's actions when it might violate safety constraints [1]. Integrating formal methods, like temporal logic and Lyapunov-based techniques, into RL algorithms has emerged as a promising direction for safe RL [1, 12, 13].
**STL Mining.** STL has emerged as an essential formalism for specifying complex temporal properties and constraints in various applications, including robotics and cyber-physical systems. In recent years, researchers have focused on inferring or mining STL specifications from data, to facilitate the development of safe and robust systems. A key approach to mining STL from data is the use of algorithmic techniques, such as optimization-based algorithms [1] and machine learning methods [1]. Optimization-based techniques seek to minimize an objective function that captures the distance between the candidate STL formulas and the given data traces. Data-driven techniques have shown promise in learning STL specifications from data. Another direction in mining STL
from data is the development of automated, scalable, and robust techniques for the discovery of interpretable STL specifications [14]. [1] provides a comprehensive survey of the various techniques for mining STL specifications from data.
## Methodology
We cast the joint learning of a policy with unknown specifications as a bi-level optimization problem [13]. In this formulation, the upper-level optimization aims to infer the correct STL safety constraint, while the lower-level optimization focuses on learning the optimal policy under the inferred constraint. A human expert assists the learning by labeling trajectories based on their safety. In this context, safety is attained when a trajectory achieves the environmental objective without violating any safety constraints, i.e., the trace should have a positive robustness value against the true safety constraint. These components are called upon iteratively to simultaneously identify the optimal policy and a suitable STL constraint with the aid of the human expert. The outer loop, an evolutionary algorithm, is designed to infer both the template and the parameters of an STL specification that can classify the labeled dataset. This method is inspired by the work in [12], which has been shown to produce simpler, more interpretable outputs, as well as an improved misclassification rate, compared to those in [1, 13]. The algorithm implements the following procedures: random generation of the initial STL population, evaluation of fitness \(\mathcal{F}\) following Eq. 1, ranking of population members based on fitness, discarding of the bottom half of the population, and application of genetic alterations such as mutation and crossover. For a positively labeled trace \(X_{p}\) and a negatively labeled trace \(X_{n}\) in their respective positive and negative datasets, \(D_{p}\) and \(D_{n}\), the fitness function is
\[\mathcal{F}(\phi)=N_{\rho(\phi)^{+}\,|\,X_{p}}+N_{\rho(\phi)^{-}\,|\,X_{n}}+\left|\,\overline{\rho}(\phi)_{D_{p}}-\overline{\rho}(\phi)_{D_{n}}\,\right| \tag{1}\]
where \(\mathcal{F}\) is the fitness function for the STL formula \(\phi\). The first term in Eq. 1 is the number of true positive classifications on the positive samples, the second term is the number of true negative classifications on the negative samples, and the third term is the absolute difference between the average robustness values \(\overline{\rho}(\phi)\) of the samples in \(D_{p}\) and \(D_{n}\).
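As a concrete illustration, Eq. 1 can be evaluated as follows. This is a minimal sketch in Python, assuming a `robustness(phi, trace)` routine (e.g., from an off-the-shelf STL monitoring library) and non-empty lists `D_p`, `D_n` of expert-labeled traces; the function and argument names are ours, not part of the original implementation.

```python
import numpy as np

def fitness(phi, D_p, D_n, robustness):
    """Fitness of a candidate STL formula `phi`, following Eq. 1.

    D_p / D_n: traces labeled safe / unsafe by the human expert.
    robustness(phi, trace) -> float: STL robustness degree of a trace.
    """
    rho_p = np.array([robustness(phi, x) for x in D_p])
    rho_n = np.array([robustness(phi, x) for x in D_n])
    true_pos = int((rho_p > 0).sum())   # safe traces classified as safe
    true_neg = int((rho_n < 0).sum())   # unsafe traces classified as unsafe
    separation = abs(rho_p.mean() - rho_n.mean())
    return true_pos + true_neg + separation
```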
The inner loop consists of logically-constrained Q-learning, in which the reward is based solely on the robustness of a trajectory over an episode against a given STL constraint. The reward function is defined in Eq. 2.
\[\mathcal{R}=\begin{cases}\rho(\phi_{[0:T]}),&\text{if }\rho(\phi_{[0:T]})<0\\ \rho(\phi_{[0:T]})+100,&\text{if }\rho(\phi_{[0:T]})\geq 0\end{cases} \tag{2}\]
where \(\mathcal{R}\) is the reward determined by the robustness degree \(\rho(\phi)\) of the trajectory over the horizon \(T\) of the STL formula.
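A minimal sketch of the reward computation of Eq. 2, assuming the same hypothetical `robustness` routine as above; the bonus of 100 for satisfying traces is taken directly from Eq. 2:

```python
def episode_reward(trace, phi, robustness):
    """Sparse reward of Eq. 2, computed once per episode from the
    robustness of the full trace against the STL constraint `phi`."""
    rho = robustness(phi, trace)  # robustness over the horizon [0, T]
    return rho if rho < 0 else rho + 100.0
```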
The reward is sparse because it is given at the end of an episode, and not at each step. This is due to the fact that, with timed STL constraints, the robustness value cannot be quantified at every step and can only be computed over a signal at least as long as the horizon, \(T\), of the STL. After training, the algorithm generates a certain number of rollout traces that are presented to the human expert for labeling based on their safety, which is our final component. In our experiments, we have automated the human labeling process by computing the robustness value of the traces against the true safety constraint of the environment. However, it is important to note that this is only done for automation purposes, and as per the basis of our problem, this true safety constraint is unknown, and traces should actually be labeled by an expert. The labeled traces are then used as input to the evolutionary algorithm. This process is repeated iteratively until convergence, which, in this case, is defined by the number of rollout traces that are labeled safe by the human expert. This metric is chosen because the safety of the rollout traces reflects the quality of the STL used as a safety constraint as well as the quality of the policy generated using that constraint. The framework is depicted graphically in Fig. 1.
Figure 1: Schematic representation of the integrated framework for concurrently learning STL constraints and optimal policies. The framework employs genetic algorithms for STL mining, Q-learning for policy learning, and incorporates human expert feedback for refining the learned constraints and policies.
## Results
We consider the problem of implementing RL algorithms in an environment where the safety constraint is unknown in advance. Specifically, our goal is to simultaneously infer the correct STL safety constraint and an optimal policy. To evaluate our framework, we have designed grid-world environments of varying sizes for an agent to navigate through to reach a goal at a specific location, under temporal constraints. Initially, neither the goal location nor the time constraint are known, making it impossible to design a traditional reward function. The problem was initiated with \(1000\) random \(2\)-dimensional coordinate traces within the environment, which were then labeled by a human to create a dataset for the evolutionary algorithm. The algorithm proceeds with the steps outlined in the methodology until the number of safe traces, as labeled by the human expert, meets a certain threshold. The experiment was performed on \(6\times 6\), \(8\times 8\), and \(10\times 10\) grid environments. The outputs were evaluated
based on two metrics: (i) the change in the percentage of the number of unsafe traces from the first batch of rollout traces to the last batch and (ii) the average misclassification rate (MCR) of the inferred STL specification against a dataset labeled by the human expert. The first metric evaluates how closely the inferred STL specification is able to capture the true environmental constraints by assessing how the number of unsafe samples in the rollout traces has decreased over the iterations, indicating the STL specification is getting closer to the true (but unknown) constraint. The second metric conveys the classification capability of the inferred STL against labeled datasets. It quantifies how well the STL distinguishes between safe and unsafe trajectories, as compared to that of a human expert. The results are given in Table 1.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Size & \multicolumn{2}{c|}{\(\%\) of Unsafe Traces} & \multicolumn{1}{c}{Inferred STL} \\ \cline{2-4} & First rollout & Last rollout & MCR \\ \hline
6\(\times\)6 & 73.2\(\%\) & 1.2\(\%\) & 0.02\(\pm\)0.0013 \\
8\(\times\)8 & 86.7\(\%\) & 4.3\(\%\) & 0.04\(\pm\)0.001 \\
10\(\times\)10 & 91.4\(\%\) & 11.1\(\%\) & 0.06\(\pm\)0.009 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Percentage of unsafe traces (per rollout sample) at the beginning and the end of the process, and average MCR for inferred STL.
## Theoretical Results
To complement these empirical findings, we now delve into the theoretical underpinnings of our approach. In this section, we will present the theoretical results that support the convergence properties and error bounds of our joint learning framework, providing a more rigorous understanding of its performance.
### Joint Convergence of the Inner Loop and Outer Loop
Our objective is to demonstrate that the combined convergence of the inner and outer loops results in the overall convergence of the framework. In essence, we aim to prove that when the inner loop (Q-learning) reaches an optimal policy and the outer loop (evolutionary algorithm for STL synthesis) attains an optimal STL constraint, the entire framework converges to a stable solution. By establishing these two implications, we can show that the proposed framework effectively tackles the given problem and converges to a solution that satisfies both the policy's optimality and the STL constraint's optimality. To prove this joint convergence, we must address the following two implications:
* Convergence of the inner loop (Q-learning) to an optimal policy \(\pi^{*}\) implies the convergence of the outer loop (evolutionary algorithm) to an optimal STL constraint \(\phi^{*}\).
* Convergence of the outer loop (evolutionary algorithm) to an optimal STL constraint \(\phi^{*}\) implies the convergence of the inner loop (Q-learning) to an optimal policy \(\pi^{*}\).
**Implication 1.** This implication aims to show that if the inner loop converges to an optimal policy, the outer loop converges to an optimal STL constraint. This is achieved by demonstrating that the fitness function is maximized for the optimal STL constraint and that the evolutionary algorithm converges to this optimal constraint under certain conditions. Assume that the inner loop converges to an optimal policy \(\pi^{*}\). Under this assumption, we need to prove that the outer loop converges to an optimal STL constraint \(\phi^{*}\). We approach this by showing that:
* The fitness function \(\mathcal{F}(\pi^{*},\phi)\) is maximized for the optimal STL constraint \(\phi^{*}\). This can be done by analyzing the properties of the fitness function and the reward function \(\mathcal{R}(\pi,\phi)\) under the optimal policy.
* The evolutionary algorithm for STL synthesis converges to the optimal STL constraint \(\phi^{*}\) under certain conditions, such as proper selection pressure, sufficient exploration, and well-defined mutation and crossover operators.
By showing that the fitness function is maximized for the optimal STL constraint, and that the evolutionary algorithm converges to this optimal constraint, we establish the convergence of the outer loop under the assumption that the inner loop converges to the optimal policy.
**Implication 2.** This implication aims to show that if the outer loop converges to an optimal STL constraint, the inner loop converges to an optimal policy. This is achieved by demonstrating that the reward function provides the necessary guidance under the optimal STL constraint, and that the Q-learning algorithm converges to the optimal policy under this guidance. We assume that the outer loop converges to an optimal STL constraint \(\phi^{*}\). Under this assumption, we need to prove that the inner loop converges to an optimal policy \(\pi^{*}\). We approach this by showing that:
* The reward function \(\mathcal{R}(\pi,\phi^{*})\) has the necessary properties to guide the Q-learning algorithm towards the optimal policy. This can be done by analyzing the reward function under the optimal STL constraint and ensuring that it provides proper guidance and exploration-exploitation trade-off.
* The Q-learning algorithm converges to the optimal policy \(\pi^{*}\) under certain conditions, such as proper learning rates, sufficient exploration, and well-defined state and action spaces.
We demonstrate the convergence of the inner loop by illustrating that the reward function offers the needed guidance when operating under the optimal STL constraint, and that the Q-learning algorithm converges to the optimal policy with this guidance. This convergence is established based on the assumption that the outer loop successfully converges to the optimal STL constraint.
**PROOF of Implication 1.** _(a) Maximizing the fitness function for the optimal STL constraint._
Here, we provide a mathematical presentation of Implication 1. Let us first define the key components of the framework: (i) policy: \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), a mapping from states to actions, (ii) reward function: \(\mathcal{R}(\pi,\varphi):\Pi\times\Phi\rightarrow\mathbb{R}\), a function that measures the reward for a given policy \(\pi\) and STL
constraint \(\varphi\), (iii) fitness function: \(\mathcal{F}(\pi,\varphi):\Pi\times\Phi\rightarrow\mathbb{R}\), a function that measures the fitness of a given policy \(\pi\) and STL constraint \(\varphi\). Now, let's proceed with the proof of Implication 1:
Assume that the inner loop converges to an optimal policy \(\pi^{*}\). We want to show that the fitness function \(\mathcal{F}(\pi^{*},\varphi)\) is maximized for the optimal STL constraint \(\varphi^{*}\). Let \(\mathcal{R}^{*}=\max_{\pi,\varphi}\mathcal{R}(\pi,\varphi)\) be the maximum achievable reward. We know that the reward function \(\mathcal{R}(\pi,\varphi)\) is maximized for the optimal policy \(\pi^{*}\) and the optimal STL constraint \(\varphi^{*}\), i.e., \(\mathcal{R}(\pi^{*},\varphi^{*})=\mathcal{R}^{*}\). We define the fitness function \(\mathcal{F}(\pi,\varphi)\) as a function of the reward function \(\mathcal{R}(\pi,\varphi)\):
\[\mathcal{F}(\pi,\varphi)=\frac{\mathcal{R}(\pi,\varphi)}{\mathcal{R}^{*}}\]
since \(\mathcal{R}(\pi^{*},\varphi^{*})=\mathcal{R}^{*}\), we have:
\[\mathcal{F}\left(\pi^{*},\varphi^{*}\right)=\frac{\mathcal{R}\left(\pi^{*}, \varphi^{*}\right)}{\mathcal{R}^{*}}=\frac{\mathcal{R}^{*}}{\mathcal{R}^{*}}=1\]
This result shows that the fitness function \(\mathcal{F}(\pi^{*},\varphi)\) is indeed maximized for the optimal STL constraint \(\varphi^{*}\), given that the inner loop converges to the optimal policy \(\pi^{*}\). The maximum value of the fitness function is 1, which occurs when both the policy and the STL constraint are optimal.
_(b) Convergence of the evolutionary algorithm for STL synthesis._
Now, we provide insights into the convergence of the evolutionary algorithm for STL synthesis. Let \(\varphi_{i}\) be the STL constraint at iteration \(i\) of the evolutionary algorithm. We want to show that the evolutionary algorithm converges to the optimal STL constraint \(\varphi^{*}\) under certain conditions. Let \(P(\varphi_{i})\) be the probability distribution of the STL constraint population at iteration \(i\). The evolutionary algorithm updates \(P(\varphi_{i})\) through selection, mutation, and crossover operators. Let \(P_{\text{sel}}(\varphi_{i})\), \(P_{\text{mut}}(\varphi_{i})\), and \(P_{\text{cross}}(\varphi_{i})\) be the updated probability distributions after applying the selection, mutation, and crossover operators, respectively. Then, the probability distribution at iteration \(i+1\) is given by:
\[P\left(\varphi_{i+1}\right)=P_{\text{cross}}\left(P_{\text{mut}}\left(P_{\text{sel}}\left(P\left(\varphi_{i}\right)\right)\right)\right)\]
Under proper selection pressure, sufficient exploration, and well-defined mutation and crossover operators, it can be shown that the evolutionary algorithm converges to the optimal STL constraint \(\varphi^{*}\) as the number of iterations approaches infinity:
\[\lim_{i\rightarrow\infty}P\left(\varphi_{i}\right)=\delta\left(\varphi- \varphi^{*}\right)\]
where \(\delta(\cdot)\) is the Dirac delta function, meaning that the probability distribution converges to a distribution concentrated on the optimal STL constraint \(\varphi^{*}\). Points (a) and (b) together confirm that the outer loop approaches the optimal STL constraint \(\varphi^{*}\) provided that the inner loop approaches the optimal policy \(\pi^{*}\); the joint convergence of the inner and outer loops then yields the convergence of the entire framework.
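To make the outer-loop update concrete, the following minimal sketch evolves a population of candidate constraints through tournament selection, blend crossover, and Gaussian mutation. It is illustrative only: an STL constraint is reduced to a single scalar threshold parameter, and the operators below merely stand in for whatever \(P_{\text{sel}}\), \(P_{\text{mut}}\), and \(P_{\text{cross}}\) a concrete implementation uses.

```python
import random

def mutate(theta, scale=0.1):
    # P_mut: perturb a (here scalar) constraint parameter.
    return theta + random.gauss(0.0, scale)

def crossover(a, b):
    # P_cross: blend two parent constraints.
    w = random.random()
    return w * a + (1.0 - w) * b

def evolve_stl(fitness, pop_size=50, n_generations=100,
               p_mut=0.2, p_cross=0.7, tournament_k=3):
    population = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(n_generations):
        def select():
            # P_sel: tournament selection favors high-fitness constraints.
            return max(random.sample(population, tournament_k), key=fitness)
        offspring = []
        while len(offspring) < pop_size:
            a, b = select(), select()
            child = crossover(a, b) if random.random() < p_cross else a
            if random.random() < p_mut:
                child = mutate(child)
            offspring.append(child)
        population = offspring  # the distribution update P(varphi_{i+1})
    return max(population, key=fitness)

# Toy run: the population concentrates near the maximizer of the fitness.
best = evolve_stl(fitness=lambda theta: 1.0 - abs(theta - 0.3))
```

Under sustained selection pressure and non-vanishing mutation, repeated runs concentrate the returned candidate near the fitness maximizer, mirroring the Dirac-delta limit above.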
### PROOF of Implication 2.
Assume that the outer loop converges to an optimal STL constraint \(\varphi^{*}\). Under this assumption, we need to prove that the inner loop converges to an optimal policy \(\pi^{*}\). We approach this by showing that:
* The reward function \(\mathcal{R}(\pi,\varphi^{*})\) has the necessary properties to guide the Q-learning algorithm towards the optimal policy. This can be done by analyzing the reward function under the optimal STL constraint and ensuring that it provides proper guidance and exploration-exploitation trade-off. Specifically, we show that \(\mathcal{R}(\pi,\varphi^{*})\) is Lipschitz continuous and has a unique maximum at the optimal policy \(\pi^{*}\). Moreover, the reward function should encourage sufficient exploration of the state-action space while exploiting the knowledge acquired during the learning process.
* The Q-learning algorithm converges to the optimal policy \(\pi^{*}\) under certain conditions, such as proper learning rates, sufficient exploration, and well-defined state and action spaces. According to the Q-learning convergence theorem, the Q-learning algorithm converges to the optimal Q-function \(Q^{*}(s,a)\) if the following conditions are satisfied:
1. Each state-action pair \((s,a)\) is visited infinitely often, i.e., \(\lim_{t\rightarrow\infty}N_{t}(s,a)=\infty\), where \(N_{t}(s,a)\) is the number of visits to the state-action pair \((s,a)\) up to time \(t\).
2. The learning rate \(\alpha_{t}(s,a)\) satisfies \(\sum_{t=1}^{\infty}\alpha_{t}(s,a)=\infty\) and \(\sum_{t=1}^{\infty}\alpha_{t}^{2}(s,a)<\infty\). This condition ensures that the learning rate decays slowly enough to guarantee convergence.
Assuming that we have the optimal STL constraint \(\varphi^{*}\), we consider well-defined state and action spaces, along with an exploration strategy (such as an \(\epsilon\)-greedy approach) that ensures each state-action pair is visited infinitely often. If these conditions hold, the Q-learning algorithm converges to the optimal policy \(\pi^{*}\). The convergence of the inner loop is based on two critical observations: first, the reward function provides essential guidance when used under the optimal STL constraint, and second, the Q-learning algorithm moves towards the optimal policy when steered by this guidance. This convergence is contingent on the outer loop effectively converging to the optimal STL constraint.
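The following tabular sketch illustrates the inner loop under these conditions. The environment interface (`reset`, `step`, `actions`) is an assumed stand-in rather than a specific API; the learning rate \(\alpha_{t}(s,a)=1/N_{t}(s,a)\) is one standard schedule satisfying condition 2, and \(\epsilon\)-greedy exploration addresses condition 1.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=5000, gamma=0.99, eps=0.1):
    # Tabular Q-learning under a fixed reward; the reward returned by
    # env.step is assumed to already encode the optimal STL constraint.
    Q = defaultdict(float)
    N = defaultdict(int)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:  # explore (condition 1)
                a = random.choice(env.actions)
            else:                      # exploit
                a = max(env.actions, key=lambda b: Q[(s, b)])
            s2, r, done = env.step(a)
            N[(s, a)] += 1
            alpha = 1.0 / N[(s, a)]    # Robbins-Monro rate (condition 2)
            target = r + gamma * max(Q[(s2, b)] for b in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```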
### Bounds on the Error
Deriving bounds on the error between the discovered policy and the true optimal policy involves analyzing how this error depends on the various factors influencing it. One possible way to approach this analysis is as follows. Let \(\pi^{*}\) be the true optimal policy and \(\pi^{\prime}\) be the discovered policy. Define the error between these policies as:
\[\epsilon\left(\pi^{\prime},\pi^{*}\right)=E\left[\mathcal{R}\left(s,\pi^{*}(s)\right)-\mathcal{R}\left(s,\pi^{\prime}(s)\right)\right]\]
where \(E[\cdot]\) denotes the expected value, \(\mathcal{R}(s,a)\) is the reward function for taking action \(a\) in state \(s\), and the expectation is taken over all states \(s\) in the state space. Now, consider the following factors that may affect the error bounds:
* Granularity of the state abstraction (denoted by \(G\)): A coarse state abstraction may lead to a larger error between the discovered policy and the true optimal policy. The impact of state abstraction granularity on the error can be represented as: \[\epsilon_{G}(G)\leq C_{1}\cdot G\] where \(C_{1}\) is a constant that depends on the problem's specific characteristics.
* Quality of the learned STL specifications (denoted by \(Q\)): If the learned STL specifications are not accurate or expressive enough, the error between the discovered policy and the true optimal policy may be larger. The impact of the quality of the learned STL specifications on the error can be represented as: \[\epsilon_{Q}(Q)\leq C_{2}\cdot(1-Q)\] where \(C_{2}\) is a constant that depends on the problem's specific characteristics.
* Amount of human feedback provided (denoted by \(H\)): Human feedback can help guide the learning process and reduce the error between the discovered policy and the true optimal policy. The impact of the amount of human feedback on the error can be represented as: \[\epsilon_{H}(H)\leq C_{3}\cdot e^{-H}\] where \(C_{3}\) is a constant that depends on the problem's specific characteristics, and \(e^{-H}\) represents the exponential decay in error with increasing human feedback.
Integrating these individual error bounds, we obtain an overall error bound:
\[\epsilon\left(\pi^{\prime},\pi^{*}\right)\leq C_{1}\cdot G+C_{2}\cdot(1-Q)+C_{ 3}\cdot e^{-H}.\]
The error bound reveals how the error is influenced by various contributing factors, including the level of detail in state abstraction, the accuracy of the learned STL specifications, and the extent of human feedback received. By carefully analyzing the error bound, we gain insight into the trade-offs between these factors. Armed with this knowledge, we can formulate strategies that minimize the error and ultimately improve the effectiveness of the bi-level optimization framework.
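As a small numerical illustration of these trade-offs, the bound can be evaluated directly; the constants \(C_{1},C_{2},C_{3}\) are problem-dependent and are set to 1 here purely for illustration.

```python
import math

def policy_error_bound(G, Q, H, C1=1.0, C2=1.0, C3=1.0):
    # Overall bound: coarse abstraction (G), imperfect STL quality (Q),
    # and limited human feedback (H) each contribute additively.
    return C1 * G + C2 * (1.0 - Q) + C3 * math.exp(-H)

# E.g. refining the abstraction trades off against extra human feedback:
print(policy_error_bound(G=0.2, Q=0.9, H=1))  # coarse grid, little feedback
print(policy_error_bound(G=0.1, Q=0.9, H=3))  # finer grid, more feedback
```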
## Conclusions
In this paper, we have studied a joint learning framework for the safety constraint and the RL policy of an environment where the safety constraints are unknown _a priori_. We have implemented an algorithm that optimizes the safety constraint and the RL policy simultaneously. Our preliminary results have shown that our framework is capable of identifying safety constraints that are suitable for the environment and an optimal RL policy that results in safe behavior. Future directions for this work will include testing our algorithm in complex environments, assessing adaptability, and improving the algorithm's computational efficiency.
|
2309.11976 | Multi-Channel MOSRA: Mean Opinion Score and Room Acoustics Estimation
Using Simulated Data and a Teacher Model | Previous methods for predicting room acoustic parameters and speech quality
metrics have focused on the single-channel case, where room acoustics and Mean
Opinion Score (MOS) are predicted for a single recording device. However,
quality-based device selection for rooms with multiple recording devices may
benefit from a multi-channel approach where the descriptive metrics are
predicted for multiple devices in parallel. Following our hypothesis that a
model may benefit from multi-channel training, we develop a multi-channel model
for joint MOS and room acoustics prediction (MOSRA) for five channels in
parallel. The lack of multi-channel audio data with ground truth labels
necessitated the creation of simulated data using an acoustic simulator with
room acoustic labels extracted from the generated impulse responses and labels
for MOS generated in a student-teacher setup using a wav2vec2-based MOS
prediction model. Our experiments show that the multi-channel model improves
the prediction of the direct-to-reverberation ratio, clarity, and speech
transmission index over the single-channel model with roughly 5$\times$ less
computation while suffering minimal losses in the performance of the other
metrics. | Jozef Coldenhoff, Andrew Harper, Paul Kendrick, Tijana Stojkovic, Milos Cernak | 2023-09-21T11:21:52Z | http://arxiv.org/abs/2309.11976v2 | Multi-Channel MOSRA: Mean Opinion Score and Room Acoustics Estimation Using Simulated Data and a Teacher Model
###### Abstract
Previous methods for predicting room acoustic parameters and speech quality metrics have focused on the single-channel case, where room acoustics and Mean Opinion Score (MOS) are predicted for a single recording device. However, quality-based device selection for rooms with multiple recording devices may benefit from a multi-channel approach where the descriptive metrics are predicted for multiple devices in parallel. Following our hypothesis that a model may benefit from multi-channel training, we develop a multi-channel model for joint MOS and room acoustics prediction (MOSRA) for five channels in parallel. The lack of multi-channel audio data with ground truth labels necessitated the creation of simulated data using an acoustic simulator with room acoustic labels extracted from the generated impulse responses and labels for MOS generated in a student-teacher setup using a wav2vec2-based MOS prediction model. Our experiments show that the multi-channel model improves the prediction of the direct-to-reverberation ratio, clarity, and speech transmission index over the single-channel model with roughly 5\(\times\) less computation while suffering minimal losses in the performance of the other metrics.
Jozef Coldenhoff\({}^{1,2}\) Andrew Harper\({}^{1}\) Paul Kendrick\({}^{1}\) Tijana Stojkovic\({}^{1}\) Milos Cernak\({}^{1}\)\({}^{1}\)Logitech Europe S.A., Lausanne, Switzerland
\({}^{2}\)Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
Speech quality assessment, joint learning, room acoustics, neural networks
## 1 Introduction
The Mean Opinion Score (MOS) is one of the simplest yet effective metrics of subjective audio quality and can be evaluated using a subjective listening test based on the ITU-T P.800 recommendation [1], or its crowdsourcing approach described in the ITU-T P.808 [2].
However, given the costly nature of carrying out subjective listening tests, many methods have been developed to blindly estimate this subjective metric for unevaluated audio recordings. Most methods are based on neural networks trained to map audio to MOS scores. Nevertheless, many of these systems lack robustness due to the scarcity of large human-labeled datasets. Therefore, the ConferencingSpeech 2022 challenge [3] recently released a larger dataset consisting of 86k audio files with crowdsourced MOS labels, with top performers in the challenge achieving a Pearson correlation of roughly 0.8 with human labels.
An overall speech quality metric is very useful but does not provide insights into the causes of degraded speech quality. To counteract this, multi-valued non-intrusive speech quality assessment (NISQA) methods have been proposed [4, 5]. Characterization of the listening environment also provides such insights. Therefore, efforts have been made in the blind estimation of room acoustic descriptors, where deep neural networks (DNNs) estimate a single acoustic descriptor, such as the reverberation time in seconds (T60) [6], or the speech transmission index [7]. Moreover, recurrent neural networks have been used to jointly estimate several acoustic parameters: the reverberation time (T60), the clarity metrics C50 and C80 (in decibels), and the direct-to-reverberant ratio (DRR, also in decibels) [8]. Finally, Lopez et al. [9] further expanded the joint prediction by adding the signal-to-noise ratio (SNR, in decibels) and the speech transmission index (STI, ranging from 0 to 1). A lightweight model that jointly predicted MOS and room acoustic parameters was presented in [10]. Later, Hajal et al. investigated self-supervised learning methods for the same joint prediction and achieved state-of-the-art results [11]. More recently, Sarabia et al. [12] released a corpus of simulated audio data with corresponding acoustic parameters where the authors showed that models trained on simulated data were able to generalize to real world conditions.
The recent work on speech quality assessment and room acoustics estimation has focused on the single-device case where descriptive metrics are predicted for a single recording device in a one-to-one fashion. However, there has recently been interest in a many-to-one setup where multiple input channels map to a single metric. One such setting is the clarity prediction challenge [13], where the challenge entrants are tasked with predicting the intelligibility of binaural hearing aid recordings achieved by human listeners.
Another application of a many-to-one setup is the task of device selection. The rise of smart home devices and personal digital devices led to the situation where multiple spatially disjoint microphones can simultaneously record a speaker. This naturally leads to the challenge of selecting which audio stream to transmit. Thus, in previous work on device selection in the context of smart home devices, Barber et al. [14] designed a multi-channel system to predict the closest device to the current speaker, where an improvement was found over a signal processing baseline. Similarly, Yoshioka et al. [15] presented PickNet, a system to predict the closest device, where they also note an improvement in word error rate (WER) when using a multi-channel model over their single-channel baseline.
As noted, these previous works use the heuristic that the best device is the closest to the current speaker. However, choosing the closest device may not be optimal regarding audio quality, as many factors besides distance can influence the quality of an audio stream. For example, the closest device may be placed in a region with many reflections, decreasing DRR and clarity. It may also be placed next to a noise source or run a less powerful denoising model than other devices in the room.
Inspired by these previous works, we aim to further extend the task setup to a many-to-many case where the model is tasked to predict multiple sets of metrics given multiple input channels. Specifically, we aim to train a model that predicts speech quality MOS and
a set of room acoustic parameters (STI, T60, DRR, and C50) for five channels in parallel. Our motivation for this is two-fold. Firstly, having access to MOS and room acoustics will improve the interpretability of the device selection, thus aiding in making informed decisions. Secondly, we conjecture that a model trained to predict the metrics of interest may benefit from multi-channel training, as information contained in the different audio streams may give the model global information about the overall acoustic environment.
Thus, in this paper, we present multi-channel MOSRA, a model that predicts the MOS and room acoustics for five channels in parallel. Given the lack of multi-device data with ground truth labels, we simulate it using an acoustic simulator where the ground truth room acoustic labels are extracted from the room impulse responses, while the labels for MOS are generated by student-teacher setup using a wav2vec2 (XLSR)-based model trained to predict MOS scores [10]. Our experiments show that the multi-channel model improves the prediction of DRR, C50, and STI over the single-channel model with roughly 5\(\times\) less computation while suffering minimal losses in the performance of the other metrics.
## 2 Methods
This section proposes an extension to the MOSRA model that allows it to generate predictions for five channels in parallel. The overall model architecture is shown on the right in Fig. 1, and has a total number of 411k trainable parameters. Moreover, the used data simulation pipeline is described in detail.
### Multi-channel feature extractor
The multi-channel feature extractor is an adapted version of the one used in [11], where the first convolutional neural network (CNN) layer is modified to take as input five mel-spectrogram segments instead of a single one, with the rest of the network left unchanged.
These multi-channel Mel-spectrogram segments are obtained by Mel transforming the STFT representations of the five audio files. We use 48 Mel bands with an FFT window size of 20ms and a hop size of 10ms. Super wideband audio is supported as the maximum Mel band frequency is 16kHz. The resulting Mel-spectrogram is further divided into overlapping frames, each spanning 150ms with a hop size of 40ms.
Consequently, the Mel-spectrogram segments are passed to the CNN, which generates an embedding per segment. These multi-channel embeddings are then further processed by a transformer encoder [11], which accounts for temporal dependencies.
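A sketch of this front-end using librosa is given below, with the parameter values quoted above (48 mel bands, a 20 ms window, a 10 ms hop, and 150 ms segments with a 40 ms hop at 32 kHz); the log compression and the exact segmentation code are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
import librosa

SR = 32000                                     # super-wideband audio
N_FFT, HOP = int(0.020 * SR), int(0.010 * SR)  # 20 ms window, 10 ms hop

def mel_segments(wavs):
    """wavs: list of 5 equally long mono signals, one per channel.
    Returns (n_segments, 5, 48, 15): overlapping 150 ms segments
    (15 mel frames) taken with a 40 ms hop (4 frames)."""
    mels = [librosa.feature.melspectrogram(
                y=w, sr=SR, n_fft=N_FFT, hop_length=HOP,
                n_mels=48, fmax=16000)
            for w in wavs]
    m = np.log(np.stack(mels) + 1e-8)          # (5, 48, T), log-compressed
    seg_len, seg_hop = 15, 4                   # 150 ms / 40 ms in frames
    starts = range(0, m.shape[-1] - seg_len + 1, seg_hop)
    return np.stack([m[..., s:s + seg_len] for s in starts])
```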
### Multi-channel prediction heads
Similarly to the feature extractor, we adapt the original prediction heads to output a metric estimate per channel. This is done by changing the final fully connected layer to have an output size of 5 instead of 1.
### Multi-channel meeting data generation
Given the lack of suitable training data, we leverage acoustic simulation to generate it. An overview of the proposed simulation pipeline is shown on the left in Fig. 1. PyRoomAcoustics [16] is used to generate the room impulse responses (RIRs), which are convolved with the speech and noise sources. We use a combination of the image-source method (ISM) and ray tracing to generate the RIRs, where the maximum order for the ISM is set to 3 and the number of rays is computed automatically following Vorlander 2008, Eq. (11.12) [17].
Then, for every unique scenario, we randomly generate a shoebox-shaped room between 2.1-10m in width/length and 2-4m in height. The absorption coefficients of the room materials are randomized to reflect a realistic acoustic environment. This results in a mean T60 of 0.41s with a standard deviation of 0.18. We then place a speech source at a height between 1.3-2m at least 10cm from a wall. Then, one or two noises are added to the room in a random location with the constraint that they are at least 10cm from any wall. Finally, each of the five microphones is placed in one of two configurations, either being mounted on a wall or in approximately the center of the room, reflecting a device placed on a table.
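A simplified version of this scenario sampling with pyroomacoustics might look as follows; the material randomization (a single absorption coefficient per room) and the device placement rules are deliberately coarser than the procedure described above, and minor API details may differ across library versions.

```python
import numpy as np
import pyroomacoustics as pra

rng = np.random.default_rng(0)

def sample_room(fs=32000, n_mics=5):
    # Random shoebox: 2.1-10 m in width/length, 2-4 m in height.
    dims = [rng.uniform(2.1, 10), rng.uniform(2.1, 10), rng.uniform(2, 4)]
    room = pra.ShoeBox(dims, fs=fs,
                       materials=pra.Material(rng.uniform(0.05, 0.5)),
                       max_order=3, ray_tracing=True)  # ISM order 3 + rays
    # Speech source: height 1.3-2 m, at least 10 cm from any wall.
    room.add_source([rng.uniform(0.1, dims[0] - 0.1),
                     rng.uniform(0.1, dims[1] - 0.1),
                     rng.uniform(1.3, 2.0)])
    # Devices: wall-mounted or near the room center ("on a table").
    mics = []
    for _ in range(n_mics):
        if rng.random() < 0.5:
            mics.append([0.05, rng.uniform(0.1, dims[1] - 0.1), 1.5])
        else:
            mics.append([dims[0] / 2 + rng.normal(0, 0.3),
                         dims[1] / 2 + rng.normal(0, 0.3), 0.75])
    room.add_microphone_array(pra.MicrophoneArray(np.array(mics).T, fs))
    room.compute_rir()
    return room  # room.rir[m][0] is the RIR of device m
```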
We use clean speech from the LibriSpeech train-clean-100, dev-clean, test-clean sets [18] as source data. The noise data is taken from the DNS challenge [19], which we randomly partitioned in train, validation, and test splits at fractions of 0.8, 0.1, and 0.1, leading to 43902, 5487, 5489 noise files per subset respectively.
Thus, after generating the room impulse responses, we randomly choose a clean speech file and up to two noise files and resample them from 16kHz to 32kHz. The noise and speech samples are then repeated to have a length of 10 seconds. Then, the dB full scale (dBFS) levels of the speech and noise are sampled as defined in Eq. 1, which is done to obtain a desirable distribution of SNR at the outputs.

Figure 1: Overview of the proposed framework. On the left, a high-level overview of the data generation process is given. On the right, the details of the model architecture are shown, with the number of parameters shown in the red boxes.
\[\begin{split}\text{dBFS(Noise)}&\sim(Beta(1.5,1.5) \cdot-40)-20\\ \text{dBFS(Speech)}&\sim(Beta(1.5,1.5)\cdot-30)-10 \end{split} \tag{1}\]
The scaled audio segments are then convolved with the RIRs to obtain the five output mixtures. These are then uniformly scaled such that the loudest signal in the mixtures has a dBFS between -20 and 0. To obtain the final audio, the wet mixtures are passed through three different internal Logitech denoising models to simulate the case where the recording devices apply denoising before transmitting the signal to a central hub.
To obtain the labels, we compute the room acoustic parameters from the RIRs: STI [20], DRR [21], T60 and C50 [22]. Since we do not have access to MOS labels for our generated data, we leverage a student-teacher setup. Consequently, the audio outputs are passed to a larger single-channel network trained to estimate MOS. We used an XLSR-based model trained to predict MOS scores on single-channel recordings as our teacher model [10].
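For illustration, DRR and C50 can be read off an impulse response with a few lines of numpy; the \(\pm 2.5\) ms direct-sound window is a common but assumed convention, and T60 and STI (which require Schroeder integration and modulation-transfer analysis, respectively) are omitted for brevity.

```python
import numpy as np

def drr_c50(rir, fs, direct_ms=2.5):
    """Direct-to-reverberant ratio and C50, both in dB, from an RIR."""
    e = rir.astype(float) ** 2
    t0 = int(np.argmax(e))                   # arrival of the direct sound
    w = int(direct_ms * 1e-3 * fs)           # direct-sound half-window
    direct = e[max(t0 - w, 0):t0 + w].sum()
    drr = 10 * np.log10(direct / e[t0 + w:].sum())
    n50 = t0 + int(0.050 * fs)               # 50 ms after the direct sound
    c50 = 10 * np.log10(e[:n50].sum() / e[n50:].sum())
    return drr, c50
```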
In the end we obtain three multi-channel datasets with a size in hours of 79.3, 7.5, and 6.3 for training, validation, and testing, respectively. The length of every individual audio clip is slightly longer than 10 seconds due to the convolution with the RIRs. Finally, the simulation resulted in the distributions of T60 and DRR versus distance to the source, shown in Fig. 2. The figure highlights that distance is only loosely correlated with acoustic parameters.
### Loss function
To train the multi-channel MOSRA network, we used mean squared error (MSE) with the same loss weights as the original MOSRA paper [11]. The loss used for training is shown in Eq. 2.
\[\begin{split}\text{MSE}&=\frac{1}{N}\sum_{n=1}^{N}(y_{pred_{n}}-y_{true_{n}})^{2}\\ \mathcal{L}_{\text{Room acoustics}}&=\text{MSE}_{T60}+\text{MSE}_{C50}+\text{MSE}_{STI}+\text{MSE}_{DRR}\\ \mathcal{L}_{overall}&=2\cdot\text{MSE}_{\text{MOS}}+0.2\cdot\mathcal{L}_{\text{Room acoustics}}\end{split} \tag{2}\]
Similarly to the original training scheme, we choose to normalize our labels with the mean and standard deviations to ensure that all the model outputs have the same range.
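A PyTorch rendering of Eq. 2 could look as follows, assuming that predictions and targets are dictionaries of \((\text{batch},5)\) tensors, one entry per metric, with the labels already standardized as described above.

```python
import torch

def mosra_loss(pred, target):
    # Weighted multi-task MSE of Eq. 2: MOS carries weight 2, the four
    # room-acoustics terms share a weight of 0.2.
    mse = lambda k: torch.mean((pred[k] - target[k]) ** 2)
    room = mse("t60") + mse("c50") + mse("sti") + mse("drr")
    return 2.0 * mse("mos") + 0.2 * room
```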
## 3 Experimental Setup
### Baseline system
To show the benefit of multi-channel training and inference, we compare the proposed model against a baseline. The chosen baseline model is the original single-channel MOSRA model [11] trained on the individual channels of our simulated data. Both the single- and multi-channel models have a similar number of parameters, with 411k and 413k parameters, respectively.
### Training and evaluation
Both models were trained using the ADAM [23] optimizer, a batch size of 32, a learning rate of 5e-4, and early stopping with patience of 30 on the Pearson correlation of the MOS predictions. Furthermore, the learning rate was decreased by a factor of 10 if the validation loss did not decrease for 15 epochs. Note that the single-channel baseline model had 5\(\times\) more training steps per epoch, one for each channel per unique simulated room.
During the multi-channel model training, we sample five devices from a room with replacement to simulate the case that less than five microphones are present in a given space i.e. some channels are repeated in the input to the model. This also ensures that the model does not learn any patterns from the ordering of the channels. Note that during testing, all channels in the room are passed to the model.
For evaluation purposes, RMSE will be used for the acoustic parameters (STI, T60, DRR, and C50). For MOS, the Pearson correlation metric is used.
## 4 Results
### Acoustic parameters
The results for the prediction of acoustic parameters are shown in Table 1. We can observe that the multi-channel system outperforms the baseline single-channel system in STI, DRR, and C50. The confidence intervals also show that the improvement in the three acoustic parameters is significant.
### Speech quality
Table 2 shows the performance of the baseline single-channel and multi-channel systems when trained on the simulated training data. The results show that in terms of MOS, the baseline single-channel system slightly but significantly outperforms the multi-channel system in Pearson correlation coefficient and RMSE metrics.
### Simulated dataset generalization
To show the generalization capability of models trained on the simulated training data, we compare the original single-channel MOSRA model trained on human-labeled data [11] with the multi-channel model trained on our simulated data. To generate results with the multi-channel model on the single-channel data, we repeat the single-channel audio five times and take the average prediction at the output. We show results for in-distribution data, i.e., data generated in the same fashion where speech is convolved with an RIR and noise is added. We consider the NISQA TEST LIVETALK dataset as also in-distribution, as it contains only near and far field loud and quiet speech spoken in various noisy environments. The results in Table 3 show that for the in-distribution data, the model performs at least as well as the original model. Out-of-distribution data is also shown in the form of the other NISQA TEST datasets, where the performance of the model trained on synthetic data is lower than that of the model trained on human-labeled data. These test data include distortions not seen during simulation (such as clipping, packet loss, and various codec distortions).

\begin{table}
\begin{tabular}{l l l} \hline \hline Model & Multi-channel & Single-channel \\ & RMSE & (baseline) RMSE \\ \hline STI & **0.017** [0.0170, 0.0176] & 0.018 [0.0178, 0.0184] \\ DRR [dB] & **2.78** [2.736, 2.839] & 3.23 [3.190, 3.267] \\ T60 [s] & 0.11 [0.105, 0.112] & **0.09** [0.090, 0.097] \\ C50 [dB] & **1.80** [1.757, 1.834] & 1.87 [1.835, 1.912] \\ \hline \hline \end{tabular}
\end{table}
Table 1: RMSE (\(\downarrow\)) of the baseline and multi-channel models on acoustic parameters using the simulated test set. The confidence intervals are obtained by bootstrapping with 1000 repetitions.

Figure 2: T60 and DRR versus distance to the active speaker.
## 5 Discussion and Conclusion
Given that the architecture of our model allows for any length of audio input, the model could be used to give near real-time predictions on multiple channels using a circular buffer. Fig. 3 shows the model predictions for MOS and DRR on three-channel audio where the speech is crossfaded between the different input channels. The figure shows that the model can detect the transitions of the speaker between channels and thus provides a proof of concept for the use of quality-based audio stream selection. Besides, Fig. 2 underscores the limited correlation between DRR and the distance to speakers. This observation emphasizes the drawback of distance as a criterion for selecting the optimal device regarding audio quality, as it may overlook crucial factors such as DRR and T60.
Our experimental findings confirm the advantages of a multi-channel model in predicting acoustic parameters, surpassing the single-channel counterpart. Notably, the multi-channel model improves the prediction of STI, DRR, and C50, demonstrating a clear advantage over the single-channel model while utilizing 5\(\times\) less computational resources per channel. The weaker T60 prediction compared to the single-channel baseline might be caused by the model lacking capacity, as it predicts five channels in parallel with roughly the same number of parameters and computational cost. However, we have not seen an improvement in multi-channel quality (MOS) prediction; we speculate that the slightly worse performance relative to the single-channel version is caused by the lack of multi-channel human speech quality labels.
On the other hand, the experiment on generalization shows that simulated data can allow for generalization to real data. Specifically, we observe that the model performs well on the NISQA TEST LIVETALK and ReverbSpeechQuality datasets but lacks performance on the two other NISQA datasets. Our conjecture is that the observed performance degradation is due to a mismatch in data distribution, as the high performance on the ReverbSpeechQuality could be attributed to the fact that data is generated in the same fashion. In line with previous findings on denoising [25], we emphasize that future work should extend the types of degradations applied to speech to improve the generalization capability of MOS estimation models.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multicolumn{2}{l}{MOS on the test set} & & 95 \% CI \\ \hline \multirow{2}{*}{PCC} & Single-channel (baseline) & **0.97** & [0.971, 0.973] \\ & Multi-channel & 0.96 & [0.957, 0.960] \\ \hline \multirow{2}{*}{RMSE} & Single-channel (baseline) & **0.21** & [0.207, 0.213] \\ & Multi-channel & 0.28 & [0.277, 0.285] \\ \hline \hline \end{tabular}
\end{table}
Table 2: PCC (\(\uparrow\)) and RMSE (\(\downarrow\)) of the single-channel baseline and multi-channel models on MOS labels using the simulated test set. The confidence intervals are obtained by bootstrapping with 1000 repetitions.
\begin{table}
\begin{tabular}{l l l} \hline \hline Pearson correlation & Original [11] & Simulated \\ after third-order mapping & MOSRA & data \\ \hline \multicolumn{2}{l}{In-distribution data} \\ ReverbSpeechQuality[24] & 0.85 & **0.87** \\ NISQA TEST LIVETALK[4] & 0.68 & **0.71** \\ \hline \multicolumn{2}{l}{Out-of-distribution data} \\ NISQA TEST FOR[4] & **0.87** & 0.53 \\ NISQA TEST P501[4] & **0.89** & 0.60 \\ \hline \hline \end{tabular}
\end{table}
Table 3: PCC (\(\uparrow\)) after third order mapping on NISQA and internal test sets for the original MOSRA model trained on real human-labeled data vs. the multi-channel model trained on our simulated data.
Figure 3: Multi-channel MOSRA predictions using a circular buffer of roughly 4 seconds of audio. The audio is recorded in a real room where the speaker is crossfaded between recording devices placed in three spatially disjoint locations. The model makes predictions on three channels, where the first two are repeated, e.g., the input to the model is channel [1,2,3,1,2]. Note that the scores are standardized across channels for each time step to aid interpretability. |
2301.00095 | Sharp $L^p$ estimates and size of nodal sets for generalized Steklov
eigenfunctions | We prove sharp $L^p$ estimates for the Steklov eigenfunctions on compact
manifolds with boundary in terms of their $L^2$ norms on the boundary. We prove
it by establishing $L^p$ bounds for the harmonic extension operators as well as
the spectral projection operators on the boundary. Moreover, we derive lower
bounds on the size of nodal sets for a variation of the Steklov spectral
problem. We consider a generalized version of the Steklov problem by adding a
non-smooth potential on the boundary but some of our results are new even
without potential. | Xiaoqi Huang, Yannick Sire, Xing Wang, Cheng Zhang | 2022-12-31T02:16:41Z | http://arxiv.org/abs/2301.00095v1 | # Sharp \(L^{p}\) estimates and size of nodal sets for generalized Steklov eigenfunctions
###### Abstract.
We prove sharp \(L^{p}\) estimates for the Steklov eigenfunctions on compact manifolds with boundary in terms of their \(L^{2}\) norms on the boundary. We prove it by establishing \(L^{p}\) bounds for the harmonic extension operators as well as the spectral projection operators on the boundary. Moreover, we derive lower bounds on the size of nodal sets for a variation of the Steklov spectral problem. We consider a generalized version of the Steklov problem by adding a non-smooth potential on the boundary but some of our results are new even without potential.
## 1. Introduction
Eigenfunction estimates have been recently considered in the case of Schrodinger operators with singular potentials (see e.g. [43], [1], [2], [30], [20], [31], [32], [33]). In the present paper, we investigate a generalization of the well-known Steklov problem with non-smooth potentials. For surveys on the Steklov problem, see e.g. [26], [12].
Let \((\Omega,h)\) be a smooth manifold with boundary \((M,g)\), where \(\dim\,\Omega=n+1\geq 2\) and \(h|_{M}=g\). The Steklov eigenvalue problem with potential \(V\) is
\[\begin{cases}\Delta_{h}e_{\lambda}(x)=0,\ x\in\Omega\\ \partial_{\nu}e_{\lambda}(x)+V(x)e_{\lambda}(x)=\lambda e_{\lambda}(x),\ x\in \partial\Omega=M.\end{cases}\]
Here \(\nu\) is a unit outer normal vector on \(M\). Then the restriction of the eigenfunction \(e_{\lambda}(x)\) (denoted also by \(e_{\lambda}\) to simplify notation) to the boundary \(M\) is an eigenfunction of \(\mathcal{D}+V\):
\[(\mathcal{D}+V)e_{\lambda}(x)=\lambda e_{\lambda}(x),\ x\in M.\]
Here \(\mathcal{D}\) is the Dirichlet-to-Neumann operator \(\mathcal{D}\): \(H^{\frac{1}{2}}(M)\to H^{-\frac{1}{2}}(M)\)
\[\mathcal{D}f=\partial_{\nu}u|_{M},\]
where \(u\) is the harmonic extension of \(f\):
\[\begin{cases}\Delta_{h}u(x)=0,\ x\in\Omega,\\ u(x)=f(x),\ x\in\partial\Omega=M.\end{cases} \tag{1.1}\]
Such a type of Steklov problem with potential has been considered in [13] from the point of view of conformal geometry, where the potential \(V\) is the mean curvature on the boundary \(\partial\Omega\). See e.g. [16], [17], [18], [38] for related works on Yamabe problem on compact manifolds with boundary. In the current paper, we derive estimates whenever the potential is merely bounded or Lipschitz.
For \(m\in\mathbb{R}\), we denote \(OPS^{m}\) the class of pseudodifferential operators of order \(m\). It is known that \(\mathcal{D}\in OPS^{1}\) and one can write (see e.g. [56, Proposition C.1])
\[\mathcal{D}=\sqrt{-\Delta_{g}}+P_{0},\]
for some \(P_{0}\in OPS^{0}.\) Therefore, up to a classical pseudo-differential operator of order zero, the problem of eigenfunction bounds (among other results) on the boundary \(M\) has been treated in our previous paper [30]. In this setting the model is related to relativistic matter (see e.g. [9, 15, 21, 22, 23, 36, 37]).
In our first result below, we provide a control of the \(L^{p}\) norms of the Steklov eigenfunctions in the domain by their \(L^{2}\) norms on the boundary.
**Theorem 1**.: _Let \(V\in L^{\infty}(M)\). Then for \(\lambda\geq 1\) we have_
\[\|e_{\lambda}\|_{L^{p}(\Omega)}\lesssim\lambda^{-\frac{1}{p}+\sigma(p)}\|e_{ \lambda}\|_{L^{2}(M)},\ \ 2\leq p\leq\infty, \tag{1.2}\]
_where_
\[\sigma(p)=\begin{cases}\frac{n-1}{2}(\frac{1}{2}-\frac{1}{p}),&2\leq p<\frac{ 2(n+1)}{n-1}\\ \frac{n-1}{2}-\frac{n}{p},&\frac{2(n+1)}{n-1}\leq p\leq\infty.\end{cases}\]
The previous result is new, even for \(V\equiv 0\). Note that the estimate is sharp when \(V\equiv 0\) and \(\Omega\) is the unit ball \(B(0,1)\subset\mathbb{R}^{n+1}\) with boundary \(M=S^{n}\). In this case, the Steklov eigenfunctions are \(e_{\lambda}(x)=r^{k}e_{k}(\omega)\) in polar coordinates \(r\in[0,1]\), \(\omega\in S^{n}\), where \(\lambda^{2}=k(k+n-1)\), \(k\in\mathbb{N}\) and \(e_{k}(\omega)\) is a spherical harmonic of degree \(k\), that is, the restriction to \(S^{n}\) of a homogeneous harmonic polynomial of degree \(k\). It is straightforward to see that
\[\|e_{\lambda}\|_{L^{p}(B(0,1))}\approx\lambda^{-\frac{1}{p}}\|e_{\lambda}\|_ {L^{p}(S^{n})}. \tag{1.3}\]
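Indeed, for \(1\leq p<\infty\) this is a one-line computation in polar coordinates, using \(\lambda\approx k\) (the case \(p=\infty\) is immediate since \(r^{k}\leq 1\)):

\[\|e_{\lambda}\|_{L^{p}(B(0,1))}^{p}=\int_{0}^{1}r^{kp+n}\,dr\,\|e_{k}\|_{L^{p}(S^{n})}^{p}=\frac{\|e_{k}\|_{L^{p}(S^{n})}^{p}}{kp+n+1}\approx\lambda^{-1}\|e_{\lambda}\|_{L^{p}(S^{n})}^{p}.\]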
The following \(L^{p}\) estimates of the Laplacian eigenfunctions on the sphere \(S^{n}\) are sharp
\[\|e_{\lambda}\|_{L^{p}(S^{n})}\lesssim\lambda^{\sigma(p)}\|e_{\lambda}\|_{L^{ 2}(S^{n})}, \tag{1.4}\]
and they are saturated by zonal spherical harmonic for \(p\geq\frac{2(n+1)}{n-1}\) and highest weight spherical harmonic for \(p\leq\frac{2(n+1)}{n-1}\) (see e.g. [47, 49]). Thus, combining (1.3) with (1.4), we see that (1.2) is sharp.
The motivation for this result is to investigate the feature that Steklov eigenfunctions concentrate near the boundary, and rapidly decay away from the boundary (see e.g. [29], [39], [14], [23]). Motivated by the elliptic inverse boundary value problems such as Calderon problem (see e.g. [8], [35]), Hislop-Lutzer [29] proved that for any compact set \(K\subset\Omega\),
\[\|e_{\lambda}\|_{L^{2}(K)}\leq C_{N}\lambda^{-N}\|e_{\lambda}\|_{L^{2}(M)},\ \forall N.\]
This bound reflects the fact that the Steklov eigenfunctions become highly oscillatory as the eigenvalue increases, hence they decay rapidly away from the boundary. Hislop-Lutzer [29] conjectured that the decay is actually of order \(e^{-\lambda d_{g}(K,\partial\Omega)}\). One may see by examining the case of unit ball \(B(0,1)\subset\mathbb{R}^{n+1}\) that the exponential decay is optimal. This is confirmed for real-analytic surfaces (\(n=1\)) by Polterovich-Sher-Toth [39] and the eigenfunction decay is a key feature in their main results on nodal length. They proved that for any real-analytic compact Riemannian surface \(\Omega\) with boundary \(M=\partial\Omega\), and any compact set \(K\subset\Omega\), there exist constants \(C,c>0\) such that
\[\|e_{\lambda}\|_{L^{\infty}(K)}\leq Ce^{-c\lambda d_{g}(K,\partial\Omega)}\|e _{\lambda}\|_{L^{2}(M)}.\]
Their methods are specific to the case of real-analytic surfaces. A different method of proving this bound has been communicated to them by M. Taylor. Recently, Hislop-Lutzer's conjecture was confirmed for higher dimensional real-analytic manifolds by Galkowski-Toth [23]. Furthermore, this interesting concentration feature is also related to the restriction estimates of eigenfunctions to submanifolds (see e.g. [5], [6], [48], [54]).
In our second result, we prove the following lower bound on the measure of the nodal set
\[N_{\lambda}=\{x\in M:e_{\lambda}(x)=0\}.\]
**Theorem 2**.: _If \(V\in Lip^{1}(M)\) and zero is a regular value of \(e_{\lambda}\), then_
\[|N_{\lambda}|\gtrsim\lambda^{\frac{3-n}{2}}.\]
When \(V\equiv 0\), this result is due to Wang-Zhu [59], which follows from the idea in Sogge-Zelditch [51]. The Lipschitz assumption is used to ensure that the eigenfunctions are in \(C^{1}\), so that the restriction of \(\nabla e_{\lambda}\) to the nodal sets makes sense. The assumption that zero is a regular value is used to ensure the validity of the Gauss-Green theorem.
To prove the theorems, we will need the following key lemmas. Incidentally, one does not require Lipschitz potentials but only bounded ones.
**Lemma 1**.: _If \(V\in L^{\infty}(M)\), then the following two eigenfunction estimates hold_
\[\|e_{\lambda}\|_{L^{p}(M)}\lesssim\lambda^{\sigma(p)}\|e_{\lambda}\|_{L^{2}(M )},\ 2\leq p\leq\infty. \tag{1.5}\]
_Moreover,_
\[\|e_{\lambda}\|_{L^{1}(M)}\gtrsim\lambda^{-\frac{n-1}{4}}\|e_{\lambda}\|_{L^{2 }(M)}. \tag{1.6}\]
For smooth \(V\), (1.5) was proved by Seeger-Sogge [42]. Indeed, they obtained the eigenfunction estimates for self-adjoint elliptic pseudo-differential operators satisfying a convexity assumption on the principal symbol. In the case of the pure power (i.e. \(P_{0}=0\)), (1.5) was stated in [30, Remark 1] by three of us. Both (1.5) and (1.6) are sharp on \(S^{n}\). Indeed, they can be saturated by zonal spherical harmonics or highest weight spherical harmonics (see e.g. [47, 49]).
**Lemma 2**.: _If \(V\in L^{\infty}(M)\), then_
\[\|e_{\lambda}\|_{L^{p}(\Omega)}\lesssim\lambda^{-1/p}\|e_{\lambda}\|_{L^{p}(M )},\ 2\leq p\leq\infty. \tag{1.7}\]
The endpoint \(p=\infty\) follows from the maximum principle, since \(e_{\lambda}\) is harmonic in \(\Omega\). The other endpoint \(p=2\) can be obtained from the trace theorem and standard regularity estimates. And then (1.7) is proved by an interpolation argument involving the harmonic extension operator on \(\Omega\). From (1.3), we see that the estimate (1.7) is sharp for \(\Omega=B(0,1)\).
The paper is organized as follows. In Section 2, we prove sharp heat kernel estimates that will be used later. In Section 3, we prove Lemma 1. In section 4, we prove some kernel estimates for pseudo-differential operators on compact manifolds. In Section 5, we prove the interior eigenfunction estimates in Theorem 1. In Section 6, we prove the size estimates of the nodal sets in Theorem 2.
Throughout this paper, \(X\lesssim Y\) (or \(X\gtrsim Y\)) means \(X\leq CY\) (or \(X\geq CY\)) for some positive constant \(C\) independent of \(\lambda\). This constant may depend on \(V\) and the domain \(\Omega\). \(X\approx Y\) means \(X\lesssim Y\) and \(X\gtrsim Y\).
## 2. Heat kernel bounds
In this section, we prove the heat kernel estimates for the operators
\[H_{V}=(-\Delta_{g})^{\alpha/2}+P_{\alpha-1}+V,\]
where \(P_{\alpha-1}\) is a classical pseudo-differential operator of order \(\alpha-1\), and the real-valued potential \(V\) belongs to the Kato class on the closed manifold \((M,g)\). These generalize the results of Gimperlein-Grubb [25, Theorem 4.3]. When \(P_{\alpha-1}=0\), the Euclidean version was proved in [10] and [52]. We give a detailed proof for this special case on compact manifolds by using Duhamel's principle and Picard iterations. We then slightly modify this argument to obtain the upper bound of the heat kernel of \(H_{V}\). Although the potentials in our main theorems are just bounded, we prove the heat kernel estimates under the minimal assumption so that they may be used for related research.
**Definition 1**.: _For \(n\geq 2\) and \(0<\alpha<2\), the potential \(V\) is said to be in the Kato class \(\mathcal{K}_{\alpha}(M)\) if_
\[\lim_{r\downarrow 0}\sup_{x\in M}\int_{B_{r}(x)}d_{g}(x,y)^{\alpha-n}|V(y)|dy=0 \tag{2.1}\]
_where \(d_{g}(\cdot,\cdot)\) denotes geodesic distance and \(B_{r}(x)\) is the geodesic ball of radius \(r\) about \(x\) and \(dy\) denotes the volume element on \((M,g)\). To define the Kato class for \(n=1\) and \(0<\alpha<2\), we replace the function \(d_{g}(x,y)^{\alpha-n}\) in (2.1) by_
\[w(x,y)=\begin{cases}d_{g}(x,y)^{\alpha-1},&\alpha<1\\ \log(2+d_{g}(x,y)^{-1}),&\alpha=1\\ 1,&\alpha>1.\end{cases}\]
Since \(M\) is compact we have \(\mathcal{K}_{\alpha}(M)\subset L^{1}(M)\), and for any \(p>\frac{n}{\alpha}\), we have \(L^{p}(M)\subset\mathcal{K}_{\alpha}(M)\) by Holder's inequality. We recall that the assumption \(V\in\mathcal{K}_{\alpha}(M)\) implies that the operators \(H_{V}=(-\Delta_{g})^{\alpha/2}+V\) are self-adjoint and bounded from below. See the proof of [30, Proposition 2]. The same argument is still valid to prove that \(H_{V}=(-\Delta_{g})^{\alpha/2}+P_{\alpha-1}+V\) is self-adjoint and bounded from below, whenever \(P_{\alpha-1}\) is self-adjoint.
**Proposition 1**.: _Let \(n\geq 1\), \(0<\alpha<2\) and \(t>0\). Let \(p^{V}(t,x,y)\) be the heat kernel of \(H_{V}=(-\Delta_{g})^{\alpha/2}+V\), where \(V\in\mathcal{K}_{\alpha}(M)\). Then for any \(t\in(0,1]\), \(x,y\in M\)_
\[p^{V}(t,x,y)\approx q_{\alpha}(t,x,y) \tag{2.2}\]
_where \(q_{\alpha}(t,x,y)=\min\{t^{-n/\alpha},\ td_{g}(x,y)^{-n-\alpha}\}\). Moreover, for any \(t>0\), \(x,y\in M\)_
\[e^{-C_{1}t}q_{\alpha}(t,x,y)\lesssim p^{V}(t,x,y)\lesssim e^{C_{2}t}q_{\alpha} (t,x,y) \tag{2.3}\]
_for some constants \(C_{1},C_{2}>0\)._
**Proposition 2**.: _Let \(n\geq 1\), \(0<\alpha<2\) and \(t>0\). Let \(p^{V}(t,x,y)\) be the heat kernel of \(H_{V}=(-\Delta_{g})^{\alpha/2}+P_{\alpha-1}+V\), where \(V\in\mathcal{K}_{\alpha}(M)\) and \(P_{\alpha-1}\) is a classical pseudo-differential operator of order \(\alpha-1\). Then for any \(t>0\), \(x,y\in M\)_
\[|p^{V}(t,x,y)|\lesssim e^{Ct}q_{\alpha}(t,x,y). \tag{2.4}\]
_for some constant \(C>0\)._
The following key lemma is called the 3P-inequality in [3, Theorem 4] and [58, Proposition 2.4]. We remark that such a 3P-inequality holds for all \(\alpha\in(0,2)\) but fails to hold for the Gaussian kernel (\(\alpha=2\)).
**Lemma 3**.: _We have for any \(s,t>0\) and \(x,y,z\in M\)_
\[q_{\alpha}(t,x,z)q_{\alpha}(s,z,y)\leq Cq_{\alpha}(s+t,x,y)(q_{\alpha}(t,x,z)+q_ {\alpha}(s,z,y)),\]
_where \(C>0\) is a constant._
Proof.: The proof is straightforward. Indeed, by using the fact that for \(A,B>0\)
\[\min\{A,B\}\approx\frac{AB}{A+B},\]
\[(A+B)^{\frac{n}{\alpha}}\approx A^{\frac{n}{\alpha}}+B^{\frac{n}{\alpha}},\]
and the triangle inequality \(d_{g}(x,y)\leq d_{g}(x,z)+d_{g}(z,y)\), we have
\[\frac{q_{\alpha}(t,x,z)+q_{\alpha}(s,z,y)}{q_{\alpha}(t,x,z)q_{ \alpha}(s,z,y)} =\frac{1}{q_{\alpha}(t,x,z)}+\frac{1}{q_{\alpha}(s,z,y)}\] \[\approx t^{\frac{n}{\alpha}}+t^{-1}d_{g}(x,z)^{n+\alpha}+s^{ \frac{n}{\alpha}}+s^{-1}d_{g}(z,y)^{n+\alpha}\] \[\approx(t+s)^{\frac{n}{\alpha}}+t^{-1}d_{g}(x,z)^{n+\alpha}+s^{- 1}d_{g}(z,y)^{n+\alpha}\] \[\geq(t+s)^{\frac{n}{\alpha}}+(s+t)^{-1}(d_{g}(x,z)^{n+\alpha}+d_ {g}(z,y)^{n+\alpha})\] \[\approx(t+s)^{\frac{n}{\alpha}}+(s+t)^{-1}(d_{g}(x,z)+d_{g}(z,y)) ^{n+\alpha}\] \[\geq(t+s)^{\frac{n}{\alpha}}+(s+t)^{-1}d_{g}(x,y)^{n+\alpha}\] \[\approx\frac{1}{q_{\alpha}(s+t,x,y)}.\]
The implicit constants may depend on \(n\) and \(\alpha\). This completes the proof of Lemma 3.
**Proof of Proposition 1.** It is not hard to see that (2.3) follows from (2.2) and the semigroup property. So it suffices to prove (2.2).
Since \((M,g)\) is a closed manifold, the heat kernel of \(-\Delta_{g}\) satisfies the two-sided estimates (see Li-Yau [34], Sturm [53], Saloff-Coste [40])
\[t^{-n/2}e^{-C_{1}d_{g}(x,y)^{2}/t}\lesssim p_{t}(x,y)\lesssim t^{-n/2}e^{-C_{ 2}d_{g}(x,y)^{2}/t},\ t>0,\ x,y\in M\]
for some constants \(C_{1},C_{2}>0\). Moreover, it is well-known that the semigroups \(e^{t\Delta_{g}}\) and \(e^{-t(-\Delta_{g})^{\alpha/2}}\) are related by subordination formulas (see e.g. [25, (4.8)], [27], [60]), which imply that the heat kernel of \(H^{0}=(-\Delta_{g})^{\alpha/2}\) is continuous and satisfies the two-sided estimates (see e.g. [25, Theorem 4.2], [4, Theorem 3.1])
\[C^{-1}q_{\alpha}(t,x,y)\leq p_{0}(t,x,y)\leq Cq_{\alpha}(t,x,y),\ t>0,\ x,y\in M. \tag{2.5}\]
The heat kernel \(p_{0}(t,x,y)\) is the Schwartz kernel of \(f\to e^{-tH^{0}}f=u^{0}(t,x)\), which solves the heat equation
\[\begin{cases}(\partial_{t}+H^{0})u^{0}(t,x)=0,\ \ (t,x)\in(0,\infty)\times M,\\ u^{0}|_{t=0}=f.\end{cases} \tag{2.6}\]
Similarly, the heat kernel \(p^{V}(t,x,y)\) is the Schwartz kernel of \(f\to e^{-tH_{V}}f=u_{V}(t,x)\), which solves the heat equation
\[\begin{cases}(\partial_{t}+H_{V})u_{V}(t,x)=0,\ \ (t,x)\in(0,\infty)\times M,\\ u_{V}|_{t=0}=f.\end{cases} \tag{2.7}\]
Note that (2.6) and (2.7) imply that
\[(\partial_{t}+H^{0})(e^{-tH_{V}}f-e^{-tH^{0}}f)=-V(x)e^{-tH_{V}}f\]
and
\[(e^{-tH_{V}}f-e^{-tH^{0}}f)|_{t=0}=0.\]
By Duhamel's principle for the heat equation, we have
\[e^{-tH_{V}}f-e^{-tH^{0}}f =-\int_{0}^{t}e^{-(t-r)H^{0}}(Ve^{-rH_{V}}f)dr\] \[=-\int_{0}^{t}\int_{M}\int_{M}p_{0}(t-r,x,z)V(z)p^{V}(r,z,y)f(y)dydzdr.\]
where \(dy\) and \(dz\) denote the volume element on \((M,g)\). So the heat kernel of \(H_{V}\) satisfies the integral equation
\[p^{V}(t,x,y)=p_{0}(t,x,y)-\int_{0}^{t}\int_{M}p_{0}(t-r,x,z)p^{V}(r,z,y)V(z)dzdr. \tag{2.8}\]
To prove (2.2), we use Picard iterations (see e.g. [3], [58]) to construct a solution to (2.8). For \(t>0\), \(x,y\in M\), let
\[p_{m}(t,x,y)=p_{0}(t,x,y)-\int_{0}^{t}\int_{M}p_{0}(t-r,x,z)p_{m-1}(r,z,y)V(z) dzdr,\ \ m\geq 1. \tag{2.9}\]
Moreover, let
\[\Theta_{m}(t,x,y)=p_{m}(t,x,y)-p_{m-1}(t,x,y),\ \ m\geq 1\]
and \(\Theta_{0}(t,x,y)=p_{0}(t,x,y)\). Clearly,
\[\Theta_{m}(t,x,y)=-\int_{0}^{t}\int_{M}p_{0}(t-r,x,z)V(z)\Theta_{m-1}(r,z,y)dzdr. \tag{2.10}\]
We claim that for some constant \(c_{0}>0\) and \(\ c(t)>0\)
\[|\Theta_{m}(t,x,y)|\leq(c_{0}c(t))^{m}p_{0}(t,x,y),\ \ m\geq 0. \tag{2.11}\]
To prove the claim, we define
\[c(t)=\sup_{y\in M}\int_{0}^{t}\int_{M}q_{\alpha}(r,y,z)|V(z)|dzdr. \tag{2.12}\]
It is straightforward to see that \(V\) is in the Kato class implies that
\[\lim_{t\downarrow 0}c(t)=0. \tag{2.13}\]
Indeed, for \(n\geq 2\),
\[\int_{0}^{t}\int_{M}q_{\alpha}(r,y,z)|V(z)|dzdr\lesssim\int_{d_{g}(z,y)<t^{ \frac{1}{2\alpha}}}d_{g}(z,y)^{\alpha-n}|V(z)|dz+\int_{M}td_{g}(z,y)^{\alpha-n }|V(z)|dz,\]
which implies (2.13) by the definition (2.1). The case \(n=1\) is similar.
The claim (2.11) is clear for \(m=0\). If the claim is true for \(m-1\), then by (2.10) we have
\[|\Theta_{m}(t,x,y)| \leq(c_{0}c(t))^{m-1}\int_{0}^{t}\int_{M}p_{0}(t-r,x,z)p_{0}(r,z,y) |V(z)|dzdr\] \[\leq C(c_{0}c(t))^{m-1}\int_{0}^{t}\int_{M}p_{0}(t,x,y)(p_{0}(t-r,x,z)+p_{0}(r,z,y))|V(z)|dzdr\] \[\leq 2C^{2}(c_{0}c(t))^{m-1}p_{0}(t,x,y)\sup_{y\in M}\int_{0}^{t} \int_{M}q_{\alpha}(r,y,z)|V(z)|dzdr\] \[=\frac{2C^{2}}{c_{0}}(c_{0}c(t))^{m}p_{0}(t,x,y)\]
where we use Lemma 3 and the upper bound in (2.5). Here \(C>0\) is a constant independent of \(m,s,t,x,y,z\). So we may fix \(c_{0}\geq 2C^{2}\), and the claim (2.11) is proved by induction.
By (2.13), there is \(0<t_{0}<1\) so that for any \(t\in(0,t_{0}]\), we have \(c_{0}c(t)\leq\frac{1}{3}\). Let
\[p^{V}(t,x,y)=\sum_{m=0}^{\infty}\Theta_{m}(t,x,y),\ \ t\in(0,t_{0}],\ x,y\in M.\]
This series is uniformly convergent, and
\[|p^{V}(t,x,y)-p_{0}(t,x,y)|\leq\sum_{m=1}^{\infty}|\Theta_{m}(t,x,y)|\leq \frac{c_{0}c(t)}{1-c_{0}c(t)}p_{0}(t,x,y)\leq\frac{1}{2}p_{0}(t,x,y).\]
Combining this with (2.5), we have
\[p^{V}(t,x,y)\approx q_{\alpha}(t,x,y),\ \ t\in(0,t_{0}]. \tag{2.14}\]
By letting \(m\to\infty\) in (2.9), we get (2.8) for \(t\in(0,t_{0}]\).
Moreover, when \(t\in(0,t_{0}]\), \(p^{V}(t,x,y)\) is the unique solution to the integral equation (2.8) satisfying (2.14). Indeed, let \(\tilde{p}(t,x,y)\) be another solution satisfying (2.14), and \(\Theta=p^{V}-\tilde{p}\). Note that \(|\Theta(t,x,y)|\leq Cp_{0}(t,x,y)\) for some constant \(C>0\). Then by the same induction argument above we obtain
\[|\Theta(t,x,y)|\leq C(c_{0}c(t))^{m}p_{0}(t,x,y),\ \forall m\geq 0.\]
By letting \(m\to\infty\) we get \(\Theta(t,x,y)=0\) for \(t\in(0,t_{0}]\).
For \(t>t_{0}\), we recursively define
\[p^{V}(t,x,y)=\int_{M}p^{V}(t/2,x,z)p^{V}(t/2,z,y)dz.\]
Then \(p^{V}(t,x,y)\) is extended to be a jointly continuous function on \((0,\infty)\times M\times M\). Moreover, the estimate (2.14) can be recursively extended to
\[p^{V}(t,x,y)\approx q_{\alpha}(t,x,y),\ \ t\in(0,1].\]
This completes the proof of Proposition 1.
**Proof of Proposition 2.** The proof is similar to Proposition 1. It suffices to prove (2.4) for \(t\in(0,1]\) by the semigroup property. Then the argument above is still valid for (2.4), if we replace (2.5) by the heat kernel bounds of \(H^{0}=(-\Delta_{g})^{\alpha/2}+P_{\alpha-1}\) (see [25, Theorem 4.3])
\[|p_{0}(t,x,y)|\leq Cq_{\alpha}(t,x,y),\ \ t\in(0,1],\ x,y\in M.\]
## 3. Global eigenfunction estimates: proof of Lemma 1
To prove Lemma 1, we begin with the following resolvent estimate.
**Proposition 3**.: _For \(\lambda\geq 1\), we have_
\[\|(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}\|_{L^{2}\to L^{p}}\lesssim\lambda^{ \sigma(p)},\ 2<p\leq\tfrac{2(n+1)}{n-1}, \tag{3.1}\]
_where_
\[\sigma(p)=\begin{cases}\frac{n-1}{2}(\frac{1}{2}-\frac{1}{p}),&2\leq p<\frac{ 2(n+1)}{n-1}\\ \frac{n-1}{2}-\frac{n}{p},&\frac{2(n+1)}{n-1}\leq p\leq\infty.\end{cases} \tag{3.2}\]
Proof.: For \(k\in\mathbb{N}\), let \(\chi_{[k,k+1)}\) denote the spectral projection operators for \(\sqrt{-\Delta_{g}}\) corresponding to the spectral interval \([k,k+1)\), and let \(\chi_{[2[\lambda],\infty)}\) denote the spectral projection operator onto the interval \([2[\lambda],\infty)\), where \([\lambda]\) denotes the largest integer that is smaller than \(\lambda\). Then for any function \(f\), by the Cauchy-Schwarz inequality
\[(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}f =\sum_{k<2[\lambda]}\frac{1}{k-(\lambda+i)}(k-(\lambda+i))(\sqrt {-\Delta_{g}}-(\lambda+i))^{-1}\chi_{[k,k+1)}f\] \[\quad+\chi_{[2[\lambda],\infty)}(\sqrt{-\Delta_{g}}-(\lambda+i))^ {-1}f\] \[\lesssim(\sum_{k<2[\lambda]}\left|(k-(\lambda+i))(\sqrt{-\Delta_{ g}}-(\lambda+i))^{-1}\chi_{[k,k+1)}f\right|^{2})^{\frac{1}{2}}\] \[\quad+\left|\chi_{[2[\lambda],\infty)}(\sqrt{-\Delta_{g}}-(\lambda +i))^{-1}f\right| \tag{3.3}\]
Thus, by Minkowski's inequality
\[\|(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}f\|_{L^{p}} \leq(\sum_{k<2[\lambda]}\|(k-(\lambda+i))(\sqrt{-\Delta_{g}}-( \lambda+i))^{-1}\chi_{[k,k+1)}f\|_{L^{p}}^{2})^{\frac{1}{2}}\] \[\quad+\|\chi_{[2[\lambda],\infty)}(\sqrt{-\Delta_{g}}-(\lambda+i ))^{-1}f\|_{L^{p}} \tag{3.4}\]
To handle the first term on the right, note that \(\chi_{[k,k+1)}=\chi_{[k,k+1)}\circ\chi_{[k,k+1)}\), and by the classical results in [44],
\[\|\chi_{[k,k+1)}f\|_{L^{p}}\lesssim(1+k)^{\sigma(p)}\|f\|_{L^{2}}\lesssim \lambda^{\sigma(p)}\|f\|_{L^{2}},\ \text{if}\ k<2[\lambda]. \tag{3.5}\]
Thus,
\[\begin{split}\Big(\sum_{k<2[\lambda]}\|(k-(\lambda+i))(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}\chi_{[k,k+1)}f\|_{L^{p}}^{2}\Big)^{\frac{1}{2}}&\lesssim\lambda^{\sigma(p)}\Big(\sum_{k<2[\lambda]}\|(k-(\lambda+i))(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}\chi_{[k,k+1)}f\|_{L^{2}}^{2}\Big)^{\frac{1}{2}}\\ &\lesssim\lambda^{\sigma(p)}\Big(\sum_{k<2[\lambda]}\|\chi_{[k,k+1)}f\|_{L^{2}}^{2}\Big)^{\frac{1}{2}}\\ &\lesssim\lambda^{\sigma(p)}\|f\|_{L^{2}},\end{split} \tag{3.6}\]
where in the second inequality we used the fact that by spectral theorem,
\[\|(k-(\lambda+i))(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}\chi_{[k,k+1)}f\|_{L^{2 }}\lesssim\|\chi_{[k,k+1)}f\|_{L^{2}},\ \forall\,k\in\mathbb{N}. \tag{3.7}\]
To handle the second term, we use Sobolev estimates to see that
\[\begin{split}&\|\chi_{[2[\lambda],\infty)}(\sqrt{-\Delta_{g}}-( \lambda+i))^{-1}f\|_{L^{p}}\\ &\lesssim\|\chi_{[2[\lambda],\infty)}(\sqrt{-\Delta_{g}})^{n( \frac{1}{2}-\frac{1}{p})}(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}f\|_{L^{2}}. \end{split} \tag{3.8}\]
When \(2<p\leq\frac{2(n+1)}{n-1}\), it is straightforward to check that \(n(\frac{1}{2}-\frac{1}{p})<1\), thus by spectral theorem,
\[\|\chi_{[2[\lambda],\infty)}(\sqrt{-\Delta_{g}})^{n(\frac{1}{2}-\frac{1}{p})} (\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}f\|_{L^{2}}\lesssim\|f\|_{L^{2}}, \tag{3.9}\]
which is better than the desired bound in (3.1).
Now we shall prove Lemma 1, which follows from similar strategies as in [2]. Recall that \(\mathcal{D}=\sqrt{-\Delta_{g}}+P_{0}\). By using the second resolvent formula, we have
\[(\mathcal{D}+V-(\lambda+i))^{-1}=(\sqrt{-\Delta_{g}}-(\lambda+i) )^{-1}\\ -(\sqrt{-\Delta_{g}}-(\lambda+i))^{-1}(P_{0}+V)(\mathcal{D}+V-( \lambda+i))^{-1}. \tag{3.10}\]
Since \(P_{0}\in OPS^{0}\) and the eigenvalues of \(\mathcal{D}+V\) are real, by spectral theorem, we have
\[\|P_{0}(\mathcal{D}+V-(\lambda+i))^{-1}\|_{L^{2}\to L^{2}}\lesssim\|( \mathcal{D}+V-(\lambda+i))^{-1}\|_{L^{2}\to L^{2}}\lesssim 1. \tag{3.11}\]
Similarly, since \(V\in L^{\infty}(M)\), we have
\[\|V(\mathcal{D}+V-(\lambda+i))^{-1}\|_{L^{2}\to L^{2}}\lesssim 1. \tag{3.12}\]
Thus, (3.10), (3.11), (3.12) and (3.1) yield that
\[\|(\mathcal{D}+V-(\lambda+i))^{-1}\|_{L^{2}\to L^{p}}\lesssim\lambda^{\sigma(p )},\ 2<p\leq\tfrac{2(n+1)}{n-1}. \tag{3.13}\]
If we let \(\chi^{V}_{[\lambda,\lambda+1)}\) denote the spectral projection operator associated with \(\sqrt{-\Delta_{g}}+P_{0}+V\) for the interval \([\lambda,\lambda+1)\), then (3.13) implies the following
**Corollary 1**.: _Let \(V\in L^{\infty}(M)\). Then we have_
\[\|\chi^{V}_{[\lambda,\lambda+1)}f\|_{L^{p}}\lesssim\lambda^{\sigma(p)}\|f\|_ {L^{2}},\ 2<p\leq\infty. \tag{3.14}\]
Note that if we take \(f=e_{\lambda}\) in (3.14), and use the fact that \(\chi^{V}_{[\lambda,\lambda+1)}e_{\lambda}=e_{\lambda}\), we obtain (1.5).
Proof of Corollary 1.: If \(2<p\leq\frac{2(n+1)}{n-1}\), this follows from (3.13) by letting \(f=\chi^{V}_{[\lambda,\lambda+1)}f\) there along with the fact that
\[\|(\mathcal{D}+V-(\lambda+i))\chi^{V}_{[\lambda,\lambda+1)}\|_{L^{2}\to L^{2} }\lesssim 1. \tag{3.15}\]
If \(p>\frac{2(n+1)}{n-1}\), we shall use the heat kernel bounds in Proposition 2. More explicitly, let \(H_{V}=\sqrt{-\Delta_{g}}+P_{0}+V\). Note that if \(V\in L^{\infty}(M)\), then (2.1) holds with \(\alpha=1\); thus \(V\in\mathcal{K}_{1}(M)\), which, by Proposition 2, yields the kernel estimate (2.4) for \(e^{-tH_{V}}\). As a result, by (2.4) and Young's inequality, we have the following:
\[\|e^{-tH_{V}}\|_{L^{p}(M)\to L^{q}(M)}\lesssim t^{-n(\frac{1}{p}-\frac{1}{q})}, \quad\text{if}\ \ 0<t\leq 1,\ \text{and}\ \ 1\leq p\leq q\leq\infty. \tag{3.16}\]
If we fix \(t=\lambda^{-1}\) and \(p_{c}=\frac{2(n+1)}{n-1}\), and apply the above bound, we have for \(p>\frac{2(n+1)}{n-1}\),
\[\begin{split}\|\chi^{V}_{[\lambda,\lambda+1)}f\|_{L^{p}}&\lesssim\lambda^{n(\frac{1}{p_{c}}-\frac{1}{p})}\|e^{\lambda^{-1}H_{V}}\chi^{V}_{[\lambda,\lambda+1)}f\|_{L^{p_{c}}}\\ &=\lambda^{n(\frac{1}{p_{c}}-\frac{1}{p})}\|\chi^{V}_{[\lambda,\lambda+1)}e^{\lambda^{-1}H_{V}}\chi^{V}_{[\lambda,\lambda+1)}f\|_{L^{p_{c}}}\\ &\lesssim\lambda^{n(\frac{1}{p_{c}}-\frac{1}{p})}\lambda^{\frac{n-1}{2}-\frac{n}{p_{c}}}\|e^{\lambda^{-1}H_{V}}\chi^{V}_{[\lambda,\lambda+1)}f\|_{L^{2}}\\ &\lesssim\lambda^{\frac{n-1}{2}-\frac{n}{p}}\|f\|_{L^{2}},\end{split} \tag{3.17}\]
where in the third line we applied (3.14) at \(p=p_{c}\) and in the last line we applied spectral theorem. Since \(\frac{n-1}{2}-\frac{n}{p}=\sigma(p)\) when \(p\geq p_{c}\), the proof of Corollary 1 is complete.
To prove Lemma 1 it remains to prove (1.6). By using the arguments from Sogge-Zelditch [51], we note that (1.6) can be obtained from Holder's inequality and (1.5)
\[\|e_{\lambda}\|_{L^{2}(M)}^{\frac{1}{\theta}}\leq\|e_{\lambda}\|_{L^{1}(M)}\|e_{\lambda}\|_{L^{p}(M)}^{\frac{1}{\theta}-1}\lesssim\|e_{\lambda}\|_{L^{1}(M)}(\lambda^{\sigma(p)}\|e_{\lambda}\|_{L^{2}(M)})^{\frac{1}{\theta}-1}=\|e_{\lambda}\|_{L^{1}(M)}\lambda^{\frac{n-1}{4}}\|e_{\lambda}\|_{L^{2}(M)}^{\frac{1}{\theta}-1}.\]
Here \(2<p<\frac{2(n+1)}{n-1}\), and \(\theta=\frac{p}{p-1}(\frac{1}{2}-\frac{1}{p})\).
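For the exponent bookkeeping in the last equality, note that with this choice of \(\theta\),

\[\theta=\frac{p-2}{2(p-1)},\qquad\frac{1}{\theta}-1=\frac{p}{p-2},\qquad\sigma(p)\Big(\frac{1}{\theta}-1\Big)=\frac{n-1}{2}\Big(\frac{1}{2}-\frac{1}{p}\Big)\cdot\frac{p}{p-2}=\frac{n-1}{4}.\]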
## 4. Kernels of Pseudo-differential operators
In this section, we prove a useful lemma concerning the kernel estimates of the pseudo-differential operators on compact manfolds.
**Lemma 4**.: _Let \(\mu\in\mathbb{R}\), and \(m\in C^{\infty}(\mathbb{R})\) belong to the symbol class \(S^{\mu}\), that is, assume that_
\[|\partial_{t}^{\alpha}m(t)|\leq C_{\alpha}(1+|t|)^{\mu-\alpha},\quad\forall\alpha. \tag{4.1}\]
_If \(P=\sqrt{-\Delta_{g}}\), then \(m(P)\) is a pseudo-differential operator of order \(\mu\). Moreover, if \(R\geq 1\), then the kernel of the operator \(m(P/R)\) satisfies for all \(N\in\mathbb{N}\)_
\[|m(P/R)(x,y)|\lesssim\begin{cases}R^{n}\big{(}Rd_{g}(x,y)\big{)}^{-n-\mu} \big{(}1+Rd_{g}(x,y)\big{)}^{-N},&n+\mu>0\\ R^{n}\log(2+(Rd_{g}(x,y))^{-1}\big{)}\big{(}1+Rd_{g}(x,y)\big{)}^{-N},&n+\mu=0 \\ R^{n}(1+Rd_{g}(x,y))^{-N},&n+\mu<0.\end{cases} \tag{4.2}\]
See [45, Theorem 4.3.1] for the proof of the fact that \(m(P)\) is a pseudo-differential operator of order \(\mu\). The kernel bounds (4.2) can be viewed as the rescaled version, on compact manifolds, of the Euclidean estimates in [55, Proposition 1 on page 241]: the bounds hold near the diagonal (so that \(d_{g}(x,y)\) is smaller than the injectivity radius of \(M\)), while outside a neighborhood of the diagonal the kernel is \(O(R^{-N})\). Roughly speaking, modulo lower order terms, \(m(P/R)(x,y)\) equals
\[(2\pi)^{-n}\int_{\mathbb{R}^{n}}m(|\xi|/R)e^{id_{g}(x,y)\xi_{1}}d\xi\]
near the diagonal, which satisfies the bounds in (4.2), while outside of a fixed neighborhood of the diagonal \(m(P/R)(x,y)\) is \(O(R^{-N})\). For completeness, we give a detailed proof by using the Hadamard parametrix.
**Proof of (4.2).** Since the spectrum of \(P=\sqrt{-\Delta_{g}}\) is nonnegative, we may assume that \(m(t)\) is an even function on \(\mathbb{R}\). Let \(\delta>0\) be smaller than the injectivity radius of \((M,g)\). Let \(\rho\in C^{\infty}_{0}(-\delta,\delta)\)
be even and satisfy \(\rho\equiv 1\) on \((-\frac{\delta}{2},\frac{\delta}{2})\). So we can write
\[m(P/R) =\frac{R}{2\pi}\int_{\mathbb{R}}\hat{m}(tR)\cos(tP)dt \tag{4.3}\] \[=\frac{R}{2\pi}\int\rho(t)\hat{m}(tR)\cos(tP)dt+\frac{R}{2\pi} \int(1-\rho(t))\hat{m}(tR)\cos(tP)dt.\]
To handle the first term in (4.3), we need to use the Hadamard parametrix (see e.g. [46, Section 1.2 and Theorem 3.1.5]). For \(0<t<\delta\) and \(N_{0}>n+3\), we have
\[\cos tP(x,y)=\sum_{\nu=0}^{N_{0}}\omega_{\nu}(x,y)\partial_{t}E_{\nu}(t,d_{g}( x,y))+R_{N_{0}}(t,x,y) \tag{4.4}\]
where the leading term
\[\partial_{t}E_{0}=(2\pi)^{-n}\int_{\mathbb{R}^{n}}e^{id_{g}(x,y)\xi_{1}}\cos (t|\xi|)d\xi \tag{4.5}\]
and \(E_{\nu}\) satisfies \(2\partial_{t}E_{\nu}=tE_{\nu-1}\), and \(\partial_{t}E_{\nu}(t/R,r)=R^{n-2\nu}\partial_{t}E_{\nu}(t,Rr)\) for any \(R>0\). Here \(\omega_{\nu}\in C^{\infty}(M\times M)\), and \(\omega_{0}(x,x)=1,\forall x\in M\). For \(\nu\geq 1\), we have the following explicit formula (see e.g. [46, Section 1.2])
\[E_{\nu}=\nu!(2\pi)^{-n}\int_{0\leq s_{1}\leq\ldots\leq s_{\nu}\leq t}\int_{\mathbb{R}^{n}}e^{id_{g}(x,y)\xi_{1}}\frac{\sin(t-s_{\nu})|\xi|}{|\xi|}\frac{\sin(s_{\nu}-s_{\nu-1})|\xi|}{|\xi|}\cdots\frac{\sin(s_{2}-s_{1})|\xi|}{|\xi|}\frac{\sin s_{1}|\xi|}{|\xi|}\,d\xi\,ds_{1}\ldots ds_{\nu}.\]
So for \(\nu\geq 1\) we can obtain (see e.g. [46, Section 1.2])
\[\partial_{t}E_{\nu}=\tfrac{1}{2}tE_{\nu-1} =\int e^{id_{g}(x,y)\xi_{1}}a_{\nu}(t,|\xi|)d\xi\] \[=\sum_{\pm}\sum_{j=0}^{\nu-1}a_{j\nu}^{\pm}\int e^{id_{g}(x,y)\xi _{1}\pm it|\xi|}t^{j+1}|\xi|^{-2\nu+1+j}d\xi, \tag{4.6}\]
where \(a_{j\nu}^{\pm}\) are constants, and \(a_{\nu}\in C^{\infty}\). The remainder kernel \(R_{N_{0}}\in C^{N_{0}-n-3}\) satisfies
\[|\partial_{t,x,y}^{\alpha}R_{N_{0}}(t,x,y)|\lesssim|t|^{2N_{0}+2-n-|\alpha|}, \ \ |\alpha|\leq N_{0}-n-2. \tag{4.7}\]
Then we plug (4.4) into the first term of (4.3). We first handle the contribution of the leading term in (4.4). By (4.5), we can write
\[\frac{R}{2\pi}\iint\rho(t)\hat{m}(tR)\cos(t|\xi|)e^{id_{g}(x,y) \xi_{1}}dtd\xi=\int m(|\xi|/R)e^{id_{g}(x,y)\xi_{1}}d\xi+\] \[\frac{R}{2\pi}\iint(1-\rho(t))\hat{m}(tR)\cos(t|\xi|)e^{id_{g}(x,y )\xi_{1}}dtd\xi:=I_{1}+I_{2}.\]
Using the property (4.1) and integration by parts, we see that for any \(N\in\mathbb{N}\)
\[|I_{1}|\lesssim\begin{cases}R^{n}\big{(}Rd_{g}(x,y)\big{)}^{-n-\mu}\big{(}1+Rd _{g}(x,y)\big{)}^{-N},&n+\mu>0\\ R^{n}\log(2+(Rd_{g}(x,y))^{-1})\big{(}1+Rd_{g}(x,y)\big{)}^{-N},&n+\mu=0\\ R^{n}(1+Rd_{g}(x,y))^{-N},&n+\mu<0\end{cases} \tag{4.8}\]
and
\[|I_{2}| \lesssim\Big{|}R\iiint(1-\rho(t))(tR)^{-N}m^{(N)}(s)e^{-itRs}\cos(t| \xi|)e^{id_{g}(x,y)\xi_{1}}dsdtd\xi\Big{|}\] \[\lesssim R^{-N+1}\iint(1+||\xi|-R|s||)^{-N_{1}}(1+|s|)^{-N+\mu}dsd\xi\] \[\lesssim R^{-N}\int(1+|\xi|/R)^{-N+\mu}d\xi \tag{4.9}\] \[\lesssim R^{-N+n}.\]
Here we choose \(N_{1}>N>n+\mu\).
Similarly, we can handle the contributions of the remaining terms in (4.4). For each \(\nu\geq 1\), we can write
\[\frac{R}{2\pi}\int\rho(t)\hat{m}(tR)\partial_{t}E_{\nu}(t,d_{g}( x,y))dt=\frac{R}{2\pi}\int\hat{m}(tR)\partial_{t}E_{\nu}(t,d_{g}(x,y))dt-\] \[\frac{R}{2\pi}\int(1-\rho(t))\hat{m}(tR)\partial_{t}E_{\nu}(t,d_{ g}(x,y))dt:=I_{3}+I_{4}.\]
Using the scaling property \(\partial_{t}E_{\nu}(t/R,r)=R^{n-2\nu}\partial_{t}E_{\nu}(t,Rr)\) and the formula (4.6), we can integrate by parts to see that
\[|I_{3}| =(2\pi)^{-1}R^{n-2\nu}\Big{|}\int\hat{m}(t)\partial_{t}E_{\nu}(t,Rd_{g}(x,y))dt\Big{|}\] \[=(2\pi)^{-1}R^{n-2\nu}\Big{|}\sum_{\pm}\sum_{j=0}^{\nu-1}a_{j\nu} ^{\pm}\iint e^{iRd_{g}(x,y)\xi_{1}\pm it|\xi|}\hat{m}(t)t^{j+1}|\xi|^{-2\nu+1 +j}dtd\xi\Big{|}\] \[=R^{n-2\nu}\Big{|}\sum_{\pm}\sum_{j=0}^{\nu-1}i^{-j-1}a_{j\nu}^{ \pm}\int e^{iRd_{g}(x,y)\xi_{1}}m^{(j+1)}(\pm|\xi|)|\xi|^{-2\nu+1+j}d\xi\Big{|} \tag{4.10}\] \[\lesssim R^{n-2\nu}(1+Rd_{g}(x,y))^{-N}+\sum_{j=0}^{\nu-1}\Big{|} \int e^{iRd_{g}(x,y)\xi_{1}}m^{(j+1)}(|\xi|)|\xi|^{-2\nu+1+j}\varphi(|\xi|)d \xi\Big{|}\] (4.11) \[\lesssim\begin{cases}R^{n-2\nu}\big{(}Rd_{g}(x,y)\big{)}^{-n-\mu} \big{(}1+Rd_{g}(x,y)\big{)}^{-N},&n+\mu>0\\ R^{n-2\nu}\log(2+(Rd_{g}(x,y))^{-1})\big{(}1+Rd_{g}(x,y)\big{)}^{-N},&n+\mu=0 \\ R^{n-2\nu}(1+Rd_{g}(x,y))^{-N},&n+\mu<0\end{cases}\]
where \(\varphi\in C^{\infty}\) vanishes near the origin but equals one near infinity. The first term in (4.10) follows from the smoothness of \(a_{\nu}\) in (4.6) near \(\xi=0\) and integration by parts. Moreover,
\[|I_{4}| \lesssim\sum_{\pm}\sum_{j=0}^{\nu-1}\Big{|}R\iiint(1-\rho(t))(tR) ^{-N}m^{(N+j+1)}(s)e^{-itRs}e^{id_{g}(x,y)\xi_{1}\pm it|\xi|}\phi_{j\nu}(|\xi| )d\xi dsdt\Big{|}\] \[\lesssim\ R^{-N+1}\iint(1+||\xi|-R|s||)^{-N_{1}}(1+|s|)^{-N+\mu} dsd\xi\] \[\lesssim R^{-N}\int(1+|\xi|/R)^{-N+\mu}d\xi \tag{4.12}\] \[\lesssim R^{-N+n}.\]
The remainder term \(R_{N_{0}}\) in (4.4) is easy to handle. Indeed, for \(n+\mu<N\leq N_{0}-n-2\), using (4.7) we integrate by parts to obtain
\[\Big{|}\frac{R}{2\pi}\int\rho(t)\hat{m}(tR)R_{N_{0}}(t,x,y)dt\Big{|} \lesssim R^{-N+1}\Big{|}\iint\rho(t)t^{-N}R_{N_{0}}(t,x,y)m^{(N)}(s )e^{-itRs}dsdt\Big{|}\] \[\lesssim R^{-N+1}\int(1+R|s|)^{-N}(1+|s|)^{\mu-N}ds \tag{4.13}\] \[\lesssim R^{-N+1}.\]
To handle the second term in (4.3), we notice that for \(\lambda\geq 0\)
\[\Big{|}\frac{R}{2\pi}\int(1-\rho(t))\hat{m}(tR)\cos(t\lambda)dt\Big{|} \lesssim\Big{|}R\iint(1-\rho(t))(tR)^{-N}m^{(N)}(s)e^{-itRs}\cos(t \lambda)dtds\Big{|}\] \[\lesssim R^{-N+1}\int(1+|\lambda-R|s||)^{-N_{1}}(1+|s|)^{-N+\mu}ds\] \[\lesssim R^{-N}(1+\lambda/R)^{-N+\mu}.\]
Thus, we obtain
\[\Big{|}\frac{R}{2\pi}\int(1-\rho(t))\hat{m}(tR)\cos(tP)(x,y)dt\Big{|} \lesssim R^{-N}\sum_{j}(1+\lambda_{j}/R)^{-N+\mu}|e_{j}(x)e_{j}(y)|\] \[\lesssim R^{-N}\sum_{k}(1+k/R)^{-N+\mu}\sum_{\lambda_{j}\in[k,k+1 )}|e_{j}(x)e_{j}(y)|\] \[\lesssim R^{-N}\sum_{k}(1+k/R)^{-N+\mu}(1+k)^{n-1} \tag{4.14}\] \[\lesssim R^{-N+n}.\]
Here we used the \(L^{\infty}\) bound of Laplace eigenfunctions (see e.g. [45, Lemma 4.2.4])
\[\sum_{\lambda_{j}\in[k,k+1)}|e_{j}(x)e_{j}(y)|\lesssim\sup_{x\in M}\sum_{ \lambda_{j}\in[k,k+1)}|e_{j}(x)|^{2}\lesssim(1+k)^{n-1}.\]
Combining the bounds (4.8), (4.9), (4.11), (4.12), (4.13), (4.14), we complete the proof.
## 5. Interior Eigenfunction estimates
In this section, we prove the eigenfunction estimates in Theorem 1. We just need to prove Lemma 2, and then Theorem 1 follows from the \(L^{p}\) bounds in Lemma 1. To proceed, we shall use the following lemma.
**Lemma 5**.: _For any \(f\in H^{1/2}(\partial\Omega)\), let \(u\in H^{1}(\Omega)\) be the weak solution to the Dirichlet boundary value problem (1.1). Then there exists a constant \(C>0\) such that_
\[\|u\|_{L^{2}(\Omega)}\leq C\|f\|_{H^{-1/2}(\partial\Omega)}. \tag{5.1}\]
This lemma was proved in [14, Proposition 2.17]. It follows from the trace theorem and standard regularity estimates (see e.g. [28, Theorem 1.5.1.2, Theorem 1.5.1.3, Corollary 2.2.2.4, Corollary 2.2.2.6]).
**Lemma 6**.: _Let \(Q\in OPS^{0}\). Then \(Q\) is bounded on \(L^{p}\) for \(1<p<\infty\), i.e._
\[\|Qf\|_{L^{p}}\leq C\|f\|_{L^{p}}.\]
Here the \(L^{p}\) norm can be taken on \(\mathbb{R}^{n}\) or on compact manifolds. See e.g. [45, Theorem 3.1.6, Theorem 4.3.1] for the proofs.
**Proof of Theorem 1.** It suffices to consider two cases, \(p=\infty\) and \(p<\infty\).
**Case 1**: \(p=\infty\). In this case, since \(e_{\lambda}\) is harmonic in \(\Omega\), the maximum principle (see e.g. [24, Theorem 8.1]) gives
\[\|e_{\lambda}\|_{L^{\infty}(\Omega)}\lesssim\|e_{\lambda}\|_{L^{\infty}( \partial\Omega)} \tag{5.2}\]
Since \(V\in L^{\infty}(M)\), by Lemma 1 we have

\[\|e_{\lambda}\|_{L^{\infty}(\partial\Omega)}\lesssim\lambda^{\frac{n-1}{2}}\|e_{\lambda}\|_{L^{2}(\partial\Omega)},\]
which yields (1.2) for the case \(p=\infty\).
**Case 2**: \(p<\infty\). In this case, let us fix a Littlewood-Paley bump function \(\beta\in C^{\infty}_{0}((1/2,2))\) satisfying
\[\sum_{\ell=-\infty}^{\infty}\beta(2^{-\ell}s)=1,\quad s>0.\]
And define
\[\beta_{0}(s)=1-\sum_{\ell>0}\beta(2^{-\ell}|s|),\ \ \beta_{\ell}(s)=\beta(2^{- \ell}|s|),\ \ \text{for}\ \,\ell>0.\]
Let \(P=\sqrt{-\Delta_{g}}\). Then we have for \(\ell\geq 0\),
\[\|\beta_{\ell}(P)f\|_{L^{p}(\partial\Omega)}\lesssim\|f\|_{L^{p}(\partial\Omega)},\ \ 1\leq p\leq\infty. \tag{5.3}\]
The implicit constant is independent of \(\ell\). Indeed, by Lemma 4 we have the kernel estimates
\[|\beta_{\ell}(P)(x,y)|\lesssim 2^{n\ell}(1+2^{\ell}d_{g}(x,y))^{-N}.\]
Then (5.3) follows from Young's inequality.
Let \(T_{H}\) be the harmonic extension operator from \(\partial\Omega\) to \(\Omega\). Then by Lemma 5, we have
\[\|T_{H}(\beta_{\ell}(P)f)\|_{L^{2}(\Omega)}\lesssim\|\beta_{\ell}(P)f\|_{H^{- 1/2}(\partial\Omega)}\lesssim 2^{-\ell/2}\|f\|_{L^{2}(\partial\Omega)}. \tag{5.4}\]
From the maximum principle and (5.3), we have
\[\|T_{H}(\beta_{\ell}(P)f)\|_{L^{\infty}(\Omega)}\lesssim\|\beta_{\ell}(P)f\|_ {L^{\infty}(\partial\Omega)}\lesssim\|f\|_{L^{\infty}(\partial\Omega)}. \tag{5.5}\]
By (5.4), (5.5) and interpolation, we have the following \(L^{p}\) estimate of the frequency-localized harmonic extension operator
\[\|T_{H}(\beta_{\ell}(P)f)\|_{L^{p}(\Omega)}\lesssim 2^{-\frac{\ell}{p}}\|f\|_{L ^{p}(\partial\Omega)},\ 2\leq p\leq\infty. \tag{5.6}\]
Thus, summing over \(2^{\ell}\gtrsim\lambda\), we have
\[\|T_{H}(\sum_{2^{\ell}\gtrsim\lambda}\beta_{\ell}(P)e_{\lambda})\|_{L^{p}( \Omega)}\lesssim\sum_{2^{\ell}\gtrsim\lambda}2^{-\frac{\ell}{p}}\|e_{\lambda} \|_{L^{p}(\partial\Omega)}\lesssim\lambda^{-1/p}\|e_{\lambda}\|_{L^{p}( \partial\Omega)}. \tag{5.7}\]
So it remains to consider \(2^{\ell}\lesssim\lambda\). Let \(\tilde{\beta}\in C^{\infty}_{0}\) with \(\tilde{\beta}\equiv 1\) in a neighborhood of \((1/2,2)\) and define \(\tilde{\beta}_{\ell}(s)=\tilde{\beta}(2^{-\ell}|s|)\). Then by (5.6)
\[\|T_{H}(\beta_{\ell}(P)e_{\lambda})\|_{L^{p}(\Omega)}=\|T_{H}(\beta_{\ell}(P) \tilde{\beta}_{\ell}(P)e_{\lambda})\|_{L^{p}(\Omega)}\lesssim 2^{-\frac{\ell}{p}}\| \tilde{\beta}_{\ell}(P)e_{\lambda}\|_{L^{p}(\partial\Omega)}. \tag{5.8}\]
Moreover, for \(2\leq p<\infty\)
\[\begin{split}&\|\tilde{\beta}_{\ell}(P)e_{\lambda}\|_{L^{p}(\partial \Omega)}\\ &=(1+\lambda)^{-1}\|\tilde{\beta}_{\ell}(P)(1+\sqrt{-\Delta_{g}}+P_ {0}+V)e_{\lambda}\|_{L^{p}(\partial\Omega)}\\ &\lesssim(1+\lambda)^{-1}\|\tilde{\beta}_{\ell}(P)(1+\sqrt{- \Delta_{g}})e_{\lambda}\|_{L^{p}(\partial\Omega)}+(1+\lambda)^{-1}\|\tilde{ \beta}_{\ell}(P)(P_{0}+V)e_{\lambda}\|_{L^{p}(\partial\Omega)}\\ &\lesssim(1+\lambda)^{-1}2^{\ell}\|e_{\lambda}\|_{L^{p}( \partial\Omega)}+(1+\lambda)^{-1}\|e_{\lambda}\|_{L^{p}(\partial\Omega)}\end{split} \tag{5.9}\]
where we used (5.3), Lemma 6, and the fact that \(V\in L^{\infty}\). Using (5.8) and (5.9), we have
\[\|T_{H}(\sum_{2^{\ell}\lesssim\lambda}\beta_{\ell}(P)e_{\lambda})\|_{L^{p}( \Omega)}\lesssim\sum_{2^{\ell}\lesssim\lambda}(1+\lambda)^{-1}2^{\ell}2^{- \frac{\ell}{p}}\|e_{\lambda}\|_{L^{p}(\partial\Omega)}\lesssim\lambda^{-1/p} \|e_{\lambda}\|_{L^{p}(\partial\Omega)}. \tag{5.10}\]
So we obtain (1.7) in Lemma 2. Using the \(L^{p}\) bounds in Lemma 1, we complete the proof of Theorem 1.
## 6. Measure of nodal set
In this section, we prove the nodal set estimates in Theorem 2.
First, we establish some general results for Sobolev spaces on compact manifolds. These results will be used to prove the regularity of eigenfunctions. They are likely to be useful for future research, so we give detailed proofs for them.
Let \(s>0\) and \(1<p<\infty\). We can define the Sobolev norm on \(M\) by local coordinates
\[\|f\|_{W^{s,p}(M)}=\sum_{\nu}\|(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{R}^{n})}, \tag{6.1}\]
where \(f_{\nu}=(\phi_{\nu}f)\circ\kappa_{\nu}^{-1}\), and \(\{\phi_{\nu}\}\) is a partition of unity subordinate to a finite covering \(M=\cup\Omega_{\nu}\), and \(\kappa_{\nu}:\Omega_{\nu}\to\tilde{\Omega}_{\nu}\subset\mathbb{R}^{n}\) is the coordinate map. For simplicity, we sometimes do not distinguish between \(\Omega_{\nu}\) and \(\tilde{\Omega}_{\nu}\), \(f_{\nu}\) and \(\phi_{\nu}f\), since they are identical up to the coordinate map.
Moreover, we can also define another Sobolev norm by pseudo-differential operators
\[\|f\|_{H^{s,p}(M)}=\|(I-\Delta_{g})^{s/2}f\|_{L^{p}(M)}. \tag{6.2}\]
By [45, Theorem 4.3.1], we see that \((I-\Delta_{g})^{s/2}\) is an invertible pseudo-differential operator of order \(s\) with elliptic principal symbol \((\sum g^{jk}(x)\xi_{j}\xi_{k})^{s/2}\). Moreover, if we replace \((I-\Delta_{g})^{s/2}\) in (6.2) by any invertible pseudo-differential operator of order \(s\), then it still gives a comparable norm, by Lemma 6.
We prove that these two Sobolev norms are equivalent.
**Proposition 4**.: _For \(s>0\) and \(1<p<\infty\), we have_
\[\|f\|_{W^{s,p}(M)}\approx\|f\|_{H^{s,p}(M)}.\]
_The implicit constants are independent of \(f\)._
As a corollary, different partitions of unity and coordinate atlases in the definition (6.1) give comparable norms. When \(p=2\), Proposition 4 follows from the Plancherel theorem and the \(L^{2}\)-boundedness of zero order pseudo-differential operators, see e.g. [46, section 4.2]. The case \(p\neq 2\) is more complicated, and it is difficult to find good references, so we give a self-contained proof. We start with the following key lemma. Roughly speaking, this lemma establishes a "linear relation" between any two pseudo-differential operators of the same order.
**Lemma 7**.: _Let \(s>0\). Let \(V_{1},\ V,\ \Omega\) be open sets such that \(\bar{V}_{1}\subset V\subset\Omega\). Let \(P_{1},\ P\in OPS^{s}\) with symbols supported in \(V_{1},\ V\) respectively. If the principal symbol \(\bar{p}(x,\xi)\) of \(P\) is elliptic on \(\bar{V}_{1}\), i.e., for any \(x\in\bar{V}_{1},\)_
\[\bar{p}(x,\xi)\neq 0,\ \forall\xi\neq 0,\]
_then there is a \(Q\in OPS^{0}\) with symbol supported in \(V_{1}\) such that_
\[P_{1}-QP\in OPS^{0}. \tag{6.3}\]
Proof.: Let \(p_{1}(x,\xi)\) be the symbol of \(P_{1}\) on \(\Omega\). Since \(\bar{p}(x,\xi)\) is elliptic on the support of \(p_{1}(x,\xi)\), we have
\[\frac{\varphi(\xi)p_{1}(x,\xi)}{\bar{p}(x,\xi)}\in S^{0} \tag{6.4}\]
where \(\varphi\in C^{\infty}\) vanishes near the origin but equals one near infinity. Denote the associated zero order pseudo-differential operator by \(Q_{0}\). Let \(R_{-1}=P_{1}-Q_{0}P\). Then by the Kohn-Nirenberg theorem (see e.g. [45, Theorem 3.1.1]), we have \(R_{-1}\in OPS^{s-1}\). The symbol of \(R_{-1}\) is supported in \(V_{1}\). If \(s\leq 1\), then we are done by setting \(Q=Q_{0}\), since \(P_{1}-Q_{0}P\in OPS^{s-1}\subset OPS^{0}\).
Next, it remains to consider \(s>1\). Let \(k=\lceil s\rceil\geq 2\). We need to construct \(Q_{-i}\in OPS^{-i},\ R_{-i-1}\in OPS^{s-i-1}\) recursively for \(1\leq i\leq k-1\). If \(r_{i}(x,\xi)\) is the symbol of \(R_{-i}\), and \(Q_{-i}\) has the symbol
\[\frac{\varphi(\xi)r_{i}(x,\xi)}{\bar{p}(x,\xi)}\in S^{-i}, \tag{6.5}\]
then using the Kohn-Nirenberg theorem we have \(R_{-i-1}=R_{-i}-Q_{-i}P\in OPS^{s-i-1}\). The symbol of \(R_{-i-1}\) is supported in \(V_{1}\). Let
\[Q=Q_{0}+\sum_{i=1}^{k-1}Q_{-i}.\]
The symbol of \(Q\) is supported in \(V_{1}\). Then \(P_{1}-QP=R_{-k}\in OPS^{s-k}\subset OPS^{0}\).
Proof of Proposition 4.: The basic idea is to verify these two equivalences
\[\|(I-\Delta_{g})^{s/2}f\|_{L^{p}(M)}\approx\sum_{\nu}\|(I-\Delta_{g})^{s/2}f_{ \nu}\|_{L^{p}(M)}\approx\sum_{\nu}\|(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{ R}^{n})}. \tag{6.6}\]
The first equivalence is straightforward. Indeed, the relation \(\lesssim\) follows from the Minkowski inequality. For the other direction, we use Lemma 6 to see that
\[\begin{split}\|(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(M)}& =\|(I-\Delta_{g})^{s/2}M_{\phi_{\nu}}(I-\Delta_{g})^{-s/2}((I-\Delta _{g})^{s/2}f)\|_{L^{p}(M)}\\ &\lesssim\|(I-\Delta_{g})^{s/2}f\|_{L^{p}(M)},\end{split} \tag{6.7}\]
where \(M_{\phi_{\nu}}\) stands for the operator of multiplication by \(\phi_{\nu}(x)\). Summing (6.7) over \(\nu\), we obtain the first equivalence in (6.6).
To prove the second equivalence in (6.6), it suffices to show that for each \(\nu\)
\[\|(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(M)}\approx\|(I-\Delta)^{s/2}f_{\nu}\|_ {L^{p}(\mathbb{R}^{n})}. \tag{6.8}\]
For each \(\Omega_{\nu},\ \phi_{\nu}\in C^{\infty}_{0}(\Omega_{\nu})\) in (6.1), we can find open subsets \(V_{\nu},\ U_{\nu},\ W_{\nu}\) of \(\Omega_{\nu}\), and cutoff functions \(\psi_{\nu}\in C^{\infty}_{0}(V_{\nu}),\ \psi_{\nu 1}\in C^{\infty}_{0}(U_{\nu}),\ \psi_{\nu 2} \in C^{\infty}_{0}(W_{\nu}),\ \eta_{\nu}\in C^{\infty}_{0}(\Omega_{\nu})\) such that
\[\text{supp}\ \phi_{\nu}\subset\subset U_{\nu}\subset V_{\nu}\subset\subset W _{\nu}\]
and \(\psi_{\nu 1}\equiv 1\) on a neighborhood of \(\operatorname{supp}\phi_{\nu}\), \(\psi_{\nu}\equiv 1\) on \(\bar{U}_{\nu}\), \(\psi_{\nu 2}\equiv 1\) on \(\bar{V}_{\nu}\), and \(\eta_{\nu}\equiv 1\) on \(\operatorname{supp}\phi_{\nu}\).
Let \(P_{\nu}=\psi_{\nu}(I-\Delta)^{s/2},\ P_{\nu 1}=\psi_{\nu 1}(I-\Delta_{g})^{s/2}M_{ \eta_{\nu}}\). We see that \(M_{\eta_{\nu}}\in OPS^{0}\), and \(P_{\nu},P_{\nu 1}\in OPS^{s}\). Note that the principal symbol of \(P_{\nu}\) is \(\psi_{\nu}(x)|\xi|^{s}\), which is elliptic on \(\bar{U}_{\nu}\). By Lemma 7, we can find \(Q_{\nu 1}\in OPS^{0}\) supported in \(U_{\nu}\) such that
\[P_{\nu 1}-Q_{\nu 1}P_{\nu}\in OPS^{0}.\]
Then by Lemma 6 we obtain the local estimate
\[\|P_{\nu 1}(f_{\nu})\|_{L^{p}(\Omega_{\nu})}=\|(P_{\nu 1}-Q_{\nu 1}P_{\nu})(f_{ \nu})+Q_{\nu 1}P_{\nu}(f_{\nu})\|_{L^{p}(\Omega_{\nu})}\lesssim\|f_{\nu}\|_{L^{p}( \Omega_{\nu})}+\|P_{\nu}f_{\nu}\|_{L^{p}(\Omega_{\nu})}. \tag{6.9}\]
Moreover, if \(P_{\nu 2}=\psi_{\nu 2}(I-\Delta_{g})^{s/2}M_{\eta_{\nu}}\), then \(P_{\nu 2}\) has the principal symbol \(\psi_{\nu 2}(x)(\sum g^{jk}(x)\xi_{j}\xi_{k})^{s/2}\), which is elliptic on \(\bar{V}_{\nu}\). Similarly, by applying Lemma 7 to \(P_{\nu}\) and \(P_{\nu 2}\), we obtain the local estimate
\[\|P_{\nu}(f_{\nu})\|_{L^{p}(\Omega_{\nu})}\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{ \nu})}+\|P_{\nu 2}f_{\nu}\|_{L^{p}(\Omega_{\nu})}. \tag{6.10}\]
Next, we handle the nonlocal part. We write
\[(1-\psi_{\nu})(I-\Delta)^{s/2}f_{\nu}=(1-\psi_{\nu})(I-\Delta)^{s/2}(\phi_{\nu }\eta_{\nu}f)=(1-\psi_{\nu})(I-\Delta)^{s/2}M_{\phi_{\nu}}(\eta_{\nu}f). \tag{6.11}\]
Since \(\operatorname{dist}(\operatorname{supp}\ (1-\psi_{\nu}),\ \operatorname{supp}\ \phi_{\nu})= \delta_{\nu}>0\), using integration by parts, we see that the kernel of \((1-\psi_{\nu})(I-\Delta)^{s/2}M_{\phi_{\nu}}\) satisfies
\[\Big{|}\int_{\mathbb{R}^{n}}(1-\psi_{\nu}(x))e^{i(x-y)\cdot\xi}\phi_{\nu}(y)( 1+|\xi|^{2})^{s/2}d\xi\Big{|}\lesssim(1+|x-y|)^{-N},\ \forall N.\]
By Young's inequality, we get
\[\|(1-\psi_{\nu})(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{R}^{n})}\lesssim\|\eta_{\nu}f\|_{L^{p}(\mathbb{R}^{n})}=\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}. \tag{6.12}\]
Similarly, using the fact that the kernel of pseudo-differential operators on compact manifolds is smooth away from diagonal, we have
\[\|(1-\psi_{\nu 1})(I-\Delta_{g})^{s/2}(f_{\nu})\|_{L^{p}(M)}=\|(1-\psi_{\nu 1})(I- \Delta_{g})^{s/2}(\phi_{\nu}\eta_{\nu}f)\|_{L^{p}(M)}\lesssim\|\eta_{\nu}f\|_ {L^{p}(M)}=\|f_{\nu}\|_{L^{p}(\Omega_{\nu})} \tag{6.13}\]
and
\[\|(1-\psi_{\nu 2})(I-\Delta_{g})^{s/2}(f_{\nu})\|_{L^{p}(M)}\lesssim\|f_{\nu}\|_{L ^{p}(\Omega_{\nu})}. \tag{6.14}\]
Combining (6.9) with the nonlocal estimates (6.12) and (6.13), we obtain
\[\begin{split}\|(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(M)}& \lesssim\|(1-\psi_{\nu 1})(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(M)}+\|\psi_{ \nu 1}(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(M)}\\ &\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|\psi_{\nu 1}(I- \Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(\Omega_{\nu})}\\ &=\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|P_{\nu 1}f_{\nu}\|_{L^{p}( \Omega_{\nu})}\\ &\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|(I-\Delta)^{s/2}f_{ \nu}-(1-\psi_{\nu})(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\Omega_{\nu})}\\ &\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|(I-\Delta)^{s/2}f_{ \nu}\|_{L^{p}(\mathbb{R}^{n})}\\ &\lesssim\|(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{R}^{n})}. \end{split} \tag{6.15}\]
Here in the last step we apply Lemma 6 to \((I-\Delta)^{-s/2}\in OPS^{0}\).
Similarly, combining (6.10) with the nonlocal estimates (6.12) and (6.14), we have
\[\begin{split}\|(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{R}^{n})}& \lesssim\|(1-\psi_{\nu})(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{R}^{n})}+\| \psi_{\nu}(I-\Delta)^{s/2}f_{\nu}\|_{L^{p}(\mathbb{R}^{n})}\\ &\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|\psi_{\nu}(I-\Delta) ^{s/2}f_{\nu}\|_{L^{p}(\Omega_{\nu})}\\ &=\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|P_{\nu}f_{\nu}\|_{L^{p}( \Omega_{\nu})}\\ &\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|P_{\nu 2}f_{\nu}\|_{L^{p} (\Omega_{\nu})}\\ &=\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|(I-\Delta_{g})^{s/2}f_{ \nu}-(1-\psi_{\nu 2})(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(\Omega_{\nu})}\\ &\lesssim\|f_{\nu}\|_{L^{p}(\Omega_{\nu})}+\|(I-\Delta_{g})^{s/2 }f_{\nu}\|_{L^{p}(M)}\\ &\lesssim\|(I-\Delta_{g})^{s/2}f_{\nu}\|_{L^{p}(M)}.\end{split} \tag{6.16}\]
In the last step we used Lemma 6 for \((I-\Delta_{g})^{-s/2}\in OPS^{0}\). So we finish the proof of (6.8). Thus, the proof of Proposition 4 is complete.
Let \([\mathcal{D},V]=\mathcal{D}V-V\mathcal{D}\). We need the following commutator estimate.
**Lemma 8**.: _Let \(1<p<\infty\). Given \(P\in OPS^{1}\),_
\[\|[P,f]u\|_{L^{p}}\leq C\|f\|_{\operatorname{Lip}^{1}}\|u\|_{L^{p}}.\]
_Here \(\|f\|_{\operatorname{Lip}^{1}}\) is the Lipschitz norm of \(f\)._
Here the \(L^{p}\) norm can be taken on \(\mathbb{R}^{n}\) or on compact manifolds. See Proposition 1.3 in Taylor [57]. The result was proven by Calderón [7] for classical first-order pseudo-differential operators and by Coifman-Meyer [11] for \(OPS^{1}\).
**Lemma 9**.: _If \(V\in Lip^{1}(M)\), then \(e_{\lambda}\in C^{1,\alpha}(M)\), for any \(0<\alpha<1\)._
Proof.: By Sobolev imbedding (see e.g. [19]), we only need to show \(\|e_{\lambda}\|_{W^{2,p}(M)}<\infty\) for any \(1<p<\infty\). Indeed, using the commutator estimate in Lemma 8 and the equation \((\mathcal{D}+V)e_{\lambda}=\lambda e_{\lambda}\), we have
\[\begin{split}\|\mathcal{D}(Ve_{\lambda})\|_{L^{p}(M)}& \leq\|V(\mathcal{D}+V)e_{\lambda}\|_{L^{p}(M)}+\|V^{2}e_{\lambda }\|_{L^{p}(M)}+\|[\mathcal{D},V]e_{\lambda}\|_{L^{p}(M)}\\ &\lesssim\lambda\|V\|_{L^{\infty}}\|e_{\lambda}\|_{L^{p}(M)}+\|V \|_{L^{\infty}}^{2}\|e_{\lambda}\|_{L^{p}(M)}+\|V\|_{\operatorname{Lip}^{1}} \|e_{\lambda}\|_{L^{p}(M)}\\ &\lesssim(1+\lambda)\|e_{\lambda}\|_{L^{p}(M)}.\end{split}\]
So by Proposition 4, we obtain
\[\begin{split}\|e_{\lambda}\|_{W^{2,p}(M)}& \approx\|(1+\mathcal{D})^{2}e_{\lambda}\|_{L^{p}(M)}\\ &\lesssim\|(1+\mathcal{D})(1+\mathcal{D}+V)e_{\lambda}\|_{L^{p}(M) }+\|\mathcal{D}(Ve_{\lambda})\|_{L^{p}(M)}+\|V\|_{L^{\infty}}\|e_{\lambda}\|_ {L^{p}(M)}\\ &\lesssim(1+\lambda)(\|(1+\mathcal{D})e_{\lambda}\|_{L^{p}(M)}+\| e_{\lambda}\|_{L^{p}(M)})\\ &\leq(1+\lambda)(\|(1+\mathcal{D}+V)e_{\lambda}\|_{L^{p}(M)}+\|V \|_{L^{\infty}}\|e_{\lambda}\|_{L^{p}(M)}+\|e_{\lambda}\|_{L^{p}(M)})\\ &\lesssim(1+\lambda)^{2}\|e_{\lambda}\|_{L^{p}(M)}.\end{split}\]
Next, we prove the nodal set estimates. Let
\[N_{\lambda}=\{x\in M:e_{\lambda}(x)=0\},\quad D_{+}=\{x\in M:e_{\lambda}(x)>0\},\quad D_{-}=\{x\in M:e_{\lambda}(x)<0\}.\]
We have \(\partial D_{\pm}=N_{\lambda}\). We first express the manifold \(M\) as an (essentially) disjoint union
\[M=\bigcup_{j\geq 1}D_{j,+}\cup\bigcup_{j\geq 1}D_{j,-}\cup N_{\lambda}\]
where \(D_{j,+}\) and \(D_{j,-}\) are the positive and negative nodal domains of \(e_{\lambda}\), i.e., the connected components of the sets \(D_{+}\) and \(D_{-}\). For simplicity, we assume that there are only two nodal domains \(D_{+}\) and \(D_{-}\). Since \(\nabla e_{\lambda}\) is continuous by Lemma 9 and we are assuming that zero is a regular value of \(e_{\lambda}\), we can apply the Gauss-Green theorem on each nodal domain \(D_{\pm}\) with boundary \(\partial D_{\pm}\). We have
\[\int_{D_{+}}div(f\nabla e_{\lambda})dV_{g}=\int_{N_{\lambda}} \langle f\nabla e_{\lambda},\nu_{-}\rangle dS=-\int_{N_{\lambda}}f|\nabla e_{ \lambda}|dS\] \[\int_{D_{-}}div(f\nabla e_{\lambda})dV_{g}=\int_{N_{\lambda}} \langle f\nabla e_{\lambda},\nu_{+}\rangle dS=\int_{N_{\lambda}}f|\nabla e_{ \lambda}|dS.\]
Subtracting these two identities, we obtain

\[2\int_{N_{\lambda}}f|\nabla e_{\lambda}|=\int_{D_{-}}div(f\nabla e_{\lambda})-\int_{D_{+}}div(f\nabla e_{\lambda}). \tag{6.17}\]
Note that by Cauchy-Schwarz
\[\int_{N_{\lambda}}|\nabla e_{\lambda}|\lesssim(\int_{N_{\lambda}}|\nabla e_{ \lambda}|^{2})^{\frac{1}{2}}|N_{\lambda}|^{\frac{1}{2}}.\]
So to estimate the lower bound of \(|N_{\lambda}|\), it suffices to estimate \(\int_{N_{\lambda}}|\nabla e_{\lambda}|\) and \(\int_{N_{\lambda}}|\nabla e_{\lambda}|^{2}\).
**Lemma 10**.: _If \(V\in Lip^{1}(M)\), then_
\[\int_{N_{\lambda}}|\nabla e_{\lambda}|\geq\frac{\lambda^{2}}{4}\|e_{\lambda} \|_{L^{1}(M)}.\]
**Lemma 11**.: _If \(V\in Lip^{1}(M)\), then_
\[\int_{N_{\lambda}}|\nabla e_{\lambda}|^{2}\lesssim\lambda^{3}\|e_{\lambda}\| _{L^{2}(M)}.\]
Using these two lemmas and the eigenfunction estimate (1.6), we get the lower bound on the nodal set in Theorem 2:
\[|N_{\lambda}|\gtrsim\lambda^{\frac{3-n}{2}}.\]
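Explicitly, combining the Cauchy-Schwarz bound above with Lemmas 10 and 11 and with (1.6), and assuming the usual normalization \(\|e_{\lambda}\|_{L^{2}(M)}=1\):

\[|N_{\lambda}|\geq\frac{\big{(}\int_{N_{\lambda}}|\nabla e_{\lambda}|\big{)}^{2}}{\int_{N_{\lambda}}|\nabla e_{\lambda}|^{2}}\gtrsim\frac{\lambda^{4}\|e_{\lambda}\|_{L^{1}(M)}^{2}}{\lambda^{3}\|e_{\lambda}\|_{L^{2}(M)}}\gtrsim\lambda\big{(}\lambda^{-\frac{n-1}{4}}\big{)}^{2}=\lambda^{\frac{3-n}{2}},\]

where (1.6), i.e. \(\|e_{\lambda}\|_{L^{1}(M)}\gtrsim\lambda^{-\frac{n-1}{4}}\|e_{\lambda}\|_{L^{2}(M)}\), was used in the last step.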
### Proof of Lemma 10
We set \(f=1\) in (6.17). So
\[2\int_{N_{\lambda}}|\nabla e_{\lambda}|=\int_{D_{-}}\Delta_{g}e_{\lambda}- \int_{D_{+}}\Delta_{g}e_{\lambda}.\]
Since \(\sqrt{-\Delta_{g}}=\mathcal{D}-P_{0}\), we have
\[-\Delta_{g}=(\mathcal{D}+V)^{2}-(\mathcal{D}V-V\mathcal{D})-2V(\mathcal{D}+V)+ V^{2}-2P_{0}(\mathcal{D}+V)+2P_{0}V+Q_{0},\]
where \(Q_{0}=P_{0}\mathcal{D}-\mathcal{D}P_{0}+P_{0}^{2}\in OPS^{0}\). Thus,
\[2\int_{N_{\lambda}}|\nabla e_{\lambda}| =\int_{D_{+}}-\int_{D_{-}}(\lambda^{2}e_{\lambda}-[\mathcal{D},V]e _{\lambda}-2\lambda Ve_{\lambda}+V^{2}e_{\lambda}-2\lambda P_{0}e_{\lambda}+2P_ {0}Ve_{\lambda}+Q_{0}e_{\lambda})\] \[\geq\lambda^{2}\|e_{\lambda}\|_{L^{1}(M)}-\|[\mathcal{D},V]e_{ \lambda}\|_{L^{1}(M)}-2\lambda\|Ve_{\lambda}\|_{L^{1}(M)}-\|V^{2}e_{\lambda}\| _{L^{1}(M)}\] \[\qquad\qquad\qquad-2\lambda\|P_{0}e_{\lambda}\|_{L^{1}(M)}-2\|P_ {0}Ve_{\lambda}\|_{L^{1}(M)}-\|Q_{0}e_{\lambda}\|_{L^{1}(M)}.\]
By Holder's inequality and (1.6), we have
\[\|e_{\lambda}\|_{L^{1+\varepsilon}(M)}\lesssim\lambda^{\frac{(n-1)\varepsilon }{2(1+\varepsilon)}}\|e_{\lambda}\|_{L^{1}(M)},\ 0<\varepsilon<1.\]
Combining this estimate with Lemma 8, we have
\[\|[\mathcal{D},V]e_{\lambda}\|_{L^{1}(M)}\lesssim\|[\mathcal{D},V]e_{\lambda} \|_{L^{1+\varepsilon}(M)}\lesssim\|V\|_{\mathrm{Lip}^{1}}\|e_{\lambda}\|_{L^ {1+\varepsilon}(M)}\lesssim\lambda\|V\|_{\mathrm{Lip}^{1}}\|e_{\lambda}\|_{L ^{1}(M)},\]
if \(\varepsilon>0\) is small enough. Moreover, if \(\varepsilon>0\) is small enough, then by Lemma 6 we have
\[\lambda\|Ve_{\lambda}\|_{L^{1}(M)}\lesssim\lambda\|V\|_{L^{\infty}}\|e_{\lambda }\|_{L^{1}(M)}\]
\[\|V^{2}e_{\lambda}\|_{L^{1}(M)}\lesssim\|V\|_{L^{\infty}}^{2}\|e_{\lambda}\|_{ L^{1}(M)}\]
\[\lambda\|P_{0}e_{\lambda}\|_{L^{1}(M)}\lesssim\lambda\|P_{0}e_{\lambda}\|_{L^ {1+\varepsilon}(M)}\lesssim\lambda\|e_{\lambda}\|_{L^{1+\varepsilon}(M)} \lesssim\lambda^{\frac{3}{2}}\|e_{\lambda}\|_{L^{1}(M)}\]
\[\|P_{0}Ve_{\lambda}\|_{L^{1}(M)}\lesssim\|P_{0}Ve_{\lambda}\|_{L^{1+ \varepsilon}(M)}\lesssim\|Ve_{\lambda}\|_{L^{1+\varepsilon}(M)}\lesssim \lambda\|V\|_{L^{\infty}}\|e_{\lambda}\|_{L^{1}(M)}\]
\[\|Q_{0}e_{\lambda}\|_{L^{1}(M)}\lesssim\|Q_{0}e_{\lambda}\|_{L^{1+\varepsilon} (M)}\lesssim\|e_{\lambda}\|_{L^{1+\varepsilon}(M)}\lesssim\lambda\|e_{\lambda }\|_{L^{1}(M)}.\]
So we finish the proof of Lemma 10.
### Proof of Lemma 11
We set \(f=\sqrt{1+|\nabla e_{\lambda}|^{2}}\) in (6.17). Then
\[2\int_{N_{\lambda}}\sqrt{1+|\nabla e_{\lambda}|^{2}}\ |\nabla e_{\lambda}| =\int_{D_{-}}div(\sqrt{1+|\nabla e_{\lambda}|^{2}}\nabla e_{ \lambda})-\int_{D_{+}}div(\sqrt{1+|\nabla e_{\lambda}|^{2}}\nabla e_{\lambda})\] \[\lesssim\int_{M}|div(\sqrt{1+|\nabla e_{\lambda}|^{2}}\nabla e_{ \lambda})|\] \[\lesssim\int_{M}\sqrt{1+|\nabla e_{\lambda}|^{2}}\ |\nabla^{2}e_{\lambda}|\] \[\lesssim(\|e_{\lambda}\|_{L^{2}(M)}+\|\nabla e_{\lambda}\|_{L^{2 }(M)})\|\nabla^{2}e_{\lambda}\|_{L^{2}(M)}\] \[\lesssim\lambda^{3}\|e_{\lambda}\|_{L^{2}(M)}.\]
Here we used Sobolev estimates of the eigenfunctions in the last step. Indeed, we have
\[\|\nabla e_{\lambda}\|_{L^{2}(M)} \lesssim\|\mathcal{D}e_{\lambda}\|_{L^{2}(M)}+\|e_{\lambda}\|_{L^ {2}(M)}\] \[\leq\|(\mathcal{D}+V)e_{\lambda}\|_{L^{2}(M)}+\|Ve_{\lambda}\|_{L^ {2}(M)}+\|e_{\lambda}\|_{L^{2}(M)}\] \[\lesssim\lambda\|e_{\lambda}\|_{L^{2}(M)}+\|V\|_{L^{\infty}}\|e_{ \lambda}\|_{L^{2}(M)}\] \[\lesssim\lambda\|e_{\lambda}\|_{L^{2}(M)},\]
and similarly, we may exploit Lemma 8 to obtain
\[\|\nabla^{2}e_{\lambda}\|_{L^{2}(M)} \lesssim\|\mathcal{D}^{2}e_{\lambda}\|_{L^{2}(M)}+\|\mathcal{D}e_{ \lambda}\|_{L^{2}(M)}+\|e_{\lambda}\|_{L^{2}(M)}\] \[\lesssim\|(\mathcal{D}+V)^{2}e_{\lambda}\|_{L^{2}(M)}+\|[ \mathcal{D},V]e_{\lambda}\|_{L^{2}(M)}+\|V(\mathcal{D}+V)e_{\lambda}\|_{L^{2}( M)}+\] \[\qquad\|V^{2}e_{\lambda}\|_{L^{2}(M)}+\lambda\|e_{\lambda}\|_{L^ {2}(M)}\] \[\lesssim\lambda^{2}\|e_{\lambda}\|_{L^{2}(M)}+\lambda\|V\|_{\text {Lip}^{1}}\|e_{\lambda}\|_{L^{2}(M)}+\|V\|_{L^{\infty}}^{2}\|e_{\lambda}\|_{L^{ 2}(M)}\] \[\lesssim\lambda^{2}\|e_{\lambda}\|_{L^{2}(M)}.\]
So Lemma 11 is proved.
## Acknowledgements
Y.S. is partially supported by the NSF DMS Grant 2154219. C.Z. is partially supported by a startup grant from Tsinghua University.
|
2309.05917 | Public key cryptosystems based on Iterated Functions Systems | Let $f=(f_0,f_1,\dots, f_{\nu-1})$ be a collection of one-to-one functions
from some space~$X$ into itself such that the sets $f_j(X)$ are disjoint. If
$w=w_1w_2\cdots w_k$ is a word on the alphabet $\{0,1,\dots,\nu-1\}$, let
$\Phi_{f,w} = f_{w_1}\circ f_{w_2}\circ\cdots\circ f_{w_k}$. Given a
function~$F$ of which we know that it can be written as $\Phi_{f,w}$, it is
easy to recover~$w$. We give some examples of this situation where everything
can be scrambled up by using some private key to get a new system
$g=(g_1,g_2,\dots,g_{\nu-1})$ on another set~$Y$ in such a way that the images
of the $g_j$ are no longer disjoint. We define a cryptosystem whose public key
is~$g$. The message to be encrypted is a word~$w$ and the associated cryptogram
is $\Phi_{g,w}$. The private key allows to recover $\Phi_{f,w}$ from
$\Phi_{g,w}$. | Jacques Peyriere, Fengxia Liu, Zhiyong Zheng, Zixian Gong | 2023-09-12T02:12:39Z | http://arxiv.org/abs/2309.05917v1 | # Public key cryptosystems based on Iterated Functions Systems
###### Abstract
Let \(f=(f_{0},f_{1},\ldots,f_{\nu-1})\) be a collection of one-to-one functions from some space \(X\) into itself such that the sets \(f_{j}(X)\) are disjoint. If \(w=w_{1}w_{2}\cdots w_{k}\) is a word on the alphabet \(\{0,1,\ldots,\nu-1\}\), let \(\Phi_{f,w}=f_{w_{1}}\circ f_{w_{2}}\circ\cdots f_{w_{k}}\). Given a function \(F\) of which we know that it can be written as \(\Phi_{f,w}\), it is easy to recover \(w\). We give some examples of this situation where everything can be scrambled up by using some private key to get a new system \(g=(g_{1},g_{2},\ldots,g_{\nu-1})\) on another set \(Y\) in such a way that the images of the \(g_{j}\) are no longer disjoint. We define a cryptosystem whose public key is \(g\). The message to be encrypted is a word \(w\) and the associated cryptogram is \(\Phi_{g,w}\). The private key allows to recover \(\Phi_{f,w}\) from \(\Phi_{g,w}\).
**Keywords:** public key cryptography, message authentication, iterated functions system, IFS.
Footnote 0: _*_ Corresponding author (Fengxia Liu).
E-mail addresses: [email protected]
## 1 Introduction
Let \(f=(f_{0},\,f_{1},\ldots,\,f_{\nu-1})\) be a collection of mappings from a set \(X\) into itself. If \(w=w_{1}w_{2}\cdots w_{n}\) is a word over the alphabet \(\{0,1,\cdots,\nu-1\}\) we set
\[\Phi_{f,w}=f_{w_{1}}\circ f_{w_{2}}\circ\cdots\circ f_{w_{n}}. \tag{1}\]
Such a system is called an IFS (Iterated Functions System).
IFS were introduced by Hutchinson [1] to rationalize several pre-existing constructions of fractal sets and measures having some self-similarity properties. This formalism has been adopted by the fractalists' community and gave rise to an abundant literature. Subsequently this formalism was further developed by Mauldin and Williams [2] through the so-called "graph directed constructions".
**Definition 1**: _The system \(f\) is said to have the separation property relative to \(\mathcal{S}=(S;S_{0},S_{1},\ldots,S_{\nu-1})\) if_
* \(S\subset X\)_,_ \(S\neq\emptyset\)_, and_ \(S_{j}\subset S\) _for all_ \(j\)_,_
* _for all_ \(j\)_,_ \(f_{j}\) _defines a bijection from_ \(S\) _onto_ \(S_{j}\)_,_
* \(S_{i}\cap S_{j}=\emptyset\) _for_ \(0\leq i<j<\nu\)_._
Several separation conditions have been defined for IFS, aimed at computing or estimating the Hausdorff dimension of the limit set of the said IFS. This is not our present goal, and the separation condition we consider does not coincide with the previous ones, although the closest one is the so-called "strong separation condition". Indeed the system \((x/2,\ 1-x/2)\) satisfies our condition on the interval \([0,1)\) and also on the interval \((0,1]\), while on \([0,1]\) it satisfies the "open set separation condition", which is important in fractal geometry. But for our purpose, it is our definition which is meaningful. In the sequel, when we say that a system satisfies (or has, or fulfills) a separation condition, this means that there exists an \(\mathcal{S}=(S;S_{0},S_{1},\ldots,S_{\nu-1})\) as in Definition 1.
When we have an IFS, we can consider the following two problems, which are closely related and which are easily solved when this IFS satisfies a separation property.
**Problem 1**. "Given an integer \(n\) and a point \(y\in S\), can we find \(w\in\{0,1,\ldots,\nu-1\}^{n}\) such that there exists \(x\in S\) such that \(y=\Phi_{f,w}(x)\)?".
Indeed, here is the solution. If \(y\notin\bigcup f_{j}(S)\) there is no solution. Otherwise, if \(y\in f_{j}(S)\), then, if a solution exists, we must have \(w_{1}=j\). Then we are confronted with the same problem: to find \(w^{\prime}\) of length \(n-1\) such that \(\Phi_{f,w^{\prime}}(x)=f_{j}^{-1}(y)\), where \(f_{j}^{-1}\) is the reciprocal of the restriction of \(f_{j}\) to \(S\). Here is a description of this algorithm.
**Algorithm 1**:

_\(w\leftarrow\) empty word_

_while \(|w|<n\) repeat {_

_if \(y\notin S_{0}\cup S_{1}\cup\cdots\cup S_{\nu-1}\) then return "failed"_

_if \(y\in S_{j}\) then {_

_\(w\leftarrow wj\)_

_\(y\leftarrow f_{j}^{-1}(y)\)_

_} }_

_return \(w\)_
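For concreteness, here is a direct transcription of Algorithm 1 in Python; the interface through membership tests and inverse branches is our illustrative choice, not part of the pseudocode above.

```python
from fractions import Fraction

def algorithm1(y, n, branches):
    """Recover the word w of length n such that y = Phi_{f,w}(x), x in S.

    branches[j] is a pair (in_Sj, fj_inv): a membership test for S_j and
    the inverse of the restriction of f_j to S.
    """
    w = []
    while len(w) < n:
        for j, (in_Sj, fj_inv) in enumerate(branches):
            if in_Sj(y):
                w.append(j)    # w <- wj
                y = fj_inv(y)  # y <- f_j^{-1}(y)
                break
        else:
            return None        # "failed"
    return w

# Instantiation with f_0(x) = x/3, f_1(x) = (x+2)/3 on S = [0,1], so that
# S_0 = [0,1/3] and S_1 = [2/3,1]:
branches = [
    (lambda y: 0 <= y <= Fraction(1, 3), lambda y: 3 * y),
    (lambda y: Fraction(2, 3) <= y <= 1, lambda y: 3 * y - 2),
]
# Phi_{f,101}(0) = f_1(f_0(f_1(0))) = 20/27:
assert algorithm1(Fraction(20, 27), 3, branches) == [1, 0, 1]
```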
The second problem, solved by the analogous Algorithm 2, differs from the first in that the length of the unknown word \(w\) is not specified.
To get a cryptosystem from an IFS \(f\) with the separation condition, the strategy is to construct by some scrambling method a set of functions \(g=(g_{0},\,g_{1},\ldots,\,g_{\nu-1})\) related to \(f\) but not obviously fulfilling a separation condition. This strategy will be developed in the next sections.
The graph directed constructions give rise to more complex systems of iteration of functions. In the same way as the ones described below they can give rise to cryptosystems, but we will not elaborate on this subject.
## 2 First protocol: Affine cryptosystems
### Preliminaries and notation
Let \(p\) be a prime number. Let \(\varphi\) be the canonical projection of \(\mathbb{Z}\) onto the quotient \(\mathbb{F}_{p}:=\mathbb{Z}/p\mathbb{Z}\). For the sake of precision let us have a notation for the usual section of \(\varphi\): for \(\alpha\in\mathbb{F}_{p}\), let \(\operatorname{rem}_{p}(\alpha)\) stand for the integer \(m\in\{0,1,2,\ldots,p-1\}\) such that \(\varphi(m)=\alpha\).
Consider the following sub-ring \(\mathcal{R}_{p}\) of \(\mathbb{Q}\):
\[\mathcal{R}_{p}=\left\{r\in\mathbb{Q}\ :\ \exists m\in\mathbb{Z},\exists n \geq 1\text{ such that }(n,p)=1\text{ and }r=m/n\right\}. \tag{2}\]
For \(m\in\mathbb{Z}\) and \(n>0\) such that \((n,p)=1\), set \(\widetilde{\psi}(m,n)=\varphi(m)\varphi(n)^{-1}\). If we have another couple \((m^{\prime},n^{\prime})\) such that \(m^{\prime}/n^{\prime}=m/n\), we have
\[\widetilde{\psi}(m,n)=\varphi(mn^{\prime})\varphi(n^{\prime})^{-1}\varphi(n) ^{-1}=\varphi(m^{\prime}n)\varphi(n^{\prime})^{-1}\varphi(n)^{-1}=\widetilde {\psi}(m^{\prime},n^{\prime}).\]
This means that if \(r=m/n\in\mathcal{R}_{p}\), it is legitimate to set \(\psi(r)=\widetilde{\psi}(m,n)\). One can check that \(\psi\) is a ring homomorphism from \(\mathcal{R}_{p}\) to \(\mathbb{F}_{p}\) which extends \(\varphi\).
As usual, \(\psi\) is extended as a ring homomorphism from \(\mathcal{R}_{p}[x]\) to \(\mathbb{F}_{p}[x]\). When \(A=\sum_{j=0}^{k}a_{j}x^{j}\) and \(B\) are polynomials on some ring, the polynomial \(A(B)\) or \(A\circ B\) is defined to be the polynomial \(\sum_{j=0}^{k}a_{j}B^{j}\). It is clear that for polynomials \(A\) and \(B\) in \(\mathcal{R}_{p}[x]\) we have \(\psi(A\circ B)=\psi(A)\circ\psi(B)\). In particular if \(A\in\mathcal{R}_{p}[x]\) and \(b\in\mathcal{R}_{p}\) we have \(\psi\big{(}A(b)\big{)}=\psi(A)\big{(}\psi(b)\big{)}\).
Using \(\varphi,\,\psi\), and \(\operatorname{rem}_{p}\) may seem pedantic. Nevertheless this has the merit of being precise. This is why we use this notation in the next example. But afterwards we adopt a more familiar convention. Depending on the context, '\(\mathrm{mod}\ p\)' will have several meanings: if \(q\in\mathcal{R}_{p}[x]\), then \(q\bmod p\) could stand for \(\varphi(q)\in\mathbb{F}_{p}[x]\), \(\psi(q)\in\mathbb{F}_{p}[x]\), or for the corresponding polynomial in \(\mathbb{Z}[x]\) whose coefficients lie between \(0\) and \(p-1\).
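In code, \(\psi\) is simply reduction of a fraction modulo \(p\); a minimal sketch (Python's `Fraction` and the modular inverse `pow(n, -1, p)` are implementation choices, not part of the text):

```python
from fractions import Fraction

def psi(r: Fraction, p: int) -> int:
    """psi(m/n) = m * n^{-1} in F_p, defined whenever gcd(n, p) = 1."""
    m, n = r.numerator, r.denominator
    if n % p == 0:
        raise ValueError("r does not belong to R_p")
    return m * pow(n, -1, p) % p

# rem_p picks the representative in {0, ..., p-1}; for instance:
assert psi(Fraction(1, 3), 7) == 5  # since 3 * 5 = 15 = 1 (mod 7)
```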
### An example
Let us begin with an example. Take two integers \(\alpha\geq 0\) and \(\beta\geq 1\). Let \(f_{0}(x)=\frac{x+2\alpha}{3}\) and \(f_{1}(x)=\frac{x+2(\alpha+\beta)}{3}\). Observe that this system \(f=(f_{0},f_{1})\) fulfills the separation condition in the interval \(S:=[\alpha,\alpha+\beta]\), i.e.,
\[\mathcal{S}=\left([\alpha,\alpha+\beta];\left[\alpha,\alpha+\frac{\beta}{3} \right],\left[\alpha+\frac{2\beta}{3},\alpha+\beta\right]\right).\]
The plaintext message to be encrypted will be a word \(w\) of length \(n\) on the alphabet \(\{0,1\}\). Observe that, for such a word \(w\), we have \(\Phi_{f,w}(\alpha)=\alpha+m3^{-n}\in[\alpha,\alpha+\beta]\), where \(m\) is an integer. It follows that \(0\leq m\leq 3^{n}\beta\).
#### Key generation
First, choose a prime number \(p>3^{n}\beta\). Then take two different coprime numbers \(a\) and \(b\) such that \(2\leq a,\,b<p\). Set \(u(x)=ax+b\). Then the reciprocal function of \(u\) is \(u^{-1}(x)=(x-b)/a\). We have \(u^{-1}\circ f_{0}\circ u\in\mathcal{R}_{p}[x]\) (see (2)), and the same for \(f_{1}\). So we define the IFS \(g=(g_{0},g_{1})\) on \(\mathbb{F}_{p}\) as follows:
\[g_{\varepsilon}=\psi\left(u^{-1}\circ f_{\varepsilon}\circ u\right)\quad \text{for $\varepsilon\in\{0,1\}$.}\]
Under these conditions we have
\[\Phi_{g,w}=\psi\left(u^{-1}\circ\Phi_{f,w}\circ u\right). \tag{3}\]
Define \(\gamma=\psi\left(u^{-1}(\alpha)\right)\). Let the public and private keys be
\[pk=\{n,p,\,\gamma,\,g\}\quad\text{and}\quad sk=\{n,\,p,\,u,\,f,\mathcal{S}\}. \tag{4}\]
#### Encryption
If Bob wishes to send Alice the message \(w\) of length \(n\), he computes
\[c=\Phi_{g,w}(\gamma)\]
and sends Alice the cryptogram \(c\).
#### Decryption
To decipher this cryptogram Alice first computes \(u(c)\). Due to (3) we have \(\psi\big{(}\Phi_{f,w}(\alpha)\big{)}=u(c)\). Therefore we have \(\psi\Big{(}3^{n}\big{(}\Phi_{f,w}(\alpha)-\alpha\big{)}\Big{)}=3^{n}\left(u(c)-\varphi(\alpha)\right)\) (this is an equality between elements of \(\mathbb{F}_{p}\)). But, because \(3^{n}\big{(}\Phi_{f,w}(\alpha)-\alpha\big{)}\) is an integer less than or equal to \(3^{n}\beta<p\), we have \(3^{n}\Phi_{f,w}(\alpha)=3^{n}\alpha+\text{rem}_{p}\big{(}3^{n}u(c)-3^{n}\varphi(\alpha)\big{)}\). Finally we obtain
\[\Phi_{f,w}(\alpha)=\alpha+\frac{\text{rem}_{p}\big{(}3^{n}u(c)-3^{n}\varphi( \alpha)\big{)}}{3^{n}},\]
which allows us to use Algorithm 1 to get \(w\).
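This whole toy protocol fits in a few lines of Python. The following sketch (function names and the small test parameters are ours, for illustration only) carries out the key generation, encryption and decryption just described, using exact rational arithmetic on Alice's side:

```python
from fractions import Fraction

def keygen(n, alpha, beta, p, a, b):
    """Public data for g_eps = psi(u^{-1} o f_eps o u), with u(x) = a*x + b."""
    inv3, ainv = pow(3, -1, p), pow(a, -1, p)
    # g_eps(x) = x/3 + ((b + 2*alpha + 2*beta*eps)/3 - b)/a   (mod p)
    d = [((b + 2 * alpha + 2 * beta * eps) * inv3 - b) * ainv % p
         for eps in (0, 1)]
    gamma = (alpha - b) * ainv % p  # gamma = psi(u^{-1}(alpha))
    return inv3, d, gamma

def encrypt(w, pk, p):
    c, d, x = pk                     # x starts at gamma
    for eps in reversed(w):          # Phi_{g,w} = g_{w_1} o ... o g_{w_n}
        x = (c * x + d[eps]) % p
    return x

def decrypt(cipher, n, alpha, beta, p, a, b):
    uc = (a * cipher + b) % p            # u(c) = psi(Phi_{f,w}(alpha))
    m = pow(3, n, p) * (uc - alpha) % p  # rem_p(3^n u(c) - 3^n alpha)
    y = alpha + Fraction(m, 3 ** n)      # Phi_{f,w}(alpha), exactly
    w = []
    for _ in range(n):                   # Algorithm 1
        if alpha <= y <= alpha + Fraction(beta, 3):
            w.append(0); y = 3 * y - 2 * alpha
        elif alpha + Fraction(2 * beta, 3) <= y <= alpha + beta:
            w.append(1); y = 3 * y - 2 * (alpha + beta)
        else:
            raise ValueError("decryption failed")
    return w

n, alpha, beta, p, a, b = 2, 0, 1, 11, 2, 3  # p = 11 > 3^n * beta = 9
pk = keygen(n, alpha, beta, p, a, b)
assert decrypt(encrypt([1, 0], pk, p), n, alpha, beta, p, a, b) == [1, 0]
```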
#### Remarks
First we notice that the new system \(g\) does not exhibit an obvious separation property. We illustrate this fact with the following simple example: \(n=8\), \(p=19687\), \(f_{0}(x)=x/3\), \(f_{1}(x)=(2+x)/3\), \(a=15296\), \(b=8026\). Then \(g_{0}(x)=13125x+8750\mod p\) and \(g_{1}(x)=13125x+10515\mod p\). There are four minimal invariant sets \(E_{0,0},E_{0,1},E_{0,2},E_{0,3}\) under \(g_{0}\). The set \(E_{0,0}\) has one element only; the other ones have \(6562\) elements each. The same situation occurs for \(g_{1}\): we have the invariant sets \(E_{1,0},E_{1,1},E_{1,2},E_{1,3}\) with the same cardinalities as previously. All the intersections \(E_{0,i}\cap E_{1,j}\) are nonempty for \(1\leq i,\,j\leq 3\). This means that there is no nontrivial set invariant under both \(g_{0}\) and \(g_{1}\). As a consequence the system \(g=(g_{0},g_{1})\) does not satisfy any separation condition. So to break this cryptosystem one is left with the brute force attack. As the key length is much larger than the length of the messages, the attack consists in feeding the encryption algorithm with all the possible messages until obtaining the cryptogram. This requires \(O(2^{n})\) operations (\(n\) being the length of the message).
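These invariant sets are easy to enumerate: each \(g_{\varepsilon}\) is a permutation of \(\mathbb{F}_{p}\) (its slope \(13125\) is invertible mod \(p\)), and the minimal invariant sets of a permutation are its cycles. A brute-force check (the helper below is ours; with `keygen` from the previous sketch one can also confirm the coefficients quoted above):

```python
def cycle_lengths(a, b, p):
    """Lengths of the cycles of the permutation x -> a*x + b mod p."""
    seen = bytearray(p)
    lengths = []
    for start in range(p):
        if seen[start]:
            continue
        length, x = 0, start
        while not seen[x]:
            seen[x] = 1
            length += 1
            x = (a * x + b) % p
        lengths.append(length)
    return sorted(lengths)

p = 19687
assert keygen(8, 0, 1, p, 15296, 8026)[:2] == (13125, [8750, 10515])
print(cycle_lengths(13125, 8750, p))   # g_0; counts quoted in the text: [1, 6562, 6562, 6562]
print(cycle_lengths(13125, 10515, p))  # g_1; same counts as for g_0
```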
As we just realized, an eavesdropper cannot easily decode the cryptogram, but if he has the ability to usurp Bob's identity he can tamper with the cryptogram \(c\), for instance by replacing \(c\) by \(g_{0}(c)\), and send the modified cryptogram to Alice. When decoding there are two possibilities.
1. The decoding fails, and Alice knows that the cryptogram has been tampered with.
2. The decoding works for \(n\) steps. Then the cryptogram is wrongly deciphered and Alice is not aware of it.
It is not difficult to obviate this problem. This time the message to be transmitted is a binary word \(w\) of length less than or equal to \(n\). We choose a prime number \(p>3^{n+\lceil\log_{2}n\rceil}\beta\). The keys are constructed as previously. To make the cryptogram associated with the message \(w\) Bob appends to \(w\) a dyadic word of length \(\lceil\log_{2}n\rceil\) containing the dyadic expansion of \(|w|\) (where \(|w|\) stands for the length of \(w\)) padded on the left by zeros, if necessary. Let the resulting word be \(w^{\prime}\). Then the cryptogram is \(c=\Phi_{g,w^{\prime}}(\gamma)\).
As previously, the knowledge of \(u(c)\) gives the value of \(\Phi_{f,w^{\prime}}(\alpha)\). Alice first recovers \(w^{\prime}\) by applying Algorithm 2 to \(\Phi_{f,w^{\prime}}(\alpha)\). Then she obtains \(w\) by removing the added rightmost bits. Moreover, since she knows the length of \(w\), she is able to detect an adulteration.
### A general framework
We deal with an IFS \(f=(f_{j})_{0\leq j<\nu}\) consisting in \(\nu\) affine maps from \(\mathbb{Q}\) to \(\mathbb{Q}\). We suppose that this IFS satisfies the separation condition on an interval \([\alpha,\alpha+\beta)\subset[0,+\infty)\). Let \(\varpi\) be the lcm of the denominators of the coefficients of the \(f_{j}\). Then if \(w\in\{0,1,\ldots,\nu-1\}^{n}\) we have \(\Phi_{f,w}(\alpha)=\alpha+k/\varpi^{n}\), where \(0\leq k\leq\varpi^{n}\beta\).
Choose a prime number \(p>\beta\varpi^{n+\lceil\log_{\nu}n\rceil}\) and two coprime numbers \(a\) and \(b\) less than \(p\). Set \(u(x)=ax+b\), \(v(x)=(x-b)/a\), \(\gamma=v(\alpha)\mod p\), and \(g=\left(g_{j}\right)_{0\leq j<\nu}\), where \(g_{j}=v\circ f_{j}\circ u\mod p\) (for \(0\leq j<\nu\)). Let the keys be
\[pk=(n,p,\gamma,g)\quad\mbox{and}\quad\ sk=(n,p,u,f,\mathcal{S}).\]
The cryptogram \(c\) attached to the message \(w\in\{0,1,\ldots,\nu-1\}^{m}\) with \(m\leq n\) is constructed as follows: let \(w^{\prime}\) be the word obtained by appending to \(w\) the base-\(\nu\) expansion of \(m\), padded on the left by zeros, if necessary. Then \(c=\Phi_{g,w^{\prime}}(\gamma)\mod p\).
The decryption runs as follows: Alice obtains \(\Phi_{f,w^{\prime}}(\alpha)\) as
\[\alpha+\frac{\varpi^{n}\big{(}u(c)-\alpha\big{)}\mod p}{\varpi^{n}}\]
and uses Algorithm 2 to get \(w^{\prime}\). Then she obtains \(w\) by removing from \(w^{\prime}\) the digits added to the right.
#### 2.3.1 Remark
Of course, one can use systems of affine maps in higher dimensions.
## 3 Second protocol: Projective cryptosystems
### Preliminaries
#### Projective spaces and homographies
The projective space of dimension \(d\) on some field \(K\), denoted by \(\mathbb{P}_{d}(K)\), is the set of one dimensional vector subspaces of \(K^{d+1}\). Any \(x=(x_{1},x_{2},\ldots,x_{d+1})\in K^{d+1}\setminus\{0\}\) determines a 1-D vector space, i.e., a point of the projective space; call it \(P_{x}\). Obviously, if \(\lambda\in K^{*}\), \(\lambda x=(\lambda x_{1},\lambda x_{2},\ldots,\lambda x_{d+1})\), then \(P_{\lambda x}=P_{x}\).
If \(P=P_{x}\) we say that \(x\) is a set of homogeneous coordinates for \(P\). The point \(P_{x}\) has many sets of homogeneous coordinates: they are all the \(\lambda x\) where \(\lambda\) is any non-zero element of \(K\). The affine space \(K^{d}\) is embedded in \(\mathbb{P}_{d}(K)\): this is the set of points of which the last term of any of their systems of homogeneous coordinates is non-zero; in other terms, a point \((x_{1},x_{2},\ldots,x_{d})\in K^{d}\) is identified with the point of homogeneous coordinates \((x_{1},x_{2},\ldots,x_{d},1)\). Of course there are many embeddings of the affine space \(K^{d}\) in \(\mathbb{P}_{d}(K)\).
A non-zero endomorphism \(u\) of \(K^{d+1}\) induces a mapping, called an _homography_ of \(\mathbb{P}_{d}(K)\), indeed \(u\) transforms as set projective coordinates into a set of projective coordinates of the image. Obviously, \(u\) and \(\lambda u\) (for \(\lambda\in K^{*}\)) induce the same homography.
#### Projective IFS
We consider an IFS \(f=(f_{0},f_{1},\ldots,f_{\nu-1})\) whose elements are invertible homographies on \(\mathbb{P}_{d}(K)\). This means that we have \(\nu\) invertible \((d+1)\times(d+1)\)-matrices \(A_{0},A_{1},\ldots,A_{\nu-1}\) with coefficients in \(K\). If \(w=w_{1}w_{2}\cdots w_{n}\) is a word of length \(n\) on the alphabet \(\{0,1,2,\ldots,\nu-1\}\), we set
\[\Phi_{A,w}=A_{w_{1}}A_{w_{2}}\cdots A_{w_{n}},\]
where \(A\) stands for \((A_{0},A_{1},\ldots,A_{\nu-1})\). Obviously \(\Phi_{f,w}\) is the homography associated with the matrix \(\Phi_{A,w}\).
In the case where this IFS fulfills a separation condition, the following problem is easy to solve.
**Problem 3:** Given a matrix \(M\) of which we know that there exist an integer \(k\) and a word \(w\in\{0,1,2,\ldots,\nu-1\}^{k}\) such that \(M=\Phi_{A,w}\), recover the word \(w\).
**Solution:** Choose a point \(P\) in \(S\) (we stick to the notation of the introduction); more precisely we choose a system of homogeneous coordinates of \(P\) that define a column \(X\). We certainly have \(f_{w_{j}}\circ f_{w_{j+1}}\circ\cdots\circ f_{w_{k}}(P)\in S\) for all \(j\leq k\). So there exists \(j\) such that \(MX\in S_{j}\) (with a slight abuse of notation). This means that \(w_{1}=j\). And now we have to solve the same problem with \(A_{j}^{-1}M\). We then repeat this step until reaching the unit matrix.
**Algorithm 3**

_choose \(P\in S\), with column of homogeneous coordinates \(X\)_

_\(w\leftarrow\) empty word_

_while \(M\neq\mathrm{Id}\) repeat {_

_if \(MX\notin S_{0}\cup S_{1}\cup\cdots\cup S_{\nu-1}\) then return "failed"_

_if \(MX\in S_{j}\) then {_

_\(w\leftarrow wj\)_

_\(M\leftarrow A_{j}^{-1}M\)_

_} }_

_return \(w\)_
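A transcription of Algorithm 3 with exact rational arithmetic: the `classify` callback (which encodes the separation data \(\mathcal{S}\)) and the toy \(2\times 2\) instance are illustrative choices of ours. The toy instance realizes affine maps as homographies, which the remarks below note is a cryptographically weak choice; it serves here only to exercise the algorithm.

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def algorithm3(M, A_inv, X, classify):
    """Recover w with M = A_{w_1} A_{w_2} ... A_{w_k}."""
    n = len(M)
    identity = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    w = []
    while M != identity:
        Y = [sum(M[i][k] * X[k] for k in range(n)) for i in range(n)]
        j = classify(Y)          # index j such that the point M.P lies in S_j
        if j is None:
            return None          # "failed"
        w.append(j)
        M = matmul(A_inv[j], M)  # M <- A_j^{-1} M
    return w

# Toy example on the projective line: A_0, A_1 realize the affine maps
# f_0(x) = x/3 and f_1(x) = (x+2)/3 as homographies, and P = (0 : 1).
A_inv = [
    [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1, 3)]],      # A_0^{-1}
    [[Fraction(1), Fraction(-2, 3)], [Fraction(0), Fraction(1, 3)]],  # A_1^{-1}
]

def classify(Y):                 # separation data: S_0 = [0,1/3], S_1 = [2/3,1]
    r = Y[0] / Y[1]              # affine coordinate (assumes Y[1] != 0)
    if 0 <= r <= Fraction(1, 3):
        return 0
    if Fraction(2, 3) <= r <= 1:
        return 1
    return None

M = [[Fraction(1), Fraction(6)], [Fraction(0), Fraction(9)]]  # A_1 A_0
assert algorithm3(M, A_inv, [Fraction(0), Fraction(1)], classify) == [1, 0]
```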
#### Encryption
The message to be transmitted is a word \(w\) of length \(k\leq n\) on the alphabet \(\{0,1,2,\ldots,\nu-1\}\). The corresponding cryptogram is the matrix
\[C=B_{w_{1}}B_{w_{2}}\cdots B_{w_{k}}\mod p.\]
#### Decryption
We have \(A_{w_{1}}A_{w_{2}}\cdots A_{w_{k}}=UCU^{-1}\mod p\).
But all the coefficients of \(A_{w_{1}}A_{w_{2}}\cdots A_{w_{k}}\) have an absolute value less than \(p/2\). So we have
\[A_{w_{1}}A_{w_{2}}\cdots A_{w_{k}}=UCU^{-1}\mod p,\]
where now \(a\mod p\) stands for the centered representative, i.e. the number \(b\) such that \(|b|<p/2\) and \(b=a\mod p\). Then we apply Algorithm 3 to recover \(w\).
#### Remarks
As previously, the brute force attack requires \(O(\nu^{n})\) operations. Also as previously, an eavesdropper with the ability of usurping Bob's identity can tamper with the cryptogram by pre- or post-multiplying \(C\) by some of the \(B_{j}\). To thwart this tampering one can put extra information in the middle of the message \(w\), for instance its length. In the next section we show how this protocol allows message authentication and integrity.
This protocol seems not well suited, although usable, when all the homographies are affine maps. Indeed, in this case the matrices \(B_{j}\) have a common eigendirection, which could weaken the cipher. This is especially the case when they are similitudes on a one-dimensional space.
In section 3.2 we need a bound for the coefficients of a product of matrices. For reader's convenience, we give a possible way to get such a bound.
We can define two norms on the space of real \((d+1)\times(d+1)\)-matrices. If \(M=(m_{i,j})_{1\leq i,j\leq d+1}\) then
\[\|M\|_{h}=\max_{1\leq i\leq d+1}\sum_{1\leq j\leq d+1}|m_{i,j}|\quad\mbox{and} \quad\|M\|_{v}=\max_{1\leq j\leq d+1}\sum_{1\leq i\leq d+1}|m_{i,j}|.\]
Since these norms are the operator norms associated with the \(\ell^{\infty}\) and the \(\ell^{1}\) norms on \(\mathbb{R}^{d+1}\) we have \(\|M_{1}M_{2}\|_{h}\leq\|M_{1}\|_{h}\|M_{2}\|_{h}\) and the same for \(\|\cdot\|_{v}\).
So, we can choose the prime number \(p\) so that
\[p>2\,\max\bigl{\{}\max_{0\leq i<\nu}\|A_{i}\|_{h}^{n},\max_{0\leq i<\nu}\|A_{i }\|_{v}^{n}\bigr{\}}.\]
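In practice the choice of \(p\) is then a short computation; a sketch (using `sympy.nextprime`, an external convenience, to move up to a prime):

```python
from sympy import nextprime

def min_modulus(matrices, n):
    """Smallest prime p with p > 2 * max_i max(||A_i||_h, ||A_i||_v)^n."""
    def norm_h(A):
        return max(sum(abs(x) for x in row) for row in A)
    def norm_v(A):
        return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))
    bound = 2 * max(max(norm_h(A), norm_v(A)) for A in matrices) ** n
    return nextprime(bound)
```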
## 4 Message authentication and integrity
Now Alice wants to make sure that the message supposedly coming from Bob actually comes from him. They may use the following procedure. Both Alice and Bob will use the second protocol. They share \(n\) and \(p\) and the dimension \(k+1\)
of the matrices. Alice has already chosen a projective IFS with the separation condition whose keys are
\[(n,p,B)\quad\text{and}\quad(n,p,A,U,\mathcal{S}).\]
Bob has already prepared the cryptogram \(C\) to be transmitted. In his turn he sets up an IFS with matrices of the same dimension of Alice's. His keys are
\[(n^{\prime},p,B^{\prime})\quad\text{and}\quad(n^{\prime},p,A^{\prime},U^{ \prime},\mathcal{S}^{\prime}).\]
Alice chooses two words \(w^{\prime}\) and \(w^{\prime\prime}\) on the alphabet \(\{0,1,\ldots,\nu^{\prime}-1\}\), both of length \(m\) with \(m\leq n^{\prime}/2\). We suppose that some bits in \(w^{\prime}w^{\prime\prime}\) code the number \(m\). She computes \(\Phi_{B^{\prime},w^{\prime}}\) and \(\Phi_{B^{\prime},w^{\prime\prime}}\) and sends \(\Phi_{B^{\prime},w^{\prime}w^{\prime\prime}}=\Phi_{B^{\prime},w^{\prime}}\Phi_{B^{\prime},w^{\prime\prime}}\) to Bob, who then is able to extract \(w^{\prime}w^{\prime\prime}\) and check whether the length \(m\) agrees with the information contained in \(w^{\prime}w^{\prime\prime}\). If it is so, it is unlikely that there was tampering. Then he sends \(C^{\prime}=\Phi_{B^{\prime},w^{\prime}}C\Phi_{B^{\prime},w^{\prime\prime}}\) to Alice, who then recovers \(C\): \(C=\Phi_{B^{\prime},w^{\prime}}^{-1}C^{\prime}\Phi_{B^{\prime},w^{\prime\prime}}^{-1}\). If the decoding of \(C\) is successful, it is likely that \(C^{\prime}\) has not been tampered with. Since Bob is the only one who can easily get \(w^{\prime}w^{\prime\prime}\), it is he who emitted the cryptogram \(C\).
|
2305.19755 | New Physics in Neutrino Oscillation: Nonunitarity or Nonorthogonality? | Neutrino oscillation phenomenon is a definite evidence of physics beyond the
Standard Model (SM) and high precision measurement of neutrino properties will
certainly give us clue about what lies beyond the SM. In particular, precise
measurements of the mixing matrix elements $U_{\alpha i}$ which relate the
neutrino flavor $\alpha$ and mass $i$ eigenstates are crucial since new physics
at scale beyond experimental reach can lead to a nonunitary $U$. This in turns
results in nonorthogonal neutrino flavor states. How to calculate the
oscillation probability in this scenario is an important theoretical issue that
will be treated here. We show that probability constructed using theory of
projection probability will ensure that the theory remains unitary in time
evolution and the probabilities of neutrino of certain flavor being detected as
all possible flavor states always sum up to unity. This result is crucial for
discovery of new physics through neutrino oscillation phenomena. | Chee Sheng Fong | 2023-05-31T11:39:34Z | http://arxiv.org/abs/2305.19755v1 | # New Physics in Neutrino Oscillation: Nonunitarity or Nonorthogonality?
###### Abstract
Neutrino oscillation phenomenon is a definite evidence of physics beyond the Standard Model (SM) and high precision measurement of neutrino properties will certainly give us clue about what lies beyond the SM. In particular, precise measurements of the mixing matrix elements \(U_{\alpha i}\) which relate the neutrino flavor \(\alpha\) and mass \(i\) eigenstates are crucial since new physics at scale beyond experimental reach can lead to a _nonunitary_\(U\). This in turns results in _nonorthogonal_ neutrino flavor states. How to calculate the oscillation probability in this scenario is an important theoretical issue that will be treated here. We show that probability constructed using theory of projection probability will ensure that the theory remains _unitary_ in time evolution and the probabilities of neutrino of certain flavor being detected as all possible flavor states always sum up to unity. This result is crucial for discovery of new physics through neutrino oscillation phenomena.
## Introduction
Neutrino mass is a definite evidence of physics beyond the Standard Model (SM). The importance of scrutinizing the neutrino sector to as high a precision as our current and future technology allow [1; 2] cannot be overemphasized, as this will most certainly lead us towards new physics that not only gives rise to neutrino mass but also addresses more fundamental questions such as why only the left-handed fields feel the weak force, why the weak scale is so much smaller than the Planck scale, why electric charge is quantized, and so on. From the Effective Field Theory (EFT) point of view, even if the new physics scale is beyond our experimental reach, with high enough precision we will learn important clues about the new physics.
Treating the SM as an EFT, one finds that neutrino mass arises, after the electroweak symmetry breaking, from the unique dimension-5 Weinberg operator [3; 4]
\[{\cal O}_{5} = \frac{\lambda_{\alpha\beta}}{\Lambda_{5}}\left(\overline{L_{ \alpha}}\epsilon H\right)\left(L_{\beta}^{T}\epsilon H\right), \tag{1}\]
where \(L_{\alpha}(\alpha=e,\mu,\tau)\) and \(H\) are the \(SU(2)_{L}\) lepton and Higgs doublets, respectively, and \(\epsilon\) is an \(SU(2)_{L}\) antisymmetric tensor used to contract the two doublets to form a singlet. Here \(\lambda\) is a dimensionless symmetric matrix, and \(\Lambda_{5}\) is the new physics scale below which the operator \({\cal O}_{5}\) is valid. The existence of this operator is in good agreement with experimental observations in neutrino oscillation phenomena [5], while its other key prediction is neutrinoless double-beta decay, which might be discovered in current or future experiments [6; 7]. Next, consider the following dimension-6 operator [8; 9; 10; 11]
\[{\cal O}_{6} = \frac{\eta_{\alpha\beta}}{\Lambda_{6}^{2}}\left(\overline{L_{ \alpha}}\epsilon H^{*}\right)i\partial\!\!\!\!/\left(L_{\beta}^{T}\epsilon H \right), \tag{2}\]
where \(\eta\) is a dimensionless Hermitian matrix and \(\Lambda_{6}\) is the new physics scale (which might or might not be related to \(\Lambda_{5}\)) below which the operator is valid. The kinetic term of the SM neutrinos will be modified after the electroweak symmetry breaking. Once the neutrino fields are redefined such that the kinetic terms are again canonical, the matrix \(U\) which relates the neutrino mass and flavor eigenstates is no longer unitary. Implicitly, we are assuming that the center-of-mass energy in an experiment satisfies \(E\ll\Lambda_{5},\Lambda_{6}\), such that there are no new degrees of freedom beyond those of the SM. As a remark, not all ultraviolet models which generate \({\cal O}_{5}\) also generate \({\cal O}_{6}\). For instance, type-I and type-III seesaw models generate both \({\cal O}_{5}\) and
\({\cal O}_{6}\), while the type-II seesaw model only generates \({\cal O}_{5}\) (see for example the review article [12]). Here we clearly see the importance of measuring \(U\) to very high precision, as it gives us a clue as to which UV models can give rise to modifications in the neutrino sector and the scale at which they reside. Clearly, this cannot be achieved without a precise theoretical formalism for dealing with a possibly nonunitary \(U\).
Our main result is to show that the net effect of a nonunitary \(U\) is that the neutrino flavor states become _nonorthogonal_ while the theory remains unitary, as opposed to what has been studied so far in both phenomenological and experimental work involving neutrino oscillation phenomena with nonunitary \(U\)[8; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. With hindsight, this surprising conclusion is perhaps to be expected from the outset: without new degrees of freedom accessible in experiments, i.e. with the flavor states being complete, the theory should remain unitary.1
Footnote 1: This is in contrast to the case where kinematically accessible light sterile neutrinos which mix with the SM neutrinos are produced in the experiments and lead to apparent unitarity violation due to additional states that are not detected [33; 34; 35; 36].
It is interesting to note that nonorthogonal basis states are commonplace in quantum chemistry [37; 38; 39; 40; 41; 42; 43], since it is convenient to express molecular orbitals as linear combinations of atomic orbitals, which are in general not orthogonal. In this study, nonorthogonal neutrino flavor states are imposed on us by new physics which results in a nonunitary \(U\). While the theory is _unitary_, we will continue to use the term _high scale unitarity violation_ (in reference to the mixing matrix \(U\)) to describe this scenario.
## II Neutrino effective field theory
Here we will assume that the center-of-mass energy \(E\) involved is below the electroweak symmetry breaking scale, \(E<v_{\rm EW}\equiv 174\) GeV, and also \(E<\Lambda_{5},\Lambda_{6}\). Due to the operators (1) and (2), the general neutrino Lagrangian allowed by the SM electromagnetic gauge symmetry \(U(1)_{\rm EM}\) in the charged lepton mass basis is given by2
Footnote 2: The lepton flavors are defined by the corresponding charged lepton masses.
\[{\cal L}_{\nu}=\frac{1}{2}\left(i\overline{\nu_{\alpha}}\partial \!\!\!/D_{\alpha\beta}\nu_{\beta}-\overline{\nu_{\alpha}^{c}}m_{\alpha\beta} \nu_{\beta}+{\rm h.c.}\right)\] \[-\left(\frac{g}{2}W_{\mu}^{-}\overline{\ell_{\alpha}}\gamma^{\mu} P_{L}\nu_{\alpha}+\frac{g}{\sqrt{2}\cos\theta_{W}}Z_{\mu}\overline{\nu_{ \alpha}}\gamma^{\mu}P_{L}\nu_{\alpha}+{\rm h.c.}\right), \tag{3}\]
where the flavor indices are \(\alpha,\beta=e,\mu,\tau\), \(D=I+\eta v_{\rm EW}^{2}/\Lambda_{6}^{2}\) with \(I\) the \(3\times 3\) identity matrix, \(m=\lambda v_{\rm EW}^{2}/\Lambda_{5}\), \(g\) is the \(SU(2)_{L}\) gauge coupling, \(\theta_{W}\) is the weak angle, \(P_{L}=\frac{1}{2}\left(1-\gamma^{5}\right)\) is
the left-handed projector, \(\{\ell_{e},\ell_{\mu},\ell_{\tau}\}\equiv\{e^{-},\mu^{-},\tau^{-}\}\) are the charged leptons and \(W^{\mp}\) and \(Z\) are the charged and neutral weak bosons, respectively.
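For a rough numerical orientation (a back-of-the-envelope estimate we add for illustration, not taken from the original text): with \(\lambda\sim\mathcal{O}(1)\) and a neutrino mass \(m\sim 0.05\) eV, one finds \(\Lambda_{5}\sim\lambda v_{\rm EW}^{2}/m\approx(174\ \text{GeV})^{2}/(5\times 10^{-11}\ \text{GeV})\approx 6\times 10^{14}\) GeV, far beyond any collider reach, while the deviation \(D-I=\eta v_{\rm EW}^{2}/\Lambda_{6}^{2}\) is exactly the quantity that the precision oscillation measurements discussed below can constrain.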
A canonically normalized kinetic term can be obtained by diagonalizing the kinetic term as \(D=Y^{\dagger}\hat{D}Y\), where \(Y\) is unitary and \(\hat{D}\) is real, positive and diagonal, and then redefining the neutrino fields as \(\widetilde{\nu}=\sqrt{\hat{D}}Y\nu\). Next, the symmetric mass matrix \(\widetilde{m}\equiv\sqrt{\hat{D}}^{-1}Y^{*}mY^{\dagger}\sqrt{\hat{D}}^{-1}\) can be diagonalized by a unitary matrix \(V\) as \(\widetilde{m}=V^{*}\hat{m}V^{\dagger}\), where \(\hat{m}\) is real and diagonal. Denoting the neutrino fields in the mass basis as \(\hat{\nu}\equiv V^{\dagger}\widetilde{\nu}\), we have [8]
\[\mathcal{L}_{\nu} = \frac{1}{2}\left(i\overline{\hat{\nu}_{i}}\not{\partial}\hat{\nu}_{i} -\overline{\hat{\nu}_{i}^{c}}\hat{m}_{ii}\hat{\nu}_{i}+\text{h.c.}\right) \tag{4}\] \[-\left[\frac{g}{2}W_{\mu}^{-}\overline{\ell_{\alpha}}\gamma^{\mu}P_{ L}U_{\alpha i}\hat{\nu}_{i}+\frac{g}{\sqrt{2}\cos\theta_{W}}Z_{\mu}\overline{ \hat{\nu}_{i}}\left(U^{\dagger}U\right)_{ij}\gamma^{\mu}P_{L}\hat{\nu}_{j}+ \text{h.c.}\right],\]
where \(i,j=1,2,3\) denote the indices in the mass basis, and we have defined
\[U_{\alpha i} \equiv \left(Y^{\dagger}\sqrt{\hat{D}}^{-1}V\right)_{\alpha i}, \tag{5}\]
which is not unitary in general
\[UU^{\dagger} = Y^{\dagger}\hat{D}^{-1}Y,\qquad U^{\dagger}U=V^{\dagger}\hat{D}^ {-1}V, \tag{6}\]
unless \(\hat{D}=I\).
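To make the diagonalization procedure concrete, the following minimal NumPy sketch (our illustration, not code from the paper; \(\eta\), the factor \(0.03\) standing in for \(v_{\rm EW}^{2}/\Lambda_{6}^{2}\), and the unitary \(V\) are randomly generated stand-ins) constructs \(U=Y^{\dagger}\sqrt{\hat{D}}^{-1}V\) and verifies eq. (6):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimension-6 correction: D = I + eta * v^2/Lambda_6^2 (Hermitian)
eta = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
eta = (eta + eta.conj().T) / 2
D = np.eye(3) + 0.03 * eta          # 0.03 stands in for v^2/Lambda_6^2

# Diagonalize the kinetic term: D = Y^dag Dhat Y with Y unitary, Dhat > 0.
# NumPy's eigh returns D = W diag(dhat) W^dag, so the paper's Y is W^dag.
dhat, W = np.linalg.eigh(D)
Y = W.conj().T

# Any unitary V (it would come from Takagi-diagonalizing the mass matrix).
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Eq. (5): U = Y^dag sqrt(Dhat)^{-1} V
U = Y.conj().T @ np.diag(dhat ** -0.5) @ V

# Eq. (6): U U^dag = Y^dag Dhat^{-1} Y and U^dag U = V^dag Dhat^{-1} V
assert np.allclose(U @ U.conj().T, Y.conj().T @ np.diag(1 / dhat) @ Y)
assert np.allclose(U.conj().T @ U, V.conj().T @ np.diag(1 / dhat) @ V)
print("U unitary?", np.allclose(U @ U.conj().T, np.eye(3)))  # False unless Dhat = I
```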
## III Neutrino Oscillation with nonorthogonal flavor basis
From (4), the neutrino flavor states \(|\nu_{\alpha}\rangle\) are related to the mass eigenstates \(|\nu_{i}\rangle\) as follows
\[|\nu_{\alpha}\rangle = \sum_{i}\overline{U}_{\alpha i}^{*}\,|\nu_{i}\rangle\,, \tag{7}\]
where we have defined \(\overline{U}_{\alpha i}\equiv U_{\alpha i}/\sqrt{\left(UU^{\dagger}\right)_{ \alpha\alpha}}\). From the orthogonality of mass eigenstates \(\langle\nu_{j}|\nu_{i}\rangle=\delta_{ji}\), the flavor states (7) are properly normalized \(\langle\nu_{\alpha}|\nu_{\alpha}\rangle=1\) though they are in general _nonorthogonal_
\[\langle\nu_{\beta}|\nu_{\alpha}\rangle = \left(\overline{U}\,\overline{U}^{\dagger}\right)_{\beta\alpha} \equiv\mathcal{N}_{\beta\alpha}, \tag{8}\]
where \(\mathcal{N}\) is a \(3\times 3\) matrix with all diagonal elements equal to one. While the orthogonality of the mass basis \(\{|\nu_{i}\rangle\}\) implies the usual completeness relation
\[\sum_{i}|\nu_{i}\rangle\,\langle\nu_{i}| = \mathbf{I}, \tag{9}\]
with \(\mathbf{I}\) the identity operator, the nonorthogonality of the flavor basis (8) implies a modified completeness relation [36]3
Footnote 3: One can take \(g^{\alpha\beta}\equiv(\mathcal{N}^{-1})_{\alpha\beta}\) as the metric which raises the indices of \(\left|{}_{\alpha}\right\rangle\equiv\left|\nu_{\alpha}\right\rangle\) as \(\left|{}^{\alpha}\right\rangle=g^{\alpha\beta}\left|{}_{\beta}\right\rangle\) to form the dual vector. We will avoid using this notation in this work.
\[\sum_{\alpha,\beta}\left|{\nu_{\alpha}}\right\rangle\left(\mathcal{N}^{-1} \right)_{\alpha\beta}\left\langle{\nu_{\beta}}\right|\:=\:\mathbf{I}. \tag{10}\]
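As a numerical sanity check of eqs. (7)-(10) (our illustrative sketch, not from the paper; the nonunitary \(U\) below is a randomly perturbed unitary matrix), one can represent \(|\nu_{\alpha}\rangle\) in the mass basis by the components \(\overline{U}^{*}_{\alpha i}\) and verify the modified completeness relation:

```python
import numpy as np

rng = np.random.default_rng(0)
# A generic nonunitary 3x3 matrix standing in for U (unitary + small perturbation)
U = (np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
     + 0.05 * rng.normal(size=(3, 3)))

# Row-normalize: Ubar_{ai} = U_{ai}/sqrt((U U^dag)_{aa}), as below eq. (7)
Ubar = U / np.sqrt(np.real(np.diag(U @ U.conj().T)))[:, None]
N = Ubar @ Ubar.conj().T                      # Gram matrix, eq. (8)
assert np.allclose(np.diag(N), 1.0)           # flavor states are normalized

# Column alpha of A = Ubar^dag holds the mass-basis components of |nu_alpha>
A = Ubar.conj().T
completeness = A @ np.linalg.inv(N) @ A.conj().T   # left-hand side of eq. (10)
assert np.allclose(completeness, np.eye(3))
```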
In quantum mechanics, probability is not an observable and there is no associated Hermitian operator. For an orthonormal basis, the probability can be determined by inserting the projection operator \(\left|{\nu_{\beta}}\right\rangle\left\langle{\nu_{\beta}}\right|\) between \(\left\langle{\nu_{\alpha}}\right|\) and \(\left|{\nu_{\alpha}}\right\rangle\) to obtain the probability of finding \(\left|{\nu_{\beta}}\right\rangle\) in the original \(\left|{\nu_{\alpha}}\right\rangle\): \(P_{\beta\alpha}=\left|\left\langle\nu_{\beta}|\nu_{\alpha}\right\rangle\right|^{2}\), which is the Born rule. If the complete set \(\left\{{\left|{\nu_{\alpha}}\right\rangle}\right\}\) is not orthogonal, from eq. (10), the projection operator is given by
\[P_{\alpha}\:\equiv\:\sum_{\beta}\left|{\nu_{\alpha}}\right\rangle\left( \mathcal{N}^{-1}\right)_{\alpha\beta}\left\langle{\nu_{\beta}}\right|, \tag{11}\]
which satisfies \(P_{\alpha}^{2}=P_{\alpha}\) and \(\sum_{\alpha}P_{\alpha}=\mathbf{I}\). Inserting \(P_{\beta}\) between \(\left\langle{\nu_{\alpha}}\right|\) and \(\left|{\nu_{\alpha}}\right\rangle\), we obtain
\[\left\langle{\nu_{\alpha}}\right|P_{\beta}\left|{\nu_{\alpha}}\right\rangle\:= \:\left|{\left\langle{\nu_{\beta}}\right|\nu_{\alpha}}\right\rangle\left|{}^{2 }+\sum_{\gamma\neq\beta}\left\langle{\nu_{\alpha}}\right|\nu_{\beta}\right) \left(\mathcal{N}^{-1}\right)_{\beta\gamma}\left\langle{\nu_{\gamma}}\right| \nu_{\alpha}\right\rangle, \tag{12}\]
where the second term is in general complex and cannot be interpreted as a probability. Besides being real and positive, the probabilities of finding \(\left|{\nu_{\alpha}}\right\rangle\) in all possible \(\left|{\nu_{\beta}}\right\rangle\) should sum up to _unity_ as required since the set \(\left\{{\left|{\nu_{\alpha}}\right\rangle}\right\}\) is _complete_. We will discuss this construction in Section III.2.
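The projectors (11) can likewise be checked numerically to be idempotent and to resolve the identity. A minimal sketch (ours; it rebuilds the same toy \(\overline{U}\) as above):

```python
import numpy as np

rng = np.random.default_rng(0)
U = (np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
     + 0.05 * rng.normal(size=(3, 3)))
Ubar = U / np.sqrt(np.real(np.diag(U @ U.conj().T)))[:, None]
N = Ubar @ Ubar.conj().T
Ninv = np.linalg.inv(N)
A = Ubar.conj().T            # column alpha of A represents |nu_alpha>

def projector(a):
    # Eq. (11): P_a = sum_b |nu_a> (N^{-1})_{ab} <nu_b|, as a mass-basis matrix
    return sum(Ninv[a, b] * np.outer(A[:, a], A[:, b].conj()) for b in range(3))

P = [projector(a) for a in range(3)]
for Pa in P:
    assert np.allclose(Pa @ Pa, Pa)            # idempotent: P_a^2 = P_a
assert np.allclose(sum(P), np.eye(3))          # sum_a P_a = identity operator
```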
### Evolution of neutrino flavor state
Let us give a brief review of the evolution of a flavor state. Starting from an initial state \(\left|\nu_{\alpha}\left(0\right)\right\rangle=\left|\nu_{\alpha}\right\rangle\), the time-evolved state \(\left|\nu_{\alpha}\left(t\right)\right\rangle\) is described by the Schrodinger equation
\[i\frac{d}{dt}\left|{\nu_{\alpha}\left(t\right)}\right\rangle\:=\:\mathcal{H} \left|{\nu_{\alpha}\left(t\right)}\right\rangle, \tag{13}\]
where the Hamiltonian is \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{I}\) with \(\mathcal{H}_{0}\) the free Hamiltonian \(\mathcal{H}_{0}\left|{\nu_{i}}\right\rangle=E_{i}\left|{\nu_{i}}\right\rangle\) and \(E_{i}=\sqrt{\vec{p}_{i}\,^{2}+m_{i}^{2}}\), and \(\mathcal{H}_{I}\) the interaction Hamiltonian with matrix elements \(\left\langle{\nu_{\beta}}\right|\mathcal{H}_{I}\left|{\nu_{\alpha}}\right\rangle =V_{\beta\alpha}\) where \(V_{\beta\alpha}^{*}=V_{\alpha\beta}\) since \(\mathcal{H}_{I}^{\dagger}=\mathcal{H}_{I}\).
Multiplying eq. (13) from the left by \(\langle\nu_{\beta}|\) and inserting the completeness relations (9) and (10), we arrive at
\[i\frac{d}{dt}\left\langle\nu_{\beta}|\nu_{\alpha}\left(t\right)\right\rangle = \sum_{\eta}\left\{\sum_{i}\overline{U}_{\beta i}E_{i}\big{(} \overline{U}^{-1}\big{)}_{i\eta}+\sum_{\gamma}V_{\beta\gamma}\left(\mathcal{N }^{-1}\right)_{\gamma\eta}\right\}\left\langle\nu_{\eta}|\nu_{\alpha}\left(t \right)\right\rangle. \tag{14}\]
Assuming relativistic neutrinos, \(E\gg m_{i}\), we expand \(E_{i}\simeq E+\frac{m_{i}^{2}}{2E}\), trade time for distance, \(t=x\), and obtain, in matrix notation
\[i\frac{dS\left(x\right)}{dx} = \left[\overline{U}\Delta\overline{U}^{-1}+V\left(\mathcal{N}^{- 1}\right)\right]S\left(x\right), \tag{15}\]
where we have defined \(S_{\beta\alpha}(x)\equiv\left\langle\nu_{\beta}|\nu_{\alpha}\left(x\right)\right\rangle\) and
\[\Delta \equiv \frac{1}{2E}\text{diag}\left(m_{1}^{2},m_{2}^{2},...,m_{3+N}^{2} \right)=\text{diag}\left(\Delta_{1},\Delta_{2},...,\Delta_{3+N}\right). \tag{16}\]
We have dropped the constant \(E\) which, as an overall phase in \(S(x)\), is not observable.
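For orientation (a standard back-of-the-envelope number we add, not from the original): the relative phases \((\Delta_{i}-\Delta_{j})x=\Delta m_{ij}^{2}x/(2E)\) set the oscillation scale, and with \(\Delta m_{31}^{2}\approx 2.5\times 10^{-3}\ \text{eV}^{2}\) the standard oscillation factor \(\sin^{2}\!\left(1.267\,\Delta m_{31}^{2}[\text{eV}^{2}]\,L[\text{km}]/E[\text{GeV}]\right)\) reaches its first maximum at \(E\approx 2.6\) GeV for \(L=1300\) km, the baseline used in Figure 1 below.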
From eq. (15), the Hamiltonian in the flavor basis, given by
\[H \equiv \overline{U}\Delta\overline{U}^{-1}+V\left(\mathcal{N}^{-1} \right), \tag{17}\]
is not Hermitian \(H^{\dagger}\neq H\). Through a similarity transformation, we obtain the Hamiltonian in the _vacuum mass basis_
\[\widetilde{H} \equiv \overline{U}^{-1}H\overline{U}=\Delta+\overline{U}^{-1}V \overline{U}^{\dagger,-1}. \tag{18}\]
Since \(\widetilde{H}=\widetilde{H}^{\dagger}\) is Hermitian, it can be diagonalized by a unitary matrix \(X\) and has real eigenvalues. So \(H\) has the same eigenvalues as \(\widetilde{H}\) and the time evolution of the system is _unitary_. The apparent non-Hermiticity of \(H\) is just due to the nonunitary transformation matrix \(U\). Up to solving for the eigenvalues, \(S\) can be obtained analytically for an arbitrary matter potential, as shown in refs. [35; 36].
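The structure of eqs. (15)-(18) can also be verified numerically. The sketch below (our illustration; it assumes a constant toy matter potential and uses the boundary condition \(S_{\beta\alpha}(0)=\langle\nu_{\beta}|\nu_{\alpha}\rangle=\mathcal{N}_{\beta\alpha}\), which follows from the definition of \(S\) and gives \(S(x)=\overline{U}e^{-i\widetilde{H}x}\overline{U}^{\dagger}\)) confirms that \(\widetilde{H}\) is Hermitian and that \(S^{\dagger}\mathcal{N}^{-1}S=\mathcal{N}\) is conserved:

```python
import numpy as np

rng = np.random.default_rng(0)
U = (np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
     + 0.05 * rng.normal(size=(3, 3)))
Ubar = U / np.sqrt(np.real(np.diag(U @ U.conj().T)))[:, None]
N = Ubar @ Ubar.conj().T

Delta = np.diag([0.0, 1.0, 2.5])          # stand-ins for m_i^2/2E, eq. (16)
Vmat = 0.3 * np.diag([1.0, 0.0, 0.0])     # toy constant (Hermitian) matter potential

# Eq. (18): Htilde = Delta + Ubar^{-1} V (Ubar^dag)^{-1} is Hermitian
Ubar_inv = np.linalg.inv(Ubar)
Ht = Delta + Ubar_inv @ Vmat @ Ubar_inv.conj().T
assert np.allclose(Ht, Ht.conj().T)

# Unitary evolution: S(x) = Ubar exp(-i Htilde x) Ubar^dag, with S(0) = N
w, R = np.linalg.eigh(Ht)
def S(x):
    return Ubar @ (R @ np.diag(np.exp(-1j * w * x)) @ R.conj().T) @ Ubar.conj().T

assert np.allclose(S(0.0), N)
x = 7.3
assert np.allclose(S(x).conj().T @ np.linalg.inv(N) @ S(x), N)   # conserved for all x
```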
### Oscillation probability
In refs. [39; 40], the theory of projected probabilities on nonorthogonal states is developed and applied to determine the atomic populations in molecules. We will follow their procedure to derive the neutrino oscillation probability. Given an arbitrary state \(|\psi\rangle\), the basic idea is to project it onto a chosen \(|\nu_{\alpha}\rangle\) and onto the corresponding orthogonal component
\(\left|\nu_{\alpha}\right\rangle_{\perp}\). The orthogonal component is further projected onto the (hyper)plane formed by the rest of the basis states and onto the orthogonal component to this plane. Then the new orthogonal component is further projected onto \(\left|\nu_{\alpha}\right\rangle\) and \(\left|\nu_{\alpha}\right\rangle_{\perp}\), and so on. It turns out that for two- and three-state systems, the probability operators have closed forms, and we refer the reader to the companion paper for details [36]. After solving for the amplitude \(S(x)\) as in Section III.1, the probability of an initial state \(\left|\nu_{\alpha}\right\rangle\) being detected as \(\left|\nu_{\beta}\right\rangle\) at distance \(x\) can be written as
\[P_{\beta\alpha}\left(x\right) = \sum_{\xi,\lambda}S^{\dagger}_{\alpha\xi}(x)(\hat{P}_{\beta})_{\xi\lambda} S_{\lambda\alpha}(x), \tag{19}\]
where \(\sum_{\beta}(\hat{P}_{\beta})_{\xi\lambda}=(\mathcal{N}^{-1})_{\xi\lambda}\), which together with eq. (10) guarantees that \(\sum_{\beta}P_{\beta\alpha}(x)=1\), as required by unitarity. The appearance of \((\hat{P}_{\beta})_{\xi\lambda}\), which only depends on \(U\), takes into account the possible nonorthogonality of the flavor states: for vanishing off-diagonal elements of \(\mathcal{N}\), we have
\[(\hat{P}_{\beta})_{\xi\lambda} = \delta_{\xi\beta}\delta_{\beta\lambda}, \tag{20}\]
and we recover the standard result. Utilizing eq. (20) for nonorthogonal flavor states will lead to the inconsistent result \(\sum_{\beta}P_{\beta\alpha}(x)\neq 1\). To prove this, it is sufficient to consider the case \(x=0\), in which we obtain
\[P_{\beta\alpha}\left(0\right) = |\mathcal{N}_{\beta\alpha}|^{2}, \tag{21}\]
which gives \(\sum_{\beta}P_{\beta\alpha}(0)>1\) since \(\mathcal{N}_{\alpha\alpha}=1\).
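Both statements are easy to verify numerically without implementing the full eqs. (23)-(24) below, since summing eq. (19) over \(\beta\) with \(\sum_{\beta}\hat{P}_{\beta}=\mathcal{N}^{-1}\) and using the conservation law \(S^{\dagger}\mathcal{N}^{-1}S=\mathcal{N}\) noted above gives \(\sum_{\beta}P_{\beta\alpha}(x)=[S^{\dagger}\mathcal{N}^{-1}S]_{\alpha\alpha}=\mathcal{N}_{\alpha\alpha}=1\). A minimal vacuum-oscillation sketch (ours, with the same toy \(\overline{U}\) as before):

```python
import numpy as np

rng = np.random.default_rng(0)
U = (np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
     + 0.05 * rng.normal(size=(3, 3)))
Ubar = U / np.sqrt(np.real(np.diag(U @ U.conj().T)))[:, None]
N = Ubar @ Ubar.conj().T
Ninv = np.linalg.inv(N)

Delta = np.array([0.0, 1.0, 2.5])         # stand-ins for m_i^2/2E
def S(x):                                 # vacuum: S(x) = Ubar e^{-i Delta x} Ubar^dag
    return Ubar @ np.diag(np.exp(-1j * Delta * x)) @ Ubar.conj().T

# Naive prescription (20) at x = 0: total probability is sum_b |N_{ba}|^2 > 1
print("naive sum at x=0:", np.round((np.abs(S(0.0)) ** 2).sum(axis=0), 3))

# Full construction: sum_b P_{ba}(x) = [S^dag N^{-1} S]_{aa} = 1 for every x
x = 7.3
print("unitary sum:", np.round(np.real(np.diag(S(x).conj().T @ Ninv @ S(x))), 12))
```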
Next, we define \(\mathcal{N}_{\alpha}\) as the \(2\times 2\) submatrix obtained from the matrix \(\mathcal{N}\) by deleting the row and column involving the \(\nu_{\alpha}\) state. Following the construction of refs. [39; 40], we obtain our main result (see the companion paper [36] for details)
\[(\hat{P}_{\alpha})_{\xi\lambda} = \frac{1}{3}\left[(E_{\alpha})_{\xi\lambda}+\sum_{\beta\neq\alpha} (F_{\alpha\beta})_{\xi\lambda}\right], \tag{22}\]
with
\[\left(E_{\alpha}\right)_{\xi\lambda} = \begin{cases}1+\frac{X_{\alpha}^{2}}{1-X_{\alpha}^{2}},&\xi= \lambda=\alpha\\ \frac{\left|\mathcal{N}_{\alpha\xi}-\mathcal{N}_{\alpha\gamma\xi}\right|^{2}} {\left(\det\mathcal{N}_{\alpha}\right)^{2}\left(1-X_{\alpha}^{2}\right)},\; \gamma\neq\{\alpha,\xi\},&\xi=\lambda\neq\alpha\\ -\frac{1}{2}\frac{\mathcal{N}_{\xi\lambda}-\mathcal{N}_{\xi\gamma\lambda}}{ \det\mathcal{N}},\;\gamma\neq\{\alpha,\xi\}\,,&\xi\neq\lambda\;\text{and}\;( \xi=\alpha\;\text{or}\;\lambda=\alpha)\\ \frac{\left(\mathcal{N}_{\alpha\lambda}-\mathcal{N}_{\alpha\xi\lambda}\right) \left(\mathcal{N}_{\xi\alpha}-\mathcal{N}_{\xi\lambda\alpha}\right)}{\left( \det\mathcal{N}_{\alpha}\right)^{2}\left(1-X_{\alpha}^{2}\right)},&\xi\neq \lambda\;\text{and}\;\left\{\xi,\lambda\right\}\neq\alpha\end{cases}, \tag{23}\]
and for \(\beta\neq\alpha\) and \(\gamma\neq\{\alpha,\beta\}\)
\[\left(F_{\alpha\beta}\right)_{\xi\lambda} = \left\{\begin{aligned} & 1+\frac{\left|\mathcal{N}_{ \alpha\gamma}\right|^{4}}{1-\left|\mathcal{N}_{\alpha\gamma}\right|^{4}}+ \frac{\langle(p_{\alpha})_{\{\alpha,\gamma\}}\rangle_{\beta\beta}\left| \mathcal{N}_{\alpha\beta}-\mathcal{N}_{\alpha\gamma\beta}\right|^{2}}{\left( \det\mathcal{N}_{\beta}\right)^{2}\left(1-X_{\beta}^{2}\right)},& \xi=\lambda=\alpha\\ &-\frac{1}{2}\frac{1}{1-\left|\mathcal{N}_{\alpha\gamma}\right|^{ 4}}\left(\mathcal{N}_{\xi\lambda}-\frac{1+\left|\mathcal{N}_{\alpha\gamma} \right|^{2}}{2}\mathcal{N}_{\xi\gamma\lambda}\right),&\xi\lambda= \alpha\beta\text{ or }\xi\lambda=\beta\alpha\\ &-\frac{1}{2}\frac{\langle(p_{\alpha})_{\{\alpha,\gamma\}} \rangle_{\beta\beta}}{1-X_{\beta}^{2}}\frac{\mathcal{N}_{\xi\lambda}- \mathcal{N}_{\xi\gamma\lambda}}{\det\mathcal{N}_{\beta}}\left(1+\left\langle p _{\alpha\gamma}\right\rangle_{\beta\beta}\right)\\ &-\frac{1}{2}\frac{\mathcal{N}_{\xi\lambda}}{\det\mathcal{N}_{ \beta}}+\frac{\langle(p_{\alpha})_{\{\alpha,\gamma\}}\rangle_{\beta\beta} \left(\mathcal{N}_{\xi\beta}-\mathcal{N}_{\xi\lambda\beta}\right)\left( \mathcal{N}_{\beta\lambda}-\mathcal{N}_{\beta\xi\lambda}\right)}{\left(\det \mathcal{N}_{\beta}\right)^{2}\left(1-X_{\beta}^{2}\right)},&\xi \lambda=\alpha\gamma\text{ or }\xi\lambda=\gamma\alpha\\ &\frac{1}{2}\left(1+\frac{1+\left\langle p_{\alpha\gamma}\right\rangle _{\beta\beta}^{2}}{1-X_{\beta}^{2}}\right)\langle(p_{\alpha})_{\{\alpha,\gamma \}}\rangle_{\beta\beta},&\xi=\lambda=\beta\\ &-\frac{1}{2}\frac{1}{1-\left|\mathcal{N}_{\alpha\gamma}\right|^{ 4}}\left(\left|\mathcal{N}_{\alpha\gamma}\right|^{2}\mathcal{N}_{\xi\lambda}- \frac{1+\left|\mathcal{N}_{\alpha\gamma}\right|^{2}}{2}\mathcal{N}_{\xi\alpha \lambda}\right),&\xi\lambda=\beta\gamma\text{ or }\xi\lambda=\gamma\beta\\ &\frac{\left|\mathcal{N}_{\alpha\gamma}\right|^{2}}{1-\left| \mathcal{N}_{\alpha\gamma}\right|^{4}}+\frac{\langle(p_{\alpha})_{\{\alpha, \gamma\}}\rangle_{\beta\beta}\left|\mathcal{N}_{\gamma\beta}-\mathcal{N}_{ \gamma\alpha\beta}\right|^{2}}{\left(\det\mathcal{N}_{\beta}\right)^{2}\left(1 -X_{\beta}^{2}\right)},&\xi=\lambda=\gamma\end{aligned}\right. \tag{24}\]
where we have defined
\[\mathcal{N}_{\alpha\beta\gamma} \equiv \mathcal{N}_{\alpha\beta}\mathcal{N}_{\beta\gamma}, \tag{25}\] \[\left\langle p_{\alpha\gamma}\right\rangle_{\beta\beta} \equiv \frac{\left|\mathcal{N}_{\alpha\beta}\right|^{2}+\left|\mathcal{N }_{\beta\gamma}\right|^{2}-2\text{Re}\left(\mathcal{N}_{\beta\alpha}\mathcal{ N}_{\alpha\gamma}\mathcal{N}_{\gamma\beta}\right)}{1-\left|\mathcal{N}_{\alpha \gamma}\right|^{2}},\] (26) \[\left\langle(p_{\alpha})_{\{\alpha,\gamma\}}\right\rangle_{\beta\beta} \equiv \frac{\left|\mathcal{N}_{\alpha\beta}\right|^{2}+\left|\mathcal{N }_{\alpha\gamma}\mathcal{N}_{\beta\gamma}\right|^{2}-\left(1+\left|\mathcal{N} _{\alpha\gamma}\right|^{2}\right)\text{Re}\left(\mathcal{N}_{\beta\alpha} \mathcal{N}_{\alpha\gamma}\mathcal{N}_{\gamma\beta}\right)}{1-\left|\mathcal{N} _{\alpha\gamma}\right|^{4}}. \tag{27}\]
The subscript \(\{\alpha,\gamma\}\) in the last expression is not a pair of indices but refers to the set of basis states of the corresponding operator. For example, with \(\{\mu,\tau\}\), we can have \(\langle(p_{\mu})_{\{\mu,\tau\}}\rangle_{\beta\beta}\) or \(\langle(p_{\tau})_{\{\mu,\tau\}}\rangle_{\beta\beta}\). In the case where the off-diagonal elements of \(\mathcal{N}\) are zero, we recover the standard result in eq. (20). This also shows that the high scale unitarity violation scenario is only sensitive to the off-diagonal elements of \(U\), and as shown in ref. [36], in the limit of vanishing off-diagonal elements, the scenario is indistinguishable from the unitary scenario.
Using the public code NuProbe[35; 44], in Figure 1 we plot the probability of \(\nu_{\mu}\rightarrow\nu_{\beta}\) at a distance of 1300 km with a constant matter density of 3 g/cm\({}^{3}\) for \(\beta=e,\mu,\tau\), which correspond to the purple cross, blue dot and red star curves, respectively. The solid curves are for the standard scenario with unitary \(U\), while the dashed curves are for the high scale unitarity violation scenario with nonunitary \(U\). The black solid curves on the top denote
\(\sum_{\beta}P_{\beta\alpha}\) for nonunitary \(U\). The standard parameters are set to the global best-fit values for Normal mass Ordering (NO) from [5; 45]. The nonorthogonality parameters are set to \(\left(UU^{\dagger}\right)_{e\mu}=\left(UU^{\dagger}\right)_{e\tau}=\left(UU^{ \dagger}\right)_{\mu\tau}=0.03e^{-i\pi/3}\) and \(\left(UU^{\dagger}\right)_{ee}=\left(UU^{\dagger}\right)_{\mu\mu}=\left(UU^{ \dagger}\right)_{\tau\tau}=0.96\). On the left plot, the expression (22) for \(\hat{P}_{\beta}\) is used, while on the right plot, the expression (20) is used. As we can see, with nonunitary \(U\), the latter expression gives rise to a spurious result in which the total probability can be larger or smaller than 1 (black curve on the right plot). The deviations from the standard scenario are more apparent for the appearance channels \(\nu_{\mu}\rightarrow\nu_{e}\) and \(\nu_{\mu}\rightarrow\nu_{\tau}\).
In Figure 2, we repeat the calculations using the same neutrino parameters but for a neutrino crossing through the Earth core, with a simplified Preliminary Reference Earth Model (PREM) [46] implemented in NuProbe[44; 35]. Even with a nontrivial matter potential, unitarity is preserved, as can be seen in the black curve on the left plot. We also see that matter effects noticeably enhance the differences between the standard and high scale unitarity violation scenarios for all the channels.
## IV Conclusions
To answer the question posed in the title: new physics can result in a _nonunitary_ lepton mixing matrix \(U\), which further implies _nonorthogonal_ neutrino flavor states. This _high scale unitarity violation_ scenario can be distinguished from the standard scenario, although the theory itself remains unitary in time evolution and the total probability always sums up to unity. If the new physics scale is beyond the energy reach of our current and foreseeable future experiments, one might have to focus on the intensity frontier to carry out precision measurements. In such a scenario, the neutrino sector might be the unique place that guarantees clues to new physics. Precision measurements of neutrino oscillation phenomena then rely on a precise theoretical treatment, and to prepare for the discovery of new physics, one should use eq. (22) in the probability calculation, which preserves unitarity.
## V Acknowledgments
C.S.F. acknowledges the support by grants 2019/11197-6 and 2022/00404-3 from the Sao Paulo Research Foundation (FAPESP), and grants 301271/2019-4 and 407149/2021-0 from the National Council for Scientific and Technological Development (CNPq). This work is inspired by a "dangerous idea" of Hisakazu Minakata, who said: "_A closed theory without unitarity must not make sense. With ignoring inelastic scattering, absorption, and decay, etc, the three neutrino system cannot lose the probability. To where probability loss goes?_" He would like to thank Celso Nishi for reading and commenting on the manuscript. He also acknowledges support from the ICTP through the Associates Programme (2023-2028) while this work was being completed.

Figure 2: The probability of \(\nu_{\mu}\to\nu_{\beta}\) for a neutrino passing through the Earth core using the simplified PREM model, as a function of neutrino energy \(E_{\nu}\). The notations are the same as in Figure 1.
|